Examensarbete
Fractals and Computer Graphics
Meritxell Joanpere Salvadó
LiTH - MAT - INT - A - - 2011 / 01 - - SE
Fractals and Computer Graphics
Applied Mathematics, Linköpings Universitet
Meritxell Joanpere Salvadó
LiTH - MAT - INT - A - - 2011 / 01 - - SE
Examensarbete: 15 hp
Level: A
Supervisor: Milagros Izquierdo,
Applied Mathematics, Linköpings Universitet
Examiner: Milagros Izquierdo,
Applied Mathematics, Linköpings Universitet
Linköping: June 2011
Abstract
Fractal geometry is a new branch of mathematics. This report presents the
tools, methods and theory required to describe this geometry. The power of
Iterated Function Systems (IFS) is introduced and applied to produce fractal
images or to approximate complex structures found in nature.
The focus of this thesis is on how fractal geometry can be used in applications
to computer graphics or to model natural objects.
Keywords: Affine Transformation, Möbius Transformation, Metric space, Metric Space of Fractals, IFS, Attractor, Collage Theorem, Fractal Dimension
and Fractal Tops.
Acknowledgements
I would like to thank my supervisor and examiner Milagros Izquierdo, at the
division of applied mathematics, for giving me the opportunity to write my final
degree thesis about Fractals, and for her excellent guidance and her constant
feedback.
I also have to thank Aitor Villarreal for helping me with the LATEX language
and for his support over these months.
Finally, I would like to thank my family for their interest and support.
Nomenclature
Most of the recurring abbreviations and symbols are described here.

Symbols

d             Metric
(X, d)        Metric space
dH            Hausdorff metric
(H(X), dH)    Space of fractals
Ω             Code space
σ             Code
A             Alphabet
l             Contractivity factor
fn            Contraction mappings
pn            Probabilities
N             Cardinality (of an IFS)
F             IFS
A             Attractor of the IFS
D             Fractal dimension
φ, ϕ          Address function
Abbreviations

IFS    Iterated function system
iff    if and only if
List of Figures

1.1  Some orbits of a logistic function.
1.2  Initial triangle.
1.3  Original triangle and the result of applying f1.
1.4  Self-portrait.
1.5  Möbius transformation f(z) = 1/z.
1.6  Möbius transformation f(z) = (z − i)/(z + 1).
1.7  Self-portrait after the translation f.
1.8  Self-portrait after the Möbius transformation f(z) = 1/z.
1.9  Original self-portrait and the self-portrait after the Möbius transformation (really small, close to (0, 0)).

2.1  The Hausdorff distance between A and B is 1.
2.2  The Hausdorff distance between A and B is 10.
2.3  A Cauchy sequence of compact sets An.
2.4  Addresses of points in the Cantor set.

3.1  The Sierpinski triangle is self-similar.
3.2  First 4 stages in Cantor set generation.
3.3  Stages 0, 1, 2 and 9 of the Koch curve.
3.4  Sierpinski triangle.
3.5  Stages 0, 1 and 2 of the Sierpinski triangle.
3.6  Stages 1, 2 and 3 of the self-portrait fractal.
3.7  First iterations of the Sierpinski Carpet.
3.8  Sierpinski pentagon.
3.9  3 iterations of the Peano curve construction.

4.1  Addresses of points for the first two steps of the Sierpinski triangle transformation.
4.2  Addresses of some points of the Sierpinski triangle.
4.3  Sierpinski triangle constructed with the Deterministic Algorithm.
4.4  Fern constructed using the deterministic algorithm.
4.5  Random Iteration Algorithm for the Sierpinski triangle.
4.6  Random Iteration Algorithm for the Sierpinski triangle.
4.7  The result of running the fern random algorithm of program 3.2.2 for 2,000, 10,000 and 25,000 iterations respectively.
4.8  The result of running the modified random algorithm (with equal probabilities) for 25,000 iterations.
4.9  Both pictures are the same attractor. Colors will help us to solve the problem.
4.10 We can approximate the attractor with an IFS.

5.1  The self-similarity dimension of the Sierpinski triangle is D = ln 3 / ln 2.
5.2  The self-similarity dimension of the Koch curve is D = ln 4 / ln 3.
5.3  Sierpinski triangle.
5.4  Koch curve.
5.5  Stage 0 in the construction of the Peano curve.
5.6  Stage 1 of the construction of the Peano curve.
5.7  Stages 2, 3 and 4 of the Peano curve.
5.8  Graph of the interpolation function.
5.9  Members of the family of fractal interpolation functions corresponding to the set of data {(0, 0), (1, 1), (2, 1), (3, 2)}, such that each function has different dimension.
5.10 FTSE 100 chart of Monday, June 6, 2011.
5.11 Part of Norway's coastline.
5.12 Cloud constructed using the Collage theorem.
5.13 Clouds generated using the plasma fractal method compared with real clouds of Linköping.

6.1  Fractal top produced by colour-stealing. The colours were 'stolen' from the picture on the right.
6.2  Fractal top produced by colour-stealing. The colours were 'stolen' from the picture on the right.
List of Tables

1.1  Various Orbits of f4(x) = 4x(1 − x).

4.1  IFS code for a Sierpinski triangle
4.2  General IFS code
4.3  Another IFS code for a Sierpinski triangle
4.4  IFS code for a Fern
4.5  IFS code for example 3.1.1

5.1  Dimension data for Euclidean d-cubes
5.2  IFS code for an interpolation function

6.1  Another IFS code for a Sierpinski triangle
Contents

0 Introduction

1 Transformations
  1.1 Logistic functions
  1.2 Linear and affine transformations
      1.2.1 Linear transformations
      1.2.2 Affine transformations
  1.3 Möbius transformations

2 The metric space of fractals (H(X), dH)
  2.1 Metric spaces and their properties
      2.1.1 Metric spaces
      2.1.2 Cauchy sequences, limits and complete metric spaces
      2.1.3 Compact spaces
      2.1.4 Contraction mappings
  2.2 The metric space of fractals
      2.2.1 The completeness of the space of fractals
      2.2.2 Contraction mappings on the space of fractals
  2.3 Addresses and code spaces
      2.3.1 Metrics of code space

3 What is a fractal?
  3.1 The Cantor Set
  3.2 Koch curve
  3.3 Sierpinski triangle
  3.4 Other examples
      3.4.1 Self-portrait fractal
      3.4.2 Sierpinski carpet and Sierpinski pentagon
      3.4.3 Peano curve

4 Iterated Function Systems
  4.1 IFS
  4.2 IFS codes
      4.2.1 The addresses of points on fractals
  4.3 Two algorithms for computing fractals from IFS
      4.3.1 The Deterministic Algorithm
      4.3.2 The Random Iteration Algorithm
  4.4 Collage theorem

5 Fractal dimension and its applications
  5.1 Fractal dimension
      5.1.1 Self-similarity dimension
      5.1.2 Box dimension
  5.2 Space-filling curves
  5.3 Fractal interpolation
      5.3.1 The fractal dimension of interpolation functions
  5.4 Applications of fractal dimension
      5.4.1 Fractals in the Stock Market
      5.4.2 Fractals in nature

6 Fractal tops
  6.1 Fractal tops
  6.2 Pictures of tops: colour-stealing
Chapter 0
Introduction
This work has been written as the final thesis of the degree "Grau en Matemàtiques"
of the Universitat Autònoma de Barcelona. The thesis has been carried out at
Linköpings Universitet thanks to an Erasmus exchange program organized between both universities, and has been supervised by Milagros Izquierdo.
Classical geometry provides a first approximation to the structure of physical objects. Fractal geometry is an extension of classical geometry that can be
used to make precise models of physical structures that classical geometry could
not approximate well, because, as Mandelbrot said, mountains are not cones, clouds are
not spheres and trees are not cylinders.
In 1975, Benoit Mandelbrot coined the term fractal when studying self-similarity. He also defined fractal dimension and provided fractal examples
made with a computer, including the very well known fractal called the
Mandelbrot set. The study of self-similar objects and similar functions began
with Leibniz in the 17th century and was intense at the end of the 19th century
and the beginning of the 20th century, with contributions by H. Koch (Koch curve), W. Sierpinski (Sierpinski triangle), G. Cantor (Cantor set), H. Poincaré (attractors and dynamical
systems) and G. Julia (Julia set), among others. Over the last two decades, M. Barnsley has developed
applications of fractals to computer graphics; for
instance, he defined the best-known algorithm to draw ferns.
The focus of this thesis is on building fractal models used in computer
graphics to represent objects that appear in different areas: nature (forests, ferns,
clouds), the stock market, biology, medical computing, etc. Despite the close relationship between fractals and dynamical systems, we center our attention only on
the deformation properties of the spaces of fractals. That will allow us to approximate physical objects by fractals, beginning with one fractal and deforming
and adjusting it to get the desired approximation. This work is a study of the
so-called Collage theorem and its applications to computer graphics, modelling
and analysis of data. At the same time, the Collage theorem is a typical example
of a property of complete metric spaces: approximation. In the examples in this
thesis we use the Collage theorem to approximate target images,
natural profiles, landscapes, etc. by fractals.
Chapter One deals with logistic functions and transformations, paying particular attention to affine transformations and Möbius transformations in R2 .
Chapter Two introduces the basic topological ideas that are needed to describe the space of fractals H(X). The concepts introduced include metric spaces,
openness, closedness, compactness, completeness, convergence and connectedness. Then the contraction mapping principle is explained. The principal goal
of this chapter is to present the metric space of fractals H(X). Under the right
conditions this space is complete and we can use approximation theory to find
appropriate fractals.
Once we have defined the metric space of fractals, in Chapter Three we can
define a fractal and give some examples of fractal objects. All the examples in
this chapter exhibit one characteristic property: self-similarity. There are
also non-self-similar fractals, like plasma fractals.
In Chapter Four, we learn how to generate fractals by means of simple transformations. We explain what an iterated function system (IFS) is and how it
can define a fractal. We present two different algorithms to draw fractals, the
Deterministic Algorithm and the Random Iteration Algorithm.
The Collage theorem is presented and will help us to find an IFS for a given
compact subset of R2 . This theorem allows us to find good fractals that can
represent physical objects.
Chapter Five introduces the concept of fractal dimension. The fractal dimension of a set is a number that tells how densely the set fills the space in which it lies.
We give formulas to compute the fractal dimension of fractals. We also present
some applications of fractal dimension and the Collage theorem to computer
graphics, such as fractal interpolation, and applications of fractals to stock markets and nature.
Finally, in Chapter Six we introduce the new idea of fractal tops, and we
use computer graphics to plot beautiful pictures of fractal tops using colour-stealing. Colour-stealing is a new method that has potential applications in
computer graphics and image compression. It consists of 'stealing' colours from
an initial picture to 'paint' the new fractal.
Chapter 1
Transformations
In this chapter we introduce the chaotic behaviour of logistic functions. This chapter also deals with transformations, with particular attention to affine and Möbius
transformations in R2 .
We use the notation
f :X→Y
to denote a function that acts on the space X to produce values in the space Y.
We also call f : X → Y a transformation from the space X to the space Y.
Definition 1.0.1. Let X be a space. A transformation on X is a function
f : X → X, which assigns exactly one point f(x) ∈ X to each point x ∈ X.
We say that f is injective (one-to-one) if x, y ∈ X with f(x) = f(y) implies
x = y. The function f is called surjective (onto) if f(X) = X. We say that f is
invertible if it is injective and surjective; in this case it is possible to define a
transformation f−1 : X → X, called the inverse of f.
1.1 Logistic functions
There is a close relationship between dynamical systems and fractals.
Definition 1.1.1. A dynamical system is a transformation f : X → X on a
metric space X. The orbit of a point x0 ∈ X under the dynamical system {X; f }
is the sequence of points {xn = f n (x0 ) : n = 0, 1, 2, . . .}.
The process of determining the long-term behavior of orbits of a given dynamical system is known as orbit analysis.
An example of a dynamical system is given by the logistic functions on the space [0, 1],
which are functions of the form:
fc(x) = cx(1 − x), c > 0
Since each value of the parameter c gives a distinct function, this is really a
family of functions. Using subscripts to indicate time periods, we can write
x_{i+1} = fc(x_i), and then rewrite the equation as x_{i+1} = c x_i (1 − x_i).
The fixed points of the logistic function are the solutions of the equation
fc(x) = x, that is, cx(1 − x) = x. If we solve this quadratic equation we get that
one solution is x = 0 and the other is x = (c − 1)/c. This last solution is called the
nontrivial fixed point of the logistic function.
Example 1.1.1. Consider the logistic function on the space X = [0, 1] where
c = 4, f4(x) = 4x(1 − x). The process of iterating this function consists of
computing a sequence, as follows:
x1 = f4(x0)
x2 = f4(x1) = f4(f4(x0))
...
xn = f4^n(x0)
The orbit of the point x0 ∈ X under the dynamical system {X; f4} is the sequence
of points {xn : n = 0, 1, 2, . . .}. Applying f4 to the endpoints of its domain gives
f4(0) = 0 and f4(1) = 0, so all successive iterates xi for both x0 = 0 and x0 = 1
yield the value 0. Thus, we say that 0 is a fixed point of the logistic function f4.
If we analyse the orbit of this logistic function, we see that in general there is
no pattern for a given x0, as illustrated in Table 1.1.
x0     0.25    0.4     0.49    0.5    0.75
x1     0.75    0.96    1.00    1      0.75
x2     0.75    0.154   0.02    0      0.75
x3     0.75    0.52    0.006   0      0.75
x4     0.75    0.998   0.025   0      0.75
x5     0.75    0.006   0.099   0      0.75
x6     0.75    0.025   0.357   0      0.75
x7     0.75    0.099   0.918   0      0.75
x8     0.75    0.358   0.302   0      0.75
x9     0.75    0.919   0.843   0      0.75
x10    0.75    0.298   0.530   0      0.75

Table 1.1: Various Orbits of f4(x) = 4x(1 − x).
In the particular case when x0 = 0.75 the orbit converges to the nontrivial
fixed point (4 − 1)/4 = 0.75, whereas the orbit of x0 = 0.5 converges to the fixed point
0.
The orbit of an initial point under a logistic function can also be constructed
graphically, using the algorithm below to generate a construction known as a
web diagram.
Algorithm (Orbit tracing for a logistic function)
For a given iterated function f : R → R, the plot consists of the diagonal line y = x
and a curve representing y = f(x). To plot the behaviour of a value x0,
apply the following steps.
1. Given an integer n and an initial value x0, set i = 0.
2. Find the point on the function curve with x-coordinate xi. This has
the coordinates (xi, f(xi)).
3. Plot horizontally across from this point to the diagonal line. This has the
coordinates (f(xi), f(xi)).
4. Plot vertically from the point on the diagonal to the function curve. This
has the coordinates (f(xi), f(f(xi))), so set xi+1 = f(xi).
5. If i + 1 = n, stop. Otherwise, increase i by 1 and go to step 2.
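These steps translate directly into code. Below is a minimal Python sketch (our own illustration; the thesis used Maple for Figure 1.1): it computes orbits of the logistic map and the web-diagram segments of steps 2–4.

```python
# A minimal sketch of orbit tracing for f_c(x) = c*x*(1 - x).

def logistic(c):
    return lambda x: c * x * (1 - x)

def orbit(f, x0, n):
    """Return the orbit [x0, f(x0), ..., f^n(x0)] as a list of n + 1 points."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

def web_diagram_segments(f, x0, n):
    """Line segments of the web diagram: across to the diagonal y = x
    (step 3), then vertically back to the curve (step 4)."""
    segments, x = [], x0
    for _ in range(n):
        y = f(x)
        segments.append(((x, y), (y, y)))     # horizontal: curve -> diagonal
        segments.append(((y, y), (y, f(y))))  # vertical: diagonal -> curve
        x = y
    return segments

f4 = logistic(4)
print([round(v, 3) for v in orbit(f4, 0.25, 10)])  # settles at 0.75
print([round(v, 3) for v in orbit(f4, 0.5, 10)])   # reaches the fixed point 0
```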
Figure 1.1: From left to right and from top to bottom we have the orbits of
x0 = 0.25, x0 = 0.4, x0 = 0.49, and x0 = 0.5 respectively, for the logistic
function when c = 4.
In Figure 1.1 we have plotted (using Maple) some orbits of the logistic
function f4(x) = 4x(1 − x), for n = 10. We can observe graphically that the orbit of x0 = 0.25 converges to the fixed point 0.75 and that the orbit of x0 = 0.5
converges to 0.
As a result, the behavior described by dynamical systems can become extremely complicated and unpredictable. In these cases, very slight differences in
the initial conditions may lead to vastly different results. This
fact is known as the butterfly effect, after the saying that "the presence or
absence of a butterfly flapping its wings could lead to the creation or absence of a
hurricane".
1.2 Linear and affine transformations

1.2.1 Linear transformations
In mathematics, a linear transformation is a function between two vector spaces¹
that preserves the operations of vector addition and scalar multiplication.
Definition 1.2.1. Let V and W be vector spaces over the same field F. Then
f : V → W is called a linear transformation iff
f (αx1 + βx2 ) = αf (x1 ) + βf (x2 )
for all α, β ∈ F and all x1 , x2 ∈ V .
To any linear transformation f : R^2 → R^2 there corresponds a unique matrix
\[ A := \begin{pmatrix} a & b \\ c & d \end{pmatrix} \]
such that
\[ f\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix} \]
for all (x, y) ∈ R^2 and a, b, c, d ∈ R. That is,
f(x, y) = (ax + by, cx + dy)
1.2.2 Affine transformations

Definition 1.2.2. A transformation f : R^2 → R^2 of the form
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} a & b & e \\ c & d & f \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
where a, b, c, d, e, f ∈ R, is called a two-dimensional affine transformation. An
affine transformation consists of a linear transformation followed by a translation.
The basic properties of affine transformations are that they
i) map straight lines into straight lines,
ii) preserve ratios of distances between points on straight lines,
iii) map parallel straight lines into parallel straight lines, triangles into triangles
and interiors of triangles into interiors of triangles.
Definition 1.2.3. A translation is an affine transformation in which the linear
part is the identity:
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & e \\ 0 & 1 & f \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
where e, f ∈ R.
¹ See, for example, chapter 7 in [3].
Definition 1.2.4. A similarity with ratio r is an affine transformation f of the
Euclidean plane such that for each pair of points P and Q,
d(f(P), f(Q)) = r d(P, Q)
for some real number r > 0. A similarity with ratio r has one of the
following matrix representations:
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} a & b & e \\ -b & a & f \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \quad \text{(Direct)} \]
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} a & b & e \\ b & -a & f \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \quad \text{(Indirect)} \]
where a² + b² = r² and a, b, e, f ∈ R.
When r = 1 the affine transformation is an isometry and it preserves the distance, that is d(X, Y ) = d(f (X), f (Y )).
Example 1.2.1. The transformation
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1/2 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
is a direct similarity with ratio r = 1/2.

Definition 1.2.5. A similarity f : R^2 → R^2 can also be expressed in one of
these forms:
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} r\cos\theta & -r\sin\theta & e \\ r\sin\theta & r\cos\theta & f \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} r\cos\theta & r\sin\theta & e \\ r\sin\theta & -r\cos\theta & f \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
for some translation e, f ∈ R, θ ∈ [0, 2π] and r ≠ 0.
θ is called the rotation angle while r is called the scaling factor or ratio.
Definition 1.2.6. The linear transformation
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
is a rotation, where θ ∈ [0, 2π].
Definition 1.2.7. The linear transformation
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
is a reflection.
Definition 1.2.8. A shear with axis m, denoted Sm, is an affinity that keeps m
pointwise invariant and maps every other point P to a point P′ so that the line
PP′ is parallel to m. The matrix representation of a shear with the x-axis [0, 1, 0] as axis is
\[ S_m\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & j & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
Definition 1.2.9. A strain with axis m, denoted Tm, keeps m pointwise invariant and maps every other point P to a point P′ so that the line PP′ is
perpendicular to m. The matrix representation of a strain with the x-axis [0, 1, 0] as axis is
\[ T_m\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
Theorem 1.2.1. Any affinity can be written as the product of a shear, a strain
and a direct similarity.
Example 1.2.2 (Self-portrait). We begin with the triangle with vertices (0, 0),
(10, 0) and (5, 9) shown in Figure 1.2. We will apply some affine transformations to
this triangle to construct a self-portrait.
Figure 1.2: This is our initial triangle.
To construct the "mouth" we will use the following affine transformation:
\[ f_1\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1/3 & 0 & 10/3 \\ 0 & 1/6 & 10/6 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
Then we apply this affine transformation to the vertices of our initial triangle.
We show how to do it for the first vertex, (0, 0):
\[ f_1\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1/3 & 0 & 10/3 \\ 0 & 1/6 & 10/6 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 10/3 \\ 5/3 \\ 1 \end{pmatrix} \]
If we apply the affine transformation in the same way to the other two vertices
of the triangle, we find that the vertices of the "mouth" are (10/3, 5/3), (20/3, 5/3) and
(5, 19/6).
Figure 1.3: Original triangle and the result of applying f1 .
Figure 1.3 shows both the original triangle and the result of applying the affine
transformation f1 to construct the ”mouth”.
Once we have the mouth, we have to construct the two eyes. The affine
transformations used to construct the left and the right eye are, respectively:
\[ f_2\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1/10 & 0 & 7/2 \\ 0 & -1/30 & 11/2 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
\[ f_3\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1/10 & 0 & 11/2 \\ 0 & -1/30 & 11/2 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
Note that f2 and f3 have a reflection included. Applying these affine transformations to the vertices of the original triangle, we get the new vertices for the left
and right eye. These are:
Left eye: (7/2, 11/2), (9/2, 11/2) and (4, 26/5).
Right eye: (11/2, 11/2), (13/2, 11/2) and (6, 26/5).
If we draw all transformations together with the initial triangle (all filled), we
get Figure 1.4.

Figure 1.4: Self-portrait constructed by applying f1, f2 and f3 to the initial triangle.
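The whole computation can be reproduced in a few lines. A small Python sketch (the matrices are the f1, f2, f3 above; the helper name `affine` and the use of exact rational arithmetic are our own choices for this illustration):

```python
# Applying the affine maps of Example 1.2.2 to the initial triangle.
from fractions import Fraction as F

def affine(matrix, point):
    """Apply a 3x3 affine matrix to a point in homogeneous coordinates."""
    x, y = point
    (a, b, e), (c, d, f), _ = matrix
    return (a * x + b * y + e, c * x + d * y + f)

triangle = [(F(0), F(0)), (F(10), F(0)), (F(5), F(9))]
f1 = [(F(1, 3), 0, F(10, 3)), (0, F(1, 6), F(10, 6)), (0, 0, 1)]    # mouth
f2 = [(F(1, 10), 0, F(7, 2)), (0, F(-1, 30), F(11, 2)), (0, 0, 1)]  # left eye
f3 = [(F(1, 10), 0, F(11, 2)), (0, F(-1, 30), F(11, 2)), (0, 0, 1)] # right eye

for name, m in [("mouth", f1), ("left eye", f2), ("right eye", f3)]:
    print(name, [affine(m, p) for p in triangle])
# mouth: (10/3, 5/3), (20/3, 5/3), (5, 19/6) -- as computed in the text
```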
1.3 Möbius transformations
Definition 1.3.1. The set C ∪ {∞} is called the extended complex plane or
the Riemann sphere and is denoted by Ĉ.
Definition 1.3.2. A transformation f : Ĉ → Ĉ defined by
\[ f(z) = \frac{az + b}{cz + d} \]
where a, b, c, d ∈ Ĉ and ad − bc ≠ 0, is called a Möbius transformation on Ĉ.
Definition 1.3.3. Let f be a Möbius transformation. If c ≠ 0 we define
f(−d/c) = ∞ and f(∞) = a/c. If c = 0 we define f(∞) = ∞.
Möbius transformations have the property that they map the set of all circles
and straight lines onto the set of all circles and straight lines. In addition, they
preserve angles and their orientation.
Theorem 1.3.1 (Fundamental theorem of Möbius transformations). Let z1 , z2 , z3
and w1 , w2 , w3 be two sets of distinct points in the extended complex plane
Ĉ = C ∪ {∞}. Then there exists a unique Möbius transformation that maps
z1 to w1 , z2 to w2 and z3 to w3 .
Example 1.3.1. An example of a Möbius transformation is f(z) = 1/z. As we
can see in Figure 1.5, this transformation maps 0 to ∞, ∞ to 0 and 1 to 1. The
unit circle {z ∈ C : |z| = 1} is invariant as a set.

Figure 1.5: Möbius transformation f(z) = 1/z.
Example 1.3.2. Another example of a Möbius transformation is f(z) = (z − i)/(z + 1),
shown in Figure 1.6, which takes the real line to the unit circle centered at the
origin.
To draw Figures 1.5 and 1.6 we have used an applet² that allows us to draw
points, lines, and circles, and see what happens to them under a specific Möbius
transformation.
Example 1.3.3. Any affine transformation is a Möbius transformation with
the point at infinity fixed, i.e. one that maps ∞ to ∞.
² This applet can be found at http://www.math.ucla.edu/~tao/java/Mobius.html
Figure 1.6: Möbius transformation f(z) = (z − i)/(z + 1).
Remark 1.3.1. Any affine transformation is determined by the image of three
non-collinear points.
Example 1.3.4 (Self-portrait). We want to apply the Möbius transformation
f(z) = 1/z to the self-portrait constructed in Example 1.2.2.
One vertex of the initial triangle is (0, 0), which would cause problems under
the Möbius transformation (it is sent to ∞), so, first of all, we have to apply a translation to the
self-portrait.
The translation applied to all the vertices of the self-portrait (initial triangle, mouth and eyes) is
\[ f\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 3 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
So, now, the self-portrait has been moved 3 units to the right and 3 units up.
The new vertices of the self-portrait are:
Initial triangle: (3, 3), (13, 3) and (8, 12).
Mouth: (19/3, 14/3), (29/3, 14/3) and (8, 37/6).
Left eye: (131/20, 42/5), (151/20, 42/5) and (141/20, 81/10).
Right eye: (171/20, 42/5), (191/20, 42/5) and (181/20, 81/10).
Figure 1.7: Self-portrait after the translation f .
Now we can apply the Möbius transformation f(z) = 1/z to the new self-portrait. If we take z = x + iy we can write the transformation like this:
\[ f(z) = \frac{1}{z} = \frac{1}{x + iy} = \frac{x - iy}{(x + iy)(x - iy)} = \frac{x - iy}{x^2 + y^2} \]
So, we can apply the following transformation to all the new vertices of the
self-portrait:
\[ f(x, y) = \left( \frac{x}{x^2 + y^2},\; -\frac{y}{x^2 + y^2} \right) \]
The result of drawing the new self-portrait after the Möbius transformation
f(z) = 1/z is shown in Figure 1.8.

Figure 1.8: Self-portrait after the Möbius transformation f(z) = 1/z.
In Figure 1.9 we have plotted the original self-portrait and the self-portrait
after the Möbius transformation all in one. Notice that the self-portrait after
the transformation is really small compared with the original. As in Figure
1.5, points are mapped close to the point (0, 0).

Figure 1.9: Original self-portrait and the self-portrait after the Möbius transformation (really small, close to (0, 0)).
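The same computation can be checked numerically. A short Python sketch (representing each vertex as a complex number is an assumption of this illustration, not the thesis's notation):

```python
# Applying the Möbius transformation f(z) = 1/z to the translated vertices.
def mobius_inverse(z: complex) -> complex:
    """f(z) = 1/z, written out as (x - iy)/(x^2 + y^2) in the text."""
    return 1 / z

translated_triangle = [3 + 3j, 13 + 3j, 8 + 12j]
print([mobius_inverse(z) for z in translated_triangle])
# All images lie close to 0, which is why Figure 1.9 shows a tiny self-portrait.
```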
Chapter 2
The metric space of fractals (H(X), dH)
In this chapter we introduce metric spaces, with a focus on those properties that
will be used later for the space of fractals (H(X), dH). To know more about
metric spaces, see [6].
2.1 Metric spaces and their properties
In mathematics, a metric space is a set where a notion of distance (called a
metric) between elements of the set is defined. See [6].
2.1.1 Metric spaces
Definition 2.1.1. A metric space (X, d) consists of a space X together with a
metric or distance function d : X × X → R that measures the distance d(x, y)
between pairs of points x, y ∈ X and has the following properties:
(i) d(x, y) = d(y, x) ∀ x, y ∈ X
(ii) 0 < d(x, y) < +∞ ∀ x, y ∈ X, x 6= y
(iii) d(x, x) = 0 ∀ x ∈ X
(iv) d(x, y) ≤ d(x, z) + d(z, y) ∀ x, y, z ∈ X (obeys the triangle inequality)
Example 2.1.1. One example of a metric space is (R^2, dEuclidean), where
\[ d_{Euclidean}(x, y) := \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2} \]
for all x, y ∈ R^2.
Metric spaces of diverse types play a fundamental role in fractal geometry.
They include familiar spaces like R, C, code spaces (see section 2.3) and many
other examples.
We denote by
f : (X, dX) → (Y, dY)
a transformation between two metric spaces (X, dX) and (Y, dY).
Definition 2.1.2. Two metrics d and d̃ are equivalent if and only if there exists
a finite positive constant C such that
\[ \frac{1}{C}\, d(x, y) \leq \tilde{d}(x, y) \leq C\, d(x, y) \quad \text{for all } x, y \in X. \]
Definition 2.1.3. Two metric spaces (X, dX) and (Y, dY) are equivalent if there
is a function f : (X, dX) → (Y, dY) (called a metric transformation) which is
injective and surjective (i.e. it is invertible), and the metric dX is equivalent to
the metric d̃ given by
d̃(x, y) = dY(f(x), f(y)) for all x, y ∈ X.
Every metric space is a topological space in a natural manner, and therefore
all definitions and theorems about general topological spaces also apply to metric
spaces.
Definition 2.1.4. Let S ⊂ X be a subset of a metric space (X, d). S is open if
for each x ∈ S there is an ε > 0 such that B(x, ε) = {y ∈ X : d(x, y) < ε} ⊂ S.
B(x, ε) is called the open ball of radius ε centred at x.
Definition 2.1.5. The complement of an open set is called closed. A closed
set can be defined as a set which contains all its accumulation points.
Definition 2.1.6. If (X, d) is a metric space and x ∈ X, a neighbourhood of x
is a set V , which contains an open set S containing x.
Definition 2.1.7. Let X be a topological space. Then X is said to be connected
iff the only two subsets of X that are both open and closed are X and ∅.
A subset S ⊂ X is said to be connected iff the space S with the relative topology
is connected. S is said to be disconnected iff it is not connected.
Definition 2.1.8. Let X be a topological space. Let S ⊂ X. Then S is said
to be pathwise connected iff whenever x, y ∈ S there is a continuous function
f : [0, 1] ⊂ R → S such that x, y ∈ f([0, 1]).
2.1.2 Cauchy sequences, limits and complete metric spaces
In this section we define Cauchy sequences, limits, completeness and continuity. These important concepts are related to the construction and existence of
various types of fractals.
Definition 2.1.9. Let (X, d) be a metric space. Then a sequence of points
{x_n}_{n=1}^∞ ⊂ X is said to be a Cauchy sequence iff given any ε > 0 there is a
positive integer N > 0 such that
d(x_n, x_m) < ε whenever n, m > N.
In other words, we can find points as near to each other as we want by going far enough along
the sequence. However, just because a sequence of points moves closer together
as one goes along the sequence, we must not infer that the points are approaching a
limit point.
Definition 2.1.10. A point x ∈ X is said to be an accumulation point of a set
S ⊂ X if every neighbourhood of x contains infinitely many points of S.
Definition 2.1.11. A sequence of points {x_n}_{n=1}^∞ in a metric space (X, d) is
said to converge to a point x ∈ X iff given any ε > 0 there is a positive integer
N > 0 such that
d(x_n, x) < ε whenever n > N.
In this case x is called the limit of {x_n}_{n=1}^∞, and we write
\[ \lim_{n\to\infty} x_n = x. \]
Theorem 2.1.1. If a sequence of points {x_n}_{n=1}^∞ in a metric space (X, d) converges to a point x ∈ X, then {x_n}_{n=1}^∞ is a Cauchy sequence.
The converse of this theorem is not true. For example, {x_n = 1/n : n =
1, 2, . . .} is a Cauchy sequence in the metric space ((0, 1), dEuclidean) but it has
no limit in the space. So we make the following definition:
Definition 2.1.12. A metric space (X, d) is said to be complete iff whenever
{x_n}_{n=1}^∞ is a Cauchy sequence it converges to a point x ∈ X.
In other words, there actually exists, in the space, a point x to which the
Cauchy sequence is converging. This point x is of course the limit of the sequence.
Example 2.1.2. The sequence {x_n = 1/n : n = 1, 2, . . .} converges to 0 in the
metric space [0, 1]. We say that 0 is an accumulation point.
Example 2.1.3. The spaces (R^n, dEuclidean) for n = 1, 2, 3, . . . are complete,
but the spaces ((0, 1), dEuclidean) and (B := {(x, y) ∈ R^2 : x^2 + y^2 < 1}, dEuclidean)
are not complete.
Definition 2.1.13. Let (X, d) and (Y, d) be metric spaces. Then the function
f : (X, d) → (Y, d)
is said to be continuous at a point x iff, given any ε > 0, there is a δ > 0 such
that
d(f(x), f(y)) < ε whenever d(x, y) < δ with x, y ∈ X.
We say that f : X → Y is continuous iff it is continuous at every point x ∈ X.
2.1.3 Compact spaces
Many fractal objects that we will present are constructed by a sequence of compact
sets. So we need to define compactness and provide ways of knowing when a
set is compact.
Definition 2.1.14. Let S ⊂ X be a subset of a metric space (X, d). S is
compact if every infinite sequence {x_n}_{n=1}^∞ in S contains a subsequence having
a limit in S.
An equivalent definition of compactness is given here.
Definition 2.1.15. Let S ⊂ X be a subset of a metric space (X, d). S is
compact iff for any family {U_i}_{i∈I} of open sets of X such that S ⊆ ∪_{i∈I} U_i,
there is a finite subfamily U_{i_1}, . . . , U_{i_n} that covers S.
Definition 2.1.16. Let S ⊂ X be a subset of a metric space (X, d). S is
bounded if there is a point a ∈ X and a number R > 0 such that
d(a, x) < R ∀ x ∈ S.
Theorem 2.1.2. Let X be a subspace of Rn with the natural topology. Then
the following three properties are equivalent:
(i) X is compact.
(ii) X is closed and bounded.
(iii) Each infinite subset of X has at least one accumulation point in X .
Definition 2.1.17. A metric space (X, d) is said to be totally bounded iff,
for any given ε > 0, there is a finite set of points {x_1, x_2, . . . , x_L} such that
X = ∪{B(x_l, ε) : l = 1, 2, . . . , L},
where B(x_l, ε) is the open ball of radius ε centred at x_l.
Theorem 2.1.3. Let (X, d) be a complete metric space. Then X is compact iff
it is totally bounded.
2.1.4 Contraction mappings
We begin by defining a contraction mapping, also called a contractive transformation.
Definition 2.1.18. A transformation f : X → X on a metric space (X, d) is
called contractive or a contraction mapping if there is a constant 0 ≤ l < 1 such
that
d(f (x), f (y)) ≤ ld(x, y) ∀ x, y ∈ X
Such a number l is called a contractivity factor (or ratio) for f.
Example 2.1.4. A similarity with ratio r < 1 is a contractive function.
The following theorem will be used to construct fractal sets.
Theorem 2.1.4 (Contraction mapping theorem). Let X be a complete metric
space. Let f : X → X be a contraction mapping with contraction factor l. Then
f has a unique fixed point a ∈ X. Moreover, if x0 is any point in X and we have
xn = f (xn−1 ) for n = 1, 2, 3, . . . then
d(x0 , a) ≤
d(x0 , x1 )
1−l
and
lim xn = a.
n→∞
Proof
The proof of this theorem starts by showing that {x_n}_{n=0}^∞ is a Cauchy sequence.
Let a ∈ X be the limit of this sequence. By the continuity of f (Lemma 2.1.1 below), a = f(a).
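A quick numerical illustration of Theorem 2.1.4 (our own example, not from the thesis): f(x) = cos(x)/2 is a contraction on the complete metric space [0, 1] with contractivity factor l = 1/2, so iterating it from any x0 converges to the unique fixed point, and d(x0, a) ≤ d(x0, x1)/(1 − l).

```python
# Iterating a contraction mapping converges to its unique fixed point.
import math

f = lambda x: math.cos(x) / 2   # |f'(x)| = |sin(x)|/2 <= 1/2 on [0, 1]
l = 0.5

x = 0.0
for _ in range(30):
    x = f(x)
print("fixed point a ≈", x)                            # a = f(a) ≈ 0.45
print("a priori bound:", abs(0.0 - f(0.0)) / (1 - l))  # d(x0, a) <= 1.0
```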
Lemma 2.1.1. Let f : X → X be a contraction mapping on the metric space
(X, d). Then f is continuous.
Lemma 2.1.2. Let (X, d) be a complete metric space. Let f : X → X be a
contraction mapping with contractivity factor 0 ≤ l < 1, and let the fixed point
of f be a ∈ X. Then
\[ d(x, a) \leq \frac{d(x, f(x))}{1 - l} \quad \text{for all } x \in X. \]
Lemma 2.1.3. Let (P, dP) be a metric space and (X, d) be a complete metric
space. Let f : P × X → X be a family of contraction mappings on X with
contractivity factor 0 ≤ l < 1. That is, for each p ∈ P, f(p, ·) is a contraction
mapping on X. For each fixed x ∈ X let f be continuous on P. Then the fixed
point of f depends continuously on p. That is, a : P → X is continuous.
2.2 The metric space of fractals
Let (X, d) be a metric space such as R^2 or C. We will describe the space
(H(X), dH) of fractals on the space X. H(X) is the space of nonempty compact
subsets of X.
Definition 2.2.1. Let (X, d) be a complete metric space. Then H(X) denotes
the space whose points are compact subsets of X, other than the empty set.
Definition 2.2.2. Let (X, d) be a complete metric space and H(X) denote the
space of nonempty compact subsets of X. Then the distance from a point x ∈ X
to B ∈ H(X) is defined by
DB (x) := min{d(x, b) : b ∈ B}
We refer to DB (x) as the shortest-distance function from x to the set B.
Now we are going to define the distance from one set to another, that is the
distance in H(X).
Definition 2.2.3. Let (X, d) be a metric space and H(X) the space of nonempty
compact subsets of X. The distance from A ∈ H(X) to B ∈ H(X) is defined by
DB (A) := max{DB (a) : a ∈ A}
for all A, B ∈ H(X).
Finally we can define the Hausdorff metric.
Theorem 2.2.1. Let (X, d) be a metric space and H(X) denote the nonempty
compact subsets of X. Let
dH(X)(A, B) := max{DB(A), DA(B)} for all A, B ∈ H(X).
Then (H(X), dH(X)) is a metric space.
Definition 2.2.4. The metric dH = dH(X) is called the Hausdorff metric. The
quantity dH (A, B) is called the Hausdorff distance between the points A, B ∈
H(X).
Figure 2.1: The Hausdorff distance between A and B is 1.
Example 2.2.1. In the following example we compute the Hausdorff distance
between A, B ∈ H(X), illustrated in Figure 2.1. A is the closed unit disc and B the
closed disc of radius 2, both centered at (0, 0).
The distance from A to B is DB(A) := max{DB(a) : a ∈ A} = 0, since A ⊂ B. The distance
from B to A is DA(B) := max{DA(b) : b ∈ B} = 1.
So, the Hausdorff distance between A and B is the maximum of the distances
above
dH(X) (A, B) := max{DB (A), DA (B)} = max{0, 1} = 1
Example 2.2.2. In this other example, the Hausdorff distance between the two
rectangles A and B (see Figure 2.2) is
dH(X)(A, B) := max{DB(A), DA(B)} = max{5, 10} = 10.
Figure 2.2: The Hausdorff distance between A and B is 10.
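The definitions DB(A) and dH(A, B) translate directly into code. A Python sketch for finite point sets (discretizing the two discs of Example 2.2.1 on a grid; the grid and step size are arbitrary choices of this illustration):

```python
# Hausdorff distance between finite samples of two sets.
import math

def D(A, B):
    """Distance from set A to set B: max over a of min over b of d(a, b)."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """d_H(A, B) = max{D(A, B), D(B, A)}."""
    return max(D(A, B), D(B, A))

# Grid samples of the two closed discs of Example 2.2.1 (radii 1 and 2):
step = 0.1
grid = [(i * step, j * step) for i in range(-21, 22) for j in range(-21, 22)]
A = [p for p in grid if math.hypot(*p) <= 1]
B = [p for p in grid if math.hypot(*p) <= 2]
print(round(D(A, B), 2), round(D(B, A), 2))  # ≈ 0 and 1, as in the text
print(round(hausdorff(A, B), 2))             # ≈ 1
```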
2.2.1 The completeness of the space of fractals
Our principal goal is to establish that the space of fractals (H(X), dH(X) ) is a
complete metric space.
Theorem 2.2.2 (Extension lemma). Let (X, d) be a complete metric space and
let {A_n ∈ H(X)}_{n=1}^∞ be a Cauchy sequence in (H(X), dH). Consider a Cauchy
sequence {x_{n_j} ∈ A_{n_j}}_{j=1}^∞ in (X, d), where {n_j}_{j=1}^∞ is an increasing sequence of
positive integers. Then there exists a Cauchy sequence {x_n ∈ A_n}_{n=1}^∞ in (X, d)
for which {x_{n_j} ∈ A_{n_j}}_{j=1}^∞ is a subsequence.
The following result provides a general condition under which (H(X), dH ) is
complete and a characterization of the limits of Cauchy sequences in H(X).
Figure 2.3: A Cauchy sequence of compact sets An in the space H(R2 ) converging to a fern set.
Theorem 2.2.3 (The completeness of the space of fractals). Let (X, d) be a
complete metric space. Then (H(X), dH) is a complete metric space. Moreover,
if {A_n ∈ H(X)}_{n=1}^∞ is a Cauchy sequence then
\[ A := \lim_{n\to\infty} A_n \]
can be characterized as
A = {x ∈ X : there is a Cauchy sequence {x_n ∈ A_n}_{n=1}^∞ that converges to x}.
One of the properties of the space (H(X), dH) is that it is pathwise connected.
This is used in the applications to computer graphics, to find the attractors.
2.2.2 Contraction mappings on the space of fractals
Let (X, d) be a metric space and let (H(X), dH ) denote the corresponding space
of nonempty compact subsets, with the Hausdorff metric dH .
The following lemma tells us how to construct a contraction mapping on (H(X), dH ),
from a contraction mapping on the metric space (X, d).
Lemma 2.2.1. Let f : X → X be a contraction mapping on the metric space
(X, d) with contractivity factor l. Then f : H(X) → H(X) defined by
f(B) = {f(x) : x ∈ B} ∀ B ∈ H(X)
is a contraction mapping on (H(X), dH) with contractivity factor l.
We can also combine mappings on (H(X), dH) to produce new contraction
mappings on (H(X), dH). The following lemma provides a method for doing so.
Lemma 2.2.2. Let (X, d) be a metric space. Let {f_n : n = 1, 2, . . . , N} be contraction mappings on (H(X), dH). Let the contractivity factor for f_n be denoted
by l_n for each n. Define F : H(X) → H(X) by
\[ F(B) = f_1(B) \cup f_2(B) \cup \cdots \cup f_N(B) = \bigcup_{n=1}^{N} f_n(B), \quad \text{for each } B \in H(X). \]
Then F is a contraction mapping with contractivity factor l = max{l_n : n =
1, 2, . . . , N}.
Lemma 2.2.3. Let (X, d) be a metric space and suppose we have continuous
transformations f_n : X → X, for n = 1, 2, . . . , N, depending continuously on
a parameter p ∈ P, where (P, dP) is a compact metric space. That is, f_n(p, x)
depends continuously on p for fixed x ∈ X. Then the transformation F : H(X) →
H(X) defined by
\[ F(p, B) = \bigcup_{n=1}^{N} f_n(p, B) \quad \forall B \in H(X) \]
is also continuous in p. That is, F(p, B) is continuous in p for each B ∈ H(X),
in the metric space (H(X), dH).
2.3 Addresses and code spaces
In this section we describe how the points of a space may be organized by
addresses. Addresses are elements of certain types of spaces called code spaces.
When a space consists of many points, as in the cases of R and R2 , it is often
convenient to have addresses for the points in the space. An address of a point
is a way to identify the point.
Example 2.3.1. For example, the address of a point x ∈ R may be its decimal expansion. Points in R2 may be addressed by ordered pairs of decimal
expansions.
We shall introduce some useful spaces of addresses, namely code spaces.
These spaces will be needed later, in section 4.2.1, to represent sets of points on fractals.
An address is made from an alphabet of symbols. An alphabet A consists of
a nonempty finite set of symbols such as {1, 2, . . . , N} or {0, 1, . . . , N}, where each
symbol is distinct. The number of symbols in the alphabet is |A|.
Let Ω0A denote the set of all finite strings made of symbols from the alphabet A. The set Ω0A includes the empty string ∅. That is, Ω0A consists of all
expressions of the form
σ = σ1σ2 · · · σK
where σn ∈ A for all 1 ≤ n ≤ K.
Examples of points in Ω0[1,2,3] are 1111111, 123, 123113 or 2.
A more interesting space for us, which we denote by ΩA, consists of all
infinite strings of symbols from the alphabet A. That is, σ ∈ ΩA if and only if
it can be written
σ = σ1σ2 · · · σn · · ·
where σn ∈ A for all n ∈ {1, 2, . . .}.
An example of a point in Ω[1,2] is σ = 121121121111 · · · . An example of a point
in Ω[1,2,3] is σ = 2̄ = 22222222222222 · · · .
Definition 2.3.1. Let ϕ : Ω → X be a function from Ω = Ω0A ∪ ΩA onto a
space X. Then ϕ is called an address function for X, and points in Ω are called
addresses. Ω is called a code space. Any point σ ∈ Ω such that ϕ(σ) = x is
called an address of x ∈ X. The set of all addresses of x ∈ X is ϕ−1 ({x}).
Example 2.3.2. The Cantor set can be identified with the code space Ω[0,1].
Figure 2.4: Addresses of points in the Cantor set.
2.3.1 Metrics of code space
We give two examples of metrics for any code space Ω = Ω0A ∪ ΩA.
A simple metric on ΩA is defined by dΩ(σ, σ) = 0 for all σ ∈ ΩA, and
\[ d_\Omega(\sigma, \omega) := \frac{1}{2^m} \quad \text{if } \sigma \neq \omega, \]
for σ = σ1σ2σ3 · · · and ω = ω1ω2ω3 · · · ∈ ΩA, where m is the smallest positive
integer such that σm ≠ ωm.
We can extend dΩ to Ω0A ∪ ΩA by adding a symbol, which we will call Z, to
the alphabet A to make a new alphabet Ã = A ∪ {Z}. Then we embed Ω0A ∪ ΩA
in ΩÃ via the function ε : Ω0A ∪ ΩA → ΩÃ defined by
ε(σ) = σZZZZZZZ · · · = σZ̄ if σ ∈ Ω0A
ε(σ) = σ if σ ∈ ΩA
and we define
dΩ(σ, ω) = dΩ(ε(σ), ε(ω)) for all σ, ω ∈ Ω0A ∪ ΩA.
There is another metric that we can define on Ω0A ∪ ΩA. It depends on the
number of elements |A| of the alphabet A, so we denote it by d|A|. Assume that
A = {0, 1, . . . , N − 1}, so that the number of elements of the alphabet is |A| = N.
This metric is defined on ΩA by
\[ d_{|A|}(\sigma, \omega) = \sum_{n=1}^{\infty} \frac{|\sigma_n - \omega_n|}{(|A| + 1)^n} \quad \text{for all } \sigma, \omega \in \Omega_A. \]
Finally we extend d|A| to the space Ω0A ∪ ΩA using the same construction as
above and defining ξ : Ω0A → [0, 1] such that
ξ(σ1σ2 · · · σm) = 0.σ1σ2 · · · σmZ̄
that is,
\[ \xi(\sigma) = \sum_{n=1}^{m} \frac{\sigma_n}{(N + 1)^n} + \frac{1}{(N + 1)^m} \]
for all σ = σ1σ2 · · · σm ∈ Ω0A.
We define
d|A| (σ, ω) = |ξ(σ) − ξ(ω)| = dEuclidean (ξ(σ), ξ(ω)) for all σ, ω ∈ Ω0A ∪ ΩA .
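A short Python sketch of the first metric dΩ (our own illustration; finite strings are padded with the extra symbol Z exactly as in the embedding ε above):

```python
# d_Omega(sigma, omega) = 1/2^m, with m the first position where they differ.
from itertools import zip_longest

def d_omega(sigma: str, omega: str, pad: str = "Z") -> float:
    """Metric on code space; finite strings are padded with Z as in the text."""
    for m, (s, w) in enumerate(zip_longest(sigma, omega, fillvalue=pad), start=1):
        if s != w:
            return 1 / 2 ** m
    return 0.0

print(d_omega("121121", "121121"))  # 0.0
print(d_omega("121121", "122121"))  # differ at position 3 -> 1/8
print(d_omega("12", "121"))         # Z-padding makes them differ at 3 -> 1/8
```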
Theorem 2.3.1. Both (Ω0A ∪ ΩA , dΩ ) and (Ω0A ∪ ΩA , d|A| ) are metric spaces.
The code space has the properties of a metric space.
Theorem 2.3.2. The metric spaces (ΩA ∪ Ω0A , dΩ ) and (Ω0A ∪ ΩA , d|A| ) are
complete.
Theorem 2.3.3. The code space Ω = ΩA ∪ Ω0A is compact.
Chapter 3
What is a fractal?
In this chapter we give a definition of a fractal, and introduce one of its properties:
self-similarity. Finally we present some examples of fractal objects.
Once we have defined the space of fractals (H(X), dH), we can define a
fractal.
Definition 3.0.2. Let (X, d) be a metric space. We say that a fractal is an
element of (H(X), dH). In particular, it is a fixed point of a contractive function
on (H(X), dH).
A fractal is a geometric object that is repeated at ever smaller scales to produce irregular shapes that cannot be represented by classical geometry. We say
that such objects are self-similar.
An object is said to be self-similar if it looks "roughly" the same on any scale.
The Sierpinski triangle in Figure 3.1 is an example of a self-similar fractal. If
we zoom in on the red triangle we see that it is similar to the whole triangle. This occurs
at all scales.
Figure 3.1: The Sierpinski triangle is self-similar.
In chapter 4 we will see that a fractal is invariant under certain transformations of X.
In the following subsections we are going to introduce specific examples of
fractals.
3.1 The Cantor Set
The Cantor set is generated by beginning with a segment (usually of length 1)
and removing the open middle third of this segment. The process of removing
the open middle third of each remaining segment is repeated for each of the new
segments.
Figure 3.2 shows the first five stages in this generation.
Figure 3.2: First 4 stages in Cantor set generation.
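The middle-third removal is easy to express in code. A minimal Python sketch (representing a stage as a list of closed intervals is an assumption of this illustration):

```python
# One generation step of the Cantor set construction.
def cantor_stage(intervals):
    """Remove the open middle third of every interval (a, b)."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))    # left closed third
        out.append((b - third, b))    # right closed third
    return out

stage = [(0.0, 1.0)]
for _ in range(4):
    stage = cantor_stage(stage)
print(stage[:2], "... total", len(stage), "intervals")  # 2^4 = 16 intervals
```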
3.2 Koch curve
The Koch curve is another well-known fractal. To construct it, begin with
a straight line. Divide it into three equal segments and replace the middle
segment by the two sides of an equilateral triangle of the same length as the
segment being removed. Now repeat the same construction for each of the four
new segments. Continue these iterations.
Figure 3.3: Stage 0, 1, 2 and 9 of the Koch curve.
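One Koch step replaces each segment by four segments of one third the length. A Python sketch using complex arithmetic (no plotting; representing the curve as a list of complex vertices is our own choice for this illustration):

```python
# One stage of the Koch curve construction.
import cmath, math

def koch_step(points):
    """points: vertices of a polyline, as complex numbers."""
    out = [points[0]]
    rot60 = cmath.exp(1j * math.pi / 3)   # 60-degree rotation
    for p, q in zip(points, points[1:]):
        a = p + (q - p) / 3               # one third along the segment
        b = p + 2 * (q - p) / 3           # two thirds along
        peak = a + (b - a) * rot60        # apex of the equilateral bump
        out += [a, peak, b, q]
    return out

curve = [0 + 0j, 1 + 0j]                  # stage 0: a straight line
for _ in range(2):
    curve = koch_step(curve)
print(len(curve) - 1, "segments at stage 2")  # 4^2 = 16
```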
3.3 Sierpinski triangle
Without a doubt, the Sierpinski triangle is at the same time one of the most
interesting fractals and one of the simplest to construct.
Figure 3.4: Sierpinski triangle
One simple way of generating the Sierpinski triangle in Figure 3.4 is to
begin with a triangle. Connect the midpoints of each side to form four separate
triangles, and cut out the triangle in the center. For each of the three remaining
triangles, perform this same act. Iterate infinitely. The first iterations of the
Sierpinski triangle are presented in Figure 3.5.
Figure 3.5: Stages 0, 1 and 2 of the Sierpinski triangle.
3.4 Other examples
In this subsection we show other examples of fractals.
3.4.1 Self-portrait fractal
Here we have a fractal constructed by repeatedly applying the affine transformations seen in section 1.2.
Figure 3.6: Stages 1, 2 and 3 of the self-portrait fractal.
3.4.2 Sierpinski carpet and Sierpinski pentagon
The Sierpinski carpet is a generalization of the Cantor set to two dimensions.
The construction of the Sierpinski carpet begins with a square. The square is
cut into 9 congruent subsquares in a 3-by-3 grid, and the central subsquare is
removed. The same procedure is then applied recursively to the remaining 8
subsquares, ad infinitum.
Figure 3.7: First iterations of the Sierpinski Carpet.
The Sierpinski pentagon, a fractal with 5-fold symmetry, is formed starting
with a pentagon and using rules similar to those for the Sierpinski triangle, but
with pentagons.
Figure 3.8: Sierpinski pentagon
3.4.3 Peano curve
The Peano curve is created by iterations of a curve. The limit of the Peano
curve is a space-filling curve, whose range contains the entire 2-dimensional
unit square. In Figure 3.9 we can see the first 3 iterations of the curve. We will
explore this curve further in section 5.2.
Figure 3.9: 3 iterations of the Peano curve construction.
Chapter 4
Iterated Function Systems
A fractal set generally contains infinitely many points whose organization is so
complicated that it is not possible to describe the set by specifying directly where
each point in it lies. Instead, the set may be defined by the ’relations between
the pieces’. [Barnsley]
Iterated function systems provide a convenient framework for the description, classification and expression of fractals. Two algorithms, the Random Iteration Algorithm
and the Deterministic Algorithm, for computing pictures of fractals, are presented.
Finally, the Collage theorem characterises an iterated function system whose attractor is close to a given set. All the results here can be found in [1] and [2].
4.1 IFS
So far the examples of fractals we have seen are all strictly self-similar, that is,
each can be tiled with congruent tiles where the tiles can be mapped onto the
original using similarities with the same scaling-factor; or inversely, the original
object can be mapped onto the individual tiles using similarities with a common
scaling factor.
In general, modelling such complicated objects requires involved algorithms,
but one can develop quite simple ones by studying the relations between
the parts of a fractal, which allow us to use relatively small sets of affine transformations.
The set of Sierpinski transformations is an example of an iterated function system (IFS) consisting of three similarities of ratio r = 1/2. Since r < 1,
the transformations are contractive, that is, the transformations decrease the
distance between points, making image points closer together than their corresponding pre-images. When the three transformations are iterated as a system
they form the Sierpinski triangle.
In general, an iterated function system consists of affine transformations, allowing direction-specific scaling factors as well as changes in angles. We formalize these ideas in the following definitions.
Definition 4.1.1. An iterated function system consists of a complete metric
space (X, d) together with a finite set of contraction mappings (see section 2.1.4)
fn : X → X for n = 1, 2, . . . , N, where N ≥ 1. The abbreviation "IFS" is used for
”iterated function system”. It may be denoted by
{X; f1 , f2 , . . . , fN } or {X; fn , n = 1, 2, . . . , N }.
Moreover, if {f1, f2, . . . , fN} is a finite sequence of strictly contractive transformations, fn : X → X, for n = 1, 2, . . . , N, then {X; f1, f2, . . . , fN} is called a
strictly contractive IFS or a hyperbolic IFS.
We say that a transformation fn : X → X is strictly contractive if and only
if there exists a number ln ∈ [0, 1) such that
d(fn (x), fn (y)) ≤ ln d(x, y)
for all x, y ∈ X. The number ln is called a contractivity factor for fn and the
number
l = max{l1, l2, . . . , lN}
is called a contractivity factor for the IFS.
We use such terminology as 'the IFS {X; f1, f2, . . . , fN}' and 'let F denote
an IFS'.
The following theorem is the cornerstone of the theory of fractals. The
theorem gives us the algorithm to create a fractal using contractive affine transformations.
Theorem 4.1.1. Let {X; fn, n = 1, 2, . . . , N} be a hyperbolic iterated function
system with contractivity factor l. Then the transformation F : H(X) → H(X)
defined by
\[ F(B) = f_1(B) \cup f_2(B) \cup \ldots \cup f_N(B) = \bigcup_{n=1}^{N} f_n(B) \]
for all B ∈ H(X), is a contraction mapping on the complete metric space
(H(X), dH) with contractivity factor l. That is,
\[ d_H(F(B), F(C)) \leq l \cdot d_H(B, C) \]
for all B, C ∈ H(X).
Its unique fixed point A ∈ H(X) obeys the self-referential equation
\[ A = f_1(A) \cup f_2(A) \cup \ldots \cup f_N(A) = \bigcup_{n=1}^{N} f_n(A) \]
and is given by A = lim_{n→∞} F^n(B) for any B ∈ H(X).
Definition 4.1.2. The fixed point A ∈ H(X) described in the theorem is called
the attractor of the IFS.
The following theorem establishes the continuous dependence of the attractor
of a hyperbolic IFS on the parameters in the maps of the IFS.
Theorem 4.1.2. Let (X, d) be a metric space. Let {X; fn, n = 1, 2, . . . , N}
be a hyperbolic iterated function system with contractivity factor l. For n =
1, 2, . . . , N , let fn depend continuously on a parameter p ∈ P , where P is a
compact metric space. Then the attractor A(p) ∈ H(X) depends continuously on
p ∈ P , with respect to the Hausdorff metric dH .
Theorem 4.1.2 says that small changes in the parameters will lead to small
changes in the attractor. This is very important, because it means we can continuously
control the attractor of an IFS by varying parameters in the transformations.
We will use this in the applications to computer graphics (the Collage theorem and
fractal interpolation), to find the attractors that we want.
Example 4.1.1. The set of Sierpinski transformations is an example of an
iterated function system (IFS) consisting of three similarities of ratio r = 1/2. In
this case the iterated function system consists of the complete metric space R^2
together with a finite set of contraction mappings fn : R^2 → R^2 for n = 1, 2, 3.
Here we have the three contraction mappings that generate the fractal:
f1(x, y) = (x/2, y/2)
f2(x, y) = (x/2, y/2 + 1)
f3(x, y) = (x/2 + 1, y/2 + 1)
In this case, the contractivity factor for the IFS is
l = max{l1, l2, l3} = max{1/2, 1/2, 1/2} = 1/2
The attractor of this IFS is the Sierpinski triangle in Figure 3.1.
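Theorem 4.1.1 suggests a direct way to approximate this attractor: start from any set B and apply F repeatedly. A minimal Python sketch (working with a finite starting set is a simplification of this illustration; the full Deterministic Algorithm of section 4.3.1 operates on pixel arrays instead):

```python
# Iterating the Hutchinson operator F(B) = f1(B) ∪ f2(B) ∪ f3(B).
def F(B, maps):
    """One application of F to a finite set B of points."""
    return {f(p) for f in maps for p in B}

maps = [
    lambda p: (p[0] / 2, p[1] / 2),           # f1 of Example 4.1.1
    lambda p: (p[0] / 2, p[1] / 2 + 1),       # f2
    lambda p: (p[0] / 2 + 1, p[1] / 2 + 1),   # f3
]

B = {(0.0, 0.0)}          # any starting set B in H(R^2) works
for _ in range(8):
    B = F(B, maps)
print(len(B), "points approximating the Sierpinski triangle")  # at most 3^8
```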
Example 4.1.2. The iterated function system for the self-portrait fractal (Figure 3.6) consists of the following three transformations:
\[ f_1\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1/3 & 0 & 10/3 \\ 0 & 1/6 & 5/3 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
\[ f_2\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1/10 & 0 & 7/2 \\ 0 & -1/30 & 11/2 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
\[ f_3\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 1/10 & 0 & 11/2 \\ 0 & -1/30 & 11/2 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
4.2 IFS codes
Here we describe the notation used to implement IFS, called IFS codes.
For simplicity we restrict attention to hyperbolic IFS of the form {R^2; fn : n =
1, 2, . . . , N}, where each mapping is an affine transformation.
As we have seen in chapter 1, each affine transformation is given by a matrix.
We are going to illustrate the IFS described in Example 4.1.1, whose attractor is a Sierpinski triangle, in matrix form:
\[ f_1\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 0.5 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
\[ f_2\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 0.5 & 0 & 0 \\ 0 & 0.5 & 1 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
\[ f_3\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} 0.5 & 0 & 1 \\ 0 & 0.5 & 1 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]
Table 4.1 is another way of representing the same iterated function system
presented in Example 4.1.1.

n    a     b    c    d     e    f    p
1   1/2    0    0   1/2    0    0   1/3
2   1/2    0    0   1/2    0    1   1/3
3   1/2    0    0   1/2    1    1   1/3

Table 4.1: IFS code for a Sierpinski triangle
So, in general we can represent each transformation of the IFS in the matrix form

\[ f_n\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} a_n & b_n & e_n \\ c_n & d_n & f_n \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \quad \text{for } n = 1, 2, \ldots, N. \]
A tidier way of representing a general iterated function system is given in table
4.2.
n    a    b    c    d    e    f    p
1    a1   b1   c1   d1   e1   f1   p1
...  ...  ...  ...  ...  ...  ...  ...
N    aN   bN   cN   dN   eN   fN   pN

Table 4.2: General IFS code
Table 4.2 also provides a number pn associated with each fn. These numbers are the probabilities of using the function fn. In the general case of the IFS {X; fn, n = 1, 2, . . . , N} there are N such numbers {pn : n = 1, 2, . . . , N} which obey
p1 + p2 + . . . + pN = 1 and pn > 0 for n = 1, 2, . . . , N.
These probabilities play an important role in the computation of images of the
attractor of an IFS using the Random Iteration Algorithm (Section 4.3.2). They
play no role in the Deterministic Algorithm.
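To make the role of the pn concrete, the following Maple sketch shows one way of choosing a map index n according to given probabilities, as the Random Iteration Algorithm of Section 4.3.2 does. The list p is an assumption standing in for the probabilities of any IFS code (here those of Table 4.1).

p := [1/3, 1/3, 1/3]:                  # assumed probabilities, summing to 1
u := evalf(rand()/10^12):              # uniform random number in [0,1)
n := 1:                                # find the first n with u < p[1]+...+p[n]
while u >= evalf(add(p[k], k = 1..n)) do n := n + 1; end do:
n;                                     # index of the chosen map fn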
Other IFS codes are given in Tables 4.3 and 4.4.
n    a    b    c    d    e     f     p
1   1/2   0    0   1/2   0     0    1/3
2   1/2   0    0   1/2  1/2    0    1/3
3   1/2   0    0   1/2  1/4  √3/4   1/3

Table 4.3: Another IFS code for a Sierpinski triangle
n     a      b      c      d     e     f     p
1     0      0      0     0.16   0     0    0.01
2    0.85   0.04  -0.04   0.85   0    1.6   0.85
3    0.2   -0.26   0.23   0.22   0    1.6   0.07
4   -0.15   0.28   0.26   0.24   0    0.44  0.07

Table 4.4: IFS code for a Fern
4.2.1 The addresses of points on fractals
We begin by considering the concept of the addresses of points on the attractor of a hyperbolic IFS. Consider the IFS of Table 4.1, whose attractor A is a Sierpinski triangle with vertices at (0, 0), (0, 1) and (1, 1). We can address points on A according to the sequences of transformations which lead to them, as we can see in Figure 4.1 for the first two steps of the Sierpinski triangle transformation.
Figure 4.1: Addresses of points for the first two steps of the Sierpinski triangle transformation.
There are points in A which have two addresses. One example is the point
that lies in the set f1 (A) ∩ f3 (A). The address of this point can be 311111 . . .
or 1333333 . . ., as illustrated in Figure 4.2.
On the other hand, some points on the Sierpinski triangle have only one
address, such as the three vertices. The proportion of points with multiple
addresses is ’small’. In such cases we say that the IFS is just-touching.
If there is a unique address to every point of A we say that the IFS is totally
disconnected. When it appears that the proportion of points with multiple
addresses is large, the IFS is overlapping.
Continuous transformations from code space to fractals
Definition 4.2.1. Let {X; f1, f2, . . . , fN} be a hyperbolic IFS. The code space associated with the IFS, (Ω, d|A|), is defined to be the code space on N symbols {1, 2, . . . , N}, with the metric d|A| described in 2.3.1.

Figure 4.2: Addresses of some points of the Sierpinski triangle.
Theorem 4.2.1. Let (X, d) be a complete metric space. Let {X; f1, f2, . . . , fN} be a hyperbolic IFS. Let A denote the attractor of the IFS and let (Ω, d|A|) denote the code space associated with the IFS. There exists a continuous transformation

φ : Ω{1,2,...,N} → A

defined by

\[ \varphi(\sigma) = \lim_{n \to \infty} f_{\sigma_1 \sigma_2 \ldots \sigma_n}(x) \quad \text{for } \sigma = \sigma_1 \sigma_2 \sigma_3 \ldots \in \Omega_{\{1,2,\ldots,N\}} \]

for any x ∈ X, where f_{σ1σ2...σn}(x) = f_{σ1} ◦ f_{σ2} ◦ . . . ◦ f_{σn}(x). The function φ : Ω → A so provided is continuous and surjective.
Definition 4.2.2. Let φ : Ω → A be the continuous function from code space
onto the attractor of the IFS. An address of a point x ∈ A is any member of
the set
φ−1 (x) = {σ ∈ Ω : φ(σ) = x}
This set is called the set of addresses of x ∈ A.
In Figure 4.2 we find examples of addresses.
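As an illustration, the following Maple sketch approximates the address function φ for the IFS of Table 4.1 by composing finitely many of its maps; the truncated address sigma and the starting point are assumptions chosen for the example.

# Sketch: phi(sigma) = lim f_{sigma1} o f_{sigma2} o ... o f_{sigman}(x),
# approximated here by a finite composition for the IFS of Table 4.1.
f := [p -> [p[1]/2, p[2]/2],
      p -> [p[1]/2, p[2]/2 + 1],
      p -> [p[1]/2 + 1, p[2]/2 + 1]]:
phi := proc(sigma, x0)
  local p, k;
  p := x0;
  for k from nops(sigma) to 1 by -1 do   # innermost map is applied first
    p := f[sigma[k]](p);
  end do;
  return p;
end proc:
phi([1, 3, 3, 3, 3, 3, 3, 3, 3, 3], [0, 0]);  # approaches the point with addresses 1333... and 3111...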
4.3 Two algorithms for computing fractals from IFS
In this section we provide two algorithms for rendering pictures of attractors
of an IFS. The algorithms presented are the Deterministic Algorithm and the
Random Iteration Algorithm. Both are based on Theorem 4.1.1 and Theorem 2.2.3.
4.3.1 The Deterministic Algorithm
Let F = {X; f1, f2, . . . , fN} be a hyperbolic IFS. We choose a compact set A0 ∈ H(R2). Then we compute successively An for n = 1, 2, . . . according to

\[ A_1 = F(A_0) = \bigcup_{j=1}^{N} f_j(A_0) \]

\[ A_2 = F^2(A_0) = \bigcup_{j=1}^{N} f_j(A_1) \]

\[ \vdots \]

\[ A_n = F^n(A_0) = \bigcup_{j=1}^{N} f_j(A_{n-1}) \]
We thus construct a sequence {An : n = 0, 1, 2, 3, . . .} ⊂ H(X). By Theorem 4.1.1 the sequence {An} converges to the attractor of the IFS in the Hausdorff metric of H(X).
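A minimal Maple sketch of this algorithm for the Sierpinski IFS of Table 4.1 follows. The compact sets An are approximated by finite lists of points, and the choice of A0 (the corners of a square) and of six iterations are assumptions made for the example.

# Deterministic algorithm sketch: F(A) = f1(A) union f2(A) union f3(A),
# with each compact set represented by a finite list of points.
F := A -> [seq([p[1]/2, p[2]/2], p in A),
           seq([p[1]/2, p[2]/2 + 1], p in A),
           seq([p[1]/2 + 1, p[2]/2 + 1], p in A)]:
A := [[0, 0], [0, 2], [2, 2], [2, 0]]:      # A0: corners of a square
for n from 1 to 6 do A := F(A); end do:     # A becomes F^6(A0)
plot(A, style = point, scaling = constrained, axes = none);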
We have used the IFS Construction Kit¹ [7] to run the Deterministic Algorithm. The algorithm takes an initial compact set A0 ∈ H(X) (the red square in Figure 4.3) and applies the function F(An) = f1(An) ∪ f2(An) ∪ . . . ∪ fN(An), where f1, f2, . . . , fN are the functions of the IFS (those of Table 4.3 for the Sierpinski triangle). Then it plots the new set F(A0). The next iteration plots F^2(A0) = F(F(A0)). Continued iteration produces the sequence of sets A0, F(A0), F^2(A0), F^3(A0), . . . that converges to the attractor.
Figure 4.4 shows the result of running the Deterministic Algorithm for the IFS code in Table 4.4, starting from a circle as the initial set.
4.3.2 The Random Iteration Algorithm
The Random Iteration Algorithm is a method of creating a fractal, using a polygon and an initial point selected at random inside it. This algorithm is sometimes called the "chaos game" due to the role of the probabilities in the algorithm.
Let {X; f1, f2, . . . , fN} be a hyperbolic IFS, where probability pn has been assigned to fn for n = 1, 2, . . . , N, with

\[ \sum_{n=1}^{N} p_n = 1 \]

Let Ω{1,2,...,N} be the code space associated with the IFS, and σ = σ1σ2σ3 . . . ∈ Ω{1,2,...,N}. Choose x0 ∈ X to be the initial point.
¹IFS Construction Kit is free software for designing and drawing fractals based on iterated function systems.
Figure 4.3: The result of running the Deterministic Algorithm, for various numbers of iterations, for the IFS code in Table 4.3, whose attractor is the Sierpinski triangle. Shown, from left to right and top to bottom, are the sets F^n(A0) for n = 0, 1, 2, . . . , 8.
Then, for l = 1, 2, 3, . . . do

x1 = fσ1(x0)
x2 = fσ2(x1)
. . .
xl = fσl(xl−1)
where each σl is chosen according to the probabilities pn. We thus construct a sequence {xl : l = 0, 1, 2, 3, . . .} ⊂ X, where each point xl is one of the points f1(xl−1), f2(xl−1), . . . , fN(xl−1), chosen with the probabilities pn. As l grows, the points xl approach the attractor A of the IFS and eventually fill it out at any given resolution.
The Random Iteration Algorithm has the advantages, when compared with deterministic iteration, of low memory requirements and high accuracy: the iterated point can be kept at a precision much higher than the resolution of the attractor [2].

Figure 4.4: Fern constructed using the deterministic algorithm. The initial compact set A0 is a circle. Shown, from left to right and top to bottom, are the sets F^n(A0) for n = 0, 1, 2, . . . , 8.
We illustrate the implementation of the algorithm. The following program computes and plots n points on the attractor corresponding to the IFS code in Table 4.1. The program is written in Maple and plots a fractal image constructed by the iterated function scheme discussed by Michael Barnsley in his book Fractals Everywhere [1].
PROGRAM 4.3.1.
restart;
fractal:=proc(n)
local Mat1, Mat2, Mat3, Vector1, Vector2, Vector3, Prob1, Prob2,
      Prob3, P, prob, counter, fractalplot, starttime, endtime;
# The three affine maps of Table 4.1: P -> Mat*P + Vector
Mat1:=linalg[matrix]([[0.5,0.0],[0.0,0.5]]);
Mat2:=linalg[matrix]([[0.5,0.0],[0.0,0.5]]);
Mat3:=linalg[matrix]([[0.5,0.0],[0.0,0.5]]);
Vector1:=linalg[vector]([0,0]);
Vector2:=linalg[vector]([0,1]);
Vector3:=linalg[vector]([1,1]);
# Each map is chosen with probability 1/3
Prob1:=1/3;
Prob2:=1/3;
Prob3:=1/3;
P:=linalg[vector]([0,0]);
writedata("fractaldata", [[P[1],P[2]]], [float,float]);
starttime:=time():
for counter from 1 to n do
  prob:=rand()/10^12;   # pseudo-random number in [0,1)
  if prob < Prob1 then P:=evalm(Mat1&*P+Vector1)
  elif prob < Prob1+Prob2 then P:=evalm(Mat2&*P+Vector2)
  else P:=evalm(Mat3&*P+Vector3);
  fi;
  writedata[APPEND]("fractaldata", [[P[1],P[2]]], [float,float]);
od;
fractalplot:=readdata("fractaldata",2);
print(plot(fractalplot, style=point, scaling=constrained,
  axes=none, color=green, title=cat(n, " iterations")));
fremove("fractaldata");
end:

Figure 4.5: Random iteration algorithm for the Sierpinski triangle for 100 and 200 points. The black point is x0.
Figure 4.6: This Sierpinski triangle is the result of running Program 4.3.1 presented above for 2000, 10000 and 25000 iterations respectively.
The mathematics underlying this code is the following iteration scheme. Pick a position vector in the plane and apply an affine transformation. Plot the resulting point. Apply to the new point a possibly different affine transformation, chosen according to the probabilities. Repeat. In the given example, there are three different affine transformations involved, and the one that is applied at a given step is randomized; each transformation has the same probability (pn = 1/3) of being chosen at any particular step.
The final plot is thus a set of points in the plane, and because of the randomness, a different set each time the procedure is executed. The surprise is that for a large number of iterations, the final picture always looks the same. The result of running this program is presented in Figure 4.6. We have run the program for n = 100, 200, 2000, 10000 and 25000 points.
We can construct a Fern using a similar algorithm. In this case we need 4 transformations and we have to modify the other variables according to Table 4.4 (IFS code for a Fern). Here the probabilities for choosing the different transformations are not the same. The following program is the random iteration algorithm to draw a fern. If we run it we obtain Figure 4.7.
PROGRAM 4.3.2.
restart;
fractal:=proc(n)
local Mat1, Mat2, Mat3, Mat4, Vector1, Vector2, Vector3, Vector4,
      Prob1, Prob2, Prob3, Prob4, P, prob, counter, fractalplot,
      starttime, endtime;
# The four affine maps of Table 4.4: P -> Mat*P + Vector
Mat1:=linalg[matrix]([[0.0,0.0],[0.0,0.16]]);
Mat2:=linalg[matrix]([[0.85,0.04],[-0.04,0.85]]);
Mat3:=linalg[matrix]([[0.2,-0.26],[0.23,0.22]]);
Mat4:=linalg[matrix]([[-0.15,0.28],[0.26,0.24]]);
Vector1:=linalg[vector]([0,0]);
Vector2:=linalg[vector]([0,1.6]);
Vector3:=linalg[vector]([0,1.6]);
Vector4:=linalg[vector]([0,0.44]);
# Unequal probabilities: the map drawing the main body of the fern dominates
Prob1:=0.01;
Prob2:=0.85;
Prob3:=0.07;
Prob4:=0.07;
P:=linalg[vector]([0,0]);
writedata("fractaldata", [[P[1],P[2]]], [float,float]);
starttime:=time():
for counter from 1 to n do
  prob:=rand()/10^12;   # pseudo-random number in [0,1)
  if prob < Prob1 then P:=evalm(Mat1&*P+Vector1)
  elif prob < Prob1+Prob2 then P:=evalm(Mat2&*P+Vector2)
  elif prob < Prob1+Prob2+Prob3 then P:=evalm(Mat3&*P+Vector3)
  else P:=evalm(Mat4&*P+Vector4);
  fi;
  writedata[APPEND]("fractaldata", [[P[1],P[2]]], [float,float]);
od;
fractalplot:=readdata("fractaldata",2);
print(plot(fractalplot, style=point, scaling=constrained,
  axes=none, color=green, title=cat(n, " iterations")));
fremove("fractaldata");
end:
Figure 4.7: The result of running the fern random iteration algorithm of Program 4.3.2 for 2000, 10000 and 25000 iterations respectively.
Probabilities play an important role in the Random Iteration Algorithm. If we modify the probabilities pn, the final attractor may vary considerably. For example, in Program 4.3.2 we can replace the probabilities with these new ones:
Prob1:=0.25;
Prob2:=0.25;
Prob3:=0.25;
Prob4:=0.25;
If we run the modified random algorithm, where all probabilities are equal, we obtain the attractor of Figure 4.8.
Figure 4.8: The result of running the modified random algorithm (with equal probabilities) for 25000 iterations.
We observe that when all probabilities are equal, the stem and the central part of the fern are wider than in Figure 4.7, where the probability for this part of the fern was very small.
4.4 Collage theorem
Suppose we want to find an IFS whose attractor is equal to a given compact target set T ⊂ R2. Sometimes we can simply spot a set of contractive transformations f1, f2, . . . , fN taking R2 into itself, such that

T = f1(T) ∪ f2(T) ∪ . . . ∪ fN(T)

If this equation holds, its unique solution T is the attractor of the IFS {R2; f1, f2, . . . , fN}. But in computer graphics modelling or image approximation it is not always possible to find an IFS such that this equation holds exactly. However, we may search for an IFS that makes this equation approximately true. That is, we may try to make T out of transformations of itself.
Michael Barnsley [1, 2] used an IFS consisting of four transformations to generate the fern that has become another "icon" of fractal geometry. He described a method for finding an IFS to generate a target image in his Collage Theorem. According to Barnsley, the theorem tells us that to find an IFS whose attractor is "close to" or "looks like" a given set, one must find a set of transformations (contraction mappings on a suitable space within which the given set lies) such that the union, or collage, of the images of the given set under the transformations is near to the given set. Nearness is measured using the Hausdorff metric in H(X). The Collage theorem gives an upper bound for the distance between
the attractor of the resulting IFS and T.
Methods of finding and modifying these transformations to fit the set are,
for instance, keyboard manipulation of numerical entries in the transformation
matrices or onscreen dragging of transformation images that induces computer
calculation of the corresponding matrix entries.
Since attractors of iterated function systems are dense in H(X), we can approximate any fractal by a hyperbolic IFS.
Theorem 4.4.1. (The Collage theorem) Let (X, d) be a complete metric space. Let T ∈ H(X) be given and let ε ≥ 0 be given. Suppose that a hyperbolic IFS F = {X; f1, f2, . . . , fN} with contractivity factor 0 ≤ l < 1 can be found such that dH(T, F(T)) ≤ ε, where dH denotes the Hausdorff metric. Then

\[ d_H(T, A) \leq \frac{\varepsilon}{1 - l} \]

where A is the set attractor of the IFS.
The Collage theorem is closely related to Lemma 2.1.2. In fact, it is a particular case of the lemma when d(x, f(x)) = ε.
Example 4.4.1. We will use the Collage theorem to help find a hyperbolic IFS of the form {R2; f1, f2, f3}, where f1, f2 and f3 are transformations in R2, whose attractor is represented in Figure 4.9.
Figure 4.9: Both pictures are the same attractor. Colors will help us to solve
the problem.
Solution: We can view the fractal as part of a square with vertices (0, 0), (0, 1), (1, 0) and (1, 1). We can develop a quite simple collage by studying the relations between parts of the fractal. It is easy to see that there are three smaller replicas of the big picture (green, blue and orange). So, the IFS will consist of three similarity transformations with the same ratio r = 1/2.
If we look at the green region, we see that we have a similarity with ratio 1/2, so we can define the function

f1(x, y) = (x/2, y/2)
To define f2 we look at the blue region. Here we have a similarity with ratio r = 1/2 and a translation with vector (1/2, 0). So, the function is

f2(x, y) = (x/2 + 1/2, y/2)
For the orange region we have the similarity with ratio 1/2, a translation with vector (1, 1/2) and a rotation of 90 degrees to the left. So we can define the last function

f3(x, y) = (−y/2 + 1, x/2 + 1/2)

Hence, the iterated function system we are looking for is

F = {R2; (x/2, y/2), (x/2 + 1/2, y/2), (−y/2 + 1, x/2 + 1/2)}

We can express this iterated function system in Table 4.5.
n    a     b    c    d    e    f    p
1   1/2    0    0   1/2   0    0   1/3
2   1/2    0    0   1/2  1/2   0   1/3
3    0   -1/2  1/2   0    1   1/2  1/3

Table 4.5: IFS code for Example 4.4.1
Example 4.4.2. Let T ∈ H(X) be the picture in Figure 4.10. Let ε = 1/(2√2) be given. Let

F = {R2; (x/2, y/2), (x/2 + 1/2, y/2), (−y/2 + 1, x/2 + 1/2)}

be the hyperbolic IFS found above, with contractivity factor l = 1/2, such that dH(T, F(T)) ≤ 1/(2√2). Then

\[ d_H(T, A) \leq \frac{\varepsilon}{1 - l} = \frac{1}{\sqrt{2}} \]
Figure 4.10: We can approximate the attractor with an IFS
The Collage theorem is an expression of the general principle that the attractor of a hyperbolic IFS depends continuously on its defining parameters, such as the coefficients in an IFS code (Theorem 4.1.2). The flexibility and adjustability of such objects have many applications in computer graphics, biological modelling and many other situations where we want to construct and adjust fractal models in order to approximate given information.
Chapter 5
Fractal dimension and its applications
In this chapter we introduce the concept of fractal dimension. The fractal dimension of a set is a number which tells how densely the set occupies the metric space in which it lies [1]. We also present applications of the Collage theorem, such as fractal interpolation.
5.1 Fractal dimension

5.1.1 Self-similarity dimension
To assign a self-similarity dimension to fractals, it is helpful to consider how segments, squares, and cubes can be tiled using the same magnification factor for each tile, such that the new objects are similar to the original. The table below shows the continuation of this procedure. We arrive at an equation that relates an object's dimension D, the magnification factor s and the number of tiles N.
Original object   Dimension (D) of the object   Number of tiles (N) after magnification factor s = 2
Segment                1                          2 = 2^1
Square                 2                          4 = 2^2
Cube                   3                          8 = 2^3
4-cube                 4                         16 = 2^4
d-cube                 d                          N = 2^d

Table 5.1: Dimension data for Euclidean d-cubes.
As we can see in Table 5.1, the equation relating the dimension D of a d-cube, the number of tiles N and the magnification factor s is N = s^D.
Definition 5.1.1. Given a self-similar set where N is the number of tiles and s the magnification factor, the self-similarity dimension of the set is given by N = s^D. Solving this equation, we can express D as

\[ D = \frac{\ln N}{\ln s} \]
The magnification factor s is the inverse of the scaling factor l presented in chapter 2, so we have that s = 1/l.
Example 5.1.1. The self-similarity dimension of the Sierpinski triangle can be found using the equation D = ln N / ln s. In this case, the scaling factor is l = 1/2, so the magnification factor is s = 2. The number of new tiles is N = 3. So we have that the self-similarity dimension of this fractal is

\[ D = \frac{\ln 3}{\ln 2} \approx 1.58496 \]
Figure 5.1: The self-similarity dimension of the Sierpinski triangle is D = ln 3 / ln 2.
Example 5.1.2. In the case of the Koch curve, the magnification factor is s = 3 (because the scaling factor is l = 1/3) and the number of tiles congruent to the original after the iteration is N = 4. So, using the equation D = ln N / ln s, we have that the self-similarity dimension of the Koch curve is

\[ D = \frac{\ln 4}{\ln 3} \approx 1.26185 \]
Figure 5.2: The self-similarity dimension of the Koch curve is D = ln 4 / ln 3.

5.1.2 Box dimension
The self-similarity dimension applies only to sets that are strictly self-similar. So, we need to define a more general dimension that can be applied to sets that are only "approximately" self-similar, including natural fractals like coastlines. This generalized dimension is called the box dimension.
Definition 5.1.2. Let A ∈ H(X) where (X, d) is a metric space. For each ε > 0, let N(A, ε) denote the smallest number of closed balls of radius ε needed to cover A. If

\[ D = \lim_{\varepsilon \to 0} \frac{\ln N(A, \varepsilon)}{\ln(1/\varepsilon)} \]

exists, then D is called the fractal dimension of A. We will say that "A has fractal dimension D".
The following theorems simplify the process of calculating the fractal dimension, because they allow us to replace the continuous variable ε by a discrete variable.
Theorem 5.1.1 (The Box counting theorem). Let A ∈ H(X) where (X, d) is a metric space. Let εn = C r^n for real numbers 0 < r < 1, C > 0 and n = 1, 2, 3, . . .. If

\[ D = \lim_{n \to \infty} \frac{\ln N(A, \varepsilon_n)}{\ln(1/\varepsilon_n)} \]

exists, then A has fractal dimension D.
The next theorem is a frequently used particular case of the Box counting theorem, for r = 1/2.
Theorem 5.1.2. Let A ∈ H(X), where X = R^m with the Euclidean metric. Cover R^m by closed just-touching square boxes of side length (1/2)^n. Let Nn(A) denote the number of boxes of side length (1/2)^n which intersect A. If

\[ D = \lim_{n \to \infty} \frac{\ln N_n(A)}{\ln(2^n)} \]

exists, then A has fractal dimension D.
Example 5.1.3. Consider the attractor A of the Sierpinski triangle, in Figure 5.3, as a subset of (R2, Euclidean).

Figure 5.3: Sierpinski triangle.
We see that

N1(A) = 3, N2(A) = 9, N3(A) = 27, N4(A) = 81, . . . , Nn(A) = 3^n for n = 1, 2, 3, . . .
So, by the particular case of the Box counting theorem, we have that the fractal dimension of the attractor of the Sierpinski triangle is

\[ D(A) = \lim_{n \to \infty} \frac{\ln N_n(A)}{\ln(2^n)} = \lim_{n \to \infty} \frac{\ln(3^n)}{\ln(2^n)} = \frac{\ln 3}{\ln 2} \]
Example 5.1.4. Let A ∈ H(X) be the attractor of the Koch curve in Figure 5.4. In this case we have that εn = (1/3)^n, for n = 1, 2, 3, . . .. The numbers of boxes of side length (1/3)^n that intersect A are:

N1(A) = 4, N2(A) = 16, N3(A) = 64, . . . , Nn(A) = 4^n for n = 1, 2, 3, . . .
So, using the Box counting theorem, we have that A has fractal dimension

\[ D(A) = \lim_{n \to \infty} \frac{\ln N_n(A)}{\ln(3^n)} = \lim_{n \to \infty} \frac{\ln(4^n)}{\ln(3^n)} = \frac{\ln 4}{\ln 3} \]

Figure 5.4: Koch curve.
One can see in Examples 5.1.3 and 5.1.4 that the fact that a fractal A has fractional dimension D means that the "density" of the fractal in the plane is greater than that of a set of Euclidean dimension 1 and smaller than that of a set of Euclidean dimension 2.
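Theorem 5.1.2 translates directly into a numerical procedure. The following Maple sketch estimates the box dimension of a finite point set; the list pts is an assumption, standing for points on an attractor produced, for instance, by the Random Iteration Algorithm of Section 4.3.2.

# Box-counting sketch: count the just-touching boxes of side (1/2)^n that
# contain at least one point of pts, then estimate D = ln(Nn)/ln(2^n).
BoxCount := proc(pts, n)
  local boxes, p;
  boxes := {};
  for p in pts do
    boxes := boxes union {[floor(p[1]*2^n), floor(p[2]*2^n)]};
  end do;
  return nops(boxes);
end proc:
seq(evalf(ln(BoxCount(pts, n))/ln(2^n)), n = 1..6);  # dimension estimates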
5.2 Space-filling curves
We say that a space-filling curve is a curve whose range contains the entire 2-dimensional unit square. Space-filling curves are special cases of fractal constructions whose fractal dimension is 2.
Space-filling curves in the 2-dimensional plane are commonly called Peano
curves, because Giuseppe Peano was the first to discover one. Peano discovered
a dense curve that passes through every point of the unit square. His purpose
was to construct a continuous mapping from the unit interval onto the unit
square.
Example 5.2.1. To construct an example of a Peano curve, we start with a square with vertices (0, 0), (0, 1), (1, 0) and (1, 1). Stage 0 in this procedure is the function y = x, as we can see in Figure 5.5.
Figure 5.5: Stage 0 in the construction of the Peano curve
In stage 1 (Figure 5.6) we draw the following functions:

y = x,  y = x − 2/3,  y = x + 2/3

and

y = −x + 2/3,  y = −x + 4/3
Figure 5.6: Stage 1 of the construction of the Peano curve
In general, at stage n we draw the following functions:

yk = x + pk, where pk = (−3^n + 1 + 2k)/3^n, with k = 0, . . . , 3^n − 1,

and

yj = −x + pj, where pj = 2j/3^n, with j = 1, . . . , 3^n − 1.

We have run the following program with Maple, plotting each stage of the construction of the Peano curve for n = 0, 1, 2, 3, 4.
PROGRAM 5.2.1.
restart;
n:=1;   # change this n to draw the different stages
A:=seq(x + (-1*3^n + 1 + 2*i)/3^n, i = 0..3^n - 1):
B:=seq(-x + 2*i/3^n, i = 1..3^n - 1):
plot([A,B], x = 0..1, y = 0..1);
Figure 5.7: Result of running the Program 5.2.1 for n = 2, 3, 4 respectively.
(Stages 2, 3 and 4 of the Peano curve)
The self-similarity dimension of the Peano curve is

\[ D = \frac{\ln N}{\ln s} = \frac{\ln 9}{\ln 3} = 2 \]

where N = 9 is the number of new tiles and s = 3 the magnification factor (the scaling factor is l = 1/3). The dimension is 2, so the limit curve fills the square, and the curve is a space-filling curve.
5.3 Fractal interpolation
In this section we introduce fractal interpolation functions. Using this technique one can make complicated curves. It is shown how geometrically complex graphs of continuous functions can be constructed to pass through specified data points. The graphs of these functions can be used to approximate image components such as the profiles of mountains, the tops of clouds, horizons over forests, tumours, etc. Fractal interpolation is a consequence of the Collage theorem and Theorem 4.1.2.
Definition 5.3.1. A set of data is a set of points of the form {(xi , Fi ) ∈ R2 :
i = 0, 1, 2, . . . , N } where
x0 < x1 < x2 < · · · < xN
An interpolation function corresponding to this set of data is a continuous function f : [x0, xN] → R such that

f(xi) = Fi for i = 0, 1, 2, . . . , N.

The points (xi, Fi) ∈ R2 are called the interpolation points. We say that the function f interpolates the data, and that f passes through the interpolation points.
Let a set of data {(xi, Fi) : i = 0, 1, 2, . . . , N} be given. We explain how to construct an IFS such that its attractor is the graph of a continuous interpolation function f : [x0, xN] → R which interpolates the data. We consider an IFS of the form {R2; fn, n = 1, 2, . . . , N}, where the contractive mappings have the special form

\[ f_n\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} a_n & 0 & e_n \\ c_n & d_n & f_n \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \quad \text{for } n = 1, 2, \ldots, N. \]
The transformations are constrained by the data according to

\[ f_n\begin{pmatrix} x_0 \\ F_0 \\ 1 \end{pmatrix} = \begin{pmatrix} x_{n-1} \\ F_{n-1} \\ 1 \end{pmatrix} \quad \text{and} \quad f_n\begin{pmatrix} x_N \\ F_N \\ 1 \end{pmatrix} = \begin{pmatrix} x_n \\ F_n \\ 1 \end{pmatrix} \quad \text{for } n = 1, 2, \ldots, N. \]
Let n ∈ {1, 2, 3, . . . , N}. The transformation fn depends on five numbers an, cn, dn, en and fn, which obey the four linear equations

an x0 + en = xn−1
an xN + en = xn
cn x0 + dn F0 + fn = Fn−1
cn xN + dn FN + fn = Fn

Since we have 5 parameters and 4 linear equations, it follows that there is one free parameter. We choose this free parameter to be dn. We call dn the vertical scaling factor in the transformation fn. If dn is any real number, we can solve the above equations in terms of dn. The solutions are
\[ a_n = \frac{x_n - x_{n-1}}{x_N - x_0}, \qquad e_n = \frac{x_N x_{n-1} - x_0 x_n}{x_N - x_0}, \]

\[ c_n = \frac{F_n - F_{n-1}}{x_N - x_0} - d_n \frac{F_N - F_0}{x_N - x_0}, \qquad f_n = \frac{x_N F_{n-1} - x_0 F_n}{x_N - x_0} - d_n \frac{x_N F_0 - x_0 F_N}{x_N - x_0} \]
So, we have defined an IFS {R2; fn, n = 1, 2, . . . , N} whose attractor is the graph of a continuous interpolation function, where dn is the vertical scaling factor. In the following subsection we will see that by modifying this free parameter we can control the fractal dimension of the interpolation function.

Program 5.3.1, written in Maple, computes the solutions above to find the parameters of the transformations of an IFS, for the particular case when N = 3. There are four interpolation points, which can be modified. The scaling factors can also be changed.
PROGRAM 5.3.1.
restart;
x0:=0: F0:=0:    # the interpolation points can be changed
x1:=30: F1:=40:
x2:=60: F2:=30:
x3:=100: F3:=50:
d1:=0.3: d2:=0.3: d3:=0.3:   # the vertical scaling factors can be changed
b:=0:
a1:=(x1-x0)/(x3-x0):
c1:=(F1-F0)/(x3-x0)-d1*(F3-F0)/(x3-x0):
e1:=(x3*x0-x0*x1)/(x3-x0):
f1:=(x3*F0-x0*F1)/(x3-x0)-d1*(x3*F0-x0*F3)/(x3-x0):
a2:=(x2-x1)/(x3-x0):
c2:=(F2-F1)/(x3-x0)-d2*(F3-F0)/(x3-x0):
e2:=(x3*x1-x0*x2)/(x3-x0):
f2:=(x3*F1-x0*F2)/(x3-x0)-d2*(x3*F0-x0*F3)/(x3-x0):
a3:=(x3-x2)/(x3-x0):
c3:=(F3-F2)/(x3-x0)-d3*(F3-F0)/(x3-x0):
e3:=(x3*x2-x0*x3)/(x3-x0):
f3:=(x3*F2-x0*F3)/(x3-x0)-d3*(x3*F0-x0*F3)/(x3-x0):
a1; b; c1; d1; e1; f1;
a2; b; c2; d2; e2; f2;
a3; b; c3; d3; e3; f3;
Example 5.3.1. We have used Program 5.3.1 to find the IFS whose attractor interpolates the following data set

{(0, 0), (30, 40), (60, 30), (100, 50)}

The program gives us the parameters of the IFS. These parameters are shown in Table 5.2.
n    a     b    c     d     e    f    p
1   3/10   0   1/4   3/10   0    0   1/3
2   3/10   0  -1/4   3/10   30   40  1/3
3   2/5    0  0.05   3/10   60   30  1/3

Table 5.2: IFS code for an interpolation function
The attractor of the IFS code of Table 5.2 is shown in Figure 5.8. The IFS
has a unique attractor which is the graph of a function which passes through the
interpolation points {(0, 0), (30, 40), (60, 30), (100, 50)}. In this case, we have
chosen dn = 0.3 for n = 1, 2, 3.
Figure 5.8: Graph of the interpolation function.
Hyperbolic IFS provide a way of finding interpolation functions. In the next two theorems we first define an IFS from the points (xi, Fi), and secondly we see that the attractor of the IFS is the graph of the required interpolation function.
Theorem 5.3.1. Let N be a positive integer greater than one. Let {R2; fn, n = 1, 2, . . . , N} denote the IFS associated with the data set {(xi, Fi) : i = 0, 1, . . . , N}. Let the vertical scaling factor dn obey 0 ≤ dn < 1 for n = 1, 2, . . . , N. Then there is a metric d on R2, equivalent to the Euclidean metric, such that the IFS is hyperbolic with respect to d. In particular, there is a unique nonempty compact set

\[ G = \bigcup_{n=1}^{N} f_n(G) \]
Theorem 5.3.2. Let N be a positive integer greater than one. Let {R2; fn, n = 1, 2, . . . , N} denote the IFS associated with the data set {(xi, Fi) : i = 0, 1, . . . , N}. Let the vertical scaling factor dn obey 0 ≤ dn < 1 for n = 1, 2, . . . , N. Let G denote the attractor of the IFS. Then G is the graph of a continuous function f : [x0, xN] → R which interpolates the data {(xi, Fi) : i = 0, 1, . . . , N}, that is
G = {(x, f (x)) : x ∈ [x0 , xN ]}
where
f (xi ) = Fi for i = 0, 1, 2, . . . , N.
Definition 5.3.2. The function f (x) whose graph is the attractor of an IFS
as described in the above theorems, is called a fractal interpolation function
corresponding to the data {(xi , Fi ) : i = 0, 1, . . . , N }.
The theory of hyperbolic IFS applies to fractal interpolation functions. We can use the IFS algorithms of chapter 4 to compute fractal interpolation functions. The Collage theorem is used to find fractal interpolation functions which approximate the given data.
5.3.1 The fractal dimension of interpolation functions
The following theorem tells us the fractal dimension of fractal interpolation
functions.
Theorem 5.3.3. Let N be a positive integer greater than one. Let {(xi, Fi) : i = 0, 1, . . . , N} be a set of data. Let {R2; fn, n = 1, 2, . . . , N} be an IFS associated with the data. Let G denote the attractor of the IFS, so that G is the graph of a fractal interpolation function associated with the data. If

\[ \sum_{n=1}^{N} |d_n| > 1 \]

and the interpolation points do not all lie on a single straight line, then the fractal dimension of G is the unique real solution D of

\[ \sum_{n=1}^{N} |d_n| a_n^{D-1} = 1 \]

Otherwise, the fractal dimension of G is one.
Example 5.3.2. Consider the set of data {(0, 0), (1, 1), (2, 1), (3, 2)}. In this case, the interpolation points are equally spaced. It follows that an = 1/N, where N = 3. Hence, if the condition in Theorem 5.3.3 holds, then the fractal dimension D of the interpolation function satisfies

\[ \sum_{n=1}^{N} |d_n| a_n^{D-1} = \sum_{n=1}^{3} |d_n| \left(\frac{1}{3}\right)^{D-1} = 1 \]
If we isolate the summation we have

\[ \sum_{n=1}^{3} |d_n| = 3^{D-1} \]

If we then apply logarithms and solve the equation, we get

\[ D = 1 + \frac{\log\left(\sum_{n=1}^{3} |d_n|\right)}{\log 3} \]
We observe that by varying the scaling factors dn for n = 1, 2, . . . , N we can control the fractal dimension of the interpolation function.
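The equation of Theorem 5.3.3 can also be solved numerically. In this Maple sketch the vertical scaling factors dn = 0.5 are assumed values, and an = 1/3 because the interpolation points are equally spaced; the result reproduces the dimension D ≈ 1.369 quoted below.

# Solving sum(|dn|*(1/3)^(Dim-1), n = 1..3) = 1 for the dimension Dim.
d := [0.5, 0.5, 0.5]:                  # assumed vertical scaling factors
fsolve(add(abs(d[n])*(1/3)^(Dim - 1), n = 1..3) = 1, Dim);  # ~ 1.369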
We have run Program 5.3.1 with our interpolation points to get the IFS code for different values of the scaling factors dn. So, we have fractal interpolation functions corresponding to the set of data {(0, 0), (1, 1), (2, 1), (3, 2)} with different fractal dimensions. Figure 5.9 shows these fractal interpolation functions.

Figure 5.9: Members of the family of fractal interpolation functions corresponding to the set of data {(0, 0), (1, 1), (2, 1), (3, 2)}, such that each function has a different dimension.

We have varied the scaling factor dn of each transformation to obtain the different fractal interpolation functions in Figure 5.9.
1. For the first fractal interpolation function (top left) we have d1 = 0, d2 = 0 and d3 = 0, so the fractal dimension is D = 1.

2. For the second function (top right) d1 = 1/3, d2 = 1/3 and d3 = 1/3, so the fractal dimension is also D = 1.

3. For the third interpolation function (bottom left), we have chosen d1 = 0.4, d2 = 0.4 and d3 = 0.4, so the fractal dimension is D ≈ 1.1659.

4. For our last interpolation function (bottom right), we have d1 = 0.5, d2 = 0.5 and d3 = 0.5, so in this case the fractal dimension of the function is D ≈ 1.369.
Observation: In these four examples we have chosen d1 = d2 = d3, but this is not necessary. Each dn can be different for n = 1, 2, 3.
5.4 Applications of fractal dimension

5.4.1 Fractals in the stock market
Benoit Mandelbrot, a mathematician known as the father of fractal geometry,
began to apply his knowledge of fractals to explain stock markets [5].
Looking at stock markets, we see that they exhibit turbulence. Some days the change in markets is very small, and some days it moves in a huge leap. Only fractals can describe this kind of random change.
Economists in the 1970s and 1980s proposed Gaussian models to analyse market behaviour. But Mandelbrot explained that there are far more market bubbles and market crashes than these models suggest. He argues that fractal techniques may provide a more powerful way to analyse risk, and shows how they might be applied to financial data to give better estimates of risk and volatility.
The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange. Figure 5.10¹ shows the changing volatility of the FTSE 100 Index, as the magnitude of price varied wildly during the day. This chart is from Monday, 6th of June 2011.
It is possible to use fractal notions to study and build models of how stock markets work. For example, by computing the fractal dimension of a data chart it is possible to tell whether it has been a day of high turbulence and volatility. The fractal dimension of a data chart usually varies between 1.15 and 1.4. If the fractal dimension of a data chart is high, this means that the day (or period) had high turbulence and volatility. On that chart we will observe the line jumping up and down.
Unlike the fractals we have seen so far, a stock market is not an exactly self-similar geometric object.

¹This figure has been taken from finance.yahoo.com
Figure 5.10: FTSE 100 chart of Monday, June 6 2011.
5.4.2 Fractals in nature
Approximate fractals are easily found in nature. These objects display self-similar structure over an extended, but finite, scale range. Examples include clouds, mountains, snowflakes, broccoli, and systems of blood vessels and pulmonary vessels. Ferns are fractal in nature and can be modelled on a computer by using a recursive algorithm, as we have done in chapter 4.
Coastlines
Even coastlines may be loosely considered fractal in nature. We are going to show the example of the coast of Norway. Norway comprises the western part of Scandinavia in Northern Europe. It has a rugged coastline, broken by huge fjords and thousands of islands.
Figure 5.11: Part of the Norway’s coastline.
Figure 5.11² shows part of the coast of Norway. The rougher an object is, the higher its fractal dimension. The coast of Norway is very rough, so using the box dimension we get that the fractal dimension of the coast is approximately D = 1.52. Coastlines are not exactly self-similar geometric objects.

²This picture is from http://photojournal.jpl.nasa.gov/catalog/PIA03424
Clouds
Clouds look very irregular in shape; indeed, clouds are fractal in shape, just like most other objects in nature. We have used the Collage theorem to find an IFS whose attractor looks like a cloud, varying the parameters of the IFS to get what we want. Figure 5.12 is the result of iterating this IFS with three transformation functions. This fractal has been plotted using a technique similar to the one in Section 5.3, but for overlapping IFS.
Figure 5.12: We have used the Collage theorem to construct a fractal that looks
like a cloud.
Although this can be a method to approximate fractals, normally self-similar fractals are too regular to be realistic. To make fractals more realistic, we use a different type of self-similarity, called Brownian self-similarity. In Brownian self-similarity, although each line is composed of smaller lines (as in the self-similarity we have seen), the lines are random instead of being fixed. Brownian self-similarity is found in plasma fractals. Plasma fractals are very useful for creating realistic landscapes and fractals in nature. Unlike most other fractals, they have a random element in them, which gives them Brownian self-similarity. Due to their randomness, plasma fractals closely resemble nature. Because of this, we have used them to draw clouds, as we can see in Figure 5.13.
Figure 5.13: Clouds generated using plasma fractal method compared with real
clouds of Linköping.
We have used free software called XFractint to draw this plasma fractal. We can control how fragmented the clouds are by changing parameters.
Chapter 6
Fractal tops
One application of tops is the modelling of new families of synthetic pictures in computer graphics by composing fractals. A top is "the inverse" of an addressing function for a fractal (the attractor of an IFS) that is surjective but, in general, not bijective. With fractal tops we address each point in the attractor of an IFS in a unique way.
6.1 Fractal tops
We recall from chapter 4 how to get fractals as the attractor A of an IFS. Let a hyperbolic iterated function system (IFS) be denoted by

F := {X; f1, f2, . . . , fN}

As we have seen, this consists of a finite sequence of one-to-one contraction mappings

fn : X → X, n = 1, 2, . . . , N

acting on the compact metric space (X, d) with metric d, so that for some 0 ≤ l < 1 we have d(fn(x), fn(y)) ≤ l · d(x, y) for all x, y ∈ X.
Let A denote the attractor of the IFS; that is, A ⊂ X is the unique non-empty compact set such that

\[ A = \bigcup_{n=1}^{N} f_n(A) \]

given by Theorem 4.1.1.
Let the associated code space be denoted by Ω = Ω{1,2,...,N}. We have seen in section 4.2.1 that there exists a continuous transformation

φ : Ω{1,2,...,N} → A

from the code space Ω{1,2,...,N} onto the set attractor A of the hyperbolic IFS F := {X; f1, f2, . . . , fN}. This transformation is defined by

\[ \varphi(\sigma) = \lim_{n \to \infty} f_{\sigma_1 \sigma_2 \ldots \sigma_n}(x) \quad \text{for } \sigma = \sigma_1 \sigma_2 \sigma_3 \ldots \in \Omega_{\{1,2,\ldots,N\}} \]

for any x ∈ X, where f_{σ1σ2...σn}(x) = f_{σ1} ◦ f_{σ2} ◦ . . . ◦ f_{σn}(x).
Now notice that the set of addresses of a point x ∈ A, defined to be φ−1(x), is both bounded above and closed, hence it is compact. So it must possess a unique largest element (the top). We denote this element by τ(x).
Definition 6.1.1. Let F be a hyperbolic IFS with set attractor A and code
space function φ : Ω → A. Then the tops function of F is τ : A → Ω, defined
by
τ (x) = max{σ ∈ Ω : φ(σ) = x}
The set of points Gτ := {(x, τ (x)) : x ∈ A} is called the graph of the top of the
IFS or simply the fractal top of the IFS.
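As a small illustration, the following Maple sketch computes a truncated tops address for the Sierpinski IFS of Table 4.1. The rule of always inverting the map with the largest index whose image contains the point is our assumption of how to realize the largest address for this particular IFS.

# Tops sketch for the IFS of Table 4.1: at each step, peel off the map
# with the largest index whose image contains (x,y); this produces the
# largest address of the point.
tops := proc(x0, y0, n)
  local x, y, k, sigma;
  x := x0; y := y0; sigma := [];
  for k from 1 to n do
    if x >= 1 then sigma := [op(sigma), 3]; x := 2*(x - 1); y := 2*(y - 1);
    elif y >= 1 then sigma := [op(sigma), 2]; x := 2*x; y := 2*(y - 1);
    else sigma := [op(sigma), 1]; x := 2*x; y := 2*y;
    end if;
  end do;
  return sigma;
end proc:
tops(1, 1, 8);  # the doubly-addressed point: its top is 3111..., not 1333...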
6.2 Pictures of tops: colour-stealing
Here we introduce the application of tops to computer graphics. The basic idea of colour-stealing is that we start with two iterated function systems (IFSs) and an input image. Then we run the random iteration algorithm, applying the same random choice to each IFS simultaneously. One of the IFSs produces a sequence of points that lie on the input image. The other IFS produces a sequence of points that are coloured according to the colour values that the first sequence reads off from the input picture.
Here we are interested in picture functions of the form B : DB ⊂ R2 → C, where C is a colour space, for example C = [0, 255]³ ⊂ R3. For colour-stealing applications we may choose

DB = □ := {(x, y) ∈ R2 : 0 ≤ x, y ≤ 1}

Let two hyperbolic IFSs be

FD := {□; f1, f2, . . . , fN} and FC := {□; f̃1, f̃2, . . . , f̃N}

The index 'D' stands for 'drawing' and the index 'C' for 'colouring'.
Let a picture function

BC : □ → C

be given. Let A denote the attractor of the IFS FD and let Ã denote the attractor of the IFS FC.
Let

τD : A → Ω

denote the tops function for FD and

φC : Ω → Ã

denote the addressing function for FC. Then we define a new picture function BD : A → C by

BD = BC ◦ φC ◦ τD

This is the unique picture function defined by the IFSs FD, FC and the picture BC. We say that BD has been produced by tops plus colour-stealing.
An example of the colour-stealing method is shown in Figure 6.1. This fractal has been drawn using the IFS Construction Kit. The method used by the software is based on the idea explained above.
We choose a colourful input image and a colouring IFS (FC) with the same number of functions as the drawing IFS (FD) being used to generate the fractal. Each time a random function from the drawing IFS is chosen to plot the next point of that system, the corresponding function of the colouring IFS is used to plot the next point of the other system. The colouring IFS is drawn on top of the input image. The point computed for the drawing IFS is plotted with the same colour as the point determined by the colouring IFS. The drawing IFS "steals" the colour from the image underneath the colouring IFS.
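A minimal Maple sketch of this procedure follows. The names N, DrawMap, ColourMap and Picture are assumptions: DrawMap and ColourMap stand for lists of N maps acting on points [x, y], and Picture(x, y) for a function returning the colour of the input image at (x, y). For simplicity the maps are chosen uniformly at random; the probabilities pn of the IFS code could be used instead, as in Program 4.3.1.

# Colour-stealing sketch: iterate both IFSs with the SAME random index,
# and colour the drawing point with the colour found at the colouring
# point on the input image.
x := [0.5, 0.5]: xt := [0.5, 0.5]:  # points of the drawing/colouring IFS
roll := rand(1..N):                 # assumes N is the number of maps
for k from 1 to 20000 do
  n := roll();                      # same random choice for both systems
  x := DrawMap[n](x);               # next point of the drawing IFS
  xt := ColourMap[n](xt);           # next point of the colouring IFS
  colour := Picture(xt[1], xt[2]);  # steal the colour at xt
  # plot the point x with this colour
end do: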
The fractal image in Figure 6.1 is a fractal fern coloured using an image of purple flowers. In this case FD and FC are based on the same IFS code, the one in Table 4.4, whose attractor is a fractal fern.
Figure 6.1: Fractal top produced by colour-stealing. The colours were ’stolen’
from the picture on the right.
In Figure 6.2 we have constructed a new fractal, using the IFS code in Table 4.4 for FD and a Sierpinski triangle to colour the image. As we need the same number of transformations in each IFS, we have added a new transformation with probability 0 to the IFS of Table 4.3, so that the final attractor A remains the Sierpinski triangle, but now we are able to use the colour-stealing method. So, for the colouring IFS FC we have used the IFS code in Table 6.1. Thus, the final fractal is a fern coloured using a Sierpinski triangle that 'steals' colours from the initial picture.
n    a    b    c    d    e     f     p
1   1/2   0    0   1/2   0     0    1/3
2   1/2   0    0   1/2  1/2    0    1/3
3   1/2   0    0   1/2  1/4  √3/4   1/3
4   1/2   0    0   1/2   0     0     0

Table 6.1: Another IFS code for a Sierpinski triangle
Figure 6.2: Fractal top produced by colour-stealing. The colours were ’stolen’
from the picture on the right.
Observe that in both Figures 6.1 and 6.2 the colouring IFS is drawn on the initial picture.
Bibliography

[1] Barnsley, Michael. (1988), Fractals Everywhere, Boston MA, Academic Press.

[2] Barnsley, Michael. (2006), Superfractals, Cambridge University Press.

[3] Beardon, Alan F. (2005), Algebra and Geometry, Cambridge University Press.

[4] Cederberg, Judith N. (2005), A Course in Modern Geometries, Second Edition, Springer.

[5] Mandelbrot, Benoit and Hudson, Richard. (2004), The (Mis)behavior of Markets: A Fractal View of Risk, Ruin, and Reward, Basic Books.

[6] Rudin, Walter. (1964), Principles of Mathematical Analysis, Second Edition, McGraw-Hill, New York.

[7] IFS Construction Kit webpage. http://ecademy.agnesscott.edu/~lriddle/ifskit/
Copyright
The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years from the date of publication barring
exceptional circumstances. The online availability of the document implies a
permanent permission for anyone to read, to download, to print out single copies
for your own use and to use it unchanged for any non-commercial research and
educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the
copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual
property law the author has the right to be mentioned when his/her work is
accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its
procedures for publication and for assurance of document integrity, please refer
to its WWW home page: http://www.ep.liu.se/
© 2011, Meritxell Joanpere Salvadó