natural than most others.
We often speak of two-dimensional or 2D and three-dimensional or 3D. There are several equivalent ways of defining these ideas; let's consider a specific one: cube counting.
Consider a chunk of some shape that passes through a cube. Subdivide that cube evenly into $n$ divisions along each axis to make a grid of $n^3$ smaller cubes. How many of those smaller cubes does the shape pass through?
The exact answer to this question is overly dependent on the specifics of the case. (Some definitions of fractal dimension use spheres instead of cubes, which removes some of these challenges but adds others because sphere packing is a nontrivial topic in its own right.) For example, an axis-aligned straight line passes through $n$ cubes while a straight line between opposite cube corners may pass through as many as $3n-2$. In computing we know how to handle such constants: big-O. Any straight line passes through $O(n)$ cubes. Any flat plane passes through $O(n^2)$ cubes. Any solid passes through $O(n^3)$ cubes. We call the exponent in these expressions the dimensionality of the shape.
This formulation suggests we could have a fractional dimension like $1.26$: we just have to find some shape that passes through $O(n^{1.26})$ of the cubes. One way to construct such a shape is to find a shape that exhibits that ratio at one scale and recursively apply it to itself so that it exhibits that same ratio for every $n$.
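To make cube counting concrete, here is a minimal sketch of estimating the exponent numerically, reduced to 2D so that squares stand in for cubes; the point-sampling approach, the particular grid resolutions, and all names are illustrative choices rather than part of the definition.

```python
import math

def box_count(points, n):
    """Count how many cells of an n-by-n grid over the unit square
    contain at least one of the given (x, y) points."""
    cells = set()
    for x, y in points:
        # min() guards against points sitting exactly on the upper boundary
        cells.add((min(int(x * n), n - 1), min(int(y * n), n - 1)))
    return len(cells)

def estimate_dimension(points, ns=(8, 16, 32, 64, 128)):
    """Least-squares slope of log(count) versus log(n), i.e. the exponent."""
    logs = [(math.log(n), math.log(box_count(points, n))) for n in ns]
    mx = sum(x for x, _ in logs) / len(logs)
    my = sum(y for _, y in logs) / len(logs)
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

# A straight diagonal line should come out close to 1.
line = [(t / 10000, t / 10000) for t in range(10000)]
print(estimate_dimension(line))
```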
The Koch curve starts with the following shape
This shape is chosen so that the number of covering cubes does not scale linearly; in particular, shrinking the cubes by a factor of 3 requires 4 times as many to cover it. That's easier to see at this low resolution if we use circles instead of squares:
This pattern of reducing the radius by ⅓ and requiring 4× as many circles does not continue past 4 circles, but we can change the shape so that it does: we’ll replace each of the four straight segments with a ⅓-scale copy of the entire curve.
That gives us one more scale where the circle covering works, but the shape still has straight lines. However, we can continue the process of replacing line segments with scaled-down copies of the starting shape ad infinitum to get
A representation of the complete Koch curve, made by repeating the segment-replacement strategy until the remaining segments are shorter than the screen can display.
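The segment-replacement strategy is only a few lines of code. Below is a minimal sketch (the function and parameter names are my own) that replaces each segment with four ⅓-scale segments, bending the middle two outward by 60°, and recurses to a fixed depth:

```python
import math

def koch(p0, p1, depth):
    """Return the points of a Koch curve from p0 to p1, applying the
    segment-replacement rule `depth` times."""
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)            # 1/3 of the way along
    b = (x0 + 2 * dx, y0 + 2 * dy)    # 2/3 of the way along
    # apex of the bump: the middle third rotated outward by 60 degrees
    s, c = math.sin(math.pi / 3), math.cos(math.pi / 3)
    apex = (a[0] + dx * c - dy * s, a[1] + dx * s + dy * c)
    pieces = [koch(p, q, depth - 1)
              for p, q in [(p0, a), (a, apex), (apex, b), (b, p1)]]
    # concatenate the four sub-curves, dropping duplicated shared endpoints
    points = pieces[0]
    for piece in pieces[1:]:
        points += piece[1:]
    return points

pts = koch((0.0, 0.0), (1.0, 0.0), depth=5)
print(len(pts))   # 4**5 segments, so 4**5 + 1 = 1025 points
```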
Links in the table below add illustrations of covering circles or squares to the figure above. Counting up either the circles or the squares needed to cover this curve (if we pushed the circles a little more to reduce overlap, or moved the squares down a little, we could reduce these counts by a small, noisy constant, but the overall pattern would remain), we get:
| scale | circles needed | squares needed |
|---|---|---|
| 1 | 1 | 1 |
| 1/3 | 4 | 4 |
| 1/9 | 16 | 16 |
| 1/27 | 64 | 64 |
Note that shrinking the scale by a factor of 3 (for example, going from 1/9 to 1/27 scale) multiplies the count by 4 (for example, going from 16 to 64 shapes needed to cover the curve). Writing $s$ for the scale and $N$ for the count, that means $N = s^{-d}$ for some exponent $d$ with $3^d = 4$; solving for $d$ we get $d = \log_3 4 \approx 1.26$.
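Spelled out as a worked equation, writing $N(s)$ for the number of shapes needed at scale $s$:

$$\frac{N(s/3)}{N(s)} = \frac{(s/3)^{-d}}{s^{-d}} = 3^{d} = 4 \qquad\Longrightarrow\qquad d = \log_{3} 4 = \frac{\ln 4}{\ln 3} \approx 1.26$$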
Nothing in nature exhibits a true mathematical fractal: zoom in enough and you find 3D cells or 1D elementary particles. But many things in nature are visually more fractal than they are Platonic. Many mountains look more like fractals than they look like spheres, many trees look more like fractals than they look like cylinders, dirt accumulates in patterns that look more like fractals than like gradients, and so on. Fractals make a useful starting point for making a stylized model of nature.
Fractals are also useful in that they break up our ability to notice patterns. The eye is much less likely to notice that an image is low-fidelity if the image is made up of fractals than if it is made up of simple shapes. Fractals are a useful tool for making a simple model look more detailed than it is.
There are many fractals used in graphics, but four are common enough to be worth discussing in more detail.
All of these are forms of fractal noise: that is, they use pseudorandom parameters to create a fractal that looks random instead of looking mechanical or mathematical.
White noise is commonly defined as a time-varying value, where the value at any given point in time is purely random (within some bounds) with no correlation to the value it had a moment earlier. If we graph this with time on one axis and value on the other, at low sampling rates we get something that looks like it might have a fractal dimension:
But as we add more samples, the bumpiness increases
until eventually its true dimension appears: 2, the same dimension as the solid rectangle that it fills.
Because 2 is an integer, white noise is not a fractal. However, it is a kind of noise that is easy for computers to generate, and is sometimes used as a basis for fractals.
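Generating white noise is as simple as drawing independent random samples; here is a minimal sketch, where the bounds and sample count are arbitrary choices:

```python
import random

def white_noise(samples, lo=-1.0, hi=1.0, seed=None):
    """Each sample is independent and uniformly random in [lo, hi];
    no sample is correlated with the one before it."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(samples)]

values = white_noise(1000, seed=42)
```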
Brownian motion refers to the trajectory followed by a particle that randomly changes direction. The most common formulation of fractal Brownian motion (abbreviated fBm) is a 1.x-dimensional fractal created by graphing the position of a 1D Brownian motion on one axis and time on the other.
Many other fBm formulations also exist; for example, fBm noise is the integral of white noise.
The integral formulation is useful because it helps us characterize the visual effect of fractal dimension. The higher the magnitude of the underlying white noise, the steeper the slopes
of the resulting integral will be, meaning the more squares will be needed in any given column of a grid covering the curve and thus the higher the fractal dimension will be.
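A discrete sketch of the integral formulation is just a running sum of white-noise steps; the `magnitude` parameter below is my own name for the scale of the underlying white noise:

```python
import random

def brownian(samples, magnitude=1.0, seed=None):
    """Discrete sketch of fBm as the running sum (integral) of white noise;
    a larger magnitude makes the slopes steeper."""
    rng = random.Random(seed)
    position, path = 0.0, []
    for _ in range(samples):
        position += rng.uniform(-magnitude, magnitude)  # one white-noise step
        path.append(position)
    return path

path = brownian(1000, magnitude=0.5, seed=1)
```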
In graphics, the term fBm noise is sometimes used to describe any purely stochastic fractal, even if there is no way to characterize it as the motion of a particle. (This technically-incorrect but common-in-practice use of words is common in many fields and can make learning a new field more challenging than you might prefer.)
Subdivision methods provide a computationally simple way of generating a fractal. Given a low-resolution mesh, replace each primitive with several smaller primitives and then randomly offset the vertices. Provided the expected magnitude of the random offsets is proportional to the size of the primitive, the result will be fractal.
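A minimal 1D sketch of the idea, using a polyline rather than a mesh (this special case is often called midpoint displacement; the names and the `roughness` factor are illustrative): each segment is split at its midpoint and the new vertex is offset by a random amount proportional to the segment's length.

```python
import random

def midpoint_displace(points, levels, roughness=0.5, seed=None):
    """Repeatedly split each segment of a 2D polyline at its midpoint and
    offset the midpoint vertically by an amount proportional to the
    segment's horizontal length."""
    rng = random.Random(seed)
    for _ in range(levels):
        refined = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            length = x1 - x0
            mid_y = (y0 + y1) / 2 + rng.uniform(-1, 1) * roughness * length
            refined.append(((x0 + x1) / 2, mid_y))
            refined.append((x1, y1))
        points = refined
    return points

ridge = midpoint_displace([(0.0, 0.0), (1.0, 0.0)], levels=8, seed=7)
```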
Naïve fractal subdivision often exhibits visible patterns where the seams between the low-res primitives remain visible in the resulting fractal. To avoid this, the subdivision itself should result in a smooth surface with no visible seams even if there are no random offsets.
Faulting methods are not particularly common in graphics, but they are easy to implement and thus are part of your programming assignments. As such, we have a separate page about them.
Ken Perlin made a two-part innovation in the efficient creation of fractals that has become a mainstay of computer graphics.
First, Perlin came up with a readily-computed representation of random smooth bumps. Pick points on a fixed uniform grid and at each pick a random surface normal, then fit a surface to those normals. Conceptually such a surface can be found by making one hill-and-pit pair and placing a rotated copy of it at each point. Practically it can be found using polynomially-interpolated dot products.
The original Perlin noise used a square grid to create the bumps. This turns out to result in visible axis-aligned patterns, so various alternatives such as simplex noise and OpenSimplex noise use different grid patterns instead.
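Here is a minimal sketch of square-grid gradient noise in 2D in the spirit of Perlin's construction; the per-corner hashing below is an arbitrary stand-in for Perlin's permutation table, and the fade curve is the commonly used $6t^5 - 15t^4 + 10t^3$:

```python
import math
import random

def gradient(ix, iy, seed=0):
    """Pseudorandom unit gradient assigned to the integer grid point (ix, iy)."""
    rng = random.Random(hash((ix, iy, seed)))  # reproducible per-corner RNG
    angle = rng.uniform(0.0, 2.0 * math.pi)
    return math.cos(angle), math.sin(angle)

def fade(t):
    """Smoothing curve 6t^5 - 15t^4 + 10t^3 (zero slope at 0 and 1)."""
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(a, b, t):
    return a + (b - a) * t

def gradient_noise(x, y, seed=0):
    """Smooth bumpy function of (x, y) built from dot products of random
    corner gradients with the offset from each corner to (x, y)."""
    x0, y0 = math.floor(x), math.floor(y)
    dots = {}
    for cx in (x0, x0 + 1):
        for cy in (y0, y0 + 1):
            gx, gy = gradient(cx, cy, seed)
            dots[(cx, cy)] = gx * (x - cx) + gy * (y - cy)
    u, v = fade(x - x0), fade(y - y0)
    top = lerp(dots[(x0, y0)],     dots[(x0 + 1, y0)],     u)
    bot = lerp(dots[(x0, y0 + 1)], dots[(x0 + 1, y0 + 1)], u)
    return lerp(top, bot, v)

print(gradient_noise(3.7, 1.2))
```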
Second, Perlin recognized that a fractal can be created from almost any bumpy surface, including his, by adding scaled-down copies of the surface to itself. This approach, called octaves, is based on the observation that if $f(\vec{x})$ is a bumpy function then $\sum_{i=0}^{\infty} \frac{1}{2^i} f(2^i \vec{x})$ is a fractal, and that we can stop the sum early once the $\frac{1}{2^i}$ term is small enough that subsequent iterations have no visible impact.
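A minimal sketch of the octave sum, where `bump` can be any bumpy function (for example, the gradient noise sketched above) and the cutoff of 1/256 is an arbitrary stand-in for 'no visible impact':

```python
import math

def octaves(bump, x, y, amplitude_cutoff=1 / 256):
    """Sum (1/2**i) * bump(2**i * x, 2**i * y), stopping once the amplitude
    is too small to have a visible effect on the result."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    while amplitude >= amplitude_cutoff:
        total += amplitude * bump(frequency * x, frequency * y)
        amplitude /= 2
        frequency *= 2
    return total

# Any bumpy function works; gradient noise is the usual choice, but even a
# product of sines gives a (mechanical-looking) fractal.
print(octaves(lambda x, y: math.sin(x) * math.sin(y), 3.7, 1.2))
```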
Because of the significance of octave-created fractals, octave fractals that are not based on Perlin's random-gradient bumps are sometimes informally called Perlin noise.