Stephen and I wrote this paper for last year’s SIGBOVIK, a satirical computer science conference that takes place around April 1 each year. You can find the PDF version in the 2024 proceedings, or in the repository.
While brainstorming 2025 submissions, I decided to give this a little refresher. The inline images are now interactive, thanks to the magic of WebAssembly.
I’ve made some minor textual edits, and re-generated all figures from scratch. Minor figure differences from the paper come from missing parameters (we didn’t write them down); the major changes are footnoted for figures 10 and 11.
As erstwhile rock stars, the authors have intentionally applied distortion to certain 1-D signals for aesthetic effect. Sometimes this distortion comes from saturating the signal (overdrive); other times, it comes from a more subtle transformation, such as vacuum tube or transistor amplification. Some of the authors’ peers use “fractal audio processing” for distortive effects.
In this paper, we explore another dimension, and investigate how different ways of representing small numbers distort the world differently. To get a vibe check on numerical imprecision, we rendered two fractals using various numeric formats.
- Computing methodologies – Symbolic and algebraic manipulation – Representation of mathematical objects – Representation of exact numbers
- Computing methodologies – Computer graphics – Rendering – Rasterization
- Human-centered computing – Interaction design – Empirical studies in interaction design
- Human-centered computing – Visualization – Empirical studies in visualization
In this paper, we investigate two fractals: the Mandelbrot set and a Newton fractal. Both map a pixel coordinate \((x, y)\) into the complex plane as \(c = x + yi\).
We evaluate the Mandelbrot set in the usual way:
$$\begin{aligned} z_0 &= 0 \\ z_{n+1} &= {z_n}^2 + c\end{aligned}$$up to an iteration limit (\(n\)). Pixels that escape (\(|z| \geq 2\)) are given a hue according to Munafo 2023. Pixels that do not escape are colored black.
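As a concrete sketch, the escape-time loop might look like this in plain Rust; the function name and iteration limit are ours (not the paper's code), and we use `f64` here where the paper swaps in other formats:

```rust
/// Hypothetical sketch of the escape-time iteration z_{n+1} = z_n^2 + c.
/// Returns Some(n) for the iteration at which |z| >= 2, or None if the
/// point never escapes within `limit` iterations (colored black).
fn mandelbrot_escape(cx: f64, cy: f64, limit: u32) -> Option<u32> {
    let (mut zx, mut zy) = (0.0_f64, 0.0_f64);
    for n in 0..limit {
        // |z|^2 >= 4 is the same test as |z| >= 2, without a sqrt.
        if zx * zx + zy * zy >= 4.0 {
            return Some(n);
        }
        // z = z^2 + c, expanded into real and imaginary parts.
        let next = (zx * zx - zy * zy + cx, 2.0 * zx * zy + cy);
        zx = next.0;
        zy = next.1;
    }
    None
}

fn main() {
    assert!(mandelbrot_escape(2.0, 0.0, 100).is_some()); // far point escapes
    assert!(mandelbrot_escape(0.0, 0.0, 100).is_none()); // origin never escapes
}
```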
We compute the Newton fractal on the polynomial \(p(z) = z^3-1\):
$$\begin{aligned} z_0 &= c \\ z_{n+1} &= z_n - \frac{p(z_n)}{p'(z_n)}\end{aligned}$$and plot whether \(p(z)\) reaches zero within a specified iteration limit. Points where \(p(z)\) reaches zero (within some threshold) are grouped based on which zero of the polynomial \(z\) has converged to. Pixels are colored by assigning each group a hue, and given a color value according to how many iterations it took to converge to zero.
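A minimal sketch of this iteration in plain Rust (all names are ours; `f64` stands in for whichever format is under test):

```rust
/// Hypothetical sketch of the Newton iteration for p(z) = z^3 - 1.
/// Returns Some((root_index, iterations)) when p(z) reaches "zeroish",
/// identifying which cube root of unity z converged to; None otherwise.
fn newton_basin(mut zx: f64, mut zy: f64, limit: u32) -> Option<(usize, u32)> {
    const EPS: f64 = 1e-9;
    // The three zeros of z^3 - 1: 1, and (-1 ± i*sqrt(3))/2.
    let roots = [
        (1.0, 0.0),
        (-0.5, 3f64.sqrt() / 2.0),
        (-0.5, -(3f64.sqrt()) / 2.0),
    ];
    for n in 0..limit {
        let (z2x, z2y) = (zx * zx - zy * zy, 2.0 * zx * zy); // z^2
        let (px, py) = (z2x * zx - z2y * zy - 1.0, z2x * zy + z2y * zx); // z^3 - 1
        if px * px + py * py < EPS * EPS {
            // Converged: group by the nearest root.
            for (i, (rx, ry)) in roots.iter().enumerate() {
                if (zx - rx).powi(2) + (zy - ry).powi(2) < 1e-6 {
                    return Some((i, n));
                }
            }
        }
        // p'(z) = 3 z^2; Newton step z -= p(z)/p'(z), via complex division.
        let (dx, dy) = (3.0 * z2x, 3.0 * z2y);
        let denom = dx * dx + dy * dy;
        if denom == 0.0 {
            return None; // derivative vanished; no step possible
        }
        zx -= (px * dx + py * dy) / denom;
        zy -= (py * dx - px * dy) / denom;
    }
    None
}

fn main() {
    // A real starting point right of the origin converges to the real root z = 1.
    assert_eq!(newton_basin(2.0, 0.0, 100).map(|(root, _)| root), Some(0));
}
```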
BigRational
Since we’re rendering for a computer screen,3 we can (and do!) use exact inputs.
The raster grid (pixel grid) has integer locations, with \(xres \times yres\) pixels. We map each (integer) pixel location \((px,py)\) to a (rational) vector within the render window, centered at \((0,0)\):
$$\begin{aligned} \hat{x} &= \frac{px}{xres} - \frac{1}{2} \\ \hat{y} &= \frac{1}{2} - \frac{py}{yres}\end{aligned}$$We render a portion of the complex plane centered at \((\frac{x_c}{scale}, \frac{y_c}{scale})\), with equal width and height \(\frac{size}{scale}\). We constrain these to all be rational numbers, which allows us to compute exact (rational) pixel coordinates:
$$\begin{aligned} x &= \hat{x} \times \frac{size}{scale} + \frac{x_c}{scale} \\ y &= \hat{y} \times \frac{size}{scale} + \frac{y_c}{scale}\end{aligned}$$We perform these computations using an arbitrary-precision rational number type, `num::BigRational`.4
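As an illustration of that mapping, here is a toy exact-rational version in plain Rust; the i128-backed `Rat` type is our stand-in for `num::BigRational`, and all names are hypothetical:

```rust
// A toy exact-rational pixel mapping, standing in for the paper's
// num::BigRational version (the `Rat` type and all names here are ours).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Rat {
    num: i128,
    den: i128, // invariant: den > 0, gcd(num, den) == 1
}

impl Rat {
    fn new(num: i128, den: i128) -> Self {
        fn gcd(a: i128, b: i128) -> i128 {
            if b == 0 { a.abs() } else { gcd(b, a % b) }
        }
        // Normalize sign and reduce, so derived equality is exact equality.
        let s = if den < 0 { -1 } else { 1 };
        let g = gcd(num, den);
        Rat { num: s * num / g, den: s * den / g }
    }
    fn add(self, o: Rat) -> Rat {
        Rat::new(self.num * o.den + o.num * self.den, self.den * o.den)
    }
    fn sub(self, o: Rat) -> Rat {
        Rat::new(self.num * o.den - o.num * self.den, self.den * o.den)
    }
    fn mul(self, o: Rat) -> Rat {
        Rat::new(self.num * o.num, self.den * o.den)
    }
}

/// x = (px/xres - 1/2) * (size/scale) + xc/scale, computed exactly.
fn pixel_x(px: i128, xres: i128, size: i128, scale: i128, xc: i128) -> Rat {
    let xhat = Rat::new(px, xres).sub(Rat::new(1, 2));
    xhat.mul(Rat::new(size, scale)).add(Rat::new(xc, scale))
}

fn main() {
    // A 4-pixel-wide window of size 4 centered on the origin:
    assert_eq!(pixel_x(0, 4, 4, 1, 0), Rat::new(-2, 1)); // left edge
    assert_eq!(pixel_x(2, 4, 4, 1, 0), Rat::new(0, 1)); // center
}
```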
In principle, we could also carry out the fractal computations exactly using `BigRational`. The fractal formulae above require complex arithmetic, which is simple, plus some comparison operators ("greater than four" for Mandelbrot, "zeroish" for Newton).

When we tried to render the fractals using `BigRational`, though, it reached a hard timeout (against the author’s patience). Moreover, rationality has little place in an aesthetic evaluation such as this, so we stick with finite-precision types.
Instead, we convert `BigRational` values to various numeric formats, approximating as closely as the format allows. Some of the conversions might even be correct (e.g. those we didn’t write).
The formats we investigate are:
- `f32` and `f64`: IEEE 754 single- and double-precision floating-point formats (Cowlishaw et al 2008)
- `MaskedFloat<N,M>`: an IEEE 754 float with some exponent and/or mantissa bits removed
- `IxFy`: fixed-point numeric formats (Wong 2017, implementation)
- `P32`, `P16`, `P8`: 32/16/8-bit posits,™ an alternate float-like format (Posit Working Group 2022)
MaskedFloat isn’t yet a popular floating-point type, but one we made up for this paper to try to create more interesting errors than we were seeing with just IEEE `f64`. It involves taking an `f64`, and masking off some of the bits, for our own amusement.
A normal `f64` consists of one bit of sign, 11 bits of exponent, and then 52 fractional bits, representing the value:
$$(-1)^{sign} \times 1.\mathrm{fraction} \times 2^{exponent - 1023}$$
If we want to experiment with what the world could be like if these were different (smaller) sizes, we can force some of those bits to one or zero. For the fractional bits, this is easy, as they are an unsigned value – just setting the least-significant bits to zero discards them.
For the exponent bits, this is more complicated. If we want to constrain the exponent to be effectively 4 bits (i.e., to range from -7 to 8), we have to constrain the biased exponent value to be between 1016 and 1031. Thankfully, we can do this with a bit of bit-twiddling. To make that easy to see, first, let’s see those values in binary:
$$\begin{aligned} 1016 &= 0b01111111000 \\ 1031 &= 0b10000000111\end{aligned}$$If the high bit is set and any of bits \[9:3\] are also set (i.e., the exponent is greater than 1031), we clamp down to 1031; if the high bit is clear and any of bits \[9:3\] are also not set (i.e., less than 1016), we clamp up to 1016.
This naive masking does have the side effect of re-introducing some of the problems near zero that IEEE had carefully removed when they added subnormals, as well as adding some more, unique problems. Since our goal in this paper is "shenanigans", that’s great news for us.
The exponent has a special value, all-zeros, that is used along with an all-zero fraction to represent, unsurprisingly, zero. A naive exponent mask would turn that value instead into the smallest possible exponent (e.g., in the previous example, \(2^{-1}\)), which isn’t quite accurate; though that inaccuracy can be interesting, zero is a useful number for fractal computation, so we preserve this special-case behavior.
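Putting the exponent clamp, the fraction mask, and the zero special case together, a MaskedFloat-style conversion might look like the following. This is our reconstruction from the description above, not the paper's actual code; the name `masked_float` and the fixed 4-bit exponent range are ours:

```rust
/// Our sketch of a MaskedFloat conversion: clamp the biased exponent of
/// an f64 into [1016, 1031] (4 effective exponent bits) and zero out the
/// low fraction bits, keeping only `mantissa_bits` (must be <= 52).
fn masked_float(x: f64, mantissa_bits: u32) -> f64 {
    debug_assert!(mantissa_bits <= 52);
    let bits = x.to_bits();
    let sign = bits & (1u64 << 63);
    let exp = (bits >> 52) & 0x7ff; // 11-bit biased exponent
    let frac = bits & ((1u64 << 52) - 1); // 52 fraction bits
    if exp == 0 && frac == 0 {
        return x; // preserve the all-zeros special case: zero stays zero
    }
    // Clamp the biased exponent into the 4-bit effective range.
    let exp = exp.clamp(1016, 1031);
    // Zero the least-significant (52 - mantissa_bits) fraction bits.
    let frac = frac & !((1u64 << (52 - mantissa_bits)) - 1);
    f64::from_bits(sign | (exp << 52) | frac)
}

fn main() {
    assert_eq!(masked_float(0.0, 10), 0.0); // zero survives the mask
    assert_eq!(masked_float(1.0, 10), 1.0); // exponent 0 is in range
    assert_eq!(masked_float(1.75, 1), 1.5); // 1.11b truncates to 1.1b
    assert_eq!(masked_float(1e10, 0), 256.0); // exponent saturates at 2^8
}
```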
We use the `fixed` Rust library to perform fixed-point arithmetic.5 The library offers a family of fixed-point formats specified as `IxFy`, with \(x\) signed integer bits and \(y\) fractional bits.
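To make the precision limits of these formats concrete, here is a toy i16 model of `I11F5`; this is our illustration, not the `fixed` crate's implementation:

```rust
// A toy model of I11F5 on an i16 (all names ours): 11 signed integer
// bits plus 5 fractional bits, so representable values step by 1/32.
fn to_i11f5(x: f64) -> i16 {
    (x * 32.0).round() as i16 // scale by 2^5
}

fn from_i11f5(v: i16) -> f64 {
    v as f64 / 32.0
}

/// Fixed-point multiply: the raw product has 10 fractional bits, so
/// shift back down by 5 to return to the I11F5 scale.
fn mul_i11f5(a: i16, b: i16) -> i16 {
    ((a as i32 * b as i32) >> 5) as i16
}

fn main() {
    // Count representable values in [-2, 2): 4 units of 32 steps = 128.
    let count = (i16::MIN..=i16::MAX)
        .map(from_i11f5)
        .filter(|x| (-2.0..2.0).contains(x))
        .count();
    assert_eq!(count, 128);
    // 1.5 * 1.5 = 2.25 happens to be exactly representable (72/32).
    assert_eq!(from_i11f5(mul_i11f5(to_i11f5(1.5), to_i11f5(1.5))), 2.25);
}
```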
In this paper, we show a subset of the fixed-point formats we thought were coolest.
For posits,™ we use the `softposit` Rust library, and we implement conversion from `BigRational` via the associated quire types. The quire formats are wider fixed-precision formats associated with posits.™ For instance, the quire associated with the 32-bit posit™ is 512 bits wide, with a precision of \(2^{-240}\) – roughly equivalent to an `I241F240` fixed-point.
When rendering fractals, distortion, like inspiration, can come from anywhere.
When implementing the MaskedFloat type, the authors originally did not properly account for the special case around zero, and instead represented it as the smallest possible positive number. This resulted in strange, but pretty neat looking, asymmetrical error, depicted in Figure 2.
The Mandelbrot and Newton fractals both exhibit self-similarity, repeating themselves at various scales. However, not all of the formats sampled can operate at multiple scales: all of them have some scaling limits, but some are more limited than others.
This is most obvious when dealing with fixed-point formats. `I11F5`, for instance, can only represent 128 distinct values in the range \(|z| < 2\) – where the Mandelbrot set lives. That’s fewer than the pixels used to render these images – hence, pixelation, as shown in Figure 3.
The nature and pattern of the pixelation depends on the format used, as shown in Figure 4. In a region of size \(\frac{2}{3}\) centered on \(\frac{-4}{3} + 0i\), the `P8` and `I11F5` formats show different aesthetic qualities. `P8` (depicted) and floating-point stretch into rectangles when far from \(x=y\), while fixed-point gives the appearance of overlapping tiles.
It’s possible to preserve dynamic range (see the below section) but lose precision, as in `MaskedFloat<6,3>`: it has 6 bits of exponent (scale), but only 4 significant bits at any given scale.
Figure 5 shows this in the Newton fractal. The area
close to the origin maintains high resolution, but it becomes steadily
more pixelated further out. In some sense, this error mirrors the human
eye’s capabilities: a foveal region in the center of vision/complex
plane.
This is the key characteristic of MaskedFloat when configured for high precision (many mantissa bits) but a small dynamic range. Where the exponent saturates, we introduce distortion, which appears as arcs of discontinuity in the render. See Figure 6 for a comparison of the Mandelbrot fractal rendered with the `f64` and MaskedFloat formats (configured with 4 exponent bits and 51 mantissa bits).
For the fractals of interest, coloring a point involves iterating on a value. Since, in many of the numerical formats, larger numbers mean larger possible errors, the range of values this calculation reaches matters.

This affects how much distortion we see in the Mandelbrot fractal compared to Newton: in the Mandelbrot set, the iterated values tend to stay near their starting point, or grow past our escape threshold; but in the Newton fractal, intermediate values can become very large when the slope is near zero, and the final values are very small, as \(p(z)\) approaches zero.
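A quick numeric check of that claim, restricted to the real axis (names ours): one Newton step taken near \(z = 0\), where \(p'(z) = 3z^2\) vanishes, flings the iterate far away.

```rust
/// One real-axis Newton step for p(z) = z^3 - 1: z - p(z)/p'(z).
fn newton_step(z: f64) -> f64 {
    z - (z * z * z - 1.0) / (3.0 * z * z)
}

fn main() {
    // Near the zero of the derivative, one step exceeds 100,000 --
    // exactly the kind of intermediate value fixed-point formats can't hold.
    assert!(newton_step(1e-3) > 1e5);
    // Near a root, steps stay small.
    assert!((newton_step(1.1) - 1.0).abs() < 0.1);
}
```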
You may also be interested in this development thread, where the authors have noted regions of interest.
Figures 3 and 4 don’t do a lot to recommend fixed-point to us. Pixelation can be nice if you’re a child of the 80s or something, but it’s not really weird enough that we should use a whole new numeric format. Just use ImageMagick, like everyone else.
The Newton fractal is a different story, as already shown in Figure 5. Since the Newton fractal may produce very small or very large intermediate values, fixed-point numbers are uniquely poorly suited for accurate computation. In Figure 7, we see a nice fractal visualization of convergence to zeros become a fractal hellscape of non-convergence and incorrect answers, just by switching from `f64` to `I22F10`. Despite no longer containing useful information, the distorted Newton fractal looks much cooler.
Posits™ maintain greater precision than `f32` with the same number of bits. Figure 8 shows `f32`, `f64`, and `P32` over the same area.
Clearly posits™ are the best option for everything. Right? Come. Follow the word of Gustafson. Enter the circle between zero and infinity. join us Join Us JOIN US JOIN US
Sorry, got a little chanty there–didn’t mean to scare you. Can we interest you in an informative pamphlet?
Let’s face it, there’s a reason that Mandelbrot is popular: there’s lots of different shapes and colors. But despite being less popular, Newton fractals have some interesting artifacts too! Seriously! Promise!
Like Figure 9: the floating-point and `P32` formats all run up against the iteration limit at the center area. However, evaluation at higher iterations "closes" the hole (not shown). `P16`, though, runs up against its limits, rippling out noise and forming a hole in the center, which persists at higher iterations. MaskedFloat goes even further and produces a distorted mirror within the empty space.
It’s a little creepy, honestly. Have you seen Sphere? It’s like that. Right? Right.
As we zoom farther out in the Newton fractal, we reach the point where the formats can no longer store the values of the starting position or its intermediate values (or can only store them with reduced accuracy). The authors refer to this as the outer event horizon. Figure 10 shows the shape of this event horizon for various types. Note that these are not at the same zoom level – `f32` can represent much, much larger numbers than the others.1
If instead of zooming out towards infinity, we zoom in towards zero, we reach the inner event horizon – where the formats cannot represent how small the numbers have become. Figure 11 shows the shapes and edges of this inner event horizon. Interestingly, the inner event horizon appears at approximately the same scale for `P16`, `MaskedFloat<4,50>`, and `I20F12`; unlike Figure 10, Figure 11 depicts a consistent scale.2
Slightly above the inner event horizon, there’s also an interesting distortion region, where the intermediate values get twisted and warped at the edge of their range. See Figure 12 for some examples: from the wobbly `P16`, to the funhouse mirror of MaskedFloat, and lastly a demonic sunrise over `I20F12`.
One additional pattern of distortion that MaskedFloat formats demonstrate is the "hyperbolic starburst",6 so named for its appearance during an earlier (buggier) implementation of MaskedFloat. This pattern seems to appear in Mandelbrot near denser regions of the set, e.g. repetitions of the Mandelbrot "beetle" shape.
For instance, Figure 13 shows two MaskedFloat configurations in the same range: centered on a beetle, in the lightning off of the north bulb. Note that some portions of `MaskedFloat<3,50>` mirror the fine structure of the fractal (lightning bolts), while others appear to be curvilinear.
Figure 14 shows something even odder: an apparent structural difference between `f64` and a MaskedFloat format. Whether this points to an offset error in the format, or another issue, we leave for future investigators.
We unequivocally recommend the MaskedFloat family of numeric formats for making weird-looking fractal art, and whatever format(s) your hardware supports for everything else.
The authors only explored the Newton fractal on the Wikipedia example polynomial \(p(z) = z^3-1\). It’s possible other Newton fractals would lead to more cool distortion. As the core operation in the Newton fractal is very similar to the core operation in machine learning’s gradient descent calculation, there’s probably an ML-adjacent paper one could shovel out about this, if you want big-corp funding.
Just go to GitHub: https://github.com/cceckman/fractal-farlands
Thanks to M+T for leaving Stephen enough sleep to work on this. Thanks to Q for supporting Charles while working on this.
Well, we say “work”…