This notebook implements the Cyclic Symmetric Multi-Scale Turing Patterns of Jonathan McCabe in WebGPU. The simulation uses FFT-based convolution with analytical kernels (Gaussian and circular) to compute activator-inhibitor dynamics across multiple scales. It’s a translation of previous implementations, the best output of which I captured in galleries 1 and 2.
The algorithm selects the scale with minimum variation at each point, creating the characteristic multiscale patterns. Colors are blended based on which scale dominates locally.
Controls
Each row in the scale parameters table defines a pattern scale:
- Activator radius (A): The radius of the activator kernel in pixels. Smaller radii create finer details; larger radii create broader structures.
- Inhibitor/Activator ratio (I/A): The inhibitor radius as a multiple of the activator radius. Higher ratios (e.g., 2.0) create more separation between features. Must be > 1.
- Kernel (K): Shape of the convolution kernel. Gaussian has smooth falloff; Circle has a sharp cutoff. Gaussian kernels tend to produce smoother features.
- Amount (Δ): Step size multiplier for this scale. Controls how strongly this scale influences the pattern evolution. Small values (0.01-0.1) give subtle influence; larger values dominate.
- Weight (W): Relative importance when computing pattern variation. Higher weights make a scale more likely to “win” and drive the local pattern.
- Symmetry (S): Rotational symmetry order of the kernel. 1 = no symmetry (standard Gaussian/circle), 2-8 = creates n-fold symmetric patterns like stripes (2), triangles (3), squares (4), or hexagons (6).
Use the mouse to zoom and pan the visualization. Click the expand button to view fullscreen.
The Gray-Scott Model
Reaction-diffusion can mean a lot of things. The first model I learned about–or really learned of, to be more precise–is the Gray-Scott model. The Gray-Scott model describes a chemical reaction between substances $u$ and $v$ which diffuse in space at different rates and react, producing an interesting landscape of complicated behaviors:

$$\frac{\partial u}{\partial t} = D_u \nabla^2 u - uv^2 + F(1-u)$$
$$\frac{\partial v}{\partial t} = D_v \nabla^2 v + uv^2 - (F+k)v$$

In these equations, $D_u$ and $D_v$ are diffusion coefficients, $F$ is the feed rate, and $k$ is the kill rate.
I’ve been a long-term fan, in particular because the Gray-Scott model is unusually forgiving to simulate. Second order finite differences for diffusion and explicit Euler time-stepping are perfectly adequate.
John Pearson’s 1993 paper “Complex Patterns in a Simple System” maps out the parameter space and documents a remarkable variety of behaviors including self-replicating spots, traveling waves, and labyrinthine patterns. Karl Sims has a nice tutorial which discusses the dynamics. Below, in that style, I’ve implemented a simple two-dimensional version of the above PDE.
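In that style, a minimal NumPy sketch of the scheme looks like the following. The function name and seeding are my own, and the parameter values are merely illustrative picks from the patterning region of Pearson's diagram:

```python
import numpy as np

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit-Euler step of the Gray-Scott model on a periodic grid."""
    def lap(a):
        # Second-order five-point Laplacian with wrap-around boundaries.
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
    uvv = u * v * v
    u_next = u + dt * (Du * lap(u) - uvv + F * (1 - u))
    v_next = v + dt * (Dv * lap(v) + uvv - (F + k) * v)
    return u_next, v_next

# Trivial state (u, v) = (1, 0) with a small perturbed square in the middle.
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
u[28:36, 28:36] = 0.5
v[28:36, 28:36] = 0.5
for _ in range(100):
    u, v = gray_scott_step(u, v)
```

With `dt = 1` and unit grid spacing, the diffusion stability limit `dt <= 1 / (4 * Du)` is comfortably satisfied, which is what makes the forgiving explicit scheme workable.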
Derivation of the Hopf bifurcation curve
The Gray-Scott model consists of two coupled reaction-diffusion equations:

$$\frac{\partial u}{\partial t} = D_u \nabla^2 u - uv^2 + F(1-u), \qquad \frac{\partial v}{\partial t} = D_v \nabla^2 v + uv^2 - (F+k)v$$

where $u$ and $v$ are the concentrations of two chemical species, $F$ is the feed rate, and $k$ is the kill rate.
Steady states. Setting the time derivatives to zero (ignoring diffusion for spatially uniform states):

$$-uv^2 + F(1-u) = 0, \qquad uv^2 - (F+k)v = 0$$

From the second equation (assuming $v \neq 0$), $uv = F + k$. Substituting into the first:

$$F(1-u) = (F+k)v$$
Jacobian matrix. To determine stability, we examine how small deviations $\delta u$ and $\delta v$ from the steady state evolve. The Jacobian matrix, formed from partial derivatives of the reaction terms evaluated at the steady state, governs this evolution:

$$J = \begin{pmatrix} -v^2 - F & -2uv \\ v^2 & 2uv - (F+k) \end{pmatrix}$$

Using $uv = F + k$ at steady state, the element $J_{22} = 2uv - (F+k) = F + k$.
Hopf bifurcation condition. The eigenvalues of $J$ determine stability. For a 2×2 matrix, $\lambda^2 - \operatorname{tr}(J)\,\lambda + \det(J) = 0$. When $\operatorname{tr}(J) < 0$ and $\det(J) > 0$, both eigenvalues have negative real parts. These eigenvalues dictate the growth and decay of perturbations, so negative real parts mean perturbations decay and the steady state is stable. When $\operatorname{tr}(J) > 0$, perturbations grow exponentially.
The boundary $\operatorname{tr}(J) = 0$ (with $\det(J) > 0$) marks a Hopf bifurcation: the eigenvalues become purely imaginary, and the system transitions from stable equilibrium to sustained oscillations. Setting $\operatorname{tr}(J) = 0$:

$$\operatorname{tr}(J) = (-v^2 - F) + (F + k) = k - v^2 = 0$$

So at the Hopf bifurcation, $v = \sqrt{k}$.
Solving for the curve. Combining $v = \sqrt{k}$ with the steady-state condition $F(1-u) = (F+k)v$:

$$F(1-u) = (F+k)\sqrt{k}$$

From $uv = F + k$, we get $u = (F+k)/\sqrt{k}$. Substituting:

$$F\left(1 - \frac{F+k}{\sqrt{k}}\right) = (F+k)\sqrt{k}$$

Therefore $(F+k)^2 = F\sqrt{k}$, giving the quadratic:

$$F^2 + \left(2k - \sqrt{k}\right)F + k^2 = 0$$

Solving: $F = \tfrac{1}{2}\left(\sqrt{k} - 2k \pm \sqrt{k - 4k^{3/2}}\right)$ (taking the smaller root for stability).

Finally, $F(k) = \tfrac{1}{2}\left(\sqrt{k} - 2k - \sqrt{k - 4k^{3/2}}\right)$ gives the Hopf curve parametrized by $k$.
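The algebra is easy to double-check numerically. The sketch below (helper name mine) solves the quadratic and verifies the defining condition $(F+k)^2 = F\sqrt{k}$ on the curve:

```python
import numpy as np

def hopf_F(k):
    """Smaller root of F^2 + (2k - sqrt(k)) F + k^2 = 0, the Hopf curve."""
    return 0.5 * (np.sqrt(k) - 2 * k - np.sqrt(k - 4 * k**1.5))

k = 0.05                      # must satisfy k <= 1/16 for a real root
F = hopf_F(k)
# On the Hopf curve, (F + k)^2 = F * sqrt(k) must hold, which is
# equivalent to the Jacobian trace k - v^2 vanishing at v = sqrt(k).
assert abs((F + k)**2 - F * np.sqrt(k)) < 1e-12
```

For $k = 0.05$ this gives $F \approx 0.025$, consistent with where oscillatory behavior appears in Pearson's phase diagram.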
Saddle-node bifurcation. The solid curve marks where non-trivial steady states exist. Recall the steady-state conditions $uv = F + k$ and $F(1-u) = (F+k)v$; eliminating $v$ gives $Fu(1-u) = (F+k)^2$. Viewing this as a quadratic in $u$,

$$Fu^2 - Fu + (F+k)^2 = 0,$$

solutions exist only when the discriminant is non-negative: $F^2 - 4F(F+k)^2 \geq 0$, i.e. $F \geq 4(F+k)^2$. The boundary where this discriminant equals zero gives the saddle-node curve $F = 4(F+k)^2$. Outside this curve, only the trivial state $(u, v) = (1, 0)$ exists. Inside, a second steady state with $v > 0$ appears.
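This curve can be checked the same way. Rearranging $F = 4(F+k)^2$ gives $4F^2 + (8k-1)F + 4k^2 = 0$, solved below (helper name mine):

```python
import numpy as np

def saddle_node_F(k):
    """Both roots of 4F^2 + (8k - 1)F + 4k^2 = 0, i.e. F = 4(F + k)^2."""
    disc = np.sqrt(1 - 16 * k)    # real only for k <= 1/16
    return (1 - 8 * k - disc) / 8, (1 - 8 * k + disc) / 8

k = 0.05
F_lo, F_hi = saddle_node_F(k)
# Each root should satisfy the saddle-node condition exactly.
for F in (F_lo, F_hi):
    assert abs(F - 4 * (F + k)**2) < 1e-12
```

The two roots merge at $k = 1/16$, the cusp where the non-trivial steady states disappear entirely.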
Interpreting the phase diagram. Outside the dashed Hopf curve, the system settles to a spatially uniform state (no patterns). Inside the dashed curve, the uniform state becomes unstable, and small perturbations grow into the spots and stripes we observe.
The activator-inhibitor model
As interesting as the Gray-Scott model is, we can isolate dynamics of striped and spotted patterns with a simpler activator-inhibitor model. The idea, proposed by Alan Turing in 1952, is that two interacting substances with different diffusion rates can spontaneously break spatial symmetry. The key ingredients are:
- A slowly-diffusing activator that promotes its own growth
- A faster-diffusing inhibitor that suppresses the activator
Consider a small bump in the concentration field. The activator, diffusing slowly, remains concentrated near the bump and reinforces local growth. The inhibitor diffuses faster, spreading outward before it can fully suppress the peak. But once the inhibitor reaches the surrounding region, it dominates there, preventing the bump from spreading. The result is a sharpening instability: peaks grow while their surroundings are suppressed, creating the characteristic spacing of Turing patterns.
The figure below illustrates this with Gaussian kernels of different widths standing in for the activator and inhibitor influence. The narrow activator kernel responds strongly to the peak, while the wider inhibitor kernel averages over a broader region. Their difference drives the update: positive at the center, negative at the flanks.
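That difference-of-Gaussians picture is easy to reproduce numerically. The sketch below (kernel widths are illustrative) smooths a bump with a narrow and a wide Gaussian and takes their difference:

```python
import numpy as np

x = np.linspace(-10, 10, 401)
bump = np.exp(-x**2)              # a small bump in the concentration field

def smooth(field, sigma):
    """Convolve with a normalized Gaussian kernel of width sigma."""
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(field, kernel, mode='same')

activator = smooth(bump, 1.0)     # narrow kernel: tracks the peak
inhibitor = smooth(bump, 3.0)     # wide kernel: averages a broad region
update = activator - inhibitor

center = len(x) // 2
assert update[center] > 0         # positive at the center...
assert update[center + 60] < 0    # ...negative at a flank 3 units out
```

The update is positive where the activator dominates (the peak) and negative where the inhibitor dominates (the flanks), which is exactly the sharpening instability described above.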
1D Turing pattern model
The simulation below implements the activator-inhibitor model in one dimension. We maintain a scalar field and use the Fast Fourier Transform (FFT) to repeatedly convolve it with two Gaussian kernels: a narrow activator and a wider inhibitor. At each step, we add the difference to the field, then apply a saturating function to keep values bounded.
The ratio of inhibitor to activator width determines the characteristic spacing of the pattern. A 2:1 ratio creates tight spacing; larger ratios spread the features further apart. Try adjusting the activator radius to see how it affects the emerging pattern. The colored bars below the simulation show the relative widths of the activator (blue) and inhibitor (red) kernels.
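A NumPy stand-in for one such step looks like the following (function name and parameters are mine, and a simple clip serves as the bounding function):

```python
import numpy as np

def turing_step_1d(field, act_sigma=2.0, ratio=2.0, amount=0.05):
    """One step: FFT-convolve with a narrow and a wide Gaussian,
    add their difference to the field, and bound the result."""
    n = len(field)
    freq = np.fft.fftfreq(n)            # frequencies in cycles per sample
    f_hat = np.fft.fft(field)

    def gauss_hat(sigma):
        # Analytical transform of a unit-mass Gaussian of width sigma.
        return np.exp(-2 * (np.pi * sigma * freq)**2)

    activator = np.fft.ifft(f_hat * gauss_hat(act_sigma)).real
    inhibitor = np.fft.ifft(f_hat * gauss_hat(act_sigma * ratio)).real
    return np.clip(field + amount * (activator - inhibitor), -1, 1)

rng = np.random.default_rng(0)
field = rng.uniform(-0.1, 0.1, 256)
for _ in range(50):
    field = turing_step_1d(field)
```

In frequency space each mode is multiplied by $1 + \text{amount}\cdot(\hat{G}_a - \hat{G}_i)$, which is largest for a mid-range band of wavelengths; that band is what grows into the pattern.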
2D Turing pattern model
The same dynamics work in two dimensions. Below is a WebGPU implementation using FFT-based convolution on a 256×256 grid.
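A minimal NumPy analogue of the 2D step (the notebook itself uses WebGPU shaders; names, grid size, and parameters here are illustrative):

```python
import numpy as np

def turing_step_2d(field, act_sigma=3.0, ratio=2.0, amount=0.05):
    """2D activator-inhibitor step via fft2 and analytical Gaussian spectra."""
    n = field.shape[0]
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f2 = fx**2 + fy**2                        # squared spatial frequency
    f_hat = np.fft.fft2(field)

    def gauss_hat(sigma):
        return np.exp(-2 * (np.pi * sigma)**2 * f2)

    act = np.fft.ifft2(f_hat * gauss_hat(act_sigma)).real
    inh = np.fft.ifft2(f_hat * gauss_hat(act_sigma * ratio)).real
    return np.clip(field + amount * (act - inh), -1, 1)

rng = np.random.default_rng(1)
field = rng.uniform(-0.1, 0.1, (64, 64))
for _ in range(30):
    field = turing_step_2d(field)
```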
Extension to Multiple Scales
The single-scale algorithm produces patterns with a characteristic wavelength determined by the kernel ratio. To create the rich, nested structures of true multiscale Turing patterns, we run multiple scales simultaneously and let them compete.
At each point in the field, we compute activator and inhibitor values for several scales. Each scale proposes an update direction based on the sign of its difference. The question is which scale should “win” at each location.
The answer comes from looking at the magnitude |activator − inhibitor| at each scale. Where this variation is small, the field is already near equilibrium for that scale—further updates would create excessive local contrast. Where variation is large, the scale has strong opinions about which direction to push.
The algorithm selects the scale with minimum variation at each point and steps in the direction that scale indicates. The effect is that coarse scales dominate in regions where fine scales would fight against each other, while fine scales fill in details where coarse scales have nothing to say. The result is a hierarchy of nested patterns spanning the full range of scales.
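The selection rule can be sketched for a 1D field (names, radii, and step size are illustrative; the notebook runs the equivalent on the GPU):

```python
import numpy as np

def multiscale_step(field, radii=(2.0, 8.0), ratio=2.0, amount=0.02):
    """Each scale proposes a direction; the scale with the smallest
    |activator - inhibitor| wins at each sample and steps the field."""
    n = len(field)
    freq = np.fft.fftfreq(n)
    f_hat = np.fft.fft(field)

    def convolve(sigma):
        return np.fft.ifft(f_hat * np.exp(-2 * (np.pi * sigma * freq)**2)).real

    # Activator-inhibitor difference at every scale, shape (scales, n).
    diffs = np.stack([convolve(r) - convolve(r * ratio) for r in radii])
    winner = np.argmin(np.abs(diffs), axis=0)      # winning scale per sample
    step = np.sign(np.take_along_axis(diffs, winner[None], axis=0)[0])
    return np.clip(field + amount * step, -1, 1)

rng = np.random.default_rng(2)
field = rng.uniform(-0.1, 0.1, 256)
for _ in range(100):
    field = multiscale_step(field)
```

The `argmin` over scales is the heart of the algorithm: each sample steps by a fixed amount in the direction proposed by whichever scale is closest to equilibrium there.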
Two-Scale Demo
The simulation below demonstrates this scale-selection logic with just two scales: a fine scale and a coarse scale. Each scale computes its own activator-inhibitor difference, and at each pixel the scale with minimum |activator − inhibitor| controls the update direction.
Check “Show scale selection” to visualize which scale wins at each location. Blue regions are controlled by the fine scale, orange by the coarse scale.
Multi-scale Turing pattern implementation notes
The visualization at the top of the page puts all the pieces together and performs a multi-scale simulation. Getting it to work efficiently is a moderate undertaking with a few tricks and caveats documented below. The rough algorithm, from Jonathan McCabe’s Cyclic Symmetric Multi-Scale Turing Patterns is:
- Convolve the field with Gaussian or circular kernels at different radii to compute activator and inhibitor values
- Find the scale with minimum variation (smallest |activator - inhibitor|)
- Step the field in the direction indicated by that scale (positive if activator > inhibitor)
- Mix the color toward the dominant scale’s color
This implementation uses several techniques to maximize throughput on the GPU:
Packed FFTs: We compute four real-valued FFTs in a single pass by packing data into complex pairs. The real solution field $s$ is stored as a vec4, treating it as two identical complex numbers:

$$\mathbf{z} = (s + 0i,\; s + 0i)$$

Similarly, four convolution kernels (two activator-inhibitor pairs, one per scale) are packed into two complex numbers:

$$K_1 = \hat{A}_1 + i\hat{H}_1, \qquad K_2 = \hat{A}_2 + i\hat{H}_2$$

where $\hat{A}_j$ and $\hat{H}_j$ are the frequency-domain activator and inhibitor kernels. After a single forward FFT, we multiply by the two sets of frequency-domain kernels and perform one inverse FFT to recover all four convolved outputs. It’s interesting to note that even though the real and imaginary components of the two kernels end up intermingled in frequency space, the final inverse FFT separates them nicely into the real-valued convolutions we seek.
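The separation falls out of linearity: multiplying the field's spectrum by $\hat{A} + i\hat{H}$ yields the activator convolution in the real part and the inhibitor convolution in the imaginary part, because both kernel spectra are real and even. A NumPy check for one activator-inhibitor pair (σ values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
field = rng.standard_normal(n)
freq = np.fft.fftfreq(n)

def gauss_hat(sigma):
    return np.exp(-2 * (np.pi * sigma * freq)**2)   # real, even spectrum

A_hat, H_hat = gauss_hat(2.0), gauss_hat(4.0)       # activator / inhibitor

# Packed: one complex multiply, one inverse FFT for both convolutions.
out = np.fft.ifft(np.fft.fft(field) * (A_hat + 1j * H_hat))
act_packed, inh_packed = out.real, out.imag

# Reference: two separate convolutions.
act_ref = np.fft.ifft(np.fft.fft(field) * A_hat).real
inh_ref = np.fft.ifft(np.fft.fft(field) * H_hat).real

assert np.allclose(act_packed, act_ref)
assert np.allclose(inh_packed, inh_ref)
```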
Analytical frequency-domain kernels: Rather than storing precomputed kernel textures, we evaluate kernels analytically in the shader. The Fourier transform of a Gaussian with standard deviation $\sigma$ is another Gaussian:

$$\hat{G}(\omega) = e^{-\sigma^2 |\omega|^2 / 2}$$

The Fourier transform of a circular disc of radius $R$ is given by a Bessel function of the first kind:

$$\hat{D}(\omega) = \frac{2\,J_1(R|\omega|)}{R|\omega|}$$
This eliminates kernel memory bandwidth entirely and allows arbitrary kernel parameters without regenerating textures.
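As a sanity check on the Gaussian case, the DFT of a sampled, normalized Gaussian kernel should match the analytical transform, written here as $e^{-2\pi^2\sigma^2 f^2}$ with $f$ in cycles per sample (the same formula as above under $\omega = 2\pi f$):

```python
import numpy as np

n = 256
x = np.arange(n)
x = np.minimum(x, n - x)              # periodic distance from the origin
sigma = 4.0

# Spatial-domain Gaussian kernel, normalized to unit mass.
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()

freq = np.fft.fftfreq(n)
analytic = np.exp(-2 * (np.pi * sigma * freq)**2)
numeric = np.fft.fft(kernel).real     # spectrum is real: kernel is even

assert np.allclose(numeric, analytic, atol=1e-5)
```

The agreement holds as long as the kernel's tails are negligible at both the grid boundary and the Nyquist frequency, which is the regime the simulation operates in.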
Float16 precision: All computation uses 16-bit floating point for storage buffers, halving memory bandwidth compared to float32. The reduced precision is sufficient for the visual nature of the simulation while enabling larger grid sizes and faster execution.