This notebook demonstrates the Jump Flooding Algorithm (JFA) of Rong and Tan applied to 3D rendered geometry. Two interlocking (2,3) torus knots are rendered with an orbiting camera, and object IDs are written to a texture. The JFA computes a distance field from object boundaries, enabling effects like outlines and hover detection.
The algorithm is useful because its cost depends on the number of pixels in the output image rather than on the amount or complexity of the geometry: it runs a fixed number of full-screen passes, roughly log2 of the maximum jump distance.
References:
- G. Rong, T.-S. Tan, Jump Flooding in GPU with Applications to Voronoi Diagram and Distance Transform
- Alan Wolfe, Fast Voronoi Diagrams and Distance Field Textures on the GPU With the Jump Flooding Algorithm
- Ryan Kaplan, Voronoi Diagrams on the GPU
- Claudio Esperança, Jump Flooding
Implementation Details
Data Flow and Render Passes
The rendering pipeline consists of four stages executed each frame.
- Geometry pass. The two torus knots are rendered into two render targets simultaneously using multiple render target (MRT) output (sketched below). The fragment shader writes shaded color to an RGBA8 texture and object ID data to an RGBA32UI texture. The object ID texture stores screen-space coordinates in the red and green channels (offset by one to distinguish them from empty pixels) and the object identifier in the blue channel.
- Texture copy. The object ID texture is copied to the first of two ping-pong textures (sketched below). This initializes the JFA with seed pixels wherever geometry was rendered, while background pixels remain zero.
- Jump flooding passes. A series of compute shader passes propagates seed information outward (sketched below). Each pass reads from one ping-pong texture and writes to the other. The first pass samples neighbors at a large step size (half the maximum outline width), and each subsequent pass halves the step size until it reaches one pixel. Each pixel examines itself and its eight neighbors at the current step distance, keeping whichever seed is nearest. After log2(N) passes, where N is the initial step size, every pixel holds the location of (approximately) its nearest seed and the identity of the object that seed belongs to.
- Composite pass. A full-screen shader combines the color texture, JFA result, and object ID texture into the final image (sketched below). For each pixel it computes the distance to the nearest seed and determines whether the pixel lies within the outline region. Background pixels inside the outline receive the outline color modulated by a distance-based sawtooth ramp. Object pixels near boundaries between different objects are darkened. Hover detection samples the JFA texture at the mouse position to identify which object the cursor is near, then applies a highlight tint to that object and its outline.