GPUs do many things well, but drawing transparent 3D objects is not one of them. Alpha blending doesn’t commute, so the order in which you draw surfaces makes a big difference.
The simplest way to draw transparent objects is the painter’s algorithm: sort the geometry and draw it from back to front. This requires sorting triangles, which is a pretty big pain to redo whenever the view angle changes, and it breaks down entirely when triangles overlap ambiguously or intersect.
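As a concrete reference, here is a minimal per-pixel sketch of the painter’s algorithm in Python, using straight-alpha “over” blending; the fragment representation is illustrative, not from the notebook:

```python
# Per-pixel painter's algorithm: sort fragments by depth, blend back to front.
# Fragments are (depth, rgb, alpha) tuples with straight (non-premultiplied) alpha.

def over(dst, src_rgb, src_alpha):
    # Standard "over" blend of a straight-alpha source onto the destination.
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst))

def painters(fragments, background):
    # Larger depth = farther from the camera; draw farthest first.
    color = background
    for depth, rgb, alpha in sorted(fragments, key=lambda f: -f[0]):
        color = over(color, rgb, alpha)
    return color
```

Note that the result depends on the sort: swapping the order of two overlapping fragments changes the output, which is exactly the non-commutativity problem the rest of this notebook works around.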
Order-independent transparency (OIT) covers a number of more sophisticated approaches that avoid the need to sort geometry. Some sort individual fragments (pixels) by depth so that the layering can be computed correctly; others only approximate the ordering.
This notebook explores a few approaches to transparent rendering, from a practical two-pass hack through more principled order-independent techniques. “Fake transparency” uses a solid surface pass followed by a wireframe-only pass without depth occlusion, giving the impression of transparency without actually sorting or blending layers. Weighted blended OIT approximates correct layering in a single pass using depth-based weights. Front-to-back depth peeling captures exact layers one at a time, and dual depth peeling extends this by capturing two layers per pass.
Fake Transparency
The constant-width gridlines are adapted from glsl-solid-wireframe. You can use the approach to draw any sort of gridlines on any sort of surface. Or to draw 2D contours in a fragment shader.
The cartoon edges use the same underlying trick as the gridlines. The quantity |N · V|, the absolute dot product of the surface normal with the view direction, goes to zero at silhouette edges where the surface turns away from the camera. Near such a zero-crossing any smooth scalar field is approximately linear, so dividing by its screen-space gradient magnitude (via fwidth) yields a first-order estimate of the distance to the zero-crossing measured in pixels. Applying a smoothstep threshold to this ratio produces a band of constant pixel width, regardless of depth, perspective, or how sharply the surface curves. The silhouette line stays the same number of pixels wide everywhere on the surface. One downside is that it requires a moderately high-resolution mesh, since the derivative is evaluated per-triangle and can’t smooth over abrupt normal changes.
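The distance-in-pixels estimate can be checked numerically. This sketch works on a 1D row of samples and approximates fwidth with a forward difference, the way the GPU differences neighboring pixels; the 1D setup and function names are illustrative assumptions:

```python
# First-order estimate of distance (in pixels) to a scalar field's
# zero-crossing: |v| / |dv/dx|, with fwidth approximated by a forward
# difference between adjacent samples.

def pixel_distance_to_zero(values, i):
    fwidth = abs(values[min(i + 1, len(values) - 1)] - values[i])
    return abs(values[i]) / max(fwidth, 1e-8)

def smoothstep(e0, e1, x):
    # GLSL-style smoothstep: clamp, then cubic Hermite interpolation.
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```

The key property: scaling the field by any constant scales both |v| and its gradient, so the ratio (and therefore the band width in pixels) is unchanged.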
At a strictly subjective level, I find this pretty tolerable for illustrating mathematical surfaces, where the goal is not so much physically accurate representation of objects but instead the communication of structure.
Weighted Blended Order-Independent Transparency
The fake transparency approach above is a practical hack, but it isn’t true transparency. For a more principled approach, we can use weighted blended order-independent transparency (WBOIT), as described by McGuire and Bavoil (2013).
The idea is to render all transparent geometry in a single pass to two render targets. The first target accumulates premultiplied color weighted by a depth-dependent function. The second target tracks the total transmittance (the product of all (1 − α) terms). A final compositing pass combines these two buffers to produce the blended result.
The weighting function is the key ingredient. Surfaces closer to the camera receive higher weight, so that nearer geometry contributes more to the final color, roughly approximating correct depth ordering without actually sorting anything. The weight is a decreasing function of z, the normalized device coordinate depth; McGuire and Bavoil propose several concrete forms with different falloff characteristics.
The blend modes for the two targets are additive accumulation for color/weight and multiplicative accumulation for revealage. Then the compositing pass reconstructs the final pixel color as

C_final = (C_accum / max(A_accum, ε)) · (1 − R) + C_bg · R,

where R is the revealage value, C_accum and A_accum are the accumulated weighted premultiplied color and weighted alpha, and C_bg is the background color.
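Putting the accumulation and compositing together, here is a per-pixel CPU reference in Python. The weight function shown is an illustrative depth-decreasing choice, not necessarily the one the notebook uses:

```python
# WBOIT for a single pixel: accumulate weighted premultiplied color and
# weighted alpha additively, transmittance multiplicatively, then composite.

def wboit(fragments, background):
    # fragments: iterable of (depth, rgb, alpha) in ANY order.
    accum_rgb = [0.0, 0.0, 0.0]
    accum_a = 0.0
    revealage = 1.0
    for depth, rgb, alpha in fragments:
        w = alpha * (1.0 - depth) ** 3 + 1e-2   # illustrative weight only
        for c in range(3):
            accum_rgb[c] += w * alpha * rgb[c]  # additive target 1
        accum_a += w * alpha
        revealage *= (1.0 - alpha)              # multiplicative target 2
    avg = [c / max(accum_a, 1e-8) for c in accum_rgb]
    return tuple(a * (1.0 - revealage) + b * revealage
                 for a, b in zip(avg, background))
```

Two properties are worth noticing: for a single fragment the composite reduces exactly to standard over blending (the weight cancels in the ratio), and the result is the same no matter what order the fragments arrive in, which is the whole point.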
Front-to-Back Depth Peeling
Depth peeling is an exact order-independent transparency technique. It renders the scene multiple times, each time “peeling” away the nearest unpainted layer from front to back. On the first pass, standard depth testing captures the nearest surface. On each subsequent pass, a fragment is discarded if its depth is less than or equal to the previous layer’s depth, revealing the next-nearest surface. After all layers have been captured, they are composited back to front with standard alpha blending.
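A per-pixel simulation makes the peel-and-composite loop concrete. This Python sketch stands in for the GPU passes; the fragment-list representation is an assumption for illustration:

```python
# Front-to-back depth peeling for one pixel: each "pass" finds the nearest
# fragment strictly behind the previously peeled depth, then all captured
# layers are composited back to front.

def peel_layers(fragments, max_passes):
    # fragments: list of (depth, rgb, alpha); returns layers front-to-back.
    layers, last_depth = [], -float("inf")
    for _ in range(max_passes):
        behind = [f for f in fragments if f[0] > last_depth]
        if not behind:
            break  # all layers captured
        nearest = min(behind, key=lambda f: f[0])
        layers.append(nearest)
        last_depth = nearest[0]
    return layers

def composite_back_to_front(layers, background):
    # Standard straight-alpha "over" blending, farthest layer first.
    color = background
    for depth, rgb, alpha in reversed(layers):
        color = tuple(alpha * s + (1.0 - alpha) * d
                      for s, d in zip(rgb, color))
    return color
```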
The number of peel passes controls the maximum number of transparent layers that can be resolved. For most scenes, four to six passes are sufficient to capture the visible layering. Each additional pass requires re-rendering all geometry, so there is a direct tradeoff between quality and performance. This cost motivates dual depth peeling, which captures two layers per pass instead of one.
Dual Depth Peeling
Standard front-to-back depth peeling captures one layer per pass, so rendering N layers requires N full geometry passes. Dual depth peeling, introduced by Bavoil and Myers (2008), halves the number of geometry passes by capturing both the nearest and farthest surviving fragments in each pass.
The technique works by maintaining a “dual depth” buffer that stores two depth values per pixel. Through MAX blending of the pair (−z, z), the buffer simultaneously captures the minimum depth (nearest fragment) in one channel and the maximum depth (farthest fragment) in the other. Each pass renders to three targets: the dual depth buffer, a front color buffer that accumulates the nearest fragment’s contribution via under (front-to-back) blending, and a back color buffer that accumulates the farthest fragment’s contribution via over (back-to-front) blending.
The first pass operates over all fragments. Each subsequent pass reads the previous dual depth buffer and discards any fragment whose depth falls at or outside the previous near/far boundaries, peeling inward from both sides simultaneously. After N passes, up to 2N layers have been captured. Compositing proceeds from back to front: back layers from the outermost pass through the innermost, then front layers from the innermost pass through the outermost.
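The capture and compositing order can be checked with a per-pixel simulation. This sketch assumes no depth ties and exact capture of the nearest and farthest surviving fragments in each pass; the fragment-list setup is illustrative:

```python
# Dual depth peeling for one pixel: each pass captures the nearest and
# farthest fragments strictly inside the previous near/far interval.

def dual_peel(fragments, passes):
    # fragments: list of (depth, rgb, alpha); returns (fronts, backs),
    # each in pass order (fronts[0] is the nearest layer overall).
    fronts, backs = [], []
    near, far = -float("inf"), float("inf")
    for _ in range(passes):
        inside = [f for f in fragments if near < f[0] < far]
        if not inside:
            break
        front = min(inside, key=lambda f: f[0])
        back = max(inside, key=lambda f: f[0])
        fronts.append(front)
        if back is not front:  # a lone middle fragment is only a front layer
            backs.append(back)
        near, far = front[0], back[0]
    return fronts, backs

def composite(fronts, backs, background):
    # Back to front: back layers outermost pass first, then front layers
    # innermost pass first, using straight-alpha "over" blending.
    color = background
    for depth, rgb, alpha in backs + list(reversed(fronts)):
        color = tuple(alpha * s + (1.0 - alpha) * d
                      for s, d in zip(rgb, color))
    return color
```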
Within each pass, the front and back colors are approximations since hardware blending processes fragments in arbitrary order rather than strict depth order. The approximation converges to exact results as the number of passes increases and fewer fragments compete within each peeling interval.