At Standard Cyborg, we use 3D scans from a number of different sources to design sockets for prosthetic devices (btw we’re always hiring if this stuff interests you). Even in the best of circumstances in which we’re able to keep track of the physical “up” direction from acquisition through to the design phase, “physical up” isn’t really a useful direction since the limb could have been in any orientation during scanning (or any orientation relative to the scanner). We’d like to automatically orient incoming scans vertically so that they’re easy to work with.
Working with 3D models isn’t exactly new territory. Before throwing math at this, we should stop to consider whether a nice arcball camera (or—shudder—x/y/z rotation handles!) would allow users to orient scans as they see fit, removing orientation as any concern of ours. We know so much about this problem though! In a broad sense, we know what the scans look like and how users will be modifying them, and we know that even the most grizzled power users find extra degrees of rotational freedom cumbersome and frustrating when they’re not required for the task at hand. At the very worst, we find automatic alignment a great preprocessing step that helps users, doesn’t hurt what’s already arbitrary, and in many cases nails it right away.
The question remains then what alignment could possibly mean. There are an infinite number of valid meanings and corresponding solutions. The solution I describe here only addresses a particular meaning that happens to solve our little micro-problem quite well. It’s not new or novel—an alternate title for this article was “In which I discover the ellipsoid!”—and I’m only taking the trouble to describe it because I was so delighted to pick a heuristic out of the sky, find cause to break out some math, and actually end up with a function which runs robustly in a couple milliseconds.
Most of the scans for which people use our software (and limbs in general, really) are basically cylindrical tubes, so the longest axis makes a decent first cut for orienting the scan.
This orientation seems friendlier than what we started with, but it doesn’t take long to spot some problems. For one, we haven’t said anything about how to actually compute the longest axis (Principal Component Analysis (PCA) feels relevant?). More importantly though, if the scan had roughly equal proportions (a short, stubby cylinder, say), the longest axis would be entirely arbitrary even though the cylinder’s orientation is plainly obvious.
Failure of this basic sanity check suggests the overall orientation of these scans isn’t so much defined by the positions of the surface points as by the orientation of the surface itself, that is, by its normals. Without agonizing over why, I decided a better option would be to select an alignment axis as perpendicular as possible to the surface normals. Hazarding a guess at stating that mathematically, I’d call it the axis which minimizes the sum of the squares of its dot products with the surface normal vectors.
The statement above is a mouthful which requires a bit of unpacking. If we’re going to tackle this as a minimization problem, we at least know we’ll need to roll up the ideas above into an objective function.
Let’s start with the dot products. Recall the dot product between vectors $\vec{a}$ and $\vec{b}$ is equal to $\vec{a} \cdot \vec{b} = \|\vec{a}\| \, \|\vec{b}\| \cos\theta$, where $\|\vec{a}\|$ and $\|\vec{b}\|$ are the magnitudes of the two vectors, respectively, and $\theta$ is the angle between them. All we really need to know here is that if two vectors are perpendicular, their dot product is zero.
We can talk about a single surface normal vector, but somehow we need to aggregate information across all faces. Let’s call $\hat{n}_i$ a surface normal vector of the $i^{\text{th}}$ mesh face and $\vec{\xi}$ a candidate axis of alignment (the Greek letter “xi”, pronounced “ksee”, which I’m selecting because it’s fun to write, isn’t likely to get confused with anything, and is fun to call “tornado” instead). My supposition is that if we dot the two, square the result, sum over the faces and call it $f$, i.e.

$$f(\vec{\xi}) = \sum_i \left( \vec{\xi} \cdot \hat{n}_i \right)^2,$$

then the best alignment is the one which minimizes $f$.
We might have a reasonable objective function here, but to see why it feels like it should work, consider a cylinder. The axis of the cylinder is always perpendicular to the surface normal vectors. Assuming for simplicity that the vectors are all normalized, the magnitudes drop out and $\vec{\xi} \cdot \hat{n}_i = \cos 90^\circ = 0$ for every face, so that $f(\vec{\xi}_{\text{axis}}) = 0$. The axis of a cylinder minimizes $f$ even when it’s not the longest axis, thus fixing the failed sanity check above. (If you want to be fancy, I think you could say we’re solving the same principal axis problem but in the tangent space instead, though I don’t think that interpretation is likely to help most people.)
(Why the square? On a strictly mathematical basis, the dot product may be either positive or negative, which would cause the minimization to diverge to $-\infty$. The square keeps $f$ non-negative so that we can meaningfully minimize it.)
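Before refining things further, here’s a minimal JavaScript sketch of my own (the names are mine, not those of the linked implementation) that evaluates $f$ for a list of unit normals and runs the cylinder sanity check:

```js
// Evaluate f(ξ) = Σᵢ (ξ · n̂ᵢ)² for an array of unit normals.
// `normals` is an array of [x, y, z] unit vectors; `xi` is the candidate axis.
function objective(normals, xi) {
  let sum = 0;
  for (const n of normals) {
    const dot = xi[0] * n[0] + xi[1] * n[1] + xi[2] * n[2];
    sum += dot * dot;
  }
  return sum;
}

// Sanity check with a cylinder whose axis is z: the normals lie in the x-y
// plane, so every dot product with [0, 0, 1] vanishes and f is exactly zero.
const cylinderNormals = [];
for (let i = 0; i < 100; i++) {
  const theta = (2 * Math.PI * i) / 100;
  cylinderNormals.push([Math.cos(theta), Math.sin(theta), 0]);
}
console.log(objective(cylinderNormals, [0, 0, 1])); // → 0
console.log(objective(cylinderNormals, [1, 0, 0])); // → 50
```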
A bit more precisely, if the faces comprising the mesh aren’t uniformly distributed, the sum will be biased toward clusters of vertices and their associated normals. Instead of a sum over normal vectors $\hat{n}_i$, what we really want is an area-weighted sum. In fact what we really want is just an integral over the surface (call it $S$) with respect to the differential area vector (call it $d\vec{A}$). We define $d\vec{A}$ as parallel to the surface normal but with magnitude equal to the area of a differential surface element. The continuous limit of $f$ is then

$$f(\vec{\xi}) = \oint_S \left( \vec{\xi} \cdot d\vec{A} \right)^2.$$
While we’re being precise, we assumed implicitly that the axis of alignment was a nonzero vector, but let’s now make that explicit in order to avoid the trivial solution $\vec{\xi} = \vec{0}$, which always minimizes $f$. Constraining $\vec{\xi}$ to be a unit vector will do just fine.
Fully stating our problem, we want to find the argument $\vec{\xi}$ which minimizes $f$ subject to the constraint that $\vec{\xi}$ is a unit vector:

$$\operatorname*{argmin}_{\vec{\xi}} \oint_S \left( \vec{\xi} \cdot d\vec{A} \right)^2 \quad \text{subject to} \quad \|\vec{\xi}\| = 1.$$
For piecewise constant faces with surface normal $\vec{n}_i$ (magnitude equal to the face’s area, recall), we can recast this as a discrete summation and arrive at our final problem statement,

$$\operatorname*{argmin}_{\vec{\xi}} \sum_i \left( \vec{\xi} \cdot \vec{n}_i \right)^2 \quad \text{subject to} \quad \|\vec{\xi}\| = 1.$$
As for the areas, Eric Arnebäck has a nice article about Computing the Area of a Convex Polygon. It covers triangles. And for you geometry sorcerers and sorceresses, the answer is yes. We’re fitting an ellipsoid now. The rest of the article is me realizing I’m looking for an ellipsoid.
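For triangle meshes, the area-weighted normal $\vec{n}_i$ comes almost for free: half the cross product of two edge vectors points along the face normal with magnitude equal to the triangle’s area. A small sketch of my own, assuming vertices are plain `[x, y, z]` arrays:

```js
// Area-weighted normal of a triangle with vertices a, b, c:
// 0.5 * (b - a) × (c - a) points along the face normal and has magnitude
// equal to the triangle's area.
function areaWeightedNormal(a, b, c) {
  const ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2];
  const vx = c[0] - a[0], vy = c[1] - a[1], vz = c[2] - a[2];
  return [
    0.5 * (uy * vz - uz * vy),
    0.5 * (uz * vx - ux * vz),
    0.5 * (ux * vy - uy * vx)
  ];
}
```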
The problem above is a constrained optimization problem. Those can be a bit challenging to solve since you often only want to explore the solution space in directions which keep the constraints satisfied. It took me a while to recall, but if I learned one thing about constrained optimization in engineering (sadly I didn’t learn much more), I learned that the method of Lagrange multipliers exists to transform constrained optimization problems into unconstrained problems. The method works like this. Instead of solving the problem

$$\text{minimize } f(\vec{\xi}) \quad \text{subject to} \quad g(\vec{\xi}) = 0,$$

we solve the problem

$$\text{minimize } \mathcal{L}(\vec{\xi}, \lambda) = f(\vec{\xi}) - \lambda \, g(\vec{\xi}),$$

where $\lambda$ is an auxiliary parameter (the “Lagrange multiplier”) that drives the objective function toward satisfying the constraint. With just a bit of handwaving, we can demonstrate that setting the partial derivative of $\mathcal{L}$ with respect to $\lambda$ equal to zero yields

$$\frac{\partial \mathcal{L}}{\partial \lambda} = -g(\vec{\xi}) = 0,$$

which confirms the constraint is satisfied, and with the final leap of faith,

$$\frac{\partial \mathcal{L}}{\partial \vec{\xi}} = \nabla f - \lambda \nabla g = 0,$$

equality to zero taken since $\vec{\xi}$ is a stationary point. This step then enforces the original objective function, though I haven’t adequately justified it here. Wikipedia actually has a pretty good explanation which I’d be foolish to try to outdo.
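As a quick toy illustration of my own (not part of the derivation that follows): to minimize $f(x, y) = x^2 + y^2$ subject to $g(x, y) = x + y - 1 = 0$, we form

$$\mathcal{L}(x, y, \lambda) = x^2 + y^2 - \lambda \left( x + y - 1 \right).$$

Setting the partial derivatives to zero gives $2x = \lambda$, $2y = \lambda$, and $x + y = 1$, so $x = y = \tfrac{1}{2}$ with $\lambda = 1$: the point on the line closest to the origin, exactly as geometry demands.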
It only takes the tiniest modification to state our problem in the canonical form of a Lagrange-multiplier-ready problem,

$$\text{minimize } f(\vec{\xi}) = \sum_i \left( \vec{\xi} \cdot \vec{n}_i \right)^2 \quad \text{subject to} \quad g(\vec{\xi}) = \vec{\xi} \cdot \vec{\xi} - 1 = 0.$$

Applying the method, we arrive at the unconstrained problem

$$\text{minimize } \mathcal{L}(\vec{\xi}, \lambda) = \sum_i \left( \vec{\xi} \cdot \vec{n}_i \right)^2 - \lambda \left( \vec{\xi} \cdot \vec{\xi} - 1 \right).$$
Taking the partial derivatives with respect to $\xi_x$, $\xi_y$, and $\xi_z$ as well as $\lambda$ and equating to zero isn’t particularly tedious. The result is a system of four simultaneous equations,

$$\begin{aligned}
\sum_i \left( n_{i,x} n_{i,x} \, \xi_x + n_{i,x} n_{i,y} \, \xi_y + n_{i,x} n_{i,z} \, \xi_z \right) &= \lambda \, \xi_x \\
\sum_i \left( n_{i,y} n_{i,x} \, \xi_x + n_{i,y} n_{i,y} \, \xi_y + n_{i,y} n_{i,z} \, \xi_z \right) &= \lambda \, \xi_y \\
\sum_i \left( n_{i,z} n_{i,x} \, \xi_x + n_{i,z} n_{i,y} \, \xi_y + n_{i,z} n_{i,z} \, \xi_z \right) &= \lambda \, \xi_z \\
\xi_x^2 + \xi_y^2 + \xi_z^2 &= 1
\end{aligned}$$
It suddenly feels hopeless, especially since the fourth equation is a bit nonlinear in $\vec{\xi}$. Let’s cut down on the visual noise by defining

$$S_{xy} = \sum_i n_{i,x} \, n_{i,y}$$

as well as the analogous definitions for all pairwise combinations of axes. With these definitions, the above equations look a bit more manageable, yielding

$$\begin{aligned}
S_{xx} \, \xi_x + S_{xy} \, \xi_y + S_{xz} \, \xi_z &= \lambda \, \xi_x \\
S_{xy} \, \xi_x + S_{yy} \, \xi_y + S_{yz} \, \xi_z &= \lambda \, \xi_y \\
S_{xz} \, \xi_x + S_{yz} \, \xi_y + S_{zz} \, \xi_z &= \lambda \, \xi_z \\
\xi_x^2 + \xi_y^2 + \xi_z^2 &= 1
\end{aligned}$$
Neglecting the last equation for a moment, we can state the first three as a matrix multiplication,

$$\begin{bmatrix} S_{xx} & S_{xy} & S_{xz} \\ S_{xy} & S_{yy} & S_{yz} \\ S_{xz} & S_{yz} & S_{zz} \end{bmatrix} \begin{bmatrix} \xi_x \\ \xi_y \\ \xi_z \end{bmatrix} = \lambda \begin{bmatrix} \xi_x \\ \xi_y \\ \xi_z \end{bmatrix}.$$
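In code, this matrix is nothing more than a running sum of products of components of the area-weighted normals. A sketch under my own naming, reusing `areaWeightedNormal` from above:

```js
// Accumulate the symmetric 3x3 matrix with entries S[j][k] = Σᵢ n_{i,j} n_{i,k}
// from an array of area-weighted face normals.
function accumulateMatrix(normals) {
  const S = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  for (const n of normals) {
    for (let j = 0; j < 3; j++) {
      for (let k = 0; k < 3; k++) {
        S[j][k] += n[j] * n[k];
      }
    }
  }
  return S;
}
```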
The equation above is just the standard form of an eigenvalue problem, and what’s more, its eigenvectors are normalized by convention, which implicitly satisfies the constraint $\|\vec{\xi}\| = 1$. Eigenvalues are simple and easy to compute, even in JavaScript. We’ve solved it! Upon solving, we get three eigenvalues and corresponding unit eigenvectors: the eigenvectors are the model’s axes, and each eigenvalue measures how much surface area faces along its eigenvector, so the eigenvector with the smallest eigenvalue is the alignment axis we’re after.
As a final bonus, recall—or discover today!—that the eigenvalues of a real symmetric matrix are real and its eigenvectors are orthogonal, i.e. mutually perpendicular. And there are three of them. So we don’t just get unit vectors out of this, we get a three dimensional rotation matrix which can be applied directly to the model.
Wonderfully! Robustly! Efficiently! The only nontrivial numerical part is the eigenvalue computation, but it’s only a small 3x3 matrix you can farm out to any old numerical library.
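If you’d rather not pull in a library for a single symmetric 3x3 matrix, a classical Jacobi iteration is enough. The sketch below is a compact reference implementation of my own (not the article’s code): it repeatedly applies a rotation that zeroes the largest off-diagonal entry until the matrix is numerically diagonal, accumulating those rotations into the eigenvectors.

```js
// Jacobi eigenvalue iteration for a symmetric 3x3 matrix S.
// Returns { values: [λ0, λ1, λ2], vectors: [v0, v1, v2] } with unit eigenvectors.
function eigSymmetric3(S) {
  const A = S.map(row => row.slice());
  // V accumulates the rotations; its columns converge to the eigenvectors.
  const V = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
  for (let iteration = 0; iteration < 50; iteration++) {
    // Pick the largest off-diagonal entry (p, q).
    let p = 0, q = 1;
    if (Math.abs(A[0][2]) > Math.abs(A[p][q])) { p = 0; q = 2; }
    if (Math.abs(A[1][2]) > Math.abs(A[p][q])) { p = 1; q = 2; }
    if (Math.abs(A[p][q]) < 1e-12) break;
    // Rotation angle that zeroes A[p][q].
    const theta = 0.5 * Math.atan2(2 * A[p][q], A[p][p] - A[q][q]);
    const c = Math.cos(theta), s = Math.sin(theta);
    // A ← Gᵀ A G, where G is the Givens rotation in the (p, q) plane.
    for (let k = 0; k < 3; k++) {
      const akp = A[k][p], akq = A[k][q];
      A[k][p] = c * akp + s * akq;
      A[k][q] = c * akq - s * akp;
    }
    for (let k = 0; k < 3; k++) {
      const apk = A[p][k], aqk = A[q][k];
      A[p][k] = c * apk + s * aqk;
      A[q][k] = c * aqk - s * apk;
    }
    // V ← V G, so the columns of V track the accumulated rotation.
    for (let k = 0; k < 3; k++) {
      const vkp = V[k][p], vkq = V[k][q];
      V[k][p] = c * vkp + s * vkq;
      V[k][q] = c * vkq - s * vkp;
    }
  }
  return {
    values: [A[0][0], A[1][1], A[2][2]],
    vectors: [
      [V[0][0], V[1][0], V[2][0]],
      [V[0][1], V[1][1], V[2][1]],
      [V[0][2], V[1][2], V[2][2]]
    ]
  };
}
```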
The main caveat is that the eigenvectors are only unique up to a sign, so we need to check for reflections and apply some slightly ad-hoc heuristics to disambiguate the signs. In particular, I’m just using the total summed area vector to see if we can put the open end in a consistent direction. There’s room for improvement.
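To give a flavor of what that disambiguation might look like, here’s a sketch of one plausible heuristic under my own naming (reusing `accumulateMatrix` and `eigSymmetric3` from above; not necessarily the exact rule in the linked implementation):

```js
// Build a rotation whose rows are the eigenvectors, with the alignment axis
// (smallest eigenvalue) mapped to the third coordinate axis.
function alignmentRotation(areaWeightedNormals) {
  const { values, vectors } = eigSymmetric3(accumulateMatrix(areaWeightedNormals));

  // Sort eigenpairs by ascending eigenvalue; the smallest eigenvalue belongs to
  // the axis most nearly perpendicular to all of the face normals.
  const order = [0, 1, 2].sort((a, b) => values[a] - values[b]);
  let [up, side1, side2] = order.map(i => vectors[i]);

  // Total summed area vector. For a watertight mesh this is ~0, but a scan's
  // open end leaves a net direction we can use to orient the axis consistently.
  const total = [0, 0, 0];
  for (const n of areaWeightedNormals) {
    total[0] += n[0]; total[1] += n[1]; total[2] += n[2];
  }
  if (up[0] * total[0] + up[1] * total[1] + up[2] * total[2] < 0) {
    up = up.map(x => -x);
  }

  // If the three axes form a reflection (negative determinant), flip one of the
  // remaining axes so the result is a proper rotation.
  const det =
    side1[0] * (side2[1] * up[2] - side2[2] * up[1]) -
    side1[1] * (side2[0] * up[2] - side2[2] * up[0]) +
    side1[2] * (side2[0] * up[1] - side2[1] * up[0]);
  if (det < 0) side2 = side2.map(x => -x);

  // Rows of the rotation: applying it to the model maps `up` onto the z axis
  // (reorder the rows if you prefer a different "up").
  return [side1, side2, up];
}
```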
You can see the final result below. Note that the two remaining axes also align the knee!
Update: Eric Arnebäck asked about noise. I’ve added a noise slider below and have removed a square root in the scaling so that the magnitudes are a bit more separated. The noise is not IID noise so take it with a grain of salt, but it hopefully gives some indication of the approach’s ability to reject noise.
At the end of the day, I rather suspect I’ve rederived a pretty standard technique for talking about the shape of a surface. I hope you’ll forgive me if my satisfaction isn’t diminished though since opportunities to legitimately break out Lagrange multipliers are so rare! And as part of my day job no less.
There’s room for improvement in the final disambiguation of signs, but frankly once we’ve solved the main problem of figuring out a rough alignment, the subsequent algorithms have a significantly easier time making sense of the scan.
This post uses idyll and regl. They’re great projects! You should check them out! You can find the article source here and an implementation of the algorithm here.
Questions? Comments? Corrections? Drop me a line @rickyreusser!