For our assignment, we will assume that we have a triangle mesh representing a
complete scan of the surface $Y$ of some rigid
object and a new partial scan of
that surface $X$.
These meshes will not have the same number of vertices or even the same
topology. We will first explore different ways to measure how well aligned
two surfaces are and then how to optimize the rigid alignment of the partial
surface $X$ to the complete surface $Y$.
Hausdorff distance
We would like to compute a single scalar number that measures how poorly two
surfaces are matched. In other words, we would like to measure the distance
between two surfaces. Let's start by reviewing more familiar distances:
Point-to-point distance
The usual Euclidean distance between two points $\mathbf{x}$ and $\mathbf{y}$ is the $L^{2}$
norm of their difference:

$$
d(\mathbf{x},\mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|.
$$
When we consider the distance between a point $\mathbf{x}$ and some larger object $Y$ (a line,
a circle, a surface), the natural extension is to take the distance to the
closest point $\mathbf{y}$ on $Y$:

$$
d(\mathbf{x},Y) = \inf_{\mathbf{y} \in Y} d(\mathbf{x},\mathbf{y}).
$$
Written in this way, the infimum considers all possible points $\mathbf{y}$ and keeps
the minimum distance. We may equivalently write this distance instead as simply
the point-to-point distance between $\mathbf{x}$ and the closest-point projection
$P_Y(\mathbf{x})$:

$$
d(\mathbf{x},Y) = d(\mathbf{x},P_Y(\mathbf{x})) = \|\mathbf{x} - P_Y(\mathbf{x})\|.
$$
We might be tempted to define the distance from surface $X$ to $Y$ as the
infimum of point-to-projection distances over all points $\mathbf{x}$ on $X$:

$$
D_{\inf}(X,Y) = \inf_{\mathbf{x} \in X} \|\mathbf{x} - P_Y(\mathbf{x})\|,
$$
but this will not be useful for registering two surfaces: it will measure zero
if even just a single point $\mathbf{x}$ of $X$ happens to lie on $Y$. Imagine the noses of
two faces touching at their tips.
Instead, we should take the supremum of point-to-projection distances over
all points $\mathbf{x}$ on $X$:

$$
D_{\overrightarrow{H}}(X,Y) = \sup_{\mathbf{x} \in X} \|\mathbf{x} - P_Y(\mathbf{x})\|.
$$
This surface-to-surface distance measure is called the directed Hausdorff
distance. We may interpret this as taking the worst of the best: we
let each point $\mathbf{x}$ on $X$ declare its shortest distance to $Y$, and then we keep
the longest of those.
It is easy to verify that $D_{\overrightarrow{H}}$ will only equal zero if all
points on $X$ also lie exactly on $Y$.
The converse is not true: if $D_{\overrightarrow{H}}(X,Y)=0$ there may still be
points on $Y$ that do not lie on $X$. In other words, in general the directed
Hausdorff distance from surface $X$ to surface $Y$ will not equal the directed
Hausdorff distance from surface $Y$ to surface $X$:

$$
D_{\overrightarrow{H}}(X,Y) \ne D_{\overrightarrow{H}}(Y,X).
$$
Directed Hausdorff distance between triangle meshes
We can approximate a lower bound on the Hausdorff distance between two meshes
by densely sampling surfaces $X$ and $Y$. We will discuss sampling methods
later. For now, consider that we have chosen a set $\mathbf{P}_X$ of $k$ points on $X$
(each point might lie at a vertex, along an edge, or inside a triangle). The
directed Hausdorff distance from $X$ to another triangle mesh $Y$ must be
at least as large as the directed Hausdorff distance from this point
cloud $\mathbf{P}_X$ to $Y$:

$$
D_{\overrightarrow{H}}(X,Y) \ge D_{\overrightarrow{H}}(\mathbf{P}_X,Y) = \max_{i=1,\dots,k} \|\mathbf{p}_i - P_Y(\mathbf{p}_i)\|,
$$
where we should be careful to account for the fact that the projection $P_Y(\mathbf{p}_i)$ of the
point $\mathbf{p}_i$ onto the triangle mesh $Y$ might lie at a vertex, along an edge, or
inside a triangle.
As our sampling $\mathbf{P}_X$ becomes denser and denser on $X$ this lower bound will
approach the true directed Hausdorff distance. Unfortunately, an efficient
upper bound is significantly more difficult to design.
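A minimal sketch of this lower bound in C++ with Eigen, assuming the
interfaces of the src/ files you will implement later on this page
(random_points_on_mesh and point_mesh_distance; the exact signatures here are
illustrative):

```cpp
#include <Eigen/Core>

// Hypothetical signatures matching the src/ files described at the end of
// this page; the real interfaces are up to your implementation.
void random_points_on_mesh(int n, const Eigen::MatrixXd &V,
                           const Eigen::MatrixXi &F, Eigen::MatrixXd &X);
void point_mesh_distance(const Eigen::MatrixXd &X, const Eigen::MatrixXd &VY,
                         const Eigen::MatrixXi &FY, Eigen::VectorXd &D,
                         Eigen::MatrixXd &P, Eigen::MatrixXd &N);

// Lower bound on the directed Hausdorff distance from (VX,FX) to (VY,FY):
// sample k points on X, project each onto Y, keep the largest distance.
double hausdorff_lower_bound(
    const Eigen::MatrixXd &VX, const Eigen::MatrixXi &FX,
    const Eigen::MatrixXd &VY, const Eigen::MatrixXi &FY, int k)
{
  Eigen::MatrixXd X, P, N;
  Eigen::VectorXd D;
  random_points_on_mesh(k, VX, FX, X);
  point_mesh_distance(X, VY, FY, D, P, N);
  return D.maxCoeff(); // max over per-sample closest-point distances
}
```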
Hausdorff distance for alignment optimization
Even if it were cheap to compute, the Hausdorff distance is difficult to
optimize when aligning two surfaces. If we treat the Hausdorff distance
between surfaces $X$ and $Y$ as an energy to be minimized, then the only change to
the surfaces that will decrease the energy is moving the (in general)
isolated point on $X$ and isolated point on $Y$ generating the maximum-minimum
distance. In effect, the rest of the surface does not even matter or affect the
Hausdorff distance. This, or any type of $L^\infty$ norm, will be much more
difficult to optimize.
Hausdorff distance can serve as a validation measure, while we turn to $L^{2}$
norms for optimization.
Integrated closest-point distance
We would like a distance measure between two surfaces that — like Hausdorff
distance — does not require a shared parameterization. Unlike Hausdorff
distance, we would like this distance to diffuse the measurement over the
entire surfaces rather than generate it from the sole worst offender. We can
accomplish this by replacing the supremum in the Hausdorff distance
($L^{\infty}$) with an integral of squared distances ($L^2$). Let us first define a directed
closest-point distance from a surface $X$ to another surface $Y$, as the
integral of the squared distance from every point $\mathbf{x}$ on $X$ to its
closest-point projection $P_Y(\mathbf{x})$ on the surface $Y$:

$$
D_{\overrightarrow{C}}(X,Y) = \sqrt{\ \int\limits_{\mathbf{x}\in X} \|\mathbf{x} - P_Y(\mathbf{x})\|^2 \;dA\ }.
$$
This distance will only be zero if all points on $X$ also lie on $Y$, but when
it is non-zero it is summing/averaging/diffusing the distance measures of all
of the points.
This distance is suitable to define a matching energy, but is not necessarily
well-suited for optimization: the function inside the square is non-linear. Let's
dig into it a bit. We'll define a directed matching energy $E_{\overrightarrow{C}}(Z,Y)$ from $Z$ to $Y$ to be the squared directed
closest-point distance from $Z$ to $Y$:

$$
E_{\overrightarrow{C}}(Z,Y) = \int\limits_{\mathbf{z}\in Z} \|\mathbf{z} - P_Y(\mathbf{z})\|^2 \;dA = \int\limits_{\mathbf{z}\in Z} \|\mathbf{f}_Y(\mathbf{z})\|^2 \;dA,
$$
where we introduce the proximity function $\mathbf{f}_Y:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$ defined simply as the
vector from a point $\mathbf{z}$ to its closest-point projection onto $Y$:

$$
\mathbf{f}_Y(\mathbf{z}) = \mathbf{z} - P_Y(\mathbf{z}).
$$
Suppose $Y$ were not a surface, but just a single point $Y = \{\mathbf{y}\}$. In this
case, $\mathbf{f}(\mathbf{z}) = \mathbf{z} - \mathbf{y}$ is clearly linear in $\mathbf{z}$.

Similarly, suppose $Y$ were an infinite plane $Y = \{\mathbf{y} \mid (\mathbf{y}-\mathbf{p})\cdot \mathbf{n} = 0\}$ defined by some point $\mathbf{p}$ on the plane and the plane's unit normal vector
$\mathbf{n}$. Then $\mathbf{f}(\mathbf{z}) = ((\mathbf{z}-\mathbf{p})\cdot \mathbf{n})\mathbf{n}$ is also linear in $\mathbf{z}$.
But in general, if $Y$ is an interesting surface, then $\mathbf{f}(\mathbf{z})$ will be non-linear; it
might not even be a continuous function.
In optimization, a common and successful strategy to minimize energies composed of
squaring a non-linear function $\mathbf{f}$ is to
linearize the function about a
current input value (i.e., a current guess $\mathbf{z}_{0}$), minimize the energy built
from this linearization, then re-linearize around that solution, and then
repeat.
minimize f(z)²
z_0 ← initial guess
repeat until convergence
    f_0 ← linearize f(z) around z_0
    z_0 ← argmin_z f_0(z)²
Since our $\mathbf{f}$ is a geometric function, we can derive its linearizations
geometrically.
Constant function approximation
If we make the convenient—however unrealistic—assumption that in the
neighborhood of the closest-point projection $P_Y(\mathbf{z}_{0})$ of the current guess
$\mathbf{z}_{0}$ the surface $Y$ is simply the point $P_Y(\mathbf{z}_{0})$ (perhaps imagine that $Y$
makes a sharp needle-like point at $P_Y(\mathbf{z}_{0})$ or that $Y$ is very far away
from $\mathbf{z}_{0}$), then we can approximate $\mathbf{f}(\mathbf{z})$ in the proximity of our current
guess $\mathbf{z}_{0}$ as the vector between the input point $\mathbf{z}$ and $P_Y(\mathbf{z}_{0})$:

$$
\mathbf{f}(\mathbf{z}) \approx \mathbf{f}_{0}(\mathbf{z}) = \mathbf{z} - P_Y(\mathbf{z}_{0}).
$$

In effect, we are assuming that the surface $Y$ is a constant function of its
parameterization: $\mathbf{y}(u,v) = P_Y(\mathbf{z}_{0})$.
Minimizing $E_{\overrightarrow{C}}$ iteratively using this linearization of
$\mathbf{f}$ is equivalent to gradient
descent. We have simply derived
our gradients geometrically.
Linear function approximation
If we make a slightly more appropriate assumption that in the neighborhood
of $P_Y(\mathbf{z}_{0})$ the surface $Y$ is a plane, then we can improve this
approximation while keeping $\mathbf{f}$ linear in $\mathbf{z}$:

$$
\mathbf{f}(\mathbf{z}) \approx \mathbf{f}_{0}(\mathbf{z}) = ((\mathbf{z} - P_Y(\mathbf{z}_{0}))\cdot \mathbf{n})\mathbf{n},
$$

where the plane that best approximates $Y$ locally near $P_Y(\mathbf{z}_{0})$ is the
tangent plane defined by the normal vector $\mathbf{n}$ at $P_Y(\mathbf{z}_{0})$.
Minimizing $E_{\overrightarrow{C}}$ iteratively using this linearization of
$\mathbf{f}$ is equivalent to the
Gauss-Newton method. We
have simply derived our linear approximation geometrically.
Equipped with these linearizations, we may now describe an optimization
algorithm
for minimizing the matching energy between a surface $Z$ and another surface
$Y$.
Iterative closest point algorithm
So far we have derived distances between a surface $Z$ and another surface $Y$.
In our rigid alignment and registration problem, we would like to
transform one
surface $X$ into a new surface $T(X) = Z$ so that it best aligns with/matches
the other surface $Y$. Further we require that $T$ is a rigid transformation:
$T(\mathbf{x}) = \mathbf{R} \mathbf{x} + \mathbf{t}$ for some rotation matrix $\mathbf{R} \in SO(3) \subset \mathbb{R}^{3\times 3}$
(i.e., an orthogonal matrix with determinant
1) and translation vector
$\mathbf{t}\in \mathbb{R}^{3}$.
Our matching problem can be written as an optimization problem to find the best
possible rotation $\mathbf{R}$ and translation $\mathbf{t}$ that match surface $X$ to surface
$Y$:

$$
\mathop{\text{minimize}}_{\mathbf{t}\in\mathbb{R}^{3},\ \mathbf{R}\in SO(3)} \int\limits_{\mathbf{x}\in X} \|\mathbf{R}\mathbf{x}+\mathbf{t} - P_Y(\mathbf{R}\mathbf{x}+\mathbf{t})\|^2 \;dA.
$$
Even if $X$ is a triangle mesh, it is difficult to integrate over all
points on its surface. At any point, we can approximate this energy by
summing over a point-sampling of $X$:

$$
\mathop{\text{minimize}}_{\mathbf{t}\in\mathbb{R}^{3},\ \mathbf{R}\in SO(3)} \sum_{i=1}^k \|\mathbf{R}\mathbf{x}_i+\mathbf{t} - P_Y(\mathbf{R}\mathbf{x}_i+\mathbf{t})\|^2,
$$
where $\mathbf{X} \in \mathbb{R}^{k\times 3}$ is a set of $k$ points on $X$ so that each point $\mathbf{x}_i$
might lie at a vertex, along an edge, or inside a triangle. We defer discussion
of how to sample a triangle mesh surface.
Pseudocode
As the name implies, the method proceeds by iteratively finding the closest
point on $Y$ to the current rigid transformation $\mathbf{R} \mathbf{x} + \mathbf{t}$ of each sample
point $\mathbf{x}$ in $\mathbf{X}$ and then minimizing the linearized energy to update the
rotation $\mathbf{R}$ and translation $\mathbf{t}$.
If V_X and F_X are the vertices and faces of a triangle mesh surface $X$
(and correspondingly for $Y$), then we can summarize a generic ICP algorithm in
pseudocode:
icp(V_X, F_X, V_Y, F_Y)
    R, t ← initialize (e.g., set to identity transformation)
    repeat until convergence
        X ← sample source mesh (V_X, F_X)
        P0 ← project all X onto target mesh (V_Y, F_Y)
        R, t ← update rigid transform to best match X and P0
        V_X ← rigidly transform original source mesh by R and t
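Here is a minimal sketch of one such iteration in C++ with Eigen, assuming the
helper routines listed under src/ at the end of this page (their signatures
here are assumptions, and only the point-to-point update is shown; the
point-to-plane variant is analogous):

```cpp
#include <Eigen/Core>

// Assumed signatures for the assignment's helper routines (see src/ below).
void random_points_on_mesh(int n, const Eigen::MatrixXd &V,
                           const Eigen::MatrixXi &F, Eigen::MatrixXd &X);
void point_mesh_distance(const Eigen::MatrixXd &X, const Eigen::MatrixXd &VY,
                         const Eigen::MatrixXi &FY, Eigen::VectorXd &D,
                         Eigen::MatrixXd &P, Eigen::MatrixXd &N);
void point_to_point_rigid_matching(const Eigen::MatrixXd &X,
                                   const Eigen::MatrixXd &P,
                                   Eigen::Matrix3d &R, Eigen::RowVector3d &t);

// One ICP step: sample the source, project the samples onto the target,
// fit a rigid motion, and apply it to the source vertices in place.
void icp_single_iteration(Eigen::MatrixXd &VX, const Eigen::MatrixXi &FX,
                          const Eigen::MatrixXd &VY, const Eigen::MatrixXi &FY,
                          int num_samples)
{
  Eigen::MatrixXd X, P, N;
  Eigen::VectorXd D;
  random_points_on_mesh(num_samples, VX, FX, X);
  point_mesh_distance(X, VY, FY, D, P, N);
  Eigen::Matrix3d R;
  Eigen::RowVector3d t;
  point_to_point_rigid_matching(X, P, R, t);
  // Rows of VX are points, so x ← R x + t becomes VX ← VX Rᵀ + 1 tᵀ.
  VX = ((VX * R.transpose()).rowwise() + t).eval();
}
```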
Updating the rigid transformation
We would like to find the rotation matrix $\mathbf{R} \in SO(3) \subset \mathbb{R}^{3\times 3}$ and
translation vector $\mathbf{t}\in \mathbb{R}^{3}$ that best align a given set of points $\mathbf{X} \in \mathbb{R}^{k\times 3}$ on the source mesh and their current closest points $\mathbf{P} \in \mathbb{R}^{k\times 3}$
on the target mesh. We have two choices for linearizing our matching energy:
point-to-point (gradient descent) and point-to-plane (Gauss-Newton).
ICP using the point-to-point matching energy linearization is slow to converge.
ICP using the point-to-plane matching energy linearization is faster.
In either case, this is still a non-linear optimization problem, this time due
to the constraints rather than the energy term.
Closed-form solution for point-to-point rigid matching
In an effort to provide an alternative to "Least-Squares Rigid Motion Using
SVD" [Sorkine 2009], this derivation purposefully avoids the trace operator and
its various nice properties.
The point-to-point (gradient descent) rigid matching problem solves:

$$
\mathop{\text{minimize}}_{\mathbf{t}\in\mathbb{R}^{3},\ \mathbf{R}\in SO(3)} \sum_{i=1}^k \|\mathbf{R}\mathbf{x}_i+\mathbf{t} - \mathbf{p}_i\|^2.
$$
This is a variant of what's known as a Procrustes problem, named after a mythical psychopath who would kidnap people and force them to fit in his bed by stretching them or cutting off their legs. In our case, we are forcing $\mathbf{R}$ to be perfectly orthogonal (no "longer", no "shorter").
Substituting out the translation terms
This energy is quadratic in $\mathbf{t}$ and there are no other constraints on
$\mathbf{t}$. We can immediately solve for the optimal $\mathbf{t}^*$ — leaving $\mathbf{R}$ as an unknown — by
setting all derivatives with respect to unknowns in $\mathbf{t}$ to zero. First,
rewrite the energy in matrix form:

$$
E(\mathbf{t},\mathbf{R}) = \|\mathbf{X}\mathbf{R}^{\top} + \mathbf{1}\mathbf{t}^{\top} - \mathbf{P}\|_F^2,
$$

where $\mathbf{1} \in \mathbb{R}^{k}$ is a vector of ones and $||\mathbf{X}||_F^2$ computes the squared Frobenius norm of the matrix $\mathbf{X}$ (i.e., the sum of all squared element values; in MATLAB syntax: sum(sum(A.^2))). Setting the partial derivative with respect to $\mathbf{t}$ of this quadratic energy to zero finds the minimum:

$$
\mathbf{0} = \frac{\partial E}{\partial \mathbf{t}} = 2\left(\mathbf{X}\mathbf{R}^{\top} + \mathbf{1}\mathbf{t}^{\top} - \mathbf{P}\right)^{\top}\mathbf{1}.
$$
Rearranging terms above reveals that the optimal $\mathbf{t}$ is the vector aligning
the centroids of the points in $\mathbf{P}$
and the points in $\mathbf{X}$ rotated by the — yet-unknown — $\mathbf{R}$. Introducing
variables for the respective centroids $\overline{\mathbf{p}} = \tfrac{1}{k} {\sum}_{i=1}^k \mathbf{p}_i$ and $\overline{\mathbf{x}} = \tfrac{1}{k} {\sum}_{i=1}^k \mathbf{x}_i$, we can write the
formula for the optimal $\mathbf{t}$:

$$
\mathbf{t}^* = \overline{\mathbf{p}} - \mathbf{R}\overline{\mathbf{x}}.
$$
Now we have a formula for the optimal translation vector $\mathbf{t}$ in terms of the
unknown rotation $\mathbf{R}$. Let us
substitute this formula
for all occurrences of $\mathbf{t}$ in our energy written in its original summation
form:

$$
\sum_{i=1}^k \|\mathbf{R}\mathbf{x}_i + \overline{\mathbf{p}} - \mathbf{R}\overline{\mathbf{x}} - \mathbf{p}_i\|^2 = \|\overline{\mathbf{X}}\mathbf{R}^{\top} - \overline{\mathbf{P}}\|_F^2,
$$

where we introduce $\overline{\mathbf{X}} \in \mathbb{R}^{k \times 3}$ whose $i$th row contains the
relative position of the $i$th point to the centroid $\overline{\mathbf{x}}$: i.e.,
$\overline{\mathbf{x}}_i = (\mathbf{x}_i - \overline{\mathbf{x}})$ (and analogously for $\overline{\mathbf{P}}$).
Now we have the canonical form of the orthogonal Procrustes problem. To find the optimal rotation matrix $\mathbf{R}^*$, using the rotation invariance of the Frobenius norm, we will massage the terms in the minimization until we have a maximization problem involving the Frobenius inner-product of the unknown rotation $\mathbf{R}$ and the covariance matrix of $\mathbf{X}$ and $\mathbf{P}$:

$$
\mathbf{R}^* = \mathop{\text{argmin}}_{\mathbf{R}\in SO(3)} \|\overline{\mathbf{X}}\mathbf{R}^{\top} - \overline{\mathbf{P}}\|_F^2 = \mathop{\text{argmin}}_{\mathbf{R}\in SO(3)} \left( \|\overline{\mathbf{X}}\|_F^2 - 2\left<\mathbf{R}\overline{\mathbf{X}}^{\top}, \overline{\mathbf{P}}^{\top}\right>_F + \|\overline{\mathbf{P}}\|_F^2 \right) = \mathop{\text{argmax}}_{\mathbf{R}\in SO(3)} \left<\mathbf{R}\overline{\mathbf{X}}^{\top}, \overline{\mathbf{P}}^{\top}\right>_F,
$$
where $\left<\mathbf{A}, \mathbf{B} \right>_F$ is the Frobenius inner product of $\mathbf{A}$ and $\mathbf{B}$ (i.e., the sum of all per-element products; in MATLAB syntax: sum(sum(A.*B))). This can be further reduced:

$$
\mathbf{R}^* = \mathop{\text{argmax}}_{\mathbf{R}\in SO(3)} \left<\mathbf{R}, \overline{\mathbf{P}}^{\top}\overline{\mathbf{X}}\right>_F.
$$
Letting $\mathbf{M} = \overline{\mathbf{P}}^{\top} \overline{\mathbf{X}}$, we can understand this problem as projecting the covariance matrix $\mathbf{M}$ to the nearest rotation matrix $\mathbf{R}$.
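Assembling the derivation so far, a sketch of the closed-form point-to-point
solve might look like the following, where closest_rotation (the SVD-based
projection derived below) is assumed as a helper:

```cpp
#include <Eigen/Core>

// Assumed signature: projects a 3×3 matrix onto the nearest rotation
// (see src/closest_rotation.cpp and the SVD construction derived below).
void closest_rotation(const Eigen::Matrix3d &M, Eigen::Matrix3d &R);

// Closed-form point-to-point rigid matching: center the points, project the
// covariance matrix M = P̄ᵀX̄ onto SO(3), then recover t from the centroids.
void point_to_point_rigid_matching(const Eigen::MatrixXd &X,
                                   const Eigen::MatrixXd &P,
                                   Eigen::Matrix3d &R, Eigen::RowVector3d &t)
{
  const Eigen::RowVector3d x_bar = X.colwise().mean();
  const Eigen::RowVector3d p_bar = P.colwise().mean();
  const Eigen::MatrixXd X_bar = X.rowwise() - x_bar;
  const Eigen::MatrixXd P_bar = P.rowwise() - p_bar;
  const Eigen::Matrix3d M = P_bar.transpose() * X_bar;
  closest_rotation(M, R);
  // t* = p̄ − R x̄ (as row vectors: p̄ − x̄ Rᵀ).
  t = p_bar - x_bar * R.transpose();
}
```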
Question: How can we prove that $\left<\mathbf{R}\overline{\mathbf{X}}^\top,\overline{\mathbf{P}}^\top\right>_F = \left<\mathbf{R},\overline{\mathbf{P}}^\top\overline{\mathbf{X}}\right>_F$?
Hint:
Recall some linear algebra properties:
Matrix multiplication (on the left) can be understood as acting on each column: $\mathbf{A} \mathbf{B} = \mathbf{A} [\mathbf{B}_1 \ \mathbf{B}_2 \ \ldots \ \mathbf{B}_n] = [\mathbf{A} \mathbf{B}_1 \ \mathbf{A} \mathbf{B}_2 \ \ldots \ \mathbf{A} \mathbf{B}_n]$,
The Kronecker product $\mathbf{I} \otimes \mathbf{A}$ of the identity matrix $\mathbf{I}$ of size $k$ and a matrix $\mathbf{A}$ simply repeats $\mathbf{A}$ along the diagonal $k$ times. In MATLAB, repdiag(A,k),
Properties 1. and 2. imply that the vectorization of a matrix product $\mathbf{B}\mathbf{C}$ can be written as the Kronecker product of the #-columns-in-$\mathbf{C}$ identity matrix and $\mathbf{B}$ times the vectorization of $\mathbf{C}$: $\text{vec}(\mathbf{B}\mathbf{C}) = (\mathbf{I} \otimes \mathbf{B})\text{vec}(\mathbf{C})$,
The transpose of a Kronecker product is the Kronecker product of transposes: $(\mathbf{A} \otimes \mathbf{B})^{\top} = \mathbf{A}^{\top} \otimes \mathbf{B}^{\top}$,
The Frobenius inner product can be written as a dot product of vectorized matrices: $\left<\mathbf{A},\mathbf{B}\right>_F = \text{vec}(\mathbf{A}) \cdot \text{vec}(\mathbf{B}) = \text{vec}(\mathbf{A})^{\top} \text{vec}(\mathbf{B})$,
Properties 3., 4., and 5. imply that the Frobenius inner product of a matrix $\mathbf{A}$ and the matrix product of matrix $\mathbf{B}$ and $\mathbf{C}$ is equal to the Frobenius inner product of the matrix product of the transpose of $\mathbf{B}$ and $\mathbf{A}$ and the matrix $\mathbf{C}$:

$$
\left<\mathbf{A}, \mathbf{B}\mathbf{C}\right>_F = \left<\mathbf{B}^{\top}\mathbf{A}, \mathbf{C}\right>_F.
$$
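If in doubt, this last identity is easy to sanity-check numerically; a small
illustrative Eigen snippet (not part of the assignment):

```cpp
#include <Eigen/Core>
#include <iostream>

int main()
{
  // Random matrices with compatible sizes: A is 3×4, B is 3×3, C is 3×4.
  const Eigen::MatrixXd A = Eigen::MatrixXd::Random(3, 4);
  const Eigen::MatrixXd B = Eigen::MatrixXd::Random(3, 3);
  const Eigen::MatrixXd C = Eigen::MatrixXd::Random(3, 4);
  // <A, B C>_F versus <Bᵀ A, C>_F: these agree to round-off error.
  const double lhs = (A.array() * (B * C).array()).sum();
  const double rhs = ((B.transpose() * A).array() * C.array()).sum();
  std::cout << lhs - rhs << std::endl; // ≈ 0
  return 0;
}
```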
Any matrix can be written in terms of its singular value decomposition. Let's do this for our covariance matrix: $\mathbf{M} = \mathbf{U} \sigma \mathbf{V}^{\top}$, where $\mathbf{U}, \mathbf{V} \in \mathbb{R}^{3\times 3}$ are orthonormal matrices and $\sigma \in \mathbb{R}^{3\times 3}$ is a non-negative diagonal matrix:

$$
\mathbf{R}^* = \mathop{\text{argmax}}_{\mathbf{R}\in SO(3)} \left<\mathbf{R}, \mathbf{U}\sigma\mathbf{V}^{\top}\right>_F.
$$
We can use the permutation property of the Frobenius inner product again to move the products by $\mathbf{V}$ and $\mathbf{U}$ from
the right argument to the left argument:

$$
\mathbf{R}^* = \mathop{\text{argmax}}_{\mathbf{R}\in SO(3)} \left<\mathbf{U}^{\top}\mathbf{R}\mathbf{V}, \sigma\right>_F.
$$
Now, $\mathbf{U}$ and $\mathbf{V}$ are both
orthonormal, so multiplying
them against a rotation matrix $\mathbf{R}$ does not change its orthonormality. We can
pull them out of the maximization if we account for the reflection they might
incur: introduce ${\Omega} = \mathbf{U}^T\mathbf{R}\mathbf{V} \in O(3)$ with $\det{{\Omega}} = \det{\mathbf{U}\mathbf{V}^{\top}}$.
This implies that the optimal rotation for the original problem is recovered
via $\mathbf{R}^* = \mathbf{U} {\Omega}^* \mathbf{V}^{\top}$. When we move the $\mathop{\text{argmax}}$ inside, we now
look for an orthonormal matrix ${\Omega} \in O(3)$ that is a reflection (if
$\det{\mathbf{U}\mathbf{V}^{\top}} = -1$) or a rotation (if $\det{\mathbf{U}\mathbf{V}^{\top}} = 1$):

$$
{\Omega}^* = \mathop{\text{argmax}}_{{\Omega} \in O(3),\ \det{\Omega} = \det{\mathbf{U}\mathbf{V}^{\top}}} \left<{\Omega}, \sigma\right>_F.
$$
This ensures that as a result $\mathbf{R}^*$ will be a rotation: $\det{\mathbf{R}^*} = 1$.
Recall that $\sigma \in \mathbb{R}^{3\times 3}$ is a non-negative diagonal matrix of singular values
sorted so that the smallest value is in the bottom right corner.
Because ${\Omega}$ is orthonormal, each column (or row) of ${\Omega}$ must have unit norm.
Placing a non-zero on the off-diagonal will get "killed" when multiplied by the
corresponding zero in $\sigma$. So the optimal choice of ${\Omega}$ is to set all values to
zero except on the diagonal. If $\det{\mathbf{U}\mathbf{V}^{\top}} = -1$, then we should set
one (and only one) of these diagonal values to $-1$. The best choice is the bottom right
corner, since that will multiply against the smallest singular value in $\sigma$ (and
negatively affect the maximization the least):

$$
{\Omega}^* = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \det(\mathbf{U}\mathbf{V}^{\top}) \end{bmatrix},
\quad\text{so}\quad
\mathbf{R}^* = \mathbf{U}\,{\Omega}^*\,\mathbf{V}^{\top}.
$$
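In code, this projection might be sketched with the whitelisted
Eigen::JacobiSVD as follows (a sketch, not necessarily the only correct
implementation):

```cpp
#include <Eigen/Dense> // Eigen::JacobiSVD

// Project a 3×3 matrix M onto the nearest rotation R = U Ω* Vᵀ, flipping
// the last singular vector's sign whenever U Vᵀ alone would be a reflection.
void closest_rotation(const Eigen::Matrix3d &M, Eigen::Matrix3d &R)
{
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(
      M, Eigen::ComputeFullU | Eigen::ComputeFullV);
  const Eigen::Matrix3d U = svd.matrixU();
  const Eigen::Matrix3d V = svd.matrixV();
  Eigen::Matrix3d Omega = Eigen::Matrix3d::Identity();
  // det(UVᵀ) = ±1 goes in the bottom-right corner, against the smallest
  // singular value, so that det(R) = +1.
  Omega(2, 2) = (U * V.transpose()).determinant();
  R = U * Omega * V.transpose();
}
```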
Iterative linearization for point-to-plane rigid matching
The point-to-plane (Gauss-Newton) rigid matching problem solves:

$$
\mathop{\text{minimize}}_{\mathbf{t}\in\mathbb{R}^{3},\ \mathbf{R}\in SO(3)} \sum_{i=1}^k \left\|\left((\mathbf{R} \mathbf{x}_i + \mathbf{t} - \mathbf{p}_i)\cdot \hat{\mathbf{n}}_i\right) \hat{\mathbf{n}}_i\right\|^2,
$$

where $\hat{\mathbf{n}}_i \in \mathbb{R}^3$ is the unit normal at the located closest point $\mathbf{p}_i$. Since $\hat{\mathbf{n}}_i$ is a unit vector, the norm only measures the preceding term $(\mathbf{R} \mathbf{x}_i + \mathbf{t} - \mathbf{p}_i)\cdot \hat{\mathbf{n}}_i$, so we can reduce this problem to:

$$
\mathop{\text{minimize}}_{\mathbf{t}\in\mathbb{R}^{3},\ \mathbf{R}\in SO(3)} \sum_{i=1}^k \left((\mathbf{R} \mathbf{x}_i + \mathbf{t} - \mathbf{p}_i)\cdot \hat{\mathbf{n}}_i\right)^2.
$$

Unlike the point-to-point problem above, there is no closed-form solution to this problem. Instead we will ensure
that $\mathbf{R}$ is not just any $3\times 3$ matrix, but a rotation matrix, by iterative linearization.
If we simply optimize the 9 matrix entries of $\mathbf{R}$ directly, the result will be far from a rotation matrix: for example, if $\mathbf{X}$ is a twice scaled version of $\mathbf{P}$, then this unconstrained optimization would happily declare the entries of $\mathbf{R}$ to describe a (non-orthonormal) scaling matrix.
Instead, we linearize the constraint that $\mathbf{R}$ stays a rotation matrix and work with a reduced set of variables.
Any rotation $\mathbf{R}$ in 3D can be written as a scalar rotation angle $\theta$ around a rotation axis defined by a unit vector $\hat{\mathbf{w}}\in\mathbb{R}^3$.
If $\hat{\mathbf{w}} = \hat{\mathbf{z}} = [0,0,1]^{\top}$, we know that a rotation by $\theta$ can be written as:

$$
\mathbf{R}_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}.
$$

For a general axis $\hat{\mathbf{w}}$, Rodrigues' rotation formula gives:

$$
\mathbf{R}(\theta,\hat{\mathbf{w}}) = \mathbf{I} + \sin\theta\, \mathbf{W} + (1-\cos\theta)\, \mathbf{W}^2,
$$

where $\mathbf{W}\in\mathbb{R}^{3\times 3}$ is the skew-symmetric cross product matrix of $\hat{\mathbf{w}}$ so that $\mathbf{W} \mathbf{x} = \hat{\mathbf{w}} \times \mathbf{x}$.
In this form, we can linearize by considering a small change in $\theta$ and $\hat{\mathbf{w}}$:

$$
\mathbf{R} \approx \mathbf{I} + \theta \mathbf{W}, \quad \text{so that} \quad \mathbf{R}\mathbf{x} \approx \mathbf{x} + \mathbf{a} \times \mathbf{x}, \quad \text{where } \mathbf{a} = \theta\hat{\mathbf{w}} \in \mathbb{R}^{3}.
$$
Let's gather a vector of unknowns: $\mathbf{u}^{\top} =[\mathbf{a}^{\top}\ \mathbf{t}^{\top}] \in \mathbb{R}^{6}$. Then we can use the triple product identity $(\mathbf{a} \times \mathbf{x}_i)\cdot \hat{\mathbf{n}}_i = \mathbf{a} \cdot (\mathbf{x}_i \times \hat{\mathbf{n}}_i)$ to rewrite our problem as:

$$
\mathop{\text{minimize}}_{\mathbf{u}\in\mathbb{R}^{6}} \sum_{i=1}^k \left( \begin{bmatrix} (\mathbf{x}_i \times \hat{\mathbf{n}}_i)^{\top} & \hat{\mathbf{n}}_i^{\top} \end{bmatrix} \mathbf{u} - (\mathbf{p}_i - \mathbf{x}_i)\cdot \hat{\mathbf{n}}_i \right)^2.
$$
Gathering coefficients into $\mathbf{A} \in \mathbb{R}^{6\times 6}$ and $\mathbf{b} \in \mathbb{R}^6$, we have a compact quadratic minimization problem in $\mathbf{u}$:

$$
\mathop{\text{minimize}}_{\mathbf{u}\in\mathbb{R}^{6}}\ \mathbf{u}^{\top} \mathbf{A} \mathbf{u} - 2\, \mathbf{u}^{\top} \mathbf{b},
$$

whose minimizer satisfies $\mathbf{A}\mathbf{u}^* = \mathbf{b}$. If we directly use the linearized rotation $\mathbf{I} + \theta\mathbf{W}$ implied by the solution,
then our transformation will not be rigid. Instead, we should recover the
axis and angle of rotation from $\mathbf{a}$ via $\theta = ||\mathbf{a}||$ and
$\hat{\mathbf{w}} = \mathbf{a}/\theta$ and then update our rotation via the axis-angle to matrix formula above. Because we used a
linearization of the rotation constraint, we cannot assume that we have
successfully found the best rigid transformation. To converge on an optimal
value, we must set $\mathbf{x}_i \leftarrow \mathbf{R} \mathbf{x}_i + \mathbf{t}$ and repeat
this process (usually 5 times or so is sufficient).
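Putting the pieces together, here is a sketch of one linearized point-to-plane
update in Eigen; the function name and signature are illustrative, and
Eigen::AngleAxisd is used for the axis-angle to matrix conversion:

```cpp
#include <Eigen/Dense> // LDLT solve and Eigen::AngleAxisd

// One linearized point-to-plane update: build the 6×6 system in u = [a; t],
// solve it, then recover a true rotation from the axis-angle vector a.
void point_to_plane_step(const Eigen::MatrixXd &X, const Eigen::MatrixXd &P,
                         const Eigen::MatrixXd &N, Eigen::Matrix3d &R,
                         Eigen::RowVector3d &t)
{
  Eigen::Matrix<double, 6, 6> A = Eigen::Matrix<double, 6, 6>::Zero();
  Eigen::Matrix<double, 6, 1> b = Eigen::Matrix<double, 6, 1>::Zero();
  for (int i = 0; i < X.rows(); i++)
  {
    const Eigen::Vector3d x = X.row(i).transpose();
    const Eigen::Vector3d p = P.row(i).transpose();
    const Eigen::Vector3d n = N.row(i).transpose();
    // Residual (x + a×x + t − p)·n = cᵀu − d with c = [x×n; n]
    // (triple product identity) and d = (p − x)·n.
    Eigen::Matrix<double, 6, 1> c;
    c << x.cross(n), n;
    const double d = (p - x).dot(n);
    A += c * c.transpose();
    b += c * d;
  }
  const Eigen::Matrix<double, 6, 1> u = A.ldlt().solve(b);
  const Eigen::Vector3d a = u.head<3>();
  const double theta = a.norm();
  // Axis-angle back to a rotation matrix; guard against θ ≈ 0.
  if (theta > 1e-12)
  {
    R = Eigen::AngleAxisd(theta, a / theta).toRotationMatrix();
  }
  else
  {
    R = Eigen::Matrix3d::Identity();
  }
  t = u.tail<3>().transpose();
}
```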
Uniform random sampling of a triangle mesh
Our last missing piece is to sample the surface of a triangle mesh $X$ with $m$
faces uniformly randomly. This allows us to approximate continuous integrals
over the surface $X$ with a summation of the integrand evaluated at a finite
number of randomly selected points. This type of numerical
integration is called the
Monte Carlo method.
We would like our random variable $\mathbf{x} \in X$ to have a uniform
probability density function $f(\mathbf{x}) = 1/A_X$, where $A_X$ is the surface
area of the triangle mesh $X$. We
can achieve this by breaking the problem into two steps: uniformly sampling in
a single triangle and sampling triangles non-uniformly according to their
area.
Suppose we have a way to evaluate a continuous random point $\mathbf{x}$ in a triangle
$T$ with uniform probability density function $g_T(\mathbf{x}) = 1/A_T$ and we have a
way to evaluate a discrete random triangle index $T \in \{1,2,\ldots,m\}$ with discrete
probability distribution $h(T) = A_T/A_X$. Then the joint probability of evaluating a certain triangle
index $T$ and then a uniformly random point $\mathbf{x}$ in that triangle is indeed
uniform over the surface:

$$
h(T)\, g_T(\mathbf{x}) = \frac{A_T}{A_X} \frac{1}{A_T} = \frac{1}{A_X} = f(\mathbf{x}).
$$
In order to pick a point uniformly randomly in a triangle with corners $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3 \in \mathbb{R}^3$, we will first pick a point uniformly randomly in the
parallelogram formed by reflecting $\mathbf{v}_1$ across the line $\overline{\mathbf{v}_2\mathbf{v}_3}$:

$$
\mathbf{x} = \mathbf{v}_1 + a_1 (\mathbf{v}_2 - \mathbf{v}_1) + a_2 (\mathbf{v}_3 - \mathbf{v}_1),
$$

where ${a_1},{a_2}$ are uniformly sampled from the unit interval $[0,1]$. If ${a_1}+{a_2} > 1$,
then the point $\mathbf{x}$ above will lie in the reflected triangle rather than the
original one. In this case, preprocess ${a_1}$ and ${a_2}$ by setting ${a_1}\Leftarrow 1-{a_1}$ and
${a_2}\Leftarrow 1-{a_2}$ to reflect the point $\mathbf{x}$ back into the original triangle.
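A sketch of this reflection trick in C++ (using the standard <random> header;
the function name and corner layout are illustrative):

```cpp
#include <Eigen/Core>
#include <random>

// Uniformly sample a point inside the triangle (v1, v2, v3): sample the
// parallelogram, then fold back into the original triangle if necessary.
Eigen::RowVector3d random_point_in_triangle(const Eigen::RowVector3d &v1,
                                            const Eigen::RowVector3d &v2,
                                            const Eigen::RowVector3d &v3,
                                            std::mt19937 &rng)
{
  std::uniform_real_distribution<double> unif(0.0, 1.0);
  double a1 = unif(rng);
  double a2 = unif(rng);
  if (a1 + a2 > 1.0)
  {
    // The point landed in the reflected triangle; reflect it back.
    a1 = 1.0 - a1;
    a2 = 1.0 - a2;
  }
  return v1 + a1 * (v2 - v1) + a2 * (v3 - v1);
}
```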
Area-weighted random sampling of triangles
Assuming we know how to draw a continuous uniform random variable ${a_2}$ from
the unit interval $[0,1]$, we would now like to draw a discrete random
triangle index $T$ from the set $\{1,\ldots,m\}$ with likelihood proportional to
the relative area of each triangle in the mesh.
We can achieve this by first computing the cumulative sum $\mathbf{C} \in \mathbb{R}^{m}$ of the relative
areas:
$$
C_i = {\sum}_{j=1}^i \frac{A_j}{A_X}.
$$
Then our random index is found by identifying the first entry in $\mathbf{C}$ whose
value is greater than a uniform random variable ${a_2}$. Since $\mathbf{C}$ is sorted,
locating this entry can be done in $O(\log m)$ time.
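A sketch using the whitelisted igl::doublearea and igl::cumsum together with
std::upper_bound for the binary search (the function name is illustrative):

```cpp
#include <Eigen/Core>
#include <igl/cumsum.h>
#include <igl/doublearea.h>
#include <algorithm>
#include <random>

// Draw a triangle index with probability proportional to triangle area.
int sample_triangle_index(const Eigen::MatrixXd &V, const Eigen::MatrixXi &F,
                          std::mt19937 &rng)
{
  Eigen::VectorXd dblA, C;
  igl::doublearea(V, F, dblA); // 2·area per face (the factor 2 cancels)
  igl::cumsum(dblA, 1, C);     // cumulative sum down the column
  C /= C(C.size() - 1);        // normalize so the last entry is 1
  std::uniform_real_distribution<double> unif(0.0, 1.0);
  const double a = unif(rng);
  // First entry strictly greater than a; binary search is O(log m).
  const double *begin = C.data();
  return int(std::upper_bound(begin, begin + C.size(), a) - begin);
}
```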
Why is my code so slow?
Try profiling your code. Where is most of the computation time spent?
If you have done things right, the majority of time is spent computing
point-to-mesh distances. For each query point, the computational
complexity of
computing its distance to a mesh with $m$ faces is $O(m)$.
This reading task is not directly graded, but it's expected that you read and
understand sections 3.2-3.3 of Sofien Bouaziz's PhD thesis "Realtime Face
Tracking and Animation" (2015). Understanding this may require digging into
Wikipedia, other online resources, or other papers.
Blacklist
You may not use the following libigl functions:
igl::AABB
igl::fit_rotations
igl::hausdorff
igl::iterative_closest_point
igl::point_mesh_squared_distance
igl::point_simplex_squared_distance
igl::polar_dec
igl::polar_svd3x3
igl::polar_svd
igl::random_points_on_mesh
igl::rigid_alignment
Eigen::umeyama
Whitelist
You are encouraged to use the following libigl functions:
igl::cumsum computes cumulative sum
igl::doublearea computes triangle areas
igl::per_face_normals computes normal vectors for each triangle face
Eigen::JacobiSVD computes singular value decomposition of a matrix
src/random_points_on_mesh.cpp
Generate n random points uniformly sampled on a given triangle mesh with
vertex positions VX and face indices FX.
src/point_triangle_distance.cpp
Compute the distance d between a given point x and the closest point p on
a given triangle with corners a, b, and c.
src/point_mesh_distance.cpp
Compute the distances D between a set of given points X and their closest
points P on a given mesh with vertex positions VY and face indices FY.
For each point in P also output a corresponding normal in N.
It is OK to assume that all points in P lie inside a triangle (rather than
exactly at vertices or exactly along edges) for the purposes of normal
computation in N.
src/hausdorff_lower_bound.cpp
Compute a lower bound on the directed Hausdorff distance from a given mesh
(VX,FX) to another mesh (VY,FY). This function should be implemented by
randomly sampling the $X$ mesh.
src/closest_rotation.cpp
Given a $3\times 3$ matrix M, find the closest rotation matrix R.
src/point_to_point_rigid_matching.cpp
Given a set of source points X and corresponding target points P, find the
optimal rigid transformation (R,t) that aligns X to P, minimizing the
point-to-point matching energy.
src/point_to_plane_rigid_matching.cpp
Given a set of source points X and corresponding target points P and their
normals N, find the optimal rigid transformation (R,t) that aligns X to
planes passing through P orthogonal to N, minimizing the point-to-point
matching energy.
src/icp_single_iteration.cpp
Conduct a single iteration of the iterative closest point method align
(VX,FX) to (VY,FY) by finding the rigid transformation (R,t)
minimizing the matching energy.
The caller can specify the number of samples num_samples used to approximate
the integral over $X$ and specify the method (point-to-point or
point-to-plane).