Metric operations

Building a metric that is suitable for remeshing may require combining information from different sources or imposing constraints (e.g. minimum or maximum edge lengths, smooth variation with respect to the input mesh, ...).

Metric intersection

Given two metrics $\mathcal{M}_1$ and $\mathcal{M}_2$, the intersection $\mathcal{M}_1 \cap \mathcal{M}_2$ corresponds to a metric that imposes the largest sizes that are smaller than those imposed by $\mathcal{M}_1$ and $\mathcal{M}_2$: $$\forall x, \quad \ell_{\mathcal{M}_1 \cap \mathcal{M}_2}(x) \ge \max\big(\ell_{\mathcal{M}_1}(x),\, \ell_{\mathcal{M}_2}(x)\big).$$


It is computed as follows, using the simultaneous reduction of the two metrics: let $(e_1, \dots, e_d)$ be the generalized eigenvectors of the pencil $(\mathcal{M}_1, \mathcal{M}_2)$, i.e. $\mathcal{M}_1 e_i = \lambda_i \mathcal{M}_2 e_i$; then $\mathcal{M}_k = P^{-T} D_k P^{-1}$ with $P = (e_1 | \cdots | e_d)$ and $D_k$ diagonal. The intersection is then $\mathcal{M}_1 \cap \mathcal{M}_2 = P^{-T} D P^{-1}$ with $D_{ii} = \max\big((D_1)_{ii}, (D_2)_{ii}\big)$.

See the proof in the Maths section below (Intersection of metrics).
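A minimal numpy/scipy sketch of this simultaneous reduction (the function name and API are illustrative, not from any particular library):

```python
import numpy as np
from scipy.linalg import eigh

def metric_intersection(m1, m2):
    """Intersection of two SPD metrics by simultaneous reduction."""
    # Generalized eigenproblem m1 @ p_i = lam_i * m2 @ p_i; eigh normalizes
    # P so that P.T @ m2 @ P = I and P.T @ m1 @ P = diag(lam).
    lam, p = eigh(m1, m2)
    # In this basis m1 ~ diag(lam) and m2 ~ I: take the max on each axis.
    d = np.maximum(lam, 1.0)
    p_inv = np.linalg.inv(p)
    return p_inv.T @ np.diag(d) @ p_inv
```

By construction the result dominates both inputs, i.e. it prescribes, in every direction, the smaller of the two sizes.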

Metric gradation

It is sometimes desirable to control how fast edge lengths grow from one element to the next. This idea can be translated into constraints on the variation of the metric field on the edges of the mesh (see *Size gradation control of anisotropic meshes*, F. Alauzet, 2010), summarized as follows:

  • The gradation on an edge $e = \mathbf{pq}$ is $c(e) = \max\left(\frac{h_p}{h_q}, \frac{h_q}{h_p}\right)^{1/\ell(e)}$ (with $h_p$, $h_q$ the sizes at the edge extremities and $\ell(e)$ the edge length)
  • Given a metric $\mathcal{M}_p$ at a point $p$, it is possible to span a metric field at any location $q$ as $\mathcal{M}_p(q) = \eta^{-2}\,\mathcal{M}_p$ with $\eta = 1 + \ell_{\mathcal{M}_p}(\mathbf{pq})\,\ln\beta$, so that the gradation along $\mathbf{pq}$ is $\beta$
  • In practice a maximum gradation $\beta$ is enforced along each edge $\mathbf{pq}$, by modifying $\mathcal{M}_p$ to $\mathcal{M}_p \cap \mathcal{M}_q(p)$ and $\mathcal{M}_q$ to $\mathcal{M}_q \cap \mathcal{M}_p(q)$
  • Achieving a maximum gradation over all the edges in the mesh would require $\mathcal{O}(n^2)$ operations; in practice, the above operation is only applied to all the edges of the mesh a small number of times (a sketch of one such sweep is given below).
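Assuming the `metric_intersection` helper above, and hypothetical arrays `xyz` (vertex coordinates), `edges` (pairs of vertex indices) and `metrics` (one matrix per vertex), a gradation sweep could look like this sketch:

```python
import numpy as np

def span_metric(m_p, pq, beta):
    """Metric spanned from p at the point q = p + pq, for a gradation beta."""
    eta = 1.0 + np.sqrt(pq @ m_p @ pq) * np.log(beta)
    return m_p / eta**2

def gradation_sweeps(metrics, xyz, edges, beta, n_sweeps=2):
    """Enforce a maximum gradation beta edge by edge, a few times."""
    for _ in range(n_sweeps):
        for p, q in edges:
            pq = xyz[q] - xyz[p]
            metrics[p] = metric_intersection(metrics[p],
                                             span_metric(metrics[q], -pq, beta))
            metrics[q] = metric_intersection(metrics[q],
                                             span_metric(metrics[p], pq, beta))
    return metrics
```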

Thresholds on the local sizes

In order to ensure that the metric field will generate meshes with edge sizes between $h_{min}$ and $h_{max}$, the metric $\mathcal{M} = P \Lambda P^T$ with $\Lambda = \mathrm{diag}(\lambda_i)$ is modified as $\tilde{\mathcal{M}} = P \tilde{\Lambda} P^T$ with $\tilde{\lambda}_i = \min\left(\max\left(\lambda_i,\, h_{max}^{-2}\right),\, h_{min}^{-2}\right)$.
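In eigenvector form this clamping is a few lines of numpy (a sketch; the helper name is ours):

```python
import numpy as np

def clamp_metric(m, h_min, h_max):
    """Clamp the sizes prescribed by a metric to [h_min, h_max]."""
    lam, p = np.linalg.eigh(m)                  # m = P diag(lam) P^T
    lam = np.clip(lam, h_max**-2, h_min**-2)    # size h_i = lam_i^(-1/2)
    return p @ np.diag(lam) @ p.T
```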

Element-implied metric

Given an element $K$, defined by its vertices $(x_0, \dots, x_d)$, the element-implied metric $\mathcal{M}_K$ is the metric for which the element is equilateral with unit edges.

If $J_K$ is the Jacobian of the transformation from the reference equilateral element to the physical element, the element-implied metric is $$\mathcal{M}_K = \left(J_K J_K^T\right)^{-1}.$$

In practice $J_K$ is computed as $J_K = J_0\, J_1$ where
  • $J_0$ is the Jacobian of the transformation from the reference orthogonal element to the physical element,
  • $J_1$ is the Jacobian of the transformation from the reference equilateral element to the reference orthogonal element (independent of $K$).
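For a 2D triangle this gives the following sketch (names are illustrative); the assertions check that the edges of the element have unit length in the implied metric:

```python
import numpy as np

def implied_metric_tri(x0, x1, x2):
    """Element-implied metric of the triangle (x0, x1, x2) in 2D."""
    # J0: reference orthogonal triangle (0,0)-(1,0)-(0,1) -> physical element.
    j0 = np.column_stack([x1 - x0, x2 - x0])
    # J1: reference equilateral (unit edges) -> reference orthogonal triangle,
    # the inverse of the map sending (1,0),(0,1) to the equilateral edges.
    j1 = np.linalg.inv(np.array([[1.0, 0.5],
                                 [0.0, np.sqrt(3.0) / 2.0]]))
    jk = j0 @ j1
    return np.linalg.inv(jk @ jk.T)

x0, x1, x2 = np.zeros(2), np.array([2.0, 0.0]), np.array([0.5, 1.0])
m = implied_metric_tri(x0, x1, x2)
for e in (x1 - x0, x2 - x0, x2 - x1):
    assert abs(np.sqrt(e @ m @ e) - 1.0) < 1e-12
```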

Controlling the step between two metrics

In the remeshing process, it may be interesting to limit the step between the element-implied and target metrics.

Given two metric fields $\mathcal{M}$ and $\mathcal{N}$, the objective is to find $\mathcal{N}'$ as close as possible to $\mathcal{N}$ such that, for all edges $e$, $\frac{1}{s}\,\ell_{\mathcal{M}}(e) \le \ell_{\mathcal{N}'}(e) \le s\,\ell_{\mathcal{M}}(e)$, i.e. to have $$\frac{1}{s^2}\,\mathcal{M} \preceq \mathcal{N}' \preceq s^2\,\mathcal{M}.$$

  • If as close as possible is defined as minimizing the Frobenius norm $\|\mathcal{M}^{-1/2}(\mathcal{N}' - \mathcal{N})\,\mathcal{M}^{-1/2}\|_F$, the optimal $\mathcal{N}'$ is computed as follows
    • compute $\bar{\mathcal{N}} = \mathcal{M}^{-1/2}\,\mathcal{N}\,\mathcal{M}^{-1/2}$,
    • compute the eigenvalue decomposition $\bar{\mathcal{N}} = Q \Lambda Q^T$,
    • compute $\tilde{\Lambda}$ where $\tilde{\lambda}_i = \min\left(\max\left(\lambda_i,\, 1/s^2\right),\, s^2\right)$,
    • compute $\mathcal{N}' = \mathcal{M}^{1/2}\, Q \tilde{\Lambda} Q^T\, \mathcal{M}^{1/2}$.

(figure: the thin blue lines represent the bounds $\frac{1}{s^2}\mathcal{M}$ and $s^2\,\mathcal{M}$)

See the proof in the Maths section below (Metric optimization problem).
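The four steps translate directly into numpy/scipy (a sketch under the same notations; `limit_step` is our name, not a library function):

```python
import numpy as np
from scipy.linalg import sqrtm

def limit_step(m, n, s):
    """Metric closest to n (in the M-twisted Frobenius norm) whose lengths
    differ from those prescribed by m by a factor of at most s."""
    m_half = np.real(sqrtm(m))
    m_half_inv = np.linalg.inv(m_half)
    n_bar = m_half_inv @ n @ m_half_inv
    lam, q = np.linalg.eigh(n_bar)
    lam = np.clip(lam, s**-2, s**2)
    return m_half @ q @ np.diag(lam) @ q.T @ m_half
```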

Metric scaling

  • The "ideal" metric is modified using a series of constraints
    • Minimum and maximum edge sizes and ,
    • Maximum gradation ,
    • Maximum step with respect to the element implied metric ,
    • A minimum curvature radius to edge lenth ratio (and optionally ), giving a fixed metric
  • A scaling is applied, such that the actual metric used for remeshing is
  • is chosen to have a given complexity
  • A simple bisection is used to compute if all the constraints can be met.
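A sketch of the complexity-driven bisection, reusing the `clamp_metric` helper above; the per-vertex `metrics` and dual `volumes` arrays are hypothetical inputs, and the other constraints (gradation, step) would be re-applied inside `complexity` in the same way:

```python
import numpy as np

def scale_to_complexity(metrics, volumes, target, h_min, h_max, n_iter=50):
    """Find alpha such that the clamped metric alpha^2 * M has a given
    complexity C = sum_v sqrt(det(M_v)) * vol_v (~ number of vertices)."""
    def complexity(alpha):
        return sum(
            np.sqrt(np.linalg.det(clamp_metric(alpha**2 * m, h_min, h_max))) * v
            for m, v in zip(metrics, volumes)
        )

    lo, hi = 1e-8, 1e8          # bracket; complexity is nondecreasing in alpha
    for _ in range(n_iter):
        mid = np.sqrt(lo * hi)  # bisection on log(alpha)
        if complexity(mid) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```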

Maths

Notations

  • One denotes the Frobenius norm of a matrix $A$ by $\|A\|_F = \sqrt{\mathrm{tr}(A^T A)}$. This norm is associated with the scalar product $\langle A, B \rangle = \mathrm{tr}(A^T B)$.
  • One denotes by $\preceq$ the partial order in the symmetric matrices space $\mathcal{S}_n$: $A \preceq B$ if $B - A$ is positive semidefinite, i.e. for all $x$, $x^T A x \le x^T B x$.

Projection on a closed convex subset of the symmetric matrices

The following theorem has been proven in *Computing a nearest symmetric positive semidefinite matrix*, Nicholas J. Higham, 1988:

Theorem 1 Let $\mathcal{C} = \left\{ S \in \mathcal{S}_n : S \succeq 0 \right\}$. Then, $\mathcal{C}$ is a closed convex subset of the space $\mathcal{S}_n$ and there exists a projection application $\Pi_{\mathcal{C}} : \mathcal{S}_n \to \mathcal{C}$ such that $$\forall A \in \mathcal{S}_n, \quad \|A - \Pi_{\mathcal{C}}(A)\|_F = \min_{S \in \mathcal{C}} \|A - S\|_F.$$ Let $A \in \mathcal{S}_n$ and its diagonalisation in an orthonormal basis: $A = Q \Lambda Q^T$, $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Then, $$\Pi_{\mathcal{C}}(A) = Q \Lambda^+ Q^T, \qquad \Lambda^+ = \mathrm{diag}\big(\max(\lambda_i, 0)\big).$$

Proof

Let $A \in \mathcal{S}_n$ and $A = Q \Lambda Q^T$ its diagonalisation in an orthonormal basis. As $\|\cdot\|_F$ is invariant by linear isometry ($\|Q^T S Q\|_F = \|S\|_F$), the change of variable $\bar{S} = Q^T S Q$, together with $S \in \mathcal{C} \iff \bar{S} \in \mathcal{C}$, leads to the following problem: $$\min_{\bar{S} \succeq 0} \|\Lambda - \bar{S}\|_F^2.$$ This problem can be solved explicitly. Since $\bar{S} \succeq 0$, $\bar{S}_{ii} = e_i^T \bar{S} e_i \ge 0$ ($e_i$ the canonical basis vectors), so $$\|\Lambda - \bar{S}\|_F^2 = \sum_i (\lambda_i - \bar{S}_{ii})^2 + \sum_{i \ne j} \bar{S}_{ij}^2 \ge \sum_{\lambda_i < 0} \lambda_i^2.$$ This lower bound is reached for the diagonal matrix $\bar{S} = \Lambda^+$, where $\Lambda^+_{ii} = \max(\lambda_i, 0)$; moreover, $\Lambda^+$ is symmetric and is in $\mathcal{C}$, which ends the proof.

Generalization

One can generalize the Theorem above as follows:

Theorem 2 Given two real numbers $\alpha \le \beta$, let $\mathcal{C}_{\alpha,\beta} = \left\{ S \in \mathcal{S}_n : \alpha I \preceq S \preceq \beta I \right\}$. $\mathcal{C}_{\alpha,\beta}$ is a compact convex subset of $\mathcal{S}_n$; let us denote $\Pi_{\alpha,\beta}$ the associated projection application. Given $A \in \mathcal{S}_n$ and its diagonalisation in an orthonormal basis: $A = Q \Lambda Q^T$, $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Then, $$\Pi_{\alpha,\beta}(A) = Q \tilde{\Lambda} Q^T, \qquad \tilde{\lambda}_i = \min\big(\max(\lambda_i, \alpha), \beta\big).$$

Proof

The proof follows the one above very closely. The lower bound is obtained using the inequality $$\|\Lambda - \bar{S}\|_F^2 \ge \sum_i (\lambda_i - \bar{S}_{ii})^2 \ge \sum_i (\lambda_i - \tilde{\lambda}_i)^2,$$ which holds because $\alpha \le \bar{S}_{ii} \le \beta$, and which is reached for $\bar{S} = \tilde{\Lambda}$ where $\tilde{\lambda}_i = \alpha$ if $\lambda_i < \alpha$, $\tilde{\lambda}_i = \beta$ if $\lambda_i > \beta$ and $\tilde{\lambda}_i = \lambda_i$ otherwise. Again, the matrix $Q \tilde{\Lambda} Q^T$ is symmetric and indeed in $\mathcal{C}_{\alpha,\beta}$ (because, since $\tilde{\Lambda}$ is diagonal with entries in $[\alpha, \beta]$, $\alpha I \preceq \tilde{\Lambda} \preceq \beta I$) and thus is the unique solution of the problem (existence and uniqueness being given by the projection on closed convex sets in a Hilbert space).
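A small numerical illustration of Theorem 2 (the helper name is ours): the eigenvalue-clamped matrix is never farther from $A$ than randomly sampled feasible matrices.

```python
import numpy as np

def project_box(a, alpha, beta):
    """Projection of a symmetric matrix onto {S : alpha I <= S <= beta I}."""
    lam, q = np.linalg.eigh(a)
    return q @ np.diag(np.clip(lam, alpha, beta)) @ q.T

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
a = (a + a.T) / 2
p = project_box(a, 0.5, 2.0)
for _ in range(100):
    v = np.linalg.qr(rng.standard_normal((4, 4)))[0]   # random orthonormal basis
    s = v @ np.diag(rng.uniform(0.5, 2.0, 4)) @ v.T    # feasible matrix
    assert np.linalg.norm(a - p) <= np.linalg.norm(a - s) + 1e-9
```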

Metric optimization problem

Given two metrics $\mathcal{M}, \mathcal{N}$, the deformation ratio between the metric $\mathcal{N}$ and $\mathcal{M}$ is defined as a function of $x \ne 0$: $$r(x) = \frac{\ell_{\mathcal{N}}(x)}{\ell_{\mathcal{M}}(x)} = \sqrt{\frac{x^T \mathcal{N} x}{x^T \mathcal{M} x}}.$$

The problem.

Given two metrics $\mathcal{M}, \mathcal{N}$, and two real numbers $0 < c_1 \le c_2$, let $\mathcal{C}$ be the set of metrics which are not too stretched with regard to $\mathcal{M}$: $$\mathcal{C} = \left\{ \mathcal{P} : \forall x \ne 0, \; c_1\, \ell_{\mathcal{M}}(x) \le \ell_{\mathcal{P}}(x) \le c_2\, \ell_{\mathcal{M}}(x) \right\}.$$ Again, this set is compact and convex. Given a matrix norm $\|\cdot\|$, we want to find $\mathcal{N}' \in \mathcal{C}$ which minimizes the distance to $\mathcal{N}$, i.e. find $\mathcal{N}' \in \mathcal{C}$ such that: $$\|\mathcal{N} - \mathcal{N}'\| = \min_{\mathcal{P} \in \mathcal{C}} \|\mathcal{N} - \mathcal{P}\|.$$

Lemma: Let $\mathcal{M} \succ 0$ and $\mathcal{N} \succ 0$ be two metrics. The function $x \mapsto r(x)^2$ is bounded and reaches its bounds, which are the minimum and maximum of the eigenvalues of $\mathcal{M}^{-1}\mathcal{N}$. Thus, $$\lambda_{\min}(\mathcal{M}^{-1}\mathcal{N}) \le \frac{x^T \mathcal{N} x}{x^T \mathcal{M} x} \le \lambda_{\max}(\mathcal{M}^{-1}\mathcal{N}).$$

Thanks to this lemma, the admissible set can be re-written as $$\mathcal{C} = \left\{ \mathcal{P} \in \mathcal{S}_n : c_1^2\, \mathcal{M} \preceq \mathcal{P} \preceq c_2^2\, \mathcal{M} \right\}.$$

In the $\mathcal{M}$-twisted Frobenius norm

First, one studies the case of choosing the $\mathcal{M}$-twisted Frobenius norm $$\|A\|_{\mathcal{M}} = \|\mathcal{M}^{-1/2}\, A\, \mathcal{M}^{-1/2}\|_F.$$ The above problem reads as follows. Find $\mathcal{N}' \in \mathcal{C}$ such that: $$\|\mathcal{M}^{-1/2} (\mathcal{N} - \mathcal{N}') \mathcal{M}^{-1/2}\|_F = \min_{\mathcal{P} \in \mathcal{C}} \|\mathcal{M}^{-1/2} (\mathcal{N} - \mathcal{P}) \mathcal{M}^{-1/2}\|_F.$$ It is then clear that the change of variable $\bar{\mathcal{P}} = \mathcal{M}^{-1/2} \mathcal{P} \mathcal{M}^{-1/2}$ leads to a problem of the form of Theorem 2. Denoting $\bar{\mathcal{N}} = \mathcal{M}^{-1/2} \mathcal{N} \mathcal{M}^{-1/2}$, the problem reads: find $\bar{\mathcal{N}}' \in \mathcal{C}_{c_1^2, c_2^2}$ such that $$\|\bar{\mathcal{N}} - \bar{\mathcal{N}}'\|_F = \min_{c_1^2 I \preceq \bar{\mathcal{P}} \preceq c_2^2 I} \|\bar{\mathcal{N}} - \bar{\mathcal{P}}\|_F.$$ Then, from Theorem 2, the solution is $\bar{\mathcal{N}}' = \Pi_{c_1^2, c_2^2}(\bar{\mathcal{N}})$ and $\mathcal{N}' = \mathcal{M}^{1/2}\, \bar{\mathcal{N}}'\, \mathcal{M}^{1/2}$ can be computed as follows:

  • Compute $\bar{\mathcal{N}} = \mathcal{M}^{-1/2}\, \mathcal{N}\, \mathcal{M}^{-1/2}$,
  • Find $Q$, $\Lambda$ such that $\bar{\mathcal{N}} = Q \Lambda Q^T$.
  • Compute $\tilde{\Lambda} = \mathrm{diag}\big(\min(\max(\lambda_i, c_1^2), c_2^2)\big)$.
  • Compute $\mathcal{N}' = \mathcal{M}^{1/2}\, Q \tilde{\Lambda} Q^T\, \mathcal{M}^{1/2}$.

Remark Maybe a more suitable formulation would be to minimize the plain Frobenius norm $\|\mathcal{N} - \mathcal{N}'\|_F$ instead. This leads to a different problem, and the two distance measures can give very different results... does it matter in practice?

Intersection of metrics

Problem.

Given two metrics $\mathcal{M}_1, \mathcal{M}_2$, we want to find a metric $\mathcal{M}$ which respects the most restrictive length of $\mathcal{M}_1$ and $\mathcal{M}_2$. Moreover, we want this metric to be maximal in terms of local volume. More formally, the length restriction can be expressed by $$\forall x, \quad \ell_{\mathcal{M}}(x) \ge \max\big(\ell_{\mathcal{M}_1}(x), \ell_{\mathcal{M}_2}(x)\big), \quad \text{i.e.} \quad \mathcal{M} \succeq \mathcal{M}_1 \text{ and } \mathcal{M} \succeq \mathcal{M}_2;$$ indeed, $\ell_{\mathcal{M}}(x) = \sqrt{x^T \mathcal{M} x}$ is the length of $x$ associated with a metric $\mathcal{M}$. The local volume associated with a metric is governed by its determinant ($V(\mathcal{M}) \propto \det(\mathcal{M})^{-1/2}$), so that the optimization problem reads $$\max_{\mathcal{M}} \det(\mathcal{M}^{-1}) \quad \text{such that} \quad \mathcal{M} \succeq \mathcal{M}_1, \; \mathcal{M} \succeq \mathcal{M}_2.$$

Solution of the problem.

First, one multiplies by $\mathcal{M}_1^{-1/2}$ on each side of the constraints, which leads to $$\mathcal{M}_1^{-1/2} \mathcal{M} \mathcal{M}_1^{-1/2} \succeq I \quad \text{and} \quad \mathcal{M}_1^{-1/2} \mathcal{M} \mathcal{M}_1^{-1/2} \succeq \mathcal{M}_1^{-1/2} \mathcal{M}_2 \mathcal{M}_1^{-1/2}.$$ Then, denoting $\bar{\mathcal{M}}_2 = \mathcal{M}_1^{-1/2} \mathcal{M}_2 \mathcal{M}_1^{-1/2}$ and applying the change of variable $\bar{\mathcal{M}} = \mathcal{M}_1^{-1/2} \mathcal{M} \mathcal{M}_1^{-1/2}$ (which only rescales the objective by the constant $\det \mathcal{M}_1$), one gets the simplified problem: $$\max_{\bar{\mathcal{M}}} \det(\bar{\mathcal{M}}^{-1}) \quad \text{such that} \quad \bar{\mathcal{M}} \succeq I, \; \bar{\mathcal{M}} \succeq \bar{\mathcal{M}}_2.$$

This problem can be simplified even more using the diagonalization of $\bar{\mathcal{M}}_2$ in an orthonormal basis, $\bar{\mathcal{M}}_2 = Q D Q^T$, and the change of variable $\hat{\mathcal{M}} = Q^T \bar{\mathcal{M}} Q$ (which does not change the objective, as $\det(Q^T \bar{\mathcal{M}} Q) = \det \bar{\mathcal{M}}$). Indeed, $\bar{\mathcal{M}} \succeq I \iff \hat{\mathcal{M}} \succeq I$ and $\bar{\mathcal{M}} \succeq \bar{\mathcal{M}}_2 \iff \hat{\mathcal{M}} \succeq D$, so the problem is finally: $$\max_{\hat{\mathcal{M}}} \det(\hat{\mathcal{M}}^{-1}) \quad \text{such that} \quad \hat{\mathcal{M}} \succeq I, \; \hat{\mathcal{M}} \succeq D.$$

This problem can now be solved explicitly by finding an upper bound. Let $\hat{\mathcal{M}}$ be such that $\hat{\mathcal{M}} \succeq I$ and $\hat{\mathcal{M}} \succeq D$; then Hadamard's inequality provides $\det(\hat{\mathcal{M}}^{-1}) \le \prod_i (\hat{\mathcal{M}}^{-1})_{ii}$. Moreover, $(\hat{\mathcal{M}}^{-1})_{ii} = e_i^T \hat{\mathcal{M}}^{-1} e_i$, and the two constraints give $\hat{\mathcal{M}}^{-1} \preceq I$ and $\hat{\mathcal{M}}^{-1} \preceq D^{-1}$, from which $(\hat{\mathcal{M}}^{-1})_{ii} \le \min(1, d_i^{-1})$. One gets the upper bound $$\det(\hat{\mathcal{M}}^{-1}) \le \prod_i \min(1, d_i^{-1}),$$ reached for the diagonal matrix $\hat{\mathcal{M}} = \mathrm{diag}\big(\max(1, d_i)\big)$. Moreover, this matrix is symmetric and verifies $\hat{\mathcal{M}} \succeq I$ and $\hat{\mathcal{M}} \succeq D$.

Finally, applying the two changes of variables backwards, the unique solution of the problem is $$\mathcal{M}_1 \cap \mathcal{M}_2 = \mathcal{M}_1^{1/2}\, Q\, \mathrm{diag}\big(\max(1, d_i)\big)\, Q^T\, \mathcal{M}_1^{1/2}, \qquad \text{where } \mathcal{M}_1^{-1/2}\, \mathcal{M}_2\, \mathcal{M}_1^{-1/2} = Q D Q^T.$$
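As a sanity check, the closed-form solution can be implemented and verified to dominate both inputs (an illustrative numpy/scipy sketch):

```python
import numpy as np
from scipy.linalg import sqrtm

def intersect(m1, m2):
    """Closed-form metric intersection derived above."""
    m1_half = np.real(sqrtm(m1))
    m1_half_inv = np.linalg.inv(m1_half)
    d, q = np.linalg.eigh(m1_half_inv @ m2 @ m1_half_inv)
    return m1_half @ q @ np.diag(np.maximum(1.0, d)) @ q.T @ m1_half

rng = np.random.default_rng(1)
for _ in range(100):
    a = rng.standard_normal((3, 3))
    b = rng.standard_normal((3, 3))
    m1 = a @ a.T + 1e-3 * np.eye(3)
    m2 = b @ b.T + 1e-3 * np.eye(3)
    m = intersect(m1, m2)
    # m >= m1 and m >= m2 (up to rounding): all eigenvalues nonnegative.
    assert np.linalg.eigvalsh(m - m1).min() > -1e-9
    assert np.linalg.eigvalsh(m - m2).min() > -1e-9
```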

Hadamard inequality

Given a matrix $A = (a_1 | \cdots | a_n)$, one has $$|\det A| \le \prod_{i=1}^n \|a_i\|_2,$$ with equality iff the $a_i$ are orthogonal.

The result is clear if $A$ is singular. If not, let $B = A D^{-1}$ with $D = \mathrm{diag}(\|a_1\|_2, \dots, \|a_n\|_2)$, so that one has to show that $|\det B| \le 1$. $B^* B$ is hermitian hence diagonalizable in an orthonormal basis; denoting $\mu_1, \dots, \mu_n \ge 0$ its eigenvalues, one has $$|\det B|^2 = \det(B^* B) = \prod_{i=1}^n \mu_i \le \left(\frac{1}{n} \sum_{i=1}^n \mu_i\right)^n = \left(\frac{\mathrm{tr}(B^* B)}{n}\right)^n = 1,$$ using the arithmetic-geometric mean inequality and the fact that the columns of $B$ have unit norm, so $\mathrm{tr}(B^* B) = n$. Equality requires $\mu_i = 1$ for all $i$, i.e. $B^* B = I$, i.e. the $a_i$ are orthogonal.