Abstract
This article introduces a full mathematical and numerical framework for treating functional shapes (or fshapes), following the landmarks of shape spaces and shape analysis. Functional shapes can be described as signal functions supported on varying geometrical supports. Analyzing the variability of ensembles of fshapes requires modeling and quantifying joint variations in geometry and signal, which have been treated separately in previous approaches. Instead, building on the ideas of shape spaces for purely geometrical objects, we propose the extended concept of fshape bundles and define Riemannian metrics for fshape metamorphoses to model geometric-functional transformations within these bundles. We also generalize previous works on data attachment terms based on the notion of varifolds and demonstrate the utility of these distances. Based on these tools, we propose variational formulations of the atlas estimation problem on populations of fshapes and prove the existence of solutions for the different models. The second part of the article examines thoroughly the numerical implementation of the tangential simplified metamorphosis model by detailing discrete expressions for the metrics and gradients and by proposing an optimization scheme for the atlas estimation problem. We present a few results of the methodology on a synthetic dataset as well as on a population of retinal membranes with thickness maps.
Notes
We are using here that for X compact, the mapping \(v\mapsto \phi ^v_1\cdot X\) is continuous for the weak convergence on \(v\in L^2([0,1],V)\) and the convergence for the Hausdorff metric on the set of all compact subsets of E.
References
W. Allard. On the first variation of a varifold. Annals of Mathematics, 95(3), 1972.
F. Almgren. Plateau’s Problem: An Invitation to Varifold Geometry. Student Mathematical Library, 1966.
S. Arguillere, E. Trélat, A. Trouvé, and L. Younes. Shape deformation analysis from the optimal control viewpoint. Journal de Mathématiques Pures et Appliquées, 104(1):139–178, July 2015.
V. Arnold. Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications à l’hydrodynamique des fluides parfaits. Annales de l’Institut Fourier, 16(2):319–361, 1966.
N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.
M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International Journal of Computer Vision, 61:139–157, 2005.
M. Bruveris, L. Risser, and F. Vialard. Mixture of Kernels and Iterated Semidirect Product of Diffeomorphisms Groups. Multiscale Modeling and Simulation, 10(4):1344–1368, 2012.
C. Carmeli, E. De Vito, A. Toigo, and V. Umanita. Vector valued reproducing kernel Hilbert spaces and universality. Analysis and Applications, 8(1):19–61, 2010.
B. Charlier, N. Charon, and A. Trouvé. A short introduction to the functional shapes toolkit. https://github.com/fshapes/fshapesTk/, 2014–2015.
N. Charon. Analysis of geometric and functional shapes with extensions of currents. Application to registration and atlas estimation. PhD thesis, ENS Cachan, 2013.
N. Charon and A. Trouvé. Functional currents: a new mathematical tool to model and analyse functional shapes. Journal of Mathematical Imaging and Vision, 48(3):413–431, 2013.
N. Charon and A. Trouvé. The varifold representation of non-oriented shapes for diffeomorphic registration. SIAM Journal on Imaging Sciences, 6(4):2547–2580, 2013.
P. Dupuis, U. Grenander, and M. I. Miller. Variational problems on flows of diffeomorphisms for image matching. Quarterly of Applied Mathematics, 56(3):587, 1998.
S. Durrleman. Statistical models of currents for measuring the variability of anatomical curves, surfaces and their evolution. PhD thesis, Inria Sophia Antipolis, 2009.
H. Federer. Geometric measure theory. Springer, 1969.
J. Glaunès. Transport par difféomorphismes de points, de mesures et de courants pour la comparaison de formes et l’anatomie numérique. PhD thesis, Université Paris 13, 2005.
J. Glaunès, A. Trouvé, and L. Younes. Diffeomorphic matching of distributions: A new approach for unlabelled point-sets and sub-manifolds matching. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2:712–718, 2004.
J. Glaunès and M. Vaillant. Surface matching via currents. Proceedings of Information Processing in Medical Imaging (IPMI), Lecture Notes in Computer Science, 3565:381–392, 2006.
S. Joshi, B. Davis, M. Jomier, and G. Gerig. Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage, 23:S151–S160, 2004.
S. Lee, N. Fallah, F. Forooghian, A. Ko, K. Pakzad-Vaezi, A. B. Merkur, A. W. Kirker, D. A. Albiani, M. Young, M. V. Sarunic, and M. F. Beg. Comparative analysis of repeatability of manual and automated choroidal thickness measurements in nonneovascular age-related macular degeneration. Investigative Ophthalmology & Visual Science, 53(5):2864–2871, 2013.
S. Lee, S. X. Han, M. Young, M. F. Beg, M. V. Sarunic, and P. J. Mackenzie. Optic nerve head and peripapillary morphometrics in myopic glaucoma. preprint, 2014.
J. Ma, M. I. Miller, A. Trouvé, and L. Younes. Bayesian template estimation in computational anatomy. NeuroImage, 42(1):252–261, 2008.
J. Ma, M. I. Miller, and L. Younes. A Bayesian generative model for surface template estimation. Journal of Biomedical Imaging, 2010:16, 2010.
M. Micheli, P. W. Michor, and D. Mumford. Sobolev metrics on diffeomorphism groups and the derived geometry of spaces of submanifolds. Izvestiya: Mathematics, 77(3):541, 2013.
P. W. Michor and D. Mumford. A zoo of diffeomorphism groups on \({\mathbb{R}}^n\). Annals of Global Analysis and Geometry, 44(4):529–540, 2013.
M. I. Miller, A. Trouvé, and L. Younes. On the metrics and Euler–Lagrange equations of computational anatomy. Annual Review of Biomedical Engineering, 4(1):375–405, 2002.
M. I. Miller, A. Trouvé, and L. Younes. Geodesic Shooting for Computational Anatomy. Journal of Mathematical Imaging and Vision, 24(2):209–228, 2006.
M. I. Miller, L. Younes, and A. Trouvé. Diffeomorphometry and geodesic positioning systems for human anatomy. TECHNOLOGY, 2(1):36–43, 2014.
F. Morgan. Geometric measure theory, a beginner’s guide. Academic Press, 1995.
L. Simon. Lecture notes on geometric measure theory. Australian National University, 1983.
B. K. Sriperumbudur, K. Fukumizu, and G. Lanckriet. On the relation between universality, characteristic kernels and RKHS embedding of measures. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS-10), volume 9, pages 773–780, 2010.
B. Thibert. Sur l’approximation géométrique d’une surface lisse. Applications en géologie structurale. PhD thesis, Université Claude Bernard - Lyon 1, 2003.
A. Trouvé. An approach of pattern recognition through infinite dimensional group action. Rapport de recherche du LMENS, 1995.
A. Trouvé. Diffeomorphisms groups and pattern matching in image analysis. International Journal of Computer Vision, 28(3):213–221, 1998.
A. Trouvé and L. Younes. Local geometry of deformable templates. SIAM Journal on Mathematical Analysis, 37(1):17–59, 2005.
A. Trouvé and L. Younes. Metamorphoses through Lie group action. Foundations of Computational Mathematics, 5:173–198, 2005.
A. Trouvé and L. Younes. Handbook of Mathematical Methods in Imaging, chapter Shape Spaces, pages 1309–1362. Springer, 2011.
L. Younes. Shapes and diffeomorphisms. Springer, 2010.
Acknowledgments
We would like to thank Mirza Faisal Beg, Sieun Lee, Evgeniy Lebed, Marinko Sarunic and their collaborators for providing the OCT dataset and for fruitful discussions. We are very grateful to the anonymous reviewers for their detailed and constructive comments that helped us improve the original manuscript. The authors also acknowledge the support of the French Agence Nationale de la Recherche project HM-TC (Number ANR-09-EMER-006).
Communicated by Peter J. Olver.
Appendices
Appendix 1: Variation Formula for Fvarifolds: Proof of Theorem 5
The proof follows the same steps as the corresponding result for usual varifolds (cf [12]). Given a \(C^{1}\) vector field v on E with compact support, we can consider the 1-parameter group of diffeomorphisms \(\phi _{t}\) with \(\phi _{0}={\text {Id}}\) and \({\partial _{t}}_{\upharpoonright _{t=0}} \phi _{t} =v\). Then, it follows that:
As we can see, the previous expression leads to several terms in the derivative: the derivative of the volume change term \(J_t\doteq |{d_{x}\phi _{t}}_{\upharpoonright _{T_{x}X}}|\), and the derivatives of the function \(\omega \) with respect to the position variable, to the tangent space direction and to the signal part. Since \(\omega \) is assumed to be \(C^{1}\), the derivatives with respect to point positions and signal values are easy to obtain and equal, respectively, \(\left( \frac{\partial \omega }{\partial x} \Big | v \right) \) and \(\frac{\partial \omega }{\partial m}h\). The two other terms require more attention.
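Schematically, writing the signal argument along the variation as \(f(x)+t\,h(x)\) and with \(J_{0}=1\) (a condensed form of this decomposition; the derivative of the transported tangent space is made precise in 1.3 below), the integrand differentiates at \(t=0\) as
\[
\frac{{\text {d}}}{{\text {d}}t}\Big [ \omega \big (\phi _{t}(x),\, d_{x}\phi _{t}(T_{x}X),\, f(x)+t\,h(x)\big )\, J_{t} \Big ]_{\upharpoonright _{t=0}} = \left( \frac{\partial \omega }{\partial x} \Big | v \right) + \left( \frac{\partial \omega }{\partial V} \Big | {\partial _{t}}_{\upharpoonright _{t=0}} d_{x}\phi _{t}(T_{x}X) \right) + \frac{\partial \omega }{\partial m}\,h + \omega \, {\partial _{t}}_{\upharpoonright _{t=0}} J_{t}.
\]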
1.1 Derivative of the Volume Change
For any vector field u defined on X, we shall denote by \(u^{\top }\) and \(u^{\bot }\) the tangential and normal components of u with respect to the tangent space of X at each point. We also introduce the connection \(\nabla _{\cdot }\cdot \) on the ambient space and an orthonormal frame of tangent vector fields \((e_{i})_{i=1,\ldots ,d}\) on X. Now \(J_{t}=\sqrt{\det ([\langle d_{x}\phi _{t}(e_{i}),d_{x}\phi _{t}(e_{j})]_{i,j})}\) so a simple calculation shows that:
Writing \(v=v^{\top }+v^{\bot }\) provides a first term \(\sum _{i=1}^{d} \langle e_{i}, \nabla _{e_{i}}v^{\top } \rangle \), which is the tangential divergence of the vector field \(v^{\top }\), usually denoted \({{\mathrm{div}}}_{X}(v^{\top })\). The second term becomes \(\sum _{i=1}^{d} \langle e_{i}, \nabla _{e_{i}}v^{\bot } \rangle \). For all \(i=1,\ldots ,d\), we have \(\langle e_{i}, v^{\bot } \rangle = 0\), so that after differentiation we find that \(\langle e_{i}, \nabla _{e_{i}}v^{\bot } \rangle = - \langle \nabla _{e_{i}} e_{i}, v^{\bot } \rangle \). Therefore:
In this last expression, we recognize the mean curvature vector of the submanifold X, which is the trace of the Weingarten map and is denoted \(H_{X}\). As a result, we find that:
where we adopt in this section the shortcut notation \(\int _{X} g\) to denote the integral \(\int _{X} g(x) {\text {d}}{\mathcal {H}}^{d}(x)\) and \(\int _{\partial X} g\) for \(\int _{\partial X} g(x) {\text {d}}{\mathcal {H}}^{d-1}(x)\). Now, the first term can be rewritten as a boundary integral by applying the Divergence Theorem. Indeed, if we denote by \(\tilde{\omega }\) the function defined on X by \(\tilde{\omega }(x)=\omega (x,T_{x}X,f(x))\), which is \(C^{1}\), we have \({{\mathrm{div}}}_{X}(\tilde{\omega }v^{\top })=\tilde{\omega }{{\mathrm{div}}}_{X}(v^{\top })+\nabla _{v^{\top }} \tilde{\omega }\). Applying the Divergence Theorem (cf [30], Section 7) on the submanifold X gives:
where \(\nu \) is the unit outward normal to the boundary.
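In condensed form, and with the sign convention \(H_{X}=\sum _{i=1}^{d}\left( \nabla _{e_{i}}e_{i}\right) ^{\bot }\), the computation of this subsection can be summarized as the standard first variation of the volume element together with the Divergence Theorem identity for the tangential field \(\tilde{\omega }\,v^{\top }\):
\[
{\partial _{t}}_{\upharpoonright _{t=0}} J_{t} = {{\mathrm{div}}}_{X}(v^{\top }) - \langle H_{X}, v^{\bot } \rangle , \qquad \int _{X} \big ( \tilde{\omega }\,{{\mathrm{div}}}_{X}(v^{\top }) + \nabla _{v^{\top }} \tilde{\omega }\big ) = \int _{\partial X} \tilde{\omega }\,\langle v^{\top }, \nu \rangle .
\]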
1.2 Computation of \(\nabla _{v^{\top }} \tilde{\omega }\)
The previous equation still involves the derivative of the function \(\tilde{\omega }\) along the vector field \(v^{\top }\). Given the expression of the function \(\tilde{\omega }\), this can be expressed as the sum of three terms:
where \(\nabla _{v^{\top }} T^{X}\) is to be understood as the derivative along the vector field \(v^{\top }\) of the function \(T^{X}: \ x \mapsto T_{x}X\) with values in the Grassmannian. For a given \(x \in X\), we are thus left with computing the variation of the tangent space \(T_x X\) when moving along a curve \(t\mapsto \gamma (t)\) with \(\gamma (0)=x\) and \(\dot{\gamma }(0) = v^{\top }(x)\). Let us write \(V_{t} \doteq T_{\gamma (t)} X \in G_{d}(E)\). As already mentioned, it is often more convenient to think of \(V_t\) through the embedding in \({\mathcal {L}}(E)\) given by the orthogonal projector \(p_{V_t}\). As explained in more detail in [12], the variation of the orthogonal projector \(p_V\) with respect to V turns out to be the sum of two linear maps transposed to each other, one of which belongs to \({\mathcal {L}}(V,V^{\bot })\). In this way, one identifies the tangent space of \(G_d(E)\) at V with the space \({\mathcal {L}}(V,V^{\bot })\). Let us consider an orthonormal basis \((e_1,\ldots ,e_d)\) of \(V_0 = T_x X\) and let \((e_{1}(t),\ldots ,e_d(t))\) be the parallel transport of this orthonormal frame along the curve \(\gamma \). Let us also take any vector z in \(T_{x}X\) and define \(\overline{z}\) as the vector field on X given by \(\overline{z}(y) = p_{T_y X}(z)\) for all \(y \in X\). Then
and the derivative of (86) writes
The derivative of \(e_{i}(t)\) can be decomposed into its tangential and normal parts, \(\dot{e_{i}}(t)=\nabla _{\dot{\gamma }(t)}^{X} e_{i}(t) + \left( \nabla _{\dot{\gamma }(t)} e_{i} \right) ^{\bot }\). Since \(\nabla _{\dot{\gamma }(t)}^{X} e_{i}(t)\) is the covariant derivative of \(e_{i}\) in X along the curve \(\gamma \), this term vanishes because \(e_i(t)\) is obtained by parallel transport. Thus, we eventually get that:
Since \(z \in T_{x}X\), we have \(\langle \left( \nabla _{\dot{\gamma }(0)} e_{i} \right) ^{\bot },z \rangle = 0\), and it follows that
This is exactly \({\mathrm {II}}(\dot{\gamma }(0),\overline{z})\), where \({\mathrm {II}}\) is the second fundamental form of X. It follows that the variation \((\nabla _{v^{\top }}T^X)_{x}\) is precisely \({\mathrm {II}}(v^{\top },\cdot )_x \in {\mathcal {L}}(T_xX,(T_xX)^{\bot })\). By symmetry of the second fundamental form, it also equals
Now, under the previous identifications of the tangent spaces of \(G_d(E)\), \(\frac{\partial \omega }{\partial V}\) at \(V=T_x X\) belongs to \({\mathcal {L}}(T_x X,(T_x X)^{\bot })^{*} \approx {\mathcal {L}}((T_x X)^{\bot },T_x X)\) or, equivalently, is a map from E to \(T_x X\) vanishing on \(T_x X\), so that the previous term can be written more compactly as
and eventually
1.3 Derivative of Tangent Spaces’ Transport
We now come to the derivative term on the tangent space part in Eq. (84). Again, we identify tangent spaces with their corresponding orthogonal projector. If we now set \(V_{t}=d_{x} \phi _{t}(T_{x}X)\), one can easily show that [12]:
As previously, \(\frac{\partial \omega }{\partial V}\) is an element of \({\mathcal {L}}(T_{x}X,T_{x}X^{\bot })^*\approx (T_{x}X^{\bot })^{*} \otimes T_{x}X\), which we can write \(\frac{\partial \omega }{\partial V} = \sum _{j=d+1}^{n} \eta _{j}^{*}\otimes \alpha _{j}\) for \((\eta _{d+1},\ldots ,\eta _{n})\) an orthonormal frame of \(T_{x}X^{\bot }\) and \((\alpha _{j})\) some vectors of \(T_{x}X\) (as usual, \(\eta ^*\) denotes the linear form \(\langle \eta ,\cdot \rangle \)). Then, the variation we wish to compute is:
If we introduce \(\left( \frac{\partial \omega }{\partial V} \bigg | v \right) = \sum _{j=d+1}^{n} \eta _{j}^{*}(v)\alpha _{j} = \sum _{j=d+1}^{n} \langle \eta _{j},v \rangle \alpha _{j}\) which is a tangent vector field on X, we have:
The last term in the sum is also \(\sum _{j=d+1}^{n} \langle \eta _{j},\nabla _{\alpha _{j}}v \rangle \), which is nothing else than \(\left( \frac{\partial \omega }{\partial V} \bigg | \nabla v \right) \). As for the two other terms in the sum, it is easy to see that their sum equals:
Hence, it follows that:
Integrating Eq. (89) over the submanifold X and using the Divergence Theorem as before, we find that:
1.4 Synthesis
Coming back now to Eq. (84), the sum of all the different terms gives:
In addition, the integral
thanks once more to the Divergence Theorem and to the fact that \(\left( \frac{\partial \omega }{\partial V} | v^{\top } \right) =0\) because \(v^{\top } \in T_x X\). Since \(v=v^{\top } + v^{\bot }\), several simplifications occur in the previous sum, leading to
which proves the result of Theorem 5.
Appendix 2: Proof of Proposition 7
1.1 Perturbation
We now introduce a perturbation process on any measure \(\nu \) on \(E \times G_{d}(E) \times {\mathbb {R}}\) that shall be useful in the following. Let \(a>0\) be a constant to be fixed later and consider, for any \(t\in {\mathbb {R}}\), the function \(\rho _t:{\mathbb {R}}\rightarrow {\mathbb {R}}\) such that
where \(\text {sgn}(z)\) is the sign of z. We have \(\rho _0=\text {Id}_{\mathbb {R}}\) and \(\rho _1\) is a symmetric threshold at level a. Now, for any \(t\in {\mathbb {R}}\), we denote by \(\nu _t\) the new measure defined for any \(\omega \in C_b(E \times G_{d}(E) \times {\mathbb {R}})\) as:
Obviously, \(\nu _0=\nu \) and \(\nu _1\) is such that \(\nu _1(|f|>a)=0\), so that \(t\mapsto \nu _t\) is a homotopy from \(\nu \) to a measure under which the signal is a.e. bounded by a.
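For concreteness, one admissible choice of \(\rho _t\), given here only as an illustration and consistent with all the properties used below, is the linear interpolation toward the threshold:
\[
\rho _t(z) = (1-t)\,z + t\,\text {sgn}(z)\min (|z|,a),
\]
which satisfies \(\rho _0={\text {Id}}_{{\mathbb {R}}}\), \(\rho _1(z)=\text {sgn}(z)\min (|z|,a)\), \(\frac{\text {d}}{{\text {d}}t}\left( \rho _t(z)\right) =0\) for \(|z|\le a\), \(|\rho _t(z)|\ge a\) for \(|z|\ge a\) and \(t\in [0,1]\), and \(\frac{\text {d}}{{\text {d}}t}\left( \rho _t(z)\right) \rho _t(z)\le 0\).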
1.2 Proof of Lemma 2
We show the existence of a fvarifold minimizer in \({\mathcal {M}}^X\) [cf Eq. (51)] for the extended functional \(\tilde{J}\). For any \(\nu \in {\mathcal {M}}^X\) and \(t\in {\mathbb {R}}\), we denote \(J_t\doteq \tilde{J}(\nu _t)\), where \(\nu _{t}\) is the previously defined perturbation of \(\nu \) (cf Sect. 2.1), and we assume that \(J_0<\infty \) [which is equivalent to saying that \(\nu (|f|^2)<\infty \)]. We recall that \(\Vert \mu _{(X^i,f^i)}-\nu _{t}\Vert _{W'}^2 = (\mu _{(X^i,f^i)}-\nu _{t})(\omega ^{i})\) where \(\omega ^{i}=K_{W}(\mu _{(X^i,f^i)}-\nu _{t}) \in W\). Then, since W is assumed to be continuously embedded into \(C_0^2(E\times G_{d}(E) \times {\mathbb {R}})\), one easily checks that \(J_t<\infty \) and that the derivative \(J'_t\) exists at any location t and is given by
In the sequel, \(C>0\) denotes a constant, the value of which may vary from one line to another. Using again the continuous embedding of W into \(C^2_0(E\times G_{d}(E) \times {\mathbb {R}})\), we get that
Moreover, as we mentioned after Proposition 5, \(\Vert \mu _{(X^i,f^i)}\Vert _{W'} \le C\, {\mathcal {H}}^{d}(X^i)\). Similarly, \(\Vert \nu _{t}\Vert _{W'} \le C\, \nu _{t}(E \times G_{d}(E) \times {\mathbb {R}})\) and, since \(\nu _{t} \in {\mathcal {M}}^X\), we have \(\nu _{t}(E \times G_{d}(E) \times {\mathbb {R}}) = {\mathcal {H}}^{d}(X)\) and consequently \(\Vert \nu _{t}\Vert _{W'} \le C\, {\mathcal {H}}^{d}(X)\). Thus, there exists a constant \(C'>0\) such that:
Noticing now that \(\frac{\text {d}}{{\text {d}}t}\left( \rho _t(f)\right) \rho _t(f)\le 0\), that \(|\frac{\text {d}}{{\text {d}}t}\left( \rho _t(f)\right) |=0\) for \(|f|\le a\) and that \(|\rho _t(f)|\ge a\) for \(|f|\ge a\) and \(t\in [0,1]\), we get for \(t\in [0,1]\)
so that
An important consequence of (97) is that one can restrict the search for a minimum of \(\tilde{J}\) to fvarifolds \(\nu \) such that
with \(a=C'\frac{\gamma _W}{\gamma _f}\sum _{i=1}^N\left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) \). In particular, since \(\nu \in {\mathcal {M}}^X\), we will have
Since X is bounded and \(G_{d}(E)\) is compact, we can restrict the search for a minimum to measures supported on a compact subset \(K\subset E \times G_{d}(E) \times {\mathbb {R}}\), so that we introduce:
An easy check shows that \(\tilde{J}\) is lower semicontinuous on the set \({\mathcal {M}}^{X,K}\) for the weak convergence topology. In addition, \({\mathcal {M}}^{X,K}\) is sequentially compact. Indeed, if \(\nu _{n}\) is a sequence in \({\mathcal {M}}^{X,K}\), then all the \(\nu _{n}\) are supported by the compact set K and in particular \((\nu _{n})\) is tight. Also, as already noted, there exists a constant \(C>0\) independent of n such that \(\nu _{n}(E \times G_{d}(E) \times {\mathbb {R}}) \le C\, {\mathcal {H}}^{d}(X)\), and thus the sequence is uniformly bounded for the total variation norm. It follows, thanks to the Prokhorov Theorem, that there exists a subsequence of \((\nu _{n})\) converging for the weak topology. These compactness and lower semicontinuity properties guarantee the existence of a minimizer \(\nu _*\) of \(\tilde{J}\) with \(\nu _*\in {\mathcal {M}}^{X,K}\) and
1.3 Proof of the Proposition
At this point, we do not yet have a minimizer of \(J_X\). The problem is that, although the marginal of \(\nu _*\) on \(E\times G_{d}(E)\) is the transport of \({\mathcal {H}}^d_{|X}\) under the application \(x\mapsto (x,T_{x}X)\), we cannot guarantee that \(\nu _*\) does not weight multiple signal values in the fiber above a location \((x,T_{x}X)\). We will now show that, for large enough \(\gamma _f/\gamma _W\), there exists \(f_*\in L^2(X)\) such that \(\nu _*=\nu _{X,f_*}\), so that we will deduce
and the existence of a minimizer on \(L^2(X)\).
Let \(\delta f\in C_b(E\times G_{d}(E) \times {\mathbb {R}})\) and for any \(t\in {\mathbb {R}}\) consider the perturbation \(\nu _t\in {\mathcal {M}}^X\) of any \(\nu \in {\mathcal {M}}^{X,K}\) such that for any \(g\in C_b(E\times G_{d}(E) \times {\mathbb {R}})\) we have:
Here again, the function \(t\mapsto \tilde{J}(\nu _t)\) is differentiable everywhere and we have for \(\omega ^i\doteq K_W (\mu _{(X^i,f^i)}-\nu )\)
so that when \(\nu =\nu _*\) we get
The partial derivative of \(f \mapsto \gamma _f f+\gamma _W A(x,V,f)\) with respect to f equals \(\gamma _f + \gamma _W \frac{\partial A}{\partial f}(x,V,f)\). As before, using the continuous embedding \(W \hookrightarrow C_{0}^{2}(E\times G_{d}(E) \times {\mathbb {R}})\), we have once again a certain constant \(C>0\) such that
It follows that, for \(\gamma _f/\gamma _W\) large enough (so that \(\gamma _f + \gamma _W \frac{\partial A}{\partial f}(x,V,f) \ge \gamma _f - \gamma _W C > 0\)) and for all \((x,V) \in E \times G_{d}(E)\), \(f \mapsto \gamma _f f+\gamma _W A(x,V,f)\) is a strictly increasing function going from \(-\infty \) at \(-\infty \) to \(+\infty \) at \(+\infty \), and thus, there is a unique solution \(\tilde{f}(x,V)\) to (104). Now, since the application \((x,V,f) \mapsto \gamma _f f+\gamma _W A(x,V,f)\) is also \(C^{1}\) on \(E \times G_{d}(E) \times {\mathbb {R}}\), we deduce from the Implicit Function Theorem that \(\tilde{f}\) is a \(C^1\) function on \(E \times G_{d}(E)\). Going back to the solution \(\nu _{*}\), we know that for \(\nu _{*}\)-almost every \((x,V,f) \in E \times G_{d}(E) \times {\mathbb {R}}\), we have \((x,V,f) \in K\) and \(f=\tilde{f}(x,V)\), so that \(|\tilde{f}| \le a\) a.e. For any continuous and bounded function \(\omega \):
and if we set \(\tilde{\omega }(x,V)\doteq \omega (x,V,\tilde{f}(x,V))\), which is a continuous and bounded function on \(E \times G_{d}(E)\), we have, by definition of the space \({\mathcal {M}}^X\) given by (51):
Therefore, setting \(f_{*}(x) = \tilde{f}(x,T_{x}X)\) for \(x \in X\), we see that \(|f_{*}|\le a\), so that \(f_{*} \in L^{\infty }(X)\), and with (106) we deduce that \(\nu _{*} = \mu _{(X,f_{*})}\), which shows that the solution of the optimization is a fvarifold associated with a true fshape \((X,f_{*})\). In addition, if X is a \(C^{p}\) submanifold, then \(x\mapsto T_{x}X\) is a \(C^{p-1}\) function on X and, if \(W\hookrightarrow C_{0}^{m}(E\times G_{d}(E)\times {\mathbb {R}})\) with \(m\ge 2\) and \(m\ge p\), A and \(\tilde{f}\) are \(C^{p-1}\) functions, so \(f_{*}\) is also \(C^{p-1}\), which concludes the proof of Proposition 7.
Appendix 3: Proof of Theorem 6
We shall basically follow the same steps as in the previous, simpler cases. First of all, exactly as in 5.2.2, the existence of a template shape X is guaranteed by the same compactness and lower semicontinuity arguments. Thus, we may assume that X is fixed and we only have to show the existence of minimizers of the simplified functional:
Now, as for \(v^{0}\), due to the presence of the penalizations \(\Vert v^i\Vert ^{2}_{L^{2}([0,1],V)}\doteq \int _0^1 |v^i_t|^2_V {\text {d}}t\), \(1\le i \le N\), one can assume that all the vector fields \(v^{i}\), \(1\le i\le N\), belong to a fixed closed ball B of radius \(r > 0\) in \(L^{2}([0,1],V)\). As in the proof of Proposition 7, we first show the existence of a minimizer in a space of fvarifolds. Namely, extending the definitions of the previous subsections, we introduce the space \({\mathcal {M}}^X\) of measures \(\nu \) on \(E \times G_{d}(E) \times {\mathbb {R}} \times {\mathbb {R}}^{N}\) such that, for all continuous and bounded functions h on \(E \times G_{d}(E)\), we have:
For a measure \(\nu \) on \(E \times G_{d}(E) \times {\mathbb {R}} \times {\mathbb {R}}^{N}\) and a diffeomorphism \(\phi \), we denote by \(\phi \cdot \nu \) the transport of \(\nu \) by \(\phi \) defined by:
We now introduce the extended functional:
with \((v^{i})_i \in (L^{2}([0,1],V))^{N}\) and \(\nu \in {\mathcal {M}}^X\), where \(\nu (|f|^2)\) denotes in short the integral of the application \((x,V,f,(\zeta ^{i})_i)\mapsto |f|^2\) with respect to \(\nu \). For all \(1\le i \le N\), \(\nu ^{i}\) is the fvarifold defined for all \(\omega \in W\) by:
As previously, we can consider the perturbation function \(\rho _{t}\) acting on signals and the measures
Denoting \(J_{t}=\tilde{J}(\nu _{t},(v^i)_i)\), we have, for \(t\in [0,1]\),
where, for all \(1\le i\le N\), \(\omega ^{i}=K_{W}(\mu _{(X^{i},f^{i})} - (\phi ^{v^i}_{1})\cdot \nu _{t}^{i})\). On the one hand, we know that there exists a constant \(C>0\) such that, for all i, \(x \in E\) and \(V \in G_{d}(E)\), \(|{d_{x}\phi ^{v^{i}}}_{\upharpoonright _{V}}|\le C\, |{\text {d}}\phi ^{v^{i}}|_{\infty }\). In addition, it is a classical result on flows of differential equations (cf [38]) that there exists a non-decreasing continuous function \(\tau : {\mathbb {R}}^{+}\rightarrow {\mathbb {R}}^{+}\), independent of \(v \in L^{2}([0,1],V)\), such that \(|{\text {d}}\phi ^{v}_{1}|_{\infty }\le \tau (\Vert v\Vert _{L^{2}([0,1],V)})\). On the other hand, using the same controls as in the previous subsections, we have, for some constant \(C>0\) whose value may change from one line to another:
It is also straightforward that there exists a constant \(C>0\) such that \((\phi ^{v^i}_{1})\cdot \nu _{t}^{i}(E\times G_{d}(E)\times {\mathbb {R}}) \le C\, |{\text {d}}\phi ^{v^{i}}|_{\infty }\, \nu _{t}^{i}(E \times G_{d}(E) \times {\mathbb {R}})\) and, using the fact that \(\nu \in {\mathcal {M}}^X\) as already argued in 5.2.1, \(\nu _{t}^{i}(E \times G_{d}(E) \times {\mathbb {R}})={\mathcal {H}}^{d}(X)\). From all the previous inequalities, it follows that there exists a non-decreasing continuous function, which we will still call \(\tau \), such that for all \(i,x,V,f,\zeta ^{i}\):
Following the same path that previously led to (96),
Just as in 5.2.1, this implies that \(\tilde{J}(\nu _{1},(v^{i})_i) \le \tilde{J}(\nu _{0},(v^{i})_i)\) as soon as:
Therefore, one may restrict the search for a minimum to a set of measures \(\nu \) that are supported on a compact subset K of \(E \times G_{d}(E) \times {\mathbb {R}}\times {\mathbb {R}}^{N}\), a space which we shall again denote by \({\mathcal {M}}^{X,K}\). The rest of the proof is now very close to that of 5.2.1. Due to the lower semicontinuity of the functional and the compactness of \({\mathcal {M}}^{X,K}\) and B for the weak convergence topologies (respectively on the space of measures and on \(L^{2}([0,1],V)\)), we obtain the existence of a minimizer \((\nu _{*},(v^{i}_{*})_i)\) of the functional \(\tilde{J}\).
The last step is to prove that \(\nu _{*}\), which belongs a priori to the measure space \({\mathcal {M}}^{X,K}\), can be written in the form \(\nu _{*}= \nu _{X,f_{*},(\zeta ^{i}_{*})_i}\), i.e., that there exist functions \(f_{*}\) and \(\zeta ^{i}_{*}\), \(1\le i\le N\), on X such that, for all continuous and bounded functions g on \(E \times G_{d}(E) \times {\mathbb {R}}\times {\mathbb {R}}^{N}\):
We then consider variations of the signals \((\delta f, (\delta \zeta ^{i})_i)\) all belonging to the space \(C_{b}(E \times G_{d}(E) \times {\mathbb {R}}\times {\mathbb {R}}^{N})\) and the path \(t\mapsto \nu _{t}\) defined by:
Now, if \(J_{t}\doteq \tilde{J}(\nu _{t},(v^{i}_{*})_i)\), expressing that \({J_{t}'}_{\upharpoonright _{t=0}}=0\) for all \(\delta f\) and \((\delta \zeta ^{i})_i\) gives, similarly to 5.2.1, the following set of equations:
with \(A\left( x,V,f,\left( \zeta ^{i}\right) _i\right) \doteq \Big ( \sum _{i=1}^{N} \frac{\partial \omega ^{i}}{\partial f}\Big (\phi ^{v^{i}_{*}}_{1}(x),d_{x}\phi ^{v^{i}_{*}}_{1}(V),f+\zeta ^{i}\Big ), \Big (\frac{\partial \omega ^{i}}{\partial f}\Big (\phi ^{v^{i}_{*}}_{1}(x), d_{x}\phi ^{v^{i}_{*}}_{1}(V),f+\zeta ^{i}\Big )\Big )_i\Big )\). The derivatives \(\partial _{f} A\) and \(\partial _{\zeta ^{i}} A\) can be shown again to be uniformly bounded in \(x,V,f,\zeta ^{i}\), and a previous argument provides the existence of unique solutions \(f=\tilde{f}(x,V)\) and \(\zeta ^{i}=\tilde{\zeta }^{i}(x,V)\), \(1\le i\le N\), to (111). The rest of the proof is exactly the same as in the end of “Proof of the Proposition” in “Appendix 2”: We set \(f_{*}(x)=\tilde{f}(x,T_{x}X)\) and \(\zeta ^{i}_{*}(x)=\tilde{\zeta }^{i}(x,T_{x}X)\), \(1\le i\le N\), which are again \(L^{\infty }\) functions on X. In addition, one shows easily that the minimizing measure \(\nu _{*}\) equals \(\nu _{X,f_{*},(\zeta _{*}^{i})_i}\) in the sense of (110). Finally, the regularity of \(f_{*}\) and \(\zeta _{*}\) when X is a \(C^{p}\) submanifold is obtained again by applying the Implicit Function Theorem to (111).
Appendix 4: Proof of Theorem 7
We shall only sketch the essential steps needed to adapt the content of Appendix 3. We start by writing (58) explicitly. This gives:
Now, with Lemma 3, we know that the optimal functions \(h^{0}_{*}\) and \(h^{i}_{*}\), \(1\le i\le N\), are given by (57), and thus the variational problem of (112) can be replaced by the optimization, with respect to residual functions \(\zeta ^0\) and \(\zeta ^{i}\), \(1\le i\le N\), living in \(L^{2}(X)\), of the functional:
where \(C^{0}(x)\doteq {\left( \int _0^1\frac{1}{|{d_x\left[ \phi ^{v^{0}}_{s}\circ \left( \phi _1^{v^{0}}\right) ^{-1}\right] }_{\upharpoonright _{T_{x}X}}|}{\text {d}}s\right) ^{-1}}\) and, for all \(1\le i\le N\), \(C^{i}(x)\doteq \left( \int _0^1\frac{1}{|{d_x\phi ^{v^{i}}_{s}}_{\upharpoonright _{T_{x}X}}|}{\text {d}}s\right) ^{-1}\). We note that, up to the weights in the \(L^2\) metrics given by the functions \(C^i\), the previous problem now becomes extremely close to the one examined in Theorem 6. In fact, the proof of Appendix 3 can be adapted almost straightforwardly to this situation. As previously, the essential step is to reformulate the optimization problem in a space of measures. Defining the functions:
for \(1\le i\le N\) and \((x,H) \in E \times G_{d}(E)\), we can set, with the same definitions as in Appendix 3:
for \(\nu \in {\mathcal {M}}^X\). The rest of the proof follows the same path, relying on the fact that we can assume the vector fields \(v^0\) and \(v^i\) to be bounded in \(L^{2}([0,1],V_{0})\) and \(L^{2}([0,1],V)\), as explained at the beginning of Appendix 3. This implies, as already argued in the same section, that we have uniform lower and upper bounds for \(|{d_x\phi ^{v^{i}}_{s}}_{\upharpoonright _{H}}|\), \(s \in [0,1]\) and \(1\le i\le N\), and for the quantities \(|{d_x[\phi ^{v^{0}}_{s}\circ (\phi _1^{v^{0}})^{-1}]}_{\upharpoonright _{H}}|\). Consequently, we can assume that there exist \(\alpha ,\beta >0\) such that, for all \(1\le i\le N\), \(\alpha \le \Vert \tilde{C}^i\Vert _{\infty } \le \beta \). Using these inequalities, one can check that we get controls equivalent to those in the proof of Theorem 6, which allows us to conclude the existence of a measure minimizer for the extended functional \(\tilde{J}\) and then to go back to a fshape solution of J with a similar implicit function argument.
Keywords
- Shape analysis
- Signals on manifolds
- Large deformation models
- Metamorphoses
- Varifolds
- Atlas estimation algorithms