
The Fshape Framework for the Variability Analysis of Functional Shapes

Foundations of Computational Mathematics

Abstract

This article introduces a full mathematical and numerical framework for treating functional shapes (or fshapes) following the landmarks of shape spaces and shape analysis. Functional shapes can be described as signal functions supported on varying geometrical supports. Analyzing variability of fshapes’ ensembles requires the modeling and quantification of joint variations in geometry and signal, which have been treated separately in previous approaches. Instead, building on the ideas of shape spaces for purely geometrical objects, we propose the extended concept of fshape bundles and define Riemannian metrics for fshape metamorphoses to model geometric-functional transformations within these bundles. We also generalize previous works on data attachment terms based on the notion of varifolds and demonstrate the utility of these distances. Based on these, we propose variational formulations of the atlas estimation problem on populations of fshapes and prove existence of solutions for the different models. The second part of the article examines thoroughly the numerical implementation of the tangential simplified metamorphosis model by detailing discrete expressions for the metrics and gradients and proposing an optimization scheme for the atlas estimation problem. We present a few results of the methodology on a synthetic dataset as well as on a population of retinal membranes with thickness maps.


Notes

  1. We use here the fact that, for X compact, the mapping \(v\mapsto \phi ^v_1\cdot X\) is continuous from \(L^2([0,1],V)\) endowed with the weak topology to the set of all compact subsets of E endowed with the Hausdorff metric.

References

  1. W. Allard. On the first variation of a varifold. Annals of Mathematics, 95(3), 1972.

  2. F. Almgren. Plateau’s Problem: An Invitation to Varifold Geometry. Student Mathematical Library, 1966.

  3. S. Arguillere, E. Trélat, A. Trouvé, and L. Younes. Shape deformation analysis from the optimal control viewpoint. Journal de Mathématiques Pures et Appliquées, 104(1):139–178, July 2015.

  4. V. Arnold. Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications à l’hydrodynamique des fluides parfaits. Annales de l’Institut Fourier, 16(2):319–361, 1966.

  5. N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.

  6. M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International Journal of Computer Vision, 61(2):139–157, 2005.

  7. M. Bruveris, L. Risser, and F. Vialard. Mixture of Kernels and Iterated Semidirect Product of Diffeomorphisms Groups. Multiscale Modeling and Simulation, 10(4):1344–1368, 2012.

  8. C. Carmeli, E. De Vito, A. Toigo, and V. Umanita. Vector valued reproducing kernel Hilbert spaces and universality. Analysis and Applications, 8(01):19–61, 2010.

  9. B. Charlier, N. Charon, and A. Trouvé. A short introduction to the functional shapes toolkit. https://github.com/fshapes/fshapesTk/, 2014–2015.

  10. N. Charon. Analysis of geometric and functional shapes with extensions of currents. Application to registration and atlas estimation. PhD thesis, ENS Cachan, 2013.

  11. N. Charon and A. Trouvé. Functional currents: a new mathematical tool to model and analyse functional shapes. Journal of Mathematical Imaging and Vision, 48(3):413–431, 2013.

  12. N. Charon and A. Trouvé. The varifold representation of non-oriented shapes for diffeomorphic registration. SIAM Journal on Imaging Sciences, 6(4):2547–2580, 2013.

  13. P. Dupuis, U. Grenander, and M. I. Miller. Variational problems on flows of diffeomorphisms for image matching. Quarterly of Applied Mathematics, 56(3):587, 1998.

  14. S. Durrleman. Statistical models of currents for measuring the variability of anatomical curves, surfaces and their evolution. PhD thesis, Inria Sophia Antipolis, 2009.

  15. H. Federer. Geometric measure theory. Springer, 1969.

  16. J. Glaunès. Transport par difféomorphismes de points, de mesures et de courants pour la comparaison de formes et l’anatomie numérique. PhD thesis, Université Paris 13, 2005.

  17. J. Glaunès, A. Trouvé, and L. Younes. Diffeomorphic matching of distributions: A new approach for unlabelled point-sets and sub-manifolds matching. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2:712–718, 2004.

  18. J. Glaunès and M. Vaillant. Surface matching via currents. Proceedings of Information Processing in Medical Imaging (IPMI), Lecture Notes in Computer Science, 3565:381–392, 2006.

  19. S. Joshi, B. Davis, M. Jomier, and G. Gerig. Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage, 23:S151–S160, 2004.

  20. S. Lee, N. Fallah, F. Forooghian, A. Ko, K. Pakzad-Vaezi, A. B. Merkur, A. W. Kirker, D. A. Albiani, M. Young, M. V. Sarunic, and M. F. Beg. Comparative analysis of repeatability of manual and automated choroidal thickness measurements in nonneovascular age-related macular degeneration. Investigative Ophthalmology and Vision Science, 53(5):2864–2871, 2013.

  21. S. Lee, S. X. Han, M. Young, M. F. Beg, M. V. Sarunic, and P. J. Mackenzie. Optic nerve head and peripapillary morphometrics in myopic glaucoma. preprint, 2014.

  22. J. Ma, M. I. Miller, A. Trouvé, and L. Younes. Bayesian template estimation in computational anatomy. NeuroImage, 42(1):252–261, 2008.

  23. J. Ma, M. I. Miller, and L. Younes. A Bayesian generative model for surface template estimation. Journal of Biomedical Imaging, 2010:16, 2010.

  24. M. Micheli, P. W. Michor, and D. Mumford. Sobolev metrics on diffeomorphism groups and the derived geometry of spaces of submanifolds. Izvestiya: Mathematics, 77(3):541, 2013.

  25. P. W. Michor and D. Mumford. A zoo of diffeomorphism groups on \({\mathbb{R}}^n\). Annals of Global Analysis and Geometry, 44(4):529–540, 2013.

  26. M. I. Miller, A. Trouvé, and L. Younes. On the metrics and Euler–Lagrange equations of computational anatomy. Annual Review of Biomedical Engineering, 4(1):375–405, 2002.

  27. M. I. Miller, A. Trouvé, and L. Younes. Geodesic Shooting for Computational Anatomy. Journal of Mathematical Imaging and Vision, 24(2):209–228, 2006.

  28. M. I. Miller, L. Younes, and A. Trouvé. Diffeomorphometry and geodesic positioning systems for human anatomy. TECHNOLOGY, 2(1):36–43, 2014.

  29. F. Morgan. Geometric measure theory, a beginner’s guide. Academic Press, 1995.

  30. L. Simon. Lecture notes on geometric measure theory. Australian National University, 1983.

  31. B. K. Sriperumbudur, K. Fukumizu, and G. Lanckriet. On the relation between universality, characteristic kernels and RKHS embedding of measures. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS-10), volume 9, pages 773–780, 2010.

  32. B. Thibert. Sur l’approximation géométrique d’une surface lisse. Applications en géologie structurale. PhD thesis, Université Claude Bernard - Lyon 1, 2003.

  33. A. Trouvé. An approach of pattern recognition through infinite dimensional group action. Rapport de recherche du LMENS, 1995.

  34. A. Trouvé. Diffeomorphisms groups and pattern matching in image analysis. International Journal of Computer Vision, 28(3):213–221, 1998.

  35. A. Trouvé and L. Younes. Local geometry of deformable templates. SIAM Journal on Mathematical Analysis, 37(1):17–59, 2005.

  36. A. Trouvé and L. Younes. Metamorphoses through Lie group action. Foundations of Computational Mathematics, 5:173–198, 2005.

  37. A. Trouvé and L. Younes. Handbook of Mathematical Imaging, chapter Shape spaces, pages 1309–1362. Springer, 2011.

  38. L. Younes. Shapes and diffeomorphisms. Springer, 2010.

Acknowledgments

We would like to thank Mirza Faisal Beg, Sieun Lee, Evgeniy Lebed, Marinko Sarunic and their collaborators for providing the OCT dataset and for fruitful discussions. We are very grateful to the anonymous reviewers for their detailed and constructive comments that helped us improve the original manuscript. The authors also acknowledge the support of the French Agence Nationale de la Recherche project HM-TC (Number ANR-09-EMER-006).

Author information

Corresponding author

Correspondence to N. Charon.

Additional information

Communicated by Peter J. Olver.

Appendices

Appendix 1: Variation Formula for Fvarifolds: Proof of Theorem 5

The proof follows the same steps as the corresponding result for usual varifolds (cf [12]). Given a \(C^{1}\) vector field v on E with compact support, we can consider the 1-parameter group of diffeomorphisms \(\phi _{t}\) with \(\phi _{0}={\text {Id}}\) and \({\partial _{t}}_{\upharpoonright _{t=0}} \phi _{t} =v\). Then, it follows that:

$$\begin{aligned} (\pounds _{(v,h)}\omega )(x,T_{x}X,f(x))&= {\dfrac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} (\psi _{t}^{*}\omega )(x,T_{x}X,f(x)) \nonumber \\&={\dfrac{d}{\mathrm{d}t}}_{\upharpoonright _{t=0}} \left|{d_{x}\phi _{t}}_{\upharpoonright _{T_{x}X}}\right|\, \omega (\phi _{t}(x),d_{x}\phi _{t}(T_{x}X),f(x)+th(x)) . \end{aligned}$$
(84)

As we see, differentiating the right-hand side produces several terms: the derivative of the volume change term \(J_t\doteq |{d_{x}\phi _{t}}_{\upharpoonright _{T_{x}X}}|\), and the derivatives of the function \(\omega \) with respect to the position variable, to the tangent space direction and to the signal part. Since \(\omega \) is assumed to be \(C^{1}\), the derivatives with respect to point positions and signal values are easy to obtain and equal, respectively, \(\left( \frac{\partial \omega }{\partial x} \Big | v \right) \) and \(\frac{\partial \omega }{\partial m}h\). The two other terms require more attention.

1.1 Derivative of the Volume Change

For any vector field u defined on X, we shall denote by \(u^{\top }\) and \(u^{\bot }\) the tangential and normal components of u with respect to the tangent space of X at each point. We also introduce the connection \(\nabla _{\cdot }\cdot \) on the ambient space and an orthonormal frame of tangent vector fields \((e_{i})_{i=1,\ldots ,d}\) on X. Now \(J_{t}=\sqrt{\det ([\langle d_{x}\phi _{t}(e_{i}),d_{x}\phi _{t}(e_{j})\rangle ]_{i,j})}\) so a simple calculation shows that:

$$\begin{aligned} {\dfrac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} J_{t} = \sum _{i=1}^{d} \langle e_{i}, \nabla _{e_{i}}v \rangle \end{aligned}$$

Writing \(v=v^{\top }+v^{\bot }\) provides a first term \(\sum _{i=1}^{d} \langle e_{i}, \nabla _{e_{i}}v^{\top } \rangle \) which is the tangential divergence of the vector field \(v^{\top }\) denoted usually \({{\mathrm{div}}}_{X}(v^{\top })\). The second term becomes \(\sum _{i=1}^{d} \langle e_{i}, \nabla _{e_{i}}v^{\bot } \rangle \). For all \(i=1,\ldots ,d\), we have \(\langle e_{i}, v^{\bot } \rangle = 0\) so that after differentiation we find that \(\langle e_{i}, \nabla _{e_{i}}v^{\bot } \rangle = - \langle \nabla _{e_{i}} e_{i}, v^{\bot } \rangle \). Therefore:

$$\begin{aligned} \sum _{i=1}^{d} \langle e_{i}, \nabla _{e_{i}}v^{\bot } \rangle= & {} -\sum _{i=1}^{d} \langle \nabla _{e_{i}} e_{i}, v^{\bot } \rangle \\= & {} -\bigg \langle \bigg (\sum _{i=1}^{d} \nabla _{e_{i}} e_{i} \bigg )^{\bot }, v^{\bot } \bigg \rangle . \end{aligned}$$

In this last expression, we recognize the mean curvature vector of the submanifold X, which is the trace of the Weingarten map and is denoted \(H_{X}\). As a result, we find that:

$$\begin{aligned} \int _{X} \omega {\dfrac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} J_{t} = \int _{X} \omega {{\mathrm{div}}}_{X}\left( v^{\top }\right) - \int _{X} \omega \langle H_{X},v^{\bot } \rangle . \end{aligned}$$

where we adopt in this section the shortcut notation \(\int _{X} g\) to denote the integral \(\int _{X} g(x) {\text {d}}{\mathcal {H}}^{d}(x)\) and \(\int _{\partial X} g\) for \(\int _{\partial X} g(x) {\text {d}}{\mathcal {H}}^{d-1}(x)\). Now, the first term can be rewritten as a boundary integral by applying the Divergence Theorem. Indeed, if we denote by \(\tilde{\omega }\) the function defined on X by \(\tilde{\omega }(x)=\omega (x,T_{x}X,f(x))\), which is \(C^{1}\), we have \({{\mathrm{div}}}_{X}(\tilde{\omega }v^{\top })=\tilde{\omega }{{\mathrm{div}}}_{X}(v^{\top })+\nabla _{v^{\top }} \tilde{\omega }\). Applying the Divergence Theorem (cf [30] Section 7) on the submanifold X gives:

$$\begin{aligned} \int _{X} \omega {{\mathrm{div}}}_{X}\left( v^{\top }\right) = -\int _{X} \nabla _{v^{\top }} \tilde{\omega } + \int _{\partial X} \omega \left\langle \nu , v^{\top } \right\rangle \end{aligned}$$

where \(\nu \) is the unit outward normal to the boundary.
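
For instance, on the unit sphere \(X=S^{d}\subset E={\mathbb {R}}^{d+1}\) (so that \(\partial X=\emptyset \)) one has \(H_{X}(x)=-d\,x\), and for the dilation field \(v(x)=x\) we get \(v^{\top }=0\) and \(v^{\bot }=x\), so that the previous computation gives, pointwise,

$$\begin{aligned} {\dfrac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} J_{t} = -\langle H_{X},v^{\bot } \rangle = d , \end{aligned}$$

in accordance with the fact that the volume element scales as \(\mathrm {e}^{dt}\) under the flow \(\phi _t(x)=\mathrm {e}^{t}x\) of v.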

1.2 Computation of \(\nabla _{v^{\top }} \tilde{\omega }\)

The previous equation still involves the derivative of the function \(\tilde{\omega }\) along the vector field \(v^{\top }\). Given the expression of the function \(\tilde{\omega }\), this can be expressed as the sum of three terms:

$$\begin{aligned} \nabla _{v^{\top }} \tilde{\omega } = \left( \dfrac{\partial \omega }{\partial x} \bigg | v^{\top } \right) + \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla _{v^{\top }} T^{X} \right) + \dfrac{\partial \omega }{\partial m} \left\langle \nabla f,v^{\top } \right\rangle \end{aligned}$$
(85)

where \(\nabla _{v^{\top }} T^{X}\) is to be understood as the derivative along the vector field \(v^{\top }\) of the Grassmannian-valued function \(T^{X}: \ x \mapsto T_{x}X\). For a given \(x \in X\), we are thus left with computing the variation of the tangent space \(T_x X\) when moving along a curve \(t\mapsto \gamma (t)\) with \(\gamma (0)=x\) and \(\dot{\gamma }(0) = v^{\top }(x)\). Let us write \(V_{t} \doteq T_{\gamma (t)} X \in G_{d}(E)\). As already mentioned, it is often more convenient to think of \(V_t\) through the embedding in \({\mathcal {L}}(E)\) given by the orthogonal projector \(p_{V_t}\). As explained in more detail in [12], the variation of the orthogonal projector \(p_V\) with respect to V turns out to be the sum of two linear maps transposed to each other, one of which belongs to \({\mathcal {L}}(V,V^{\bot })\). In this way, one identifies the tangent space of \(G_d(E)\) at V with the space \({\mathcal {L}}(V,V^{\bot })\). Let us consider an orthonormal basis \((e_1,\ldots ,e_d)\) of \(V_0 = T_x X\) and its parallel transport \((e_{1}(t),\ldots ,e_d(t))\) along the curve \(\gamma \). Let us also take any vector z in \(T_{x}X\) and denote by \(\overline{z}\) the vector field on X defined by \(\overline{z}(y) = p_{T_y X}(z)\) for all \(y \in X\). Then

$$\begin{aligned} p_{V_t}(z) = \sum _{i=1}^{d} \langle e_{i}(t),z \rangle e_{i}(t) \end{aligned}$$
(86)

and differentiating (86) gives

$$\begin{aligned} \dfrac{{\text {d}}}{{\text {d}}t} p_{V_t}(z) = \sum _{i=1}^{d} \langle \dot{e_{i}}(t),z \rangle e_{i}(t) + \langle e_{i}(t),z \rangle \dot{e_{i}}(t). \end{aligned}$$

The derivative of \(e_{i}(t)\) can be decomposed into its tangential and normal parts \(\dot{e_{i}}(t)=\nabla _{\dot{\gamma }(t)}^{X} e_{i}(t) + \left( \nabla _{\dot{\gamma }(t)} e_{i} \right) ^{\bot }\). Since \(\nabla _{\dot{\gamma }(t)}^{X} e_{i}(t)\) is the covariant derivative of \(e_{i}\) in X along the curve \(\gamma \), this term vanishes because \(e_i(t)\) is obtained by parallel transport. Thus, we eventually get:

$$\begin{aligned} {\dfrac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} p_{V_t}(z) = \sum _{i=1}^{d} \left\langle \left( \nabla _{\dot{\gamma }(0)} e_{i} \right) ^{\bot },z \right\rangle e_{i}(0) + \langle e_{i}(0),z \rangle \left( \nabla _{\dot{\gamma }(0)} e_{i} \right) ^{\bot } . \end{aligned}$$

Since \(z \in T_{x}X\), \(\langle \left( \nabla _{\dot{\gamma }(0)} e_{i} \right) ^{\bot },z \rangle = 0\) and it results that

$$\begin{aligned} {\dfrac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} p_{V_t}(z) = \sum _{i=1}^{d} \langle e_{i}(0),z \rangle \left( \nabla _{\dot{\gamma }(0)} e_{i} \right) ^{\bot } = \left( \nabla _{\dot{\gamma }(0)} \overline{z} \right) ^{\bot } . \end{aligned}$$

This is exactly \({\mathrm {II}}(\dot{\gamma }(0),\overline{z})\) where \({\mathrm {II}}\) is the second fundamental form of X. It follows that the variation \((\nabla _{v^{\top }}T^X)_{x}\) is precisely \({\mathrm {II}}(v^{\top },\cdot )_x \in {\mathcal {L}}(T_xX,(T_xX)^{\bot })\). By symmetry of the second fundamental form, it is also equal to

$$\begin{aligned} \nabla _{v^{\top }}T^X = {\mathrm {II}}\left( \cdot ,v^{\top }\right) = \left( \nabla _{\cdot } \, v^{\top }\right) ^{\bot } \end{aligned}$$
(87)
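
For instance, on the unit circle in \(E={\mathbb {R}}^2\), take \(v^{\top }\) equal to the unit tangent field \(e\). At any \(x\in X\) one computes

$$\begin{aligned} \left( \nabla _{v^{\top }}T^X\right) _{x}(e) = {\mathrm {II}}(e,e) = \left( \nabla _{e}\, e\right) ^{\bot } = -x , \end{aligned}$$

that is, moving along the circle at unit speed, the tangent line rotates toward the inward normal at unit angular rate, in accordance with (87).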

Now, under the previous identifications of tangent spaces of \(G_d(E)\), \(\frac{\partial \omega }{\partial V}\) at \(V=T_x X\) belongs to \({\mathcal {L}}(T_x X,(T_x X)^{\bot })^{*} \approx {\mathcal {L}}((T_x X)^{\bot },T_x X)\) or equivalently is a map from E to \(T_x X\) vanishing on \(T_x X\) and so the previous term can be written more compactly as

$$\begin{aligned} \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla _{v^{\top }} T^{X} \right) = \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla v^{\top } \right) \end{aligned}$$

and eventually

$$\begin{aligned} \nabla _{v^{\top }} \tilde{\omega } = \left( \dfrac{\partial \omega }{\partial x} \bigg | v^{\top } \right) + \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla v^{\top } \right) + \dfrac{\partial \omega }{\partial m} \left\langle \nabla f,v^{\top } \right\rangle \end{aligned}$$
(88)
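
Note that if \(\omega \) depends only on the signal value, say \(\omega (x,V,m)=g(m)\) with g of class \(C^1\), then (88) reduces to

$$\begin{aligned} \nabla _{v^{\top }} \tilde{\omega } = g'(f) \left\langle \nabla f,v^{\top } \right\rangle , \end{aligned}$$

i.e., only the transport of the signal along \(v^{\top }\) contributes in that case.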

1.3 Derivative of Tangent Spaces’ Transport

We now come to the derivative term on the tangent space part in Eq. (84). Again, we identify tangent spaces with their corresponding orthogonal projector. If we now set \(V_{t}=d_{x} \phi _{t}(T_{x}X)\), one can easily show that [12]:

$$\begin{aligned} {\dfrac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} V_{t} = p_{T_{x}X^{\bot }} \circ {\nabla v}_{\upharpoonright _{T_{x}X}} \quad \in {\mathcal {L}}\left( T_{x}X,\left( T_{x}X\right) ^{\bot }\right) . \end{aligned}$$

As previously, \(\frac{\partial \omega }{\partial V}\) is an element of \({\mathcal {L}}(T_{x}X,T_{x}X^{\bot })^*\approx (T_{x}X^{\bot })^{*} \otimes T_{x}X\) and which we can write: \(\frac{\partial \omega }{\partial V} = \sum _{j=d+1}^{n} \eta _{j}^{*}\otimes \alpha _{j}\) for \((\eta _{d+1},\ldots ,\eta _{n})\) an orthonormal frame of \(T_{x}X^{\bot }\) and \((\alpha _{j})\) some vectors of \(T_{x}X\) (as usual \(\eta ^*\) denotes the linear form \(\langle \eta ,\cdot \rangle \)). Then, the variation we wish to compute is:

$$\begin{aligned} \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla v \right) = \sum _{j=d+1}^{n} \left\langle \eta _{j}, \nabla _{\alpha _{j}}v \right\rangle . \end{aligned}$$

If we introduce \(\left( \frac{\partial \omega }{\partial V} \bigg | v \right) = \sum _{j=d+1}^{n} \eta _{j}^{*}(v)\alpha _{j} = \sum _{j=d+1}^{n} \langle \eta _{j},v \rangle \alpha _{j}\) which is a tangent vector field on X, we have:

$$\begin{aligned}&{{\mathrm{div}}}_{X}\left( \dfrac{\partial \omega }{\partial V} \bigg | v \right) \\&\quad = \sum _{i=1}^{d} \sum _{j=d+1}^{n} \left( \left\langle e_{i}, \nabla _{e_{i}} \alpha _{j} \right\rangle \left\langle \eta _{j},v \right\rangle + \left\langle e_{i}, \left\langle \nabla _{e_{i}}\eta _{j},v \right\rangle \alpha _{j} \right\rangle + \left\langle e_{i}, \left\langle \eta _{j},\nabla _{e_i}v \right\rangle \alpha _{j} \right\rangle \right) . \end{aligned}$$

The last term in the sum is also \(\sum _{j=d+1}^{n} \langle \eta _{j},\nabla _{\alpha _{j}}v \rangle \), which is nothing else than \(\left( \frac{\partial \omega }{\partial V} \bigg | \nabla v \right) \). As for the two other terms in the sum, it is easy to see that together they equal:

$$\begin{aligned} \bigg ( \sum _{i=1}^{d} \bigg \langle e_{i}, \nabla _{e_{i}}\sum _{j=d+1}^{n} \eta _{j}^{*}\otimes \alpha _{j} \bigg \rangle \bigg | v \bigg ) = \bigg ( {{\mathrm{div}}}_{X} \left( \dfrac{\partial \omega }{\partial V} \right) \bigg | v \bigg ). \end{aligned}$$

Hence, it follows that:

$$\begin{aligned} \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla v \right) = {{\mathrm{div}}}_{X}\left( \dfrac{\partial \omega }{\partial V} \bigg | v \right) - \bigg ( {{\mathrm{div}}}_{X} \left( \dfrac{\partial \omega }{\partial V} \right) \bigg | v \bigg ). \end{aligned}$$
(89)

Integrating Eq. (89) over the submanifold X and using the Divergence Theorem as before, we find that:

$$\begin{aligned} \int _{X} \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla v \right) = \int _{\partial X} \bigg \langle \nu , \left( \dfrac{\partial \omega }{\partial V} \bigg | v \right) \bigg \rangle - \int _{X} \bigg ( {{\mathrm{div}}}_{X} \left( \dfrac{\partial \omega }{\partial V} \right) \bigg | v \bigg ). \end{aligned}$$
(90)

1.4 Synthesis

Coming back now to Eq. (84), collecting all the terms gives:

$$\begin{aligned} \int _X (\pounds _{(v,h)}\omega )&= \int _{X} \left( \dfrac{\partial \omega }{\partial x} - {{\mathrm{div}}}_{X} \left( \dfrac{\partial \omega }{\partial V} \right) \bigg | v \right) -\int _{X} \omega \left( H_{X} \bigg | v^{\bot } \right) \\&\quad -\int _{X} \left( \dfrac{\partial \omega }{\partial x} \bigg | v^{\top } \right) + \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla v^{\top } \right) + \dfrac{\partial \omega }{\partial m} \langle \nabla f,v^{\top } \rangle \\&\quad +\int _{X} \dfrac{\partial \omega }{\partial m}\, h + \int _{\partial X} \bigg \langle \nu , \left( \dfrac{\partial \omega }{\partial V} \bigg | v \right) + \omega v^{\top } \bigg \rangle . \end{aligned}$$

In addition, the integral

$$\begin{aligned} \int _{X} \left( \dfrac{\partial \omega }{\partial V} \bigg | \nabla v^{\top } \right)= & {} \int _{\partial X} \bigg \langle \nu , \left( \dfrac{\partial \omega }{\partial V} \bigg | v^{\top } \right) \bigg \rangle - \int _{X} \bigg ( {{\mathrm{div}}}_{X} \left( \dfrac{\partial \omega }{\partial V} \right) \bigg | v^{\top } \bigg ) \\= & {} - \int _{X} \bigg ( {{\mathrm{div}}}_{X} \left( \dfrac{\partial \omega }{\partial V} \right) \bigg | v^{\top } \bigg ) \end{aligned}$$

thanks once more to the divergence theorem and the fact that \(\left( \frac{\partial \omega }{\partial V} | v^{\top } \right) =0\) because \(v^{\top } \in T_x X\). Since \(v=v^{\top } + v^{\bot }\), it follows that several simplifications occur in the previous sum, leading to

$$\begin{aligned} \int _X \pounds _{(v,h)}\omega= & {} \int _{X} \left( \dfrac{\partial \omega }{\partial x} - {{\mathrm{div}}}_{X} \left( \dfrac{\partial \omega }{\partial V} \right) - \omega H_{X} \bigg | v^{\bot } \right) + \dfrac{\partial \omega }{\partial m}\, (h-\langle \nabla f,v^{\top }\rangle ) \\&+ \int _{\partial X} \bigg \langle \nu , \left( \dfrac{\partial \omega }{\partial V} \bigg | v \right) + \omega v^{\top } \bigg \rangle \end{aligned}$$

which proves the result of Theorem 5.
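
As a purely illustrative complement (a minimal NumPy sketch of ours, not part of the original article), the normal part of this formula can be checked numerically on the unit circle, where \(H_{X}(x)=-x\): taking \(\omega \equiv 1\) and \(h\equiv 0\), the computation above reduces to \({\frac{{\text {d}}}{{\text {d}}t}}_{\upharpoonright _{t=0}} \mathrm {length}(\phi _t(X)) = \int _X \langle x,v(x)\rangle \, {\text {d}}{\mathcal {H}}^1(x)\) for any smooth compactly supported field v, since the tangential divergence integrates to zero on a closed curve.

```python
import numpy as np

# Finite-difference check of the first-variation formula on the unit circle
# (omega = 1, h = 0, closed curve, H_X(x) = -x). Illustrative sketch only.
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
gamma = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # points of X

def v(p):
    # an arbitrary smooth vector field on R^2
    x, y = p[:, 0], p[:, 1]
    return np.stack([x + 0.3 * y**2, 0.5 * np.sin(x)], axis=1)

def length(points):
    seg = np.diff(np.vstack([points, points[:1]]), axis=0)
    return np.sum(np.linalg.norm(seg, axis=1))

eps = 1e-6
lhs = (length(gamma + eps * v(gamma)) - length(gamma - eps * v(gamma))) / (2 * eps)
rhs = np.sum(np.einsum('ij,ij->i', gamma, v(gamma))) * (2.0 * np.pi / len(theta))
print(lhs, rhs)   # the two values agree up to discretization error
```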

Appendix 2: Proof of Proposition 7

1.1 Perturbation

We now introduce a perturbation process on any measure \(\nu \) on \(E \times G_{d}(E) \times {\mathbb {R}}\) that shall be useful in the following. Let \(a>0\) be a constant to be fixed later and consider for any \(t\in {\mathbb {R}}\) the function \(\rho _t:{\mathbb {R}}\rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} \rho _t(z)=z+t(\text {sgn}(z)a-z){\mathbf {1}}_{|z|>a} \end{aligned}$$
(91)

where \(\text {sgn}(z)\) is the sign of z. We have \(\rho _0=\text {Id}_{\mathbb {R}}\) and \(\rho _1\) is a symmetric threshold at level a. Now for any \(t\in {\mathbb {R}}\), we denote \(\nu _t\) the new measure defined for any \(\omega \in C_b(E \times G_{d}(E) \times {\mathbb {R}})\) as:

$$\begin{aligned} \nu _t(\omega )=\int \omega (x,V,\rho _t(f)){\text {d}}\nu (x,V,f). \end{aligned}$$
(92)

Obviously, \(\nu _0=\nu \) and \(\nu _1\) is such that \(\nu _1(|f|>a)=0\), so that \(t\mapsto \nu _t\) is a homotopy from \(\nu \) to a measure under which the signal is a.e. bounded by a.
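
For concreteness, the thresholding perturbation (91) is straightforward to implement; the following minimal NumPy sketch (ours, purely illustrative) reproduces the two extreme cases \(\rho _0=\text {Id}\) and \(\rho _1\):

```python
import numpy as np

def rho(z, t, a):
    """Perturbation rho_t(z) = z + t*(sgn(z)*a - z)*1_{|z|>a} of Eq. (91)."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) > a, z + t * (np.sign(z) * a - z), z)

z = np.array([-3.0, -0.5, 0.2, 4.0])
print(rho(z, 0.0, a=1.0))   # t = 0: identity,           [-3.  -0.5  0.2  4. ]
print(rho(z, 1.0, a=1.0))   # t = 1: threshold at a = 1, [-1.  -0.5  0.2  1. ]
```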

1.2 Proof of Lemma 2

We show the existence of a fvarifold minimizer in \({\mathcal {M}}^X\) [cf Eq. (51)] for the extended functional \(\tilde{J}\). For any \(\nu \in {\mathcal {M}}^X\) and \(t\in {\mathbb {R}}\), we denote \(J_t\doteq \tilde{J}(\nu _t)\), where \(\nu _{t}\) is the previously defined perturbation of \(\nu \) (cf Sect. 2.1), and we assume that \(J_0<\infty \) [which is equivalent to saying that \(\nu (|f|^2)<\infty \)]. We recall that \(\Vert \mu _{(X^i,f^i)}-\nu _{t}\Vert _{W'}^2 = (\mu _{(X^i,f^i)}-\nu _{t})(\omega ^{i})\) where \(\omega ^{i}=K_{W}(\mu _{(X^i,f^i)}-\nu _{t}) \in W\). Then, since we assume that W is continuously embedded into \(C_0^2(E\times G_{d}(E) \times {\mathbb {R}})\), one easily checks that \(J_t<\infty \) and that \(t\mapsto J_t\) is differentiable, with derivative \(J'_t\) at any location t given by

$$\begin{aligned} J'_t = \nu \bigg (\frac{{\text {d}}}{{\text {d}}t}\left( \rho _t(f)\right) \bigg (\gamma _f\rho _t(f)+\gamma _W\sum _{i=1}^N\frac{\partial \omega ^i}{\partial f}(x,V,\rho _{t}(f))\bigg )\bigg ) \end{aligned}$$
(93)
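
To make the dual norms appearing in these expressions concrete, the following sketch (ours) evaluates \(\Vert \mu -\nu \Vert ^2_{W'}=(\mu -\nu )\big (K_W(\mu -\nu )\big )\) for discrete fvarifolds of dimension \(d=1\) in \({\mathbb {R}}^2\) by the corresponding kernel double sums. The separable kernel chosen here (Gaussian in position and signal, Cauchy–Binet on non-oriented tangent directions) is only one admissible choice and is not necessarily the one used in the article's experiments:

```python
import numpy as np

def kmat(x1, t1, f1, x2, t2, f2, sig_x=0.5, sig_f=0.5):
    """Assumed kernel on E x G_1(R^2) x R: Gaussian in position and signal,
    Cauchy-Binet <t1,t2>^2 on unit tangent directions (illustrative choice)."""
    gx = np.exp(-np.sum((x1[:, None, :] - x2[None, :, :])**2, axis=-1) / sig_x**2)
    gt = (t1 @ t2.T)**2
    gf = np.exp(-(f1[:, None] - f2[None, :])**2 / sig_f**2)
    return gx * gt * gf

def fvar_dist2(x1, t1, f1, r1, x2, t2, f2, r2):
    """||mu - nu||_{W'}^2 for mu = sum_k r1_k delta_{(x1_k,t1_k,f1_k)} and nu likewise."""
    return (r1 @ kmat(x1, t1, f1, x1, t1, f1) @ r1
            - 2.0 * r1 @ kmat(x1, t1, f1, x2, t2, f2) @ r2
            + r2 @ kmat(x2, t2, f2, x2, t2, f2) @ r2)
```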

In the sequel, \(C>0\) denotes a constant, the value of which may vary from one line to another. Using again the continuous embedding of W into \(C^2_0(E\times G_{d}(E) \times {\mathbb {R}})\), we get that

$$\begin{aligned} \bigg |\frac{\partial \omega ^i}{\partial f}(x,V,\rho _{t}(f))\bigg |&\le C\, \Vert \omega ^{i}\Vert _{W} \nonumber \\&\le C\, \left( \Vert \mu _{(X^i,f^i)}\Vert _{W'} + \Vert \nu _{t}\Vert _{W'} \right) . \end{aligned}$$
(94)

Moreover, as we mentioned after Proposition 5, \(\Vert \mu _{(X^i,f^i)}\Vert _{W'} \le C\, {\mathcal {H}}^{d}(X^i)\). Similarly, \(\Vert \nu _{t}\Vert _{W'} \le C\, \nu _{t}(E \times G_{d}(E) \times {\mathbb {R}})\) and, since \(\nu _{t} \in {\mathcal {M}}^X\), we have \(\nu _{t}(E \times G_{d}(E) \times {\mathbb {R}}) = {\mathcal {H}}^{d}(X)\) and consequently \(\Vert \nu _{t}\Vert _{W'} \le C\, {\mathcal {H}}^{d}(X)\). Thus, there exists a constant \(C'>0\) such that:

$$\begin{aligned} \bigg |\sum _{i=1}^{N} \frac{\partial \omega ^i}{\partial f}(x,V,\rho _{t}(f))\bigg | \le C' \sum _{i=1}^{N} \left( {\mathcal {H}}^{d}(X^i) + {\mathcal {H}}^{d}(X) \right) \end{aligned}$$
(95)

Noticing now that \(\frac{\text {d}}{{\text {d}}t}\left( \rho _t(f)\right) \rho _t(f)\le 0\), that \(|\frac{\text {d}}{{\text {d}}t}\left( \rho _t(f)\right) |=0\) for \(|f|\le a\) and that \(|\rho _t(f)|\ge a\) for \(|f|\ge a\) and \(t\in [0,1]\), we get for \(t\in [0,1]\)

$$\begin{aligned} J'_t\le \nu \left( -\left| \frac{{\text {d}}}{{\text {d}}t}\left( \rho _t(f)\right) \right| {\mathbf {1}}_{|f|>a}\left( \gamma _fa-\gamma _W C' \sum _{i=1}^N\left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) \right) \right) \end{aligned}$$
(96)

so that

$$\begin{aligned} \tilde{J}(\nu _1)\le \tilde{J}(\nu _0) \text { if }a\ge C'\frac{\gamma _W}{\gamma _f}\sum _{i=1}^N\left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) . \end{aligned}$$
(97)

An important consequence of (97) is that one can restrict the search for a minimum of \(\tilde{J}\) to fvarifolds \(\nu \) such that

$$\begin{aligned} \nu \left( {\mathbf {1}}_{|f|> a}\right) =0 \end{aligned}$$
(98)

with \(a=C'\frac{\gamma _W}{\gamma _f}\sum _{i=1}^N\left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) \). In particular, since \(\nu \in {\mathcal {M}}^X\), we will have

$$\begin{aligned} x\in X \text { and } |f|\le a~\nu \text { a.e.} \end{aligned}$$
(99)

Since X is bounded and \(G_{d}(E)\) compact, we can restrict the search for a minimum to measures supported on a compact subset \(K\subset E \times G_{d}(E) \times {\mathbb {R}}\), so that we introduce:

$$\begin{aligned} {\mathcal {M}}^{X,K}\doteq \left\{ \ \nu \in {\mathcal {M}}^X\ |\ (x,V,f)\in K~\nu \text { a.e.}\ \right\} . \end{aligned}$$
(100)

An easy check shows that \(\tilde{J}\) is lower semicontinuous on the set \({\mathcal {M}}^{X,K}\) for the weak convergence topology. In addition, \({\mathcal {M}}^{X,K}\) is sequentially compact. Indeed, if \(\nu _{n}\) is a sequence in \({\mathcal {M}}^{X,K}\), then all \(\nu _{n}\) are supported in the compact set K and in particular \((\nu _{n})\) is tight. Also, as already noted, there exists a constant \(C>0\) independent of n such that \(\nu _{n}(E \times G_{d}(E) \times {\mathbb {R}}) \le C\, {\mathcal {H}}^{d}(X)\), and thus the sequence is uniformly bounded in total variation norm. It then follows from Prokhorov's Theorem that there exists a subsequence of \((\nu _{n})\) converging for the weak topology. These compactness and lower semicontinuity properties guarantee the existence of a minimizer \(\nu _*\) of \(\tilde{J}\) with \(\nu _*\in {\mathcal {M}}^{X,K}\) and

$$\begin{aligned} \tilde{J}(\nu _*)\le \inf _{f\in L^2(X)}J_X(f). \end{aligned}$$
(101)

1.3 Proof of the Proposition

At this point, we do not yet have a minimizer of \(J_X\). The problem is that, even though the marginal on \(E\times G_{d}(E)\) of \(\nu _*\) is the transport of \({\mathcal {H}}^d_{|X}\) under the map \(x\mapsto (x,T_{x}X)\), we cannot guarantee that \(\nu _*\) does not weight multiple signal values in the fiber above a location \((x,T_{x}X)\). We will now show that for large enough \(\gamma _f/\gamma _W\), there exists \(f_*\in L^2(X)\) such that \(\nu _*=\nu _{X,f_*}\), from which we will deduce

$$\begin{aligned} J_X(f_*)=\tilde{J}(\nu _*)\le \inf _{f\in L^2(X)}J_X(f) \end{aligned}$$
(102)

and the existence of a minimizer on \(L^2(X)\).

Let \(\delta f\in C_b(E\times G_{d}(E) \times {\mathbb {R}})\) and for any \(t\in {\mathbb {R}}\) consider the perturbation \(\nu _t\in {\mathcal {M}}^X\) of any \(\nu \in {\mathcal {M}}^{X,K}\) such that for any \(g\in C_b(E\times G_{d}(E) \times {\mathbb {R}})\) we have:

$$\begin{aligned} \nu _t(g)\doteq \int g(x,V,f+t\delta f(x,V,f)){\text {d}}\nu (x,V,f). \end{aligned}$$
(103)

Here again, the function \(t\mapsto \tilde{J}(\nu _t)\) is differentiable everywhere and we have for \(\omega ^i\doteq K_W (\mu _{(X^i,f^i)}-\nu )\)

$$\begin{aligned} {\frac{{\text {d}}}{{\text {d}}t}\tilde{J}(\nu _t)}_{\upharpoonright _{t=0}}=\nu \left( \left( \gamma _f f+\gamma _W\sum _{i=1}^N\frac{\partial \omega ^i}{\partial f}(x,V,f)\right) \delta f(x,V,f)\right) \,, \end{aligned}$$

so that when \(\nu =\nu _*\) we get

$$\begin{aligned} \left\{ \begin{array}{l} \gamma _f f+\gamma _W A(x,V,f)=0\ \nu _*\text { a.e.}\\ \text {with}\\ A(x,V,f)\doteq \sum _{i=1}^N\frac{\partial \omega ^i}{\partial f}(x,V,f). \end{array} \right. \end{aligned}$$
(104)

The partial derivative of \(f \mapsto \gamma _f f+\gamma _W A(x,V,f)\) with respect to f equals \(\gamma _f + \gamma _W \frac{\partial A}{\partial f}(x,V,f)\). As before, using the continuous embedding \(W \hookrightarrow C_{0}^{2}(E\times G_{d}(E) \times {\mathbb {R}})\), we have once again a certain constant \(C>0\) such that

$$\begin{aligned}&\left| \frac{\partial A}{\partial f}(x,V,f) \right| \nonumber \\&\quad \le C\, \sum _{i=1}^N\left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) ,\quad \text {for all } (x,V,f) \in E\times G_d(E)\times {\mathbb {R}}. \end{aligned}$$
(105)

It results that for \(\gamma _f/\gamma _W\) large enough and for all \((x,V) \in E \times G_{d}(E)\), \(f \mapsto \gamma _f f+\gamma _W A(x,V,f)\) is a strictly increasing function going from \(-\infty \) at \(-\infty \) to \(+\infty \) at \(+\infty \), and thus, there is a unique solution \(\tilde{f}(x,V)\) to (104). Now, since the application \(f \mapsto \gamma _f f+\gamma _W A(x,V,f)\) is also \(C^{1}\) on \(E \times G_{d}(E) \times {\mathbb {R}}\), we deduce from the Implicit Function Theorem that \(\tilde{f}\) is a \(C^1\) function on \(E \times G_{d}(E)\). Going back to the solution \(\nu _{*}\), we know that for \(\nu _{*}\) almost every \((x,V,f) \in E \times G_{d}(E) \times {\mathbb {R}}\), we have \((x,V,f) \in K\) and \(f=\tilde{f}(x,V)\), so that \(|\tilde{f}| \le a\) a.e. For any continuous and bounded function \(\omega \):

$$\begin{aligned} \nu _{*}\left( \omega \right) = \int \omega \left( x,V,f\right) d \nu _{*} = \int \omega \left( x,V,\tilde{f}\left( x,V\right) \right) {\text {d}} \nu _{*} \end{aligned}$$

and if we denote by \(\tilde{\omega }(x,V)\doteq \omega (x,V,\tilde{f}(x,V))\) which is a continuous and bounded function on \(E \times G_{d}(E)\), we have by definition of the space \({\mathcal {M}}^X\) given by (51):

$$\begin{aligned} \nu _{*}(\omega ) = \nu _{*}(\tilde{\omega }) = \int _{X} \omega (x,T_{x}X,\tilde{f}(x,T_{x}X)) {\text {d}}{\mathcal {H}}^{d}(x) . \end{aligned}$$
(106)

Therefore, setting \(f_{*}(x) = \tilde{f}(x,T_{x}X)\) for \(x \in X\), we see that \(|f_{*}|\le a\), so that \(f_{*} \in L^{\infty }(X)\), and with (106) we deduce that \(\nu _{*} = \mu _{(X,f_{*})}\), which shows that the solution of the optimization is a fvarifold associated with a true fshape \((X,f_{*})\). In addition, if X is a \(C^{p}\) submanifold then \(x\mapsto T_{x}X\) is a \(C^{p-1}\) function on X and, if \(W\hookrightarrow C_{0}^{m}(E\times G_{d}(E)\times {\mathbb {R}})\) with \(m\ge 2\) and \(m\ge p\), A and \(\tilde{f}\) are \(C^{p-1}\) functions, so \(f_{*}\) is also \(C^{p-1}\), which concludes the proof of Proposition 7.
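
The monotonicity argument used above can be illustrated on a toy scalar version of (104) (our sketch; the function A below is a hypothetical bounded-derivative stand-in, not one computed from actual data): for \(\gamma _f\) large enough compared with \(\gamma _W\), the left-hand side is strictly increasing and a simple bisection locates the unique root \(\tilde{f}\).

```python
import numpy as np

# Toy version of (104): gamma_f * f + gamma_W * A(f) = 0 with |A'| <= 1 (here A = sin).
# For gamma_f > gamma_W the left-hand side is strictly increasing, hence has a
# unique root f_tilde, which plain bisection locates.
gamma_f, gamma_W = 2.0, 1.0
g = lambda f: gamma_f * f + gamma_W * np.sin(f)

lo, hi = -10.0, 10.0            # g(lo) < 0 < g(hi)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
print(0.5 * (lo + hi))          # unique root, here f_tilde = 0
```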

Appendix 3: Proof of Theorem 6

We shall basically follow the same steps as in the previous simpler cases. First of all, exactly as in 5.2.2, existence of a template shape X is guaranteed with the same compactness and lower semicontinuity arguments. Thus, we may assume that X is fixed and we only have to show existence of minimizers to the simplified functional:

$$\begin{aligned} J_{X}\left( f,(\zeta ^{i})_i,(v^i)_i\right)\doteq & {} \frac{\gamma _f}{2}\int _X|f(x)|^2{\text {d}}{\mathcal {H}}^d(x)\\&+\frac{1}{2}\sum _{i=1}^N\left( \int _0^1 \left| v^i_t\right| ^2_V {\text {d}}t +\gamma _\zeta \int _X\left| \zeta ^i(x)\right| ^2{\text {d}}{\mathcal {H}}^d(x)\right. \\&+\left. \gamma _W\left\| \mu _{(X^i,f^i)}-\mu _{(\phi ^{v^i}_{1}(X),(f+\zeta ^i)\circ (\phi ^{v^i}_{1})^{-1})}\right\| ^2_{W'}\right) \end{aligned}$$

Now, as for \(v^{0}\), due to the presence of the penalizations \(\Vert v^i\Vert _{L^{2}([0,1],V)}\doteq \int _0^1 |v^i_t|^2_V {\text {d}}t\), \(1\le i \le N\), one can assume that all vector fields \(v^{i}\), \(1\le i\le N\), belong to a fixed closed ball B of radius \(r > 0\) in \(L^{2}([0,1],V)\). As in the proof of Proposition 7, we first show existence of a minimizer in a space of fvarifolds. Namely, extending the definitions of the previous subsections, we introduce the space \({\mathcal {M}}^X\) of measures \(\nu \) on \(E \times G_{d}(E) \times {\mathbb {R}} \times {\mathbb {R}}^{N}\) such that for all continuous and bounded function h on \(E \times G_{d}(E)\), we have:

$$\begin{aligned} \nu (h) = \int h(x,V) {\text {d}}\nu (x,V,f,(\zeta ^{i})_{i}) = \int _{X} h(x,T_{x}X) {\text {d}} {\mathcal {H}}^{d}(x) . \end{aligned}$$

For a measure \(\nu \) on \(E \times G_{d}(E) \times {\mathbb {R}} \times {\mathbb {R}}^{N}\) and a diffeomorphism \(\phi \), we denote by \(\phi \cdot \nu \) the transport of \(\nu \) by \(\phi \) defined by:

$$\begin{aligned} (\phi \cdot \nu )(g) = \int \left| {d_{x}\phi }_{\upharpoonright _{V}}\right| g(\phi (x),d_{x}\phi (V),f,(\zeta ^{i})_{i}) {\text {d}}\nu (x,V,f,(\zeta ^{i})_{i}) . \end{aligned}$$
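
To fix ideas, on a discrete measure \(\nu =\sum _k r_k\,\delta _{(x_k,V_k,f_k,(\zeta ^i_k)_i)}\) the action \(\phi \cdot \nu \) simply moves each atom to \(\phi (x_k)\), transports its direction by \(d_{x_k}\phi \), leaves the signals unchanged and rescales its mass by \(|{d_{x_k}\phi }_{\upharpoonright _{V_k}}|\). The following minimal sketch (ours, for \(d=1\) in \({\mathbb {R}}^2\) with each line \(V_k\) encoded by a unit tangent vector \(t_k\); it is not the article's implementation) makes this explicit:

```python
import numpy as np

def push_forward(x, t, f, r, phi, dphi):
    """Transport phi . nu of a discrete fvarifold nu = sum_k r_k delta_{(x_k,t_k,f_k)}.
    x: (N,2) points, t: (N,2) unit tangents, f: signals, r: (N,) masses.
    phi: vectorized map R^2 -> R^2; dphi: its Jacobians, shape (N,2,2). Sketch only."""
    y = phi(x)
    Jt = np.einsum('nij,nj->ni', dphi(x), t)     # d_x phi (t_k)
    scale = np.linalg.norm(Jt, axis=1)           # |d_x phi restricted to V_k|
    return y, Jt / scale[:, None], f, r * scale
```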

We now introduce the extended functional:

$$\begin{aligned} \tilde{J}\left( \nu ,(v^i)_i\right)\doteq & {} \frac{\gamma _f}{2} \nu \left( |f|^{2}\right) +\frac{1}{2}\sum _{i=1}^N\Bigg (\int _0^1 \left| v^i_t\right| ^2_V {\text {d}}t +\gamma _\zeta \nu \left( \left| \zeta ^{i}\right| ^{2}\right) \\&+\,\gamma _W\left\| \mu _{(X^i,f^i)}-(\phi ^{v^i}_{1})\cdot \nu ^{i}\right\| ^2_{W'} \Bigg ) \end{aligned}$$

with \((v^{i})_i \in (L^{2}([0,1],V))^{N}\) and \(\nu \in {\mathcal {M}}^X\), \(\nu (|f|^2)\) denoting in short the integral of the application \((x,V,f,(\zeta ^{i})_i)\mapsto |f|^2\) with respect to \(\nu \). For all \(1\le i \le N\), \(\nu ^{i}\) is the fvarifold defined for all \(\omega \in W\) by:

$$\begin{aligned} \nu ^{i}(\omega ) = \int \omega (x,V,f+\zeta ^{i}) {\text {d}}\nu (x,V,f,\zeta ^{i}) . \end{aligned}$$

As previously, we can consider the perturbation function \(\rho _{t}\) acting on signals and the measures

$$\begin{aligned} \nu _{t}(g)\doteq \int g(x,V,\rho _{t}(f),(\rho _{t}( \zeta ^{i} ))_{i} ) {\text {d}}\nu (x,V,f,(\zeta ^{i})_{i}) . \end{aligned}$$

Denoting \(J_{t}=\tilde{J}(\nu _{t},(v^i)_i)\), we have, for \(t\in [0,1]\),

$$\begin{aligned} J'_t&= \nu \left( \frac{{\text {d}}}{{\text {d}}t}\left( \rho _t(f)\right) \left( \gamma _f\rho _t(f)+\gamma _W\sum _{i=1}^N \left|{d_{x}\phi ^{v^i}_{1}}_{\upharpoonright _{V}}\right| \, \frac{\partial \omega ^i}{\partial f}(\phi ^{v^{i}}(x),d_{x}\phi ^{v^{i}}(V),\rho _{t}(f)+\rho _{t}(\zeta ^{i}))\right) \right) \nonumber \\&\quad + \nu \left( \sum _{i=1}^{N}\frac{{\text {d}}}{{\text {d}}t}\left( \rho _t(\zeta ^{i})\right) \left( \gamma _\zeta \rho _t(\zeta ^{i})+\gamma _W \left|{d_{x}\phi ^{v^i}_{1}}_{\upharpoonright _{V}}\right| \, \frac{\partial \omega ^i}{\partial f}(\phi ^{v^{i}}(x),d_{x}\phi ^{v^{i}}(V),\rho _{t}(f)+\rho _{t}(\zeta ^{i}))\right) \right) \end{aligned}$$
(107)

where, for all \(1\le i\le N\), \(\omega ^{i}=K_{W}(\mu _{(X^{i},f^{i})} - (\phi ^{v^i}_{1})\cdot \nu _{t}^{i})\). On the one hand, we know that there exists a constant \(C>0\) such that for all i, \(x \in E\) and \(V \in G_{d}(E)\), \(|{d_{x}\phi ^{v^{i}}}_{\upharpoonright _{V}}|\le C\, |{\text {d}}\phi ^{v^{i}}|_{\infty }\). In addition, it is a classical result on flows of differential equations (cf [38]) that there exists a non-decreasing continuous function \(\tau : {\mathbb {R}}^{+}\rightarrow {\mathbb {R}}^{+}\) independent of \(v \in L^{2}([0,1],V)\) such that \(|{\text {d}}\phi ^{v}_{1}|_{\infty }\le \tau (\Vert v\Vert _{L^{2}([0,1],V)})\). On the other hand, using the same controls as in the previous subsections, we have, for some \(C>0\) denoting a constant the value of which may change from one line to another:

$$\begin{aligned}&\left| \frac{\partial \omega ^i}{\partial f}(\phi ^{v^{i}}(x),d_{x}\phi ^{v^{i}}(V),\rho _{t}(f)+\rho _{t}(\zeta ^{i})) \right| \\&\quad \le \left| \frac{\partial \omega ^i}{\partial f} \right| _{\infty } \\&\quad \le C\, \left\| \omega ^{i}\right\| _{W} \\&\quad \le C\, \left( \left\| \mu _{(X^{i},f^{i})}\right\| _{W'}+\left\| (\phi ^{v^i}_{1})\cdot \nu _{t}^{i}\right\| _{W'}\right) \\&\quad \le C\, \left( {\mathcal {H}}^{d}(X^{i}) + \left( \phi ^{v^i}_{1}\right) \cdot \nu _{t}^{i}(E\times G_{d}(E)\times {\mathbb {R}})\right) . \end{aligned}$$

It is also straightforward that there exists a constant \(C>0\) such that \((\phi ^{v^i}_{1})\cdot \nu _{t}^{i}(E\times G_{d}(E)\times {\mathbb {R}}) \le C\, |d\phi ^{v^{i}}|_{\infty } \nu _{t}^{i}(E \times G_{d}(E) \times {\mathbb {R}})\) and, using the fact that \(\nu \in {\mathcal {M}}^X\) as already argued in 5.2.1, \(\nu _{t}^{i}(E \times G_{d}(E) \times {\mathbb {R}})={\mathcal {H}}^{d}(X)\). From all the previous inequalities, it follows that there exists a non-decreasing continuous function, which we will still call \(\tau \), such that for all \(i,x,V,f,\zeta ^{i}\):

$$\begin{aligned}&\left| \left|{d_{x}\phi ^{v^i}_{1}}_{\upharpoonright _{V}}\right|\,\frac{\partial \omega ^i}{\partial f}(\phi ^{v^{i}}(x),d_{x}\phi ^{v^{i}}(V),\rho _{t}(f)+\rho _{t}(\zeta ^{i})) \right| \nonumber \\&\quad \le \tau (\Vert v^i\Vert _{L^{2}([0,1],V)}) \, ({\mathcal {H}}^{d}(X^{i})+{\mathcal {H}}^{d}(X)) \end{aligned}$$
(108)

Following the same path that previously led to (96), we get

$$\begin{aligned} J'_t&\le \nu \left( -\left| \frac{{\text {d}}}{{\text {d}}t}\left( \rho _t(f)\right) \right| {\mathbf {1}}_{|f|>a}\left( \gamma _f\, a-\gamma _W \sum _{i=1}^N \tau (\left\| v^i\right\| _{L^{2}([0,1],V)})\left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) \right) \right) \nonumber \\&\quad + \sum _{i=1}^{N}\nu \left( -\left| \frac{{\text {d}}}{{\text {d}}t}\left( \rho _t(\zeta ^{i})\right) \right| {\mathbf {1}}_{|\zeta ^{i}|>a}\left( \gamma _\zeta \, a-\gamma _W \tau (\left\| v^i\right\| _{L^{2}([0,1],V)}) \left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) \right) \right) . \end{aligned}$$
(109)

Just as in 5.2.1, this implies that \(\tilde{J}(\nu _{1},(v^{i})_i) \le \tilde{J}(\nu _{0},(v^{i})_i)\) as soon as:

$$\begin{aligned} \left\{ \begin{array}{l} a\ge \frac{\gamma _{W}}{\gamma _{f}} \sum _{i=1}^{N} \tau \left( \left\| v^i\right\| _{L^{2}([0,1],V)}\right) \left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) \\ \text { and }\\ a\ge \max _{i} \frac{\gamma _{W}}{\gamma _{\zeta }} \tau \left( \left\| v^i\right\| _{L^{2}([0,1],V)}\right) \left( {\mathcal {H}}^d(X^i)+{\mathcal {H}}^d(X)\right) \end{array}\right. \end{aligned}$$

Therefore, one may restrict the search for a minimum to the set of measures \(\nu \) that are supported on a compact subset K of \(E \times G_{d}(E) \times {\mathbb {R}}\times {\mathbb {R}}^{N}\), which we shall again denote \({\mathcal {M}}^{X,K}\). The rest of the proof is now very close to the one of 5.2.1. Due to the lower semicontinuity of the functional and the compactness of \({\mathcal {M}}^{X,K}\) and B for the weak convergence topologies (respectively on the space of measures and on \(L^{2}([0,1],V)\)), we obtain the existence of a minimizer \((\nu _{*},(v^{i}_{*})_i)\) for the functional \(\tilde{J}\).

The last step is to prove that \(\nu _{*}\), which belongs a priori to the measure space \({\mathcal {M}}^{X,K}\), can be written in the form \(\nu _{*}= \nu _{X,f_{*},(\zeta ^{i}_{*})_i}\), i.e., that there exist functions \(f_{*}\) and \(\zeta ^{i}_{*}\), \(1\le i\le N\), on X such that, for every continuous and bounded function g on \(E \times G_{d}(E) \times {\mathbb {R}}\times {\mathbb {R}}^{N}\):

$$\begin{aligned} \nu _{*}(g) = \int _{X} g(x,T_{x}X,f_{*}(x),(\zeta ^{i}_{*}(x))_i) {\text {d}}{\mathcal {H}}^{d}(x) \end{aligned}$$
(110)

We then consider variations of the signals \((\delta f, (\delta \zeta ^{i})_i)\) all belonging to the space \(C_{b}(E \times G_{d}(E) \times {\mathbb {R}}\times {\mathbb {R}}^{N})\) and the path \(t\mapsto \nu _{t}\) defined by:

$$\begin{aligned} \nu _{t}(g)= & {} \int g(x,V,f+t\delta f(x,V,f,(\zeta ^{i})_i),\\&(\zeta ^{i}+t\delta \zeta ^{i}(x,V,f,(\zeta ^{i})_i))_i ) {\text {d}}\nu _{*}(x,V,f,(\zeta ^{i})_i ) . \end{aligned}$$

Now, if \(J_{t}\doteq \tilde{J}(\nu _{t},(v^{i}_{*})_i)\), expressing that \({J_{t}'}_{\upharpoonright _{t=0}}=0\) for all \(\delta f\) and \((\delta \zeta ^{i})_i\) gives, similarly to 5.2.1, the following set of equations:

$$\begin{aligned} (\gamma _{f} f, (\gamma _{\zeta } \zeta ^{i})_i ) = -A(x,V,f,(\zeta ^{i})_i ) \ \nu _{*}\text {-a.e} \end{aligned}$$
(111)

with \(A\left( x,V,f,\left( \zeta ^{i}\right) _i\right) \doteq \Big ( \sum _{i=1}^{N} \frac{\partial \omega ^{i}}{\partial f}\Big (\phi ^{v^{i}_{*}}_{1}(x),d_{x}\phi ^{v^{i}_{*}}_{1}(V),f+\zeta ^{i}\Big ), \Big (\frac{\partial \omega ^{i}}{\partial f}\Big (\phi ^{v^{i}_{*}}_{1}(x), d_{x}\phi ^{v^{i}_{*}}_{1}(V),f+\zeta ^{i}\Big )\Big )_i\Big )\). The derivatives \(\partial _{f} A\) and \(\partial _{\zeta ^{i}} A\) can be shown again to be uniformly bounded in \(x,V,f,\zeta ^{i}\), and a previous argument provides the existence of unique solutions \(f=\tilde{f}(x,V)\) and \(\zeta ^{i}=\tilde{\zeta }^{i}(x,V)\), \(1\le i\le N\), to (111). The rest of the proof is exactly the same as in the end of “Proof of the Proposition” in “Appendix 2”: We set \(f_{*}(x)=\tilde{f}(x,T_{x}X)\) and \(\zeta ^{i}_{*}(x)=\tilde{\zeta }^{i}(x,T_{x}X)\), \(1\le i\le N\), which are again \(L^{\infty }\) functions on X. In addition, one shows easily that the minimizing measure \(\nu _{*}\) equals \(\nu _{X,f_{*},(\zeta _{*}^{i})_i}\) in the sense of (110). Finally, the regularity of \(f_{*}\) and \(\zeta _{*}\) when X is a \(C^{p}\) submanifold is obtained again by applying the Implicit Function Theorem to (111).

Appendix 4: Proof of Theorem 7

We shall only sketch the essential steps needed to adapt the content of Appendix 3. We start by writing (58) in extended form. This gives:

$$\begin{aligned}&J\left( \left( v^{0},h^{0}\right) ,\left( v^{i},h^{i}\right) _i\right) \nonumber \\&\quad = \frac{\gamma _{V_0}}{2} \left\| v^{0}\right\| _{L^{2}([0,1],V_{0})}^{2} + \frac{\gamma _{f_0}}{2} \int _0^1\int _{X_0}\left| h^0_t\right| ^2\left| {d_x\phi ^{v^0}_1}_{\upharpoonright _{T_{x}X}}\right| {\text {d}}{\mathcal {H}}^d(x)\nonumber \\&\quad \quad + \sum _{i=1}^N\left( \frac{\gamma _{V}}{2} \left\| v^{i}\right\| _{L^{2}([0,1],V)}^{2} +\frac{\gamma _{f}}{2} \int _0^1\int _{X}\left| h^i_t\right| ^2|{d_x\phi ^{v^i}_1}_{\upharpoonright _{T_{x}X}}|{\text {d}}{\mathcal {H}}^d(x)\right. \nonumber \\&\quad \quad \left. +\frac{\gamma _W}{2}\left\| \mu _{(X^i,f^i)}-\mu _{\left( \phi ^{v^i}_{1}(X),\left( f+\zeta ^{h^{i}}_{1}\right) \circ \left( \phi ^{v^i}_{1}\right) ^{-1}\right) } \right\| ^2_{W'}\right) \end{aligned}$$
(112)

Now, with Lemma 3, we know that the optimal functions \(h^{0}_{*}\) and \(h^{i}_{*}\), \(1\le i\le N\), are given by (57), and thus the variational problem of (112) can be replaced by the optimization, with respect to residual functions \(\zeta ^0\) and \(\zeta ^{i}\), \(1\le i\le N\), living in \(L^{2}(X)\), of the functional:

$$\begin{aligned} J\left( \left( v^{0},\zeta ^{0}\right) ,\left( v^{i},\zeta ^{i}\right) _i\right)&= \frac{\gamma _{V_0}}{2} \left\| v^{0}\right\| _{L^{2}([0,1],V_{0})}^{2} + \frac{\gamma _{f_0}}{2} \int _{X_0} C^0(x)\, \left| \zeta ^0(x)\right| ^2 {\text {d}}{\mathcal {H}}^d(x) \\&\quad + \sum _{i=1}^N\left( \frac{\gamma _{V}}{2} \left\| v^{i}\right\| _{L^{2}([0,1],V)}^{2} +\frac{\gamma _{f}}{2} \int _{X}C^{i}(x)\, \left| \zeta ^i(x)\right| ^2 {\text {d}}{\mathcal {H}}^d(x)\right. \\&\quad \left. +\frac{\gamma _W}{2}\left\| \mu _{(X^i,f^i)}-\mu _{\left( \phi ^{v^i}_{1}(X), (f+\zeta ^{i})\circ \left( \phi ^{v^i}_{1}\right) ^{-1}\right) } \right\| ^2_{W'}\right) \end{aligned}$$

where \(C^{0}(x)\doteq {\left( \int _0^1\frac{1}{|{d_x\left[ \phi ^{v^{0}}_{s}\circ \left( \phi _1^{v_0}\right) ^{-1}\right] }_{\upharpoonright _{T_{x}X}}|}{\text {d}}s\right) ^{-1}}\) and, for all \(1\le i\le N\), \(C^{i}(x)\doteq \left( \int _0^1\frac{1}{|{d_x\phi ^{v^{i}}_{s}}_{\upharpoonright _{T_{x}X}}|}{\text {d}}s\right) ^{-1}\). We note that this problem, up to the weights in the \(L^2\) metrics given by the functions \(C^i\), is now extremely close to the one examined in Theorem 6. In fact, the proof of Appendix 3 can be adapted almost straightforwardly to this situation. As previously, the essential step is to reformulate the optimization problem in a space of measures. Defining the functions:

$$\begin{aligned} \tilde{C}^{0}(x,H)&= \left( \int _0^1\frac{1}{\left| {d_x[\phi ^{v^{0}}_{s}\circ (\phi _1^{v_0})^{-1}]}_{\upharpoonright _{H}}\right| }{\text {d}}s \right) ^{-1} \\ \tilde{C}^{i}(x,H)&= \left( \int _0^1\frac{1}{\left| {d_x\phi ^{v^{i}}_{s}}_{\upharpoonright _{H}}\right| }{\text {d}}s \right) ^{-1} \end{aligned}$$

for \(1\le i\le N\) and \((x,H) \in E \times G_{d}(E)\), we can set, with the same definitions as in Appendix 3:

$$\begin{aligned} \tilde{J}\left( \nu ,\left( v^i\right) _i\right)\doteq & {} \frac{\gamma _{f_0}}{2} \nu \left( \tilde{C}^0 \, |f|^2\right) + \sum _{i=1}^N\left( \frac{\gamma _{V}}{2} \left\| v^{i}\right\| _{L^{2}([0,1],V)}^{2} +\dfrac{\gamma _{f}}{2} \nu \left( \tilde{C}^{i}\, \left| \zeta ^i\right| ^2\right) \right. \nonumber \\&\left. + \frac{\gamma _W}{2} \left\| \mu _{\left( X^i,f^i\right) }-\left( \phi ^{v^i}\right) \cdot \nu ^i \right\| ^2_{W'}\right) \end{aligned}$$
(113)

for \(\nu \in {\mathcal {M}}^X\). The rest of the proof follows the same path, relying on the fact that we can assume the vector fields \(v^0\) and \(v^i\) to be bounded in \(L^{2}([0,1],V_{0})\) and \(L^{2}([0,1],V)\), as we explained at the beginning of Appendix 3. This implies, as already argued in the same section, that we have uniform lower and upper bounds for \(|{d_x\phi ^{v^{i}}_{s}}_{\upharpoonright _{H}}|\), \(s \in [0,1]\) and \(1\le i\le N\), and for the quantities \(|{d_x[\phi ^{v^{0}}_{s}\circ (\phi _1^{v_0})^{-1}]}_{\upharpoonright _{H}}|\). Consequently, we can assume that there exist \(\alpha ,\beta >0\) such that for all \(1\le i\le N\), \(\alpha \le \tilde{C}^i \le \beta \) uniformly. Using these inequalities, one can check that we get controls equivalent to those in the proof of Theorem 6, which allows us to conclude the existence of a measure minimizer for the extended functional \(\tilde{J}\) and then go back to a fshape solution for J with a similar implicit function argument.


About this article

Cite this article

Charlier, B., Charon, N. & Trouvé, A. The Fshape Framework for the Variability Analysis of Functional Shapes. Found Comput Math 17, 287–357 (2017). https://doi.org/10.1007/s10208-015-9288-2
