
Representation Learning via Manifold Flattening and Reconstruction

Michael Psenka, Druv Pai, Vishal Raman, Shankar Sastry, Yi Ma.

Year: 2024, Volume: 25, Issue: 132, Pages: 1–47


Abstract

A common assumption about real-world, learnable data is that it possesses some low-dimensional structure, and one way to formalize this structure is through the manifold hypothesis: that learnable data lies near some low-dimensional manifold. Deep learning architectures often include a compressive autoencoder component, where data is mapped to a lower-dimensional latent space, but many architectural design choices are made by hand, since such models do not inherently exploit the mathematical structure of the data. To exploit this geometric structure, we propose an iterative process, in the style of a geometric flow, that explicitly constructs, layer by layer, a pair of neural networks which linearize and reconstruct an embedded submanifold from finite samples of it. The resulting neural networks, called Flattening Networks (FlatNet), are theoretically interpretable, computationally feasible at scale, and generalize well to test data, a balance not typically found in manifold-based learning methods. We present empirical results and comparisons to other models on synthetic high-dimensional manifold data and 2D image data. Our code is publicly available.
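
To make the layer-wise flatten-and-reconstruct idea concrete, below is a minimal NumPy sketch: each layer is built directly from finite samples and comes paired with its own reconstruction map, and the pairs are composed greedily. The purely linear (PCA-based) layers, the toy helix data, and the `dims` schedule are illustrative assumptions of this sketch, not the paper's actual nonlinear flattening flow.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 500 samples near a 1-D curve (a helix) embedded in R^3.
    t = rng.uniform(0, 4 * np.pi, size=500)
    X = np.stack([np.cos(t), np.sin(t), 0.2 * t], axis=1)
    X += 0.01 * rng.standard_normal(X.shape)

    def flattening_pair(samples, d):
        # One linear layer pair built from samples alone: center the data
        # and project onto the top-d principal subspace (flatten), keeping
        # the basis so the layer can be approximately inverted (reconstruct).
        mu = samples.mean(axis=0)
        _, _, Vt = np.linalg.svd(samples - mu, full_matrices=False)
        V = Vt[:d].T                        # (ambient dim) x d orthonormal basis
        f = lambda x: (x - mu) @ V          # flattening (encoder) layer
        g = lambda z: z @ V.T + mu          # reconstruction (decoder) layer
        return f, g

    # Greedy layer-wise construction: append one flatten/reconstruct pair
    # per step, shrinking the dimension along a (hypothetical) schedule.
    dims = [2, 1]
    encoders, decoders = [], []
    Z = X
    for d in dims:
        f, g = flattening_pair(Z, d)
        encoders.append(f)
        decoders.append(g)
        Z = f(Z)

    def encode(x):                          # compose encoder layers forward
        for f in encoders:
            x = f(x)
        return x

    def decode(z):                          # compose decoder layers in reverse
        for g in reversed(decoders):
            z = g(z)
        return z

    err = np.mean(np.linalg.norm(X - decode(encode(X)), axis=1))
    print(f"mean reconstruction error: {err:.3f}")

On curved data such as this helix, linear layers cannot flatten the manifold without incurring reconstruction error; this is precisely the limitation the paper's nonlinear, flow-based layer construction is designed to overcome.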

[PDF] [BibTeX] [Code]