Tuesday, March 22, 2016

manifold_finance

Nash embedding theorem

From Wikipedia, the free encyclopedia
The Nash embedding theorems (or imbedding theorems), named after John Forbes Nash, state that every Riemannian manifold can be isometrically embedded into some Euclidean space. Isometric means preserving the length of every path. For instance, bending a page of paper without stretching or tearing it gives an isometric embedding of the page into Euclidean space, because curves drawn on the page retain the same arc length however the page is bent.
The first theorem is for continuously differentiable (C¹) embeddings and the second for embeddings that are analytic or smooth of class Cᵏ, 3 ≤ k ≤ ∞. These two theorems are very different from each other: the first has a very simple proof but leads to some very counterintuitive conclusions, while the proof of the second is very technical but the result is not that surprising.
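For concreteness, here is the isometry condition written out in LaTeX; this formulation is standard textbook material rather than part of the Wikipedia excerpt above. An embedding u : M → R^n of an m-dimensional Riemannian manifold (M, g) is isometric when the Euclidean metric pulled back along u agrees with g:

% Pullback condition: lengths measured in R^n match lengths in (M, g).
g_{ij}(x) \;=\; \sum_{\alpha=1}^{n} \frac{\partial u^{\alpha}}{\partial x^{i}} \, \frac{\partial u^{\alpha}}{\partial x^{j}}, \qquad 1 \le i, j \le m.

As an example of the counterintuitive side of the C¹ theorem (Nash–Kuiper): any short smooth embedding into R^n with n ≥ m + 1 can be perturbed into a C¹ isometric one in the same R^n, so the unit sphere admits a C¹ isometric embedding inside an arbitrarily small ball. The smoother Cᵏ theorem instead pays with ambient dimension; Nash's bound for compact M is n = m(3m + 11)/2, with a larger bound in the non-compact case.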
http://arxiv.org/pdf/1508.02201.pdf

EXTRINSIC LOCAL REGRESSION ON MANIFOLD-VALUED DATA

LIZHEN LIN, BRIAN ST. THOMAS, HONGTU ZHU, AND DAVID B. DUNSON

Abstract. We propose an extrinsic regression framework for modeling data with manifold-valued responses and Euclidean predictors. Regression with manifold responses has wide applications in shape analysis, neuroscience, medical imaging and many other areas. Our approach embeds the manifold where the responses lie into a higher-dimensional Euclidean space, obtains a local regression estimate in that space, and then projects this estimate back onto the image of the manifold. Outside the regression setting, both intrinsic and extrinsic approaches have been proposed for modeling i.i.d. manifold-valued data. However, to our knowledge our work is the first to take an extrinsic approach to the regression problem. The proposed extrinsic regression framework is general, computationally efficient and theoretically appealing. Asymptotic distributions and convergence rates of the extrinsic regression estimates are derived, and a large class of examples is considered, indicating the wide applicability of our approach.

Keywords: Convergence rate; Differentiable manifold; Geometry; Local regression; Object data; Shape statistics.

1. Introduction

Although the main focus in statistics has been on data belonging to Euclidean spaces, it is common for data to have support on non-Euclidean geometric spaces. Perhaps the simplest example is directional data, which lie on circles or spheres. Directional statistics dates back to R. A. Fisher's seminal paper (Fisher, 1953) on analyzing the directions of the earth's magnetic poles, with key later developments by Watson (1983), Mardia and Jupp (2000), and Fisher et al. (1987), among others. Technological advances in science and engineering have led to the routine collection of more complex geometric data. For example, diffusion tensor imaging (DTI) obtains local information on the directions of neural activity through 3 × 3 positive definite matrices at each voxel (Alexander et al., 2007). In machine vision, a digital image can be represented by a set of k landmarks, the collection of which forms landmark-based shape spaces (Kendall, 1984). In engineering and machine learning, images are often preprocessed or reduced to a collection of subspaces, with each data point (an image) in the sample represented by a subspace. One may also encounter data that are stored as orthonormal frames (Downs et al., 1971), surfaces, curves, and networks.

Statistical analysis of data sets whose basic elements are geometric objects requires a precise mathematical characterization of the underlying space, and inference depends on the geometry of that space. In many cases (e.g., the space of positive definite matrices, spheres, shape spaces), the underlying space corresponds to a manifold. Manifolds are general topological spaces equipped with a differentiable/smooth structure which induces a geometry that does not in general adhere to the usual Euclidean geometry. Therefore, new statistical theory and models have to be developed for statistical inference on manifold-valued data. There have been some developments on inference based on i.i.d. (independent and identically distributed) observations on a known manifold. Such approaches are mainly based on obtaining statistical estimators for appropriate notions of location and spread on the manifold.
For example, one could base inference about the center of a distribution on the Fréchet mean, with the asymptotic distribution of sample estimates obtained (Bhattacharya and Patrangenaru, 2003, 2005; Bhattacharya and Lin, 2013). There has also been some consideration of nonparametric density estimation on manifolds (Bhattacharya and Dunson, 2010; Lin et al., 2013; Pelletier, 2005). Bhattacharya and Bhattacharya (2012) provide a recent overview of such developments.
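If the Fréchet mean is unfamiliar: it generalizes the ordinary average by replacing squared Euclidean distance with squared distance measured on the manifold itself. A standard definition in LaTeX (my gloss, not text from the paper):

% Frechet mean: minimizer of the sum of squared manifold distances.
\hat{\mu} \;=\; \operatorname*{arg\,min}_{p \in M} \; \sum_{i=1}^{n} d_M\!\left(p, y_i\right)^{2}

Here d_M is typically the geodesic distance (the intrinsic choice) or the Euclidean distance after an embedding J : M → R^D (the extrinsic choice, which is the i.i.d. notion this paper extends to the regression setting).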

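To make the three-step recipe in the abstract concrete (embed, regress locally in the ambient space, project back onto the manifold), here is a minimal Python sketch for the simplest manifold response: directions on the unit sphere S² ⊂ R³. Everything here is illustrative rather than the paper's actual estimator; the Gaussian kernel, the hand-picked bandwidth h, and the Nadaraya-Watson smoother are my choices, and I use the fact that projection onto the sphere is just normalization.

import numpy as np

def extrinsic_local_regression(X, Y, x0, h=0.3):
    # X: (n, p) Euclidean predictors; Y: (n, 3) unit vectors on S^2;
    # x0: (p,) query point; h: kernel bandwidth (illustrative choice).
    # Step 1 (embed): S^2 already sits inside R^3, so J is the identity.
    # Step 2 (regress): Nadaraya-Watson smoothing in the ambient R^3.
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    w = w / w.sum()
    m = w @ Y  # weighted average; generally falls off the sphere
    # Step 3 (project): for the sphere, projection onto the manifold
    # is normalization, well defined unless the average is near zero.
    norm = np.linalg.norm(m)
    if norm < 1e-12:
        raise ValueError("projection undefined: ambient estimate near zero")
    return m / norm

# Usage: noisy directions that rotate as the scalar predictor t grows.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=200)[:, None]
angles = np.pi * t[:, 0]
Y = np.column_stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)])
Y = Y + 0.05 * rng.standard_normal(Y.shape)
Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)  # snap noisy points back to S^2
print(extrinsic_local_regression(t, Y, x0=np.array([0.5])))

Swapping in a different response manifold only changes the embedding J and the projection in step 3; the paper's general framework uses an embedding of the manifold and the orthogonal projection onto its image J(M).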