creators_name: Edelman, Shimon
creators_name: Intrator, Nathan
type: preprint
datestamp: 1997-10-17
lastmod: 2011-03-11 08:54:04
metadata_visibility: show
title: Learning as Extraction of Low-Dimensional Representations
subjects: cog-psy
full_text_status: public
abstract: Psychophysical findings accumulated over the past several decades indicate that perceptual tasks such as similarity judgment tend to be performed on a low-dimensional representation of the sensory data. Low dimensionality is especially important for learning, as the number of examples required for attaining a given level of performance grows exponentially with the dimensionality of the underlying representation space. In this chapter, we argue that, whereas many perceptual problems are tractable precisely because their intrinsic dimensionality is low, the raw dimensionality of the sensory data is normally high, and must be reduced by a nontrivial computational process, which, in itself, may involve learning. Following a survey of computational techniques for dimensionality reduction, we show that it is possible to learn a low-dimensional representation that captures the intrinsic low-dimensional nature of certain classes of visual objects, thereby facilitating further learning of tasks involving those objects.
date: 1997
date_type: published
refereed: TRUE
citation: Edelman, Shimon and Intrator, Nathan (1997) Learning as Extraction of Low-Dimensional Representations. [Preprint]
document_url: http://cogprints.org/562/2/199710004.ps
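
Note: the abstract's central technical point is that sensory data of high raw dimensionality can lie near a low-dimensional intrinsic space, and that this low-dimensional representation can be extracted computationally. The sketch below is a minimal, purely illustrative NumPy example of one standard dimensionality-reduction technique (PCA via SVD), not the method developed in the preprint; the sample sizes, dimensions, and variable names are all assumptions chosen for illustration.

```python
# Illustrative sketch only (not the authors' method): synthetic "images" that
# vary along a few latent parameters are embedded in a high-dimensional pixel
# space; PCA recovers the low intrinsic dimensionality of the data.
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes, chosen for illustration.
n_samples, intrinsic_dim, pixel_dim = 500, 3, 100

# Latent parameters (e.g., pose/shape coordinates) and a random linear embedding
# into pixel space, plus a small amount of sensor noise.
latent = rng.normal(size=(n_samples, intrinsic_dim))
embedding = rng.normal(size=(intrinsic_dim, pixel_dim))
images = latent @ embedding + 0.05 * rng.normal(size=(n_samples, pixel_dim))

# PCA via SVD of the centered data matrix.
centered = images - images.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance_explained = singular_values**2 / np.sum(singular_values**2)

# The leading `intrinsic_dim` components carry nearly all of the variance,
# so the 100-dimensional data admit a faithful 3-dimensional representation.
print("variance explained by leading components:",
      np.round(variance_explained[:5], 3))
```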