title: Body Scheme Acquisition by Cross Modal Map Learning among Tactile, Visual, and Proprioceptive Spaces
creator: Yoshikawa, Yuichiro
creator: Kawanishi, Hiroyoshi
creator: Asada, Minoru
creator: Hosoda, Koh
subject: Machine Learning
subject: Artificial Intelligence
subject: Robotics
description: How to represent one's own body is one of the most interesting issues in cognitive developmental robotics, which aims to understand the cognitive developmental processes that an intelligent robot would require and how to realize them in a physical entity. This paper presents a cognitive model of how a robot acquires its own body representation, that is, a body scheme for the body surface. The internal observer assumption makes it difficult for a robot to associate sensory information from different modalities because of the lack of references between them, which are usually given by the designer in the prenatal stage of the robot. Our model is based on cross-modal map learning among joint, vision, and tactile sensor spaces, associating different pairs of sensor values when they are activated simultaneously. We show a preliminary experiment and then discuss how our model can explain reported phenomena concerning the body scheme, along with future issues.
publisher: Lund University Cognitive Studies
contributor: Prince, Christopher G.
contributor: Demiris, Yiannis
contributor: Marom, Yuval
contributor: Kozima, Hideki
contributor: Balkenius, Christian
date: 2002
type: Conference Poster
type: PeerReviewed
format: application/pdf
identifier: http://cogprints.org/3253/1/Yoshikawa.pdf
identifier: Yoshikawa, Yuichiro and Kawanishi, Hiroyoshi and Asada, Minoru and Hosoda, Koh (2002) Body Scheme Acquisition by Cross Modal Map Learning among Tactile, Visual, and Proprioceptive Spaces. [Conference Poster]
relation: http://cogprints.org/3253/
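
note: The core mechanism in the description, associating pairs of sensor values that are activated simultaneously, can be illustrated with a minimal sketch assuming a simple Hebbian co-activation rule. The class name, space dimensions, and learning rate below are hypothetical illustrations; the paper's actual update rule and sensor representations are not specified in this record.

    import numpy as np

    # Minimal sketch (assumed, not the paper's exact method): a cross-modal
    # map is a weight matrix between units of two sensor spaces; a weight
    # grows when the corresponding units are active at the same time.
    class CrossModalMap:
        def __init__(self, n_src, n_dst, lr=0.01):
            self.w = np.zeros((n_dst, n_src))  # association weights
            self.lr = lr                       # learning rate (assumed value)

        def update(self, src_act, dst_act):
            # Hebbian co-activation: strengthen links between units that
            # fire simultaneously in the two modalities.
            self.w += self.lr * np.outer(dst_act, src_act)

        def predict(self, src_act):
            # Map activity in the source space into the destination space.
            return self.w @ src_act

    # Example pairing of tactile and proprioceptive (joint) activations;
    # the dimensions and random patterns are purely illustrative.
    tactile_to_joint = CrossModalMap(n_src=16, n_dst=8)
    touch = np.random.rand(16)   # hypothetical tactile activation pattern
    joints = np.random.rand(8)   # hypothetical joint-sensor activation
    tactile_to_joint.update(touch, joints)
    predicted_joints = tactile_to_joint.predict(touch)

With three such maps (joint-vision, joint-tactile, vision-tactile), simultaneous activation events would gradually bind the modalities together without any designer-given references, which is the role the description assigns to cross-modal map learning.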