creators_name: Hattori, Yuya
creators_name: Kozima, Hideki
creators_name: Komatani, Kazunori
creators_name: Ogata, Tetsuya
creators_name: Okuno, Hiroshi G.
editors_name: Berthouze, Luc
editors_name: Kaplan, Frédéric
editors_name: Kozima, Hideki
editors_name: Yano, Hiroyuki
editors_name: Konczak, Jürgen
editors_name: Metta, Giorgio
editors_name: Nadel, Jacqueline
editors_name: Sandini, Giulio
editors_name: Stojanov, Georgi
editors_name: Balkenius, Christian
type: confposter
datestamp: 2006-07-23
lastmod: 2011-03-11 08:56:31
metadata_visibility: show
title: Robot Gesture Generation from Environmental Sounds Using Inter-modality Mapping
ispublished: pub
subjects: comp-sci-mach-learn
subjects: comp-sci-robot
full_text_status: public
keywords: iconic gesture generation, inter-modal learning, auditory distance, Keepon robot
abstract: We propose a motion-generation model in which a robot infers the sound source of an environmental sound and imitates its motion. Sharing environmental sounds between humans and robots would let them share environmental information, but such sounds are difficult to convey in human-robot communication. We approach this problem by focusing on iconic gestures: the robot estimates the motion of the sound-source object and maps it onto its own motion. This method enables the robot to imitate the motion of the sound source with its body.
date: 2005
date_type: published
volume: 123
publisher: Lund University Cognitive Studies
pagerange: 139-140
refereed: TRUE
citation: Hattori, Yuya and Kozima, Hideki and Komatani, Kazunori and Ogata, Tetsuya and Okuno, Hiroshi G. (2005) Robot Gesture Generation from Environmental Sounds Using Inter-modality Mapping. [Conference Poster]
document_url: http://cogprints.org/4990/1/hattori.pdf
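
As an illustration of the sound-to-motion mapping described in the abstract, the following is a minimal sketch, not the poster's implementation: it maps the short-time energy envelope of an environmental sound, a crude proxy for auditory distance, onto a one-dimensional robot gesture. The sampling rate, frame length, motion range, and the helper names energy_envelope and envelope_to_gesture are all hypothetical choices made for this example.

# Illustrative sketch (assumptions only, not the authors' model): map the
# short-time energy of an environmental sound onto a 1-D robot gesture.
import numpy as np

SAMPLE_RATE = 16_000        # Hz; assumed sampling rate for this example
FRAME_LEN = 512             # samples per analysis frame (assumed)
MOTION_RANGE = (-1.0, 1.0)  # hypothetical normalized joint range

def energy_envelope(signal: np.ndarray, frame_len: int = FRAME_LEN) -> np.ndarray:
    """Short-time RMS energy: a crude stand-in for auditory distance
    (louder frames are treated as a closer sound source)."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def envelope_to_gesture(env: np.ndarray, motion_range=MOTION_RANGE) -> np.ndarray:
    """Linearly map the normalized envelope onto joint positions, so the
    robot 'leans in' as the presumed sound source gets louder/closer."""
    span = env.max() - env.min()
    env = (env - env.min()) / (span + 1e-9)  # normalize to [0, 1]
    lo, hi = motion_range
    return lo + env * (hi - lo)

if __name__ == "__main__":
    # Synthetic 'approaching' sound: a 440 Hz tone whose amplitude grows over 2 s.
    t = np.linspace(0.0, 2.0, 2 * SAMPLE_RATE)
    sound = np.sin(2 * np.pi * 440 * t) * np.linspace(0.1, 1.0, t.size)
    gesture = envelope_to_gesture(energy_envelope(sound))
    print(gesture[:5], "...", gesture[-5:])  # one joint target per frame

Here the loudness-to-position mapping is hard-coded for clarity; the inter-modal learning named in the keywords would instead learn the correspondence between auditory features and body motion from data.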