title: Robot Gesture Generation from Environmental Sounds Using Inter-modality Mapping
creator: Hattori, Yuya
creator: Kozima, Hideki
creator: Komatani, Kazunori
creator: Ogata, Tetsuya
creator: Okuno, Hiroshi G.
subject: Machine Learning
subject: Robotics
description: We propose a motion generation model in which a robot presumes the sound source of an environmental sound and imitates its motion. Sharing environmental sounds enables humans and robots to share environmental information, but such sounds are difficult to convey in human-robot communication. We address this problem by focusing on iconic gestures: the robot presumes the motion of the sound-source object and maps it onto its own motion. This method enables a robot to imitate the motion of a sound source with its body.
publisher: Lund University Cognitive Studies
contributor: Berthouze, Luc
contributor: Kaplan, Frédéric
contributor: Kozima, Hideki
contributor: Yano, Hiroyuki
contributor: Konczak, Jürgen
contributor: Metta, Giorgio
contributor: Nadel, Jacqueline
contributor: Sandini, Giulio
contributor: Stojanov, Georgi
contributor: Balkenius, Christian
date: 2005
type: Conference Poster
type: PeerReviewed
format: application/pdf
identifier: http://cogprints.org/4990/1/hattori.pdf
identifier: Hattori, Yuya and Kozima, Hideki and Komatani, Kazunori and Ogata, Tetsuya and Okuno, Hiroshi G. (2005) Robot Gesture Generation from Environmental Sounds Using Inter-modality Mapping. [Conference Poster]
relation: http://cogprints.org/4990/
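
The description outlines a three-step pipeline: extract features from an environmental sound, presume the motion of its source, and map that motion onto the robot's body. The poster itself does not specify the mapping, so the sketch below is a minimal, hypothetical illustration of the idea; the energy-envelope features, the two motion categories, and the joint-angle trajectories are all assumptions for illustration, not the authors' actual model.

```python
import numpy as np

def extract_features(waveform, sr=16000):
    # Crude energy envelope over 50 ms frames (illustrative feature choice).
    frame = int(0.05 * sr)
    env = np.array([np.abs(waveform[i:i + frame]).mean()
                    for i in range(0, len(waveform) - frame, frame)])
    return env / (env.max() + 1e-9)

def presume_source_motion(envelope):
    # Guess the sound source's motion from the envelope shape:
    # a sharp early onset suggests an impact, sustained energy a moving object.
    if envelope.argmax() < len(envelope) * 0.2:
        return "impact"
    return "continuous"

def generate_gesture(motion, steps=20):
    # Map the presumed source motion to a (steps x 2) trajectory of
    # hypothetical arm joint angles in radians.
    t = np.linspace(0, 1, steps)
    if motion == "impact":
        # Quick strike-and-decay gesture imitating a hit or dropped object.
        return np.vstack([np.pi / 2 * np.exp(-5 * t), np.zeros_like(t)]).T
    # Circular swaying gesture imitating a rolling or moving object.
    return np.vstack([0.5 * np.sin(2 * np.pi * t), 0.5 * np.cos(2 * np.pi * t)]).T

# Usage with a synthetic impact-like sound (decaying noise burst).
sr = 16000
t = np.linspace(0, 1, sr)
clap = np.exp(-30 * t) * np.random.randn(sr)
gesture = generate_gesture(presume_source_motion(extract_features(clap, sr)))
print(gesture.shape)  # (20, 2): 20 time steps x 2 joint angles
```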