title: Visual feedback in motion
creator: Smagt, P. van der
creator: Groen, F.
subject: Animal Cognition
subject: Machine Vision
subject: Neural Nets
subject: Robotics
description: In this chapter we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, with a single camera in its end effector, should be positioned above a stationary target. It is shown that a trajectory can be planned in visual space by using components of the optic flow, and this trajectory can be translated to joint torques by a self-learning neural network. No model of the robot, camera, or environment is used. The method reaches a high grasping accuracy after only a few trials.
publisher: Academic Press, Boston, Massachusetts
contributor: Omidvar, O.
contributor: Smagt, P. van der
date: 1997
type: Book Chapter
type: NonPeerReviewed
format: application/postscript
identifier: http://cogprints.org/493/2/SmaGro97.ps
identifier: Smagt, P. van der and Groen, F. (1997) Visual feedback in motion. [Book Chapter]
relation: http://cogprints.org/493/