After recording the human task execution, temporal task segmentation is carried out to identify task breakpoints. This step facilitates human grasp recognition and object motion extraction for robot execution of the task. This paper describes how an observed human grasp can be mapped to a grasp of a given general-purpose manipulator for task replication.
Planning the manipulator grasp based on the observed human grasp proceeds at two levels: the functional level and the physical level. First, at the functional level, grasp mapping is performed in terms of virtual fingers; a virtual finger is a group of fingers acting against an object surface in a similar manner. Then, at the physical level, the geometric properties of the object and the manipulator are taken into account to fine-tune the manipulator grasp. Our work concentrates on power (enveloping) grasps and fingertip precision grasps. We conclude by showing an example of an entire programming cycle, from human demonstration to robot execution.
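The functional-level mapping above can be illustrated with a minimal sketch. It assumes a simplified contact representation (each observed finger is labeled with the object surface it opposes — the paper's actual grasp features are richer); fingers opposing the same surface are grouped into one virtual finger, and each virtual finger is then assigned to a manipulator finger. The function and variable names here are hypothetical, not from the paper.

```python
from collections import defaultdict

def group_virtual_fingers(contacts):
    """Group observed human fingers into virtual fingers.

    `contacts` maps a finger name to the id of the object surface it
    acts against (a simplifying assumption); fingers acting against the
    same surface in a similar manner form one virtual finger.
    """
    groups = defaultdict(list)
    for finger, surface in contacts.items():
        groups[surface].append(finger)
    # Each group is one virtual finger (a list of real fingers).
    return [sorted(fingers) for fingers in groups.values()]

def map_to_manipulator(virtual_fingers, manipulator_fingers):
    """Functional-level mapping: assign one manipulator finger per
    virtual finger. Physical-level fine-tuning (object/manipulator
    geometry) would follow and is not sketched here.
    """
    if len(manipulator_fingers) < len(virtual_fingers):
        raise ValueError("manipulator has too few fingers for this grasp")
    return {tuple(vf): mf
            for vf, mf in zip(virtual_fingers, manipulator_fingers)}

# Example: a fingertip precision grasp in which the thumb (surface 0)
# opposes the index and middle fingers (surface 1).
contacts = {"thumb": 0, "index": 1, "middle": 1}
vfs = group_virtual_fingers(contacts)
assignment = map_to_manipulator(vfs, ["finger_A", "finger_B"])
```

Here the index and middle fingers collapse into a single virtual finger, so a two-fingered manipulator suffices to replicate the grasp at the functional level.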