Many of the tasks that are potential candidates for automation involve grasping. We are interested in programming robots to perform grasping tasks. To do this, we propose the notion of "perceptual programming," in which a system observes a human performing a task, understands it, and then performs the task with minimal human intervention. This allows the programmer to specify the grasp strategy simply by demonstrating it.

A grasping task is composed of three phases: a pre-grasp phase, a static grasp phase, and a manipulation phase. The first step in recognizing a grasping task is identifying the grasp itself, within the static grasp phase.

We propose to identify the grasp by mapping the low-level hand configuration to increasingly abstract grasp descriptions. To this end, we introduce a grasp representation called the contact web: a pattern of effective contact points between the hand and the object. We also propose a grasp taxonomy based on the contact web that allows a grasp to be identified systematically. Results from grasping experiments show that it is possible to distinguish between various types of grasps.
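As an illustrative sketch only, a contact web can be modeled as a set of effective contact points, each tagged with the hand segment that produces it. The segment names and the simple two-way power/precision split below are our assumptions for illustration, not the taxonomy proposed in the paper:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ContactPoint:
    segment: str                           # hand segment in contact, e.g. "palm", "thumb_tip"
    position: Tuple[float, float, float]   # effective contact location on the object

def classify(web: List[ContactPoint]) -> str:
    """Toy classifier: palm involvement suggests a power grasp,
    fingertip-only contact suggests a precision grasp."""
    segments = {c.segment for c in web}
    if "palm" in segments:
        return "power grasp"
    return "precision grasp"

# Example: a three-fingertip (tripod-like) contact pattern, no palm contact
tripod = [
    ContactPoint("thumb_tip", (0.00, 0.00, 0.0)),
    ContactPoint("index_tip", (0.02, 0.01, 0.0)),
    ContactPoint("middle_tip", (0.02, -0.01, 0.0)),
]
print(classify(tripod))  # precision grasp
```

A real taxonomy would discriminate on the full contact pattern (number, placement, and geometry of the contacts) rather than on a single segment label, but the data structure stays the same: a grasp is summarized by its set of contact points.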