One fundamental problem in imitation learning arises from the fact that embodied agents often have different morphologies; a direct skill transfer from a human to a robot is therefore not possible in the general case. A systematic approach to PbD is needed that takes the capabilities of the robot into account, regarding both perception and body structure. In addition, the robot should be able to learn from experience and improve over time. This raises the question of how to determine the demonstrator's goals or intentions. It is shown that these can, to some degree, be inferred from multiple demonstrations.
This thesis addresses the problem of generating a reach-to-grasp motion that produces the same result as a human demonstration. It is also of interest to learn which parts of a demonstration carry important information about the task.
The major contribution is the investigation of a next-state planner that uses a fuzzy time-modeling approach to reproduce a human demonstration on a robot. It is shown that the proposed planner can generate executable robot trajectories from a generalization of multiple human demonstrations. The notion of hand-states serves as a common motion language between human and robot: it allows the robot to interpret human motions as its own, and it synchronizes reaching with grasping. Other contributions include model-free learning of the human-to-robot mapping, and the use of an imitation metric for reinforcement learning of new robot skills.
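To illustrate the general idea of a next-state planner driven by a fuzzy time model (this is a minimal sketch, not the planner as specified in the thesis; all dimensions, gains, and variable names are hypothetical), Gaussian memberships over normalized time can blend per-phase hand-state targets in a Takagi-Sugeno fashion, and each planner step moves the current hand-state a fraction of the way toward the time-indexed target:

```python
import numpy as np

def fuzzy_time_target(t, centers, widths, targets):
    """Blend per-phase hand-state targets with Gaussian memberships
    over normalized time t in [0, 1] (Takagi-Sugeno-style)."""
    w = np.exp(-((t - centers) ** 2) / (2.0 * widths ** 2))
    w = w / w.sum()          # normalize memberships
    return w @ targets       # weighted hand-state target at time t

def next_state(x, t, centers, widths, targets, gain=0.3):
    """One planner step: move the current hand-state x toward the
    time-indexed target. The gain is a hypothetical tuning constant."""
    return x + gain * (fuzzy_time_target(t, centers, widths, targets) - x)

# Toy setup: three fuzzy rules over normalized time and a 4-D
# hand-state (e.g. wrist position x, y, z plus grasp aperture).
centers = np.array([0.0, 0.5, 1.0])
widths = np.array([0.25, 0.25, 0.25])
targets = np.array([
    [0.0, 0.00, 0.3, 0.08],   # start: hand open, raised
    [0.2, 0.10, 0.1, 0.06],   # approach: closing while reaching
    [0.3, 0.15, 0.0, 0.02],   # grasp: at the object, fingers closed
])

x = targets[0].copy()
for t in np.linspace(0.0, 1.0, 50):
    x = next_state(x, t, centers, widths, targets)
```

Because reaching (wrist position) and grasping (aperture) live in the same hand-state vector, a single time-indexed target couples them, which is one way to read the synchronization of reaching with grasping mentioned above.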
The experimental part of this thesis presents the implementation of PbD of pick-and-place tasks on different robotic hands and grippers. The platforms consist of manipulators and motion-capture devices.