Object-Throwing with a Robot Arm
Dynamic motions such as catching, juggling, hitting, and throwing require accurate motion planning and motor control. In this work, we consider the problem of precisely throwing an object (namely a ball) to a predefined target. Precise throwing is a challenging task that requires solving three sub-problems. First, the release point must be determined with respect to the workspace of the robot and the target position. A promising direction is to estimate a set of trajectories that pass through the desired target position, and then choose one that yields a release point (position and velocity) satisfying the kinematic constraints of the robot. The second challenge is coordinating the hand and arm motions: the arm motion must be coordinated with the hand opening so that the fingers open exactly at the desired release point. This is modeled through a reverse coupling of the arm and finger motions, where the fingers are closed at the beginning of the motion and open at the release point. The third challenge is generating the throwing motion itself. To address it, a dynamical system (DS) must be devised that drives the arm from the initial point to the release point while respecting the robot's constraints. The throwing motion will be implemented and tested on a 7-DOF KUKA LWR robot arm.
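The first sub-problem above can be illustrated with a small sketch. Assuming simple ballistic (point-mass) flight, each choice of flight time T fixes the release velocity needed to carry the ball from a release position p0 to the target pt; a planner would then keep only the candidates compatible with the robot's kinematic limits. The function names and the candidate-generation loop below are illustrative, not part of the project's actual method.

```python
# Hypothetical sketch: candidate release states for a ballistic throw.
# Projectile motion gives  pt = p0 + v*T + 0.5*g*T^2, hence
#   v = (pt - p0 - 0.5*g*T^2) / T   for each chosen flight time T.
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2), z pointing up


def release_velocity(p0, pt, T):
    """Release velocity so a point mass thrown from p0 reaches pt after T s."""
    p0, pt = np.asarray(p0, float), np.asarray(pt, float)
    return (pt - p0 - 0.5 * G * T**2) / T


def candidate_releases(p0, pt, times):
    """One (velocity, flight-time) candidate per chosen T; a real planner
    would discard those violating the arm's velocity/workspace limits."""
    return [(release_velocity(p0, pt, T), T) for T in times]


# Example: throws from 1 m height to a target 2 m away on the floor;
# shorter flight times demand faster (possibly infeasible) releases.
p0, pt = [0.0, 0.0, 1.0], [2.0, 0.0, 0.0]
speeds = [np.linalg.norm(v) for v, _ in candidate_releases(p0, pt, [0.3, 0.5, 0.8])]
```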
Project: Master Project at EPFL
Period: 15.02.2017 – 15.06.2017
Section(s): EL IN MA ME
Type: 10% theory, 70% software, 20% experiments
Knowledge(s): C++, Linux, Robotics
Subject(s): Fast adaptive control, Machine learning
Responsible(s): Seyed Sina Mirrazavi Salehian, Nadia Figueroa
Learning from noisy demonstrations: the role of compliance in the exploration-exploitation trade-off
Nicolas Talabot (MT), Louis Faury
Learning from demonstration provides a powerful framework for enabling a robot to perform desirable tasks. However, real-world demonstrations are prone to noise and other uncertainties, especially when the teacher (i.e., the person providing the demonstrations) can only provide sub-optimal solutions. While such noisy demonstrations can speed up learning at the beginning, it is favorable if the robot can go beyond this sub-optimality and reach the optimal solution for the task. This requires a delicate balance between exploiting the noisy demonstrations and exploring for the optimal solution. As humans, one key element that enables us to exhibit this behavior naturally is our physical compliance. When we start to learn a new task (e.g., learning to dance from a teacher), we stay compliant. This allows the teacher to easily provide us with new demonstrations (in other words, we exploit the teacher). Upon reaching a satisfactory confidence in our performance, we start to reduce our compliance (i.e., neglect the demonstrations) and search for small improvements. This project aims to study this approach from a machine learning point of view.
The student will start by formulating the problem as an RL (reinforcement learning) problem in which, given an initial condition, the learning agent tries to reach a goal state. At each trial, a noisy demonstration is available which, with some probability, provides only a sub-optimal solution. As part of the learning process, the agent learns either to reject this demonstration or to comply with it. We are interested in investigating the hypothesis that the agent prefers to comply with such noisy demonstrations at the beginning of learning, and starts to reject them as it learns to reach the goal on its own. Once the underlying principles of this task are understood, we can move on to more realistic scenarios involving physical human-robot interaction.
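The formulation above can be sketched as a toy example (in Python rather than the Matlab listed below, purely for illustration). A Q-learning agent on a short corridor blends its own greedy action with a noisy, occasionally sub-optimal demonstration; its "compliance" to the teacher decays as it gains experience. The decay schedule, environment, and all names here are illustrative assumptions, not the project's intended formulation, where the compliance itself would be learned rather than scheduled.

```python
# Hypothetical sketch of the exploitation/exploration trade-off:
# early on the agent mostly follows the noisy teacher; later it
# relies on its own learned Q-values.
import random

N, GOAL = 6, 5             # corridor states 0..5, goal at the right end
ACTIONS = [-1, +1]         # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}


def demo_action():
    """Noisy teacher: usually points toward the goal, sometimes not."""
    return +1 if random.random() > 0.2 else -1


def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else -0.01)


def run(episodes=300, alpha=0.5, gamma=0.95):
    for ep in range(episodes):
        compliance = 1.0 / (1.0 + 0.05 * ep)   # decays with experience
        s = 0
        for _ in range(30):
            if random.random() < compliance:
                a = demo_action()                           # follow teacher
            else:
                a = max(ACTIONS, key=lambda b: Q[(s, b)])   # own policy
            s2, r = step(s, a)
            best = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            s = s2
            if s == GOAL:
                break
```

After training, the agent's own value estimates point toward the goal, so it no longer needs the teacher; studying when this transition should happen is exactly the question the project poses.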
Project: Master Project at EPFL
Period: 20.02.2017 – 23.06.2017
Section(s): EL IN MA ME MT MX
Type: 35% theory, 15% software, 50% testing
Knowledge(s): Reinforcement learning, Matlab
Subject(s): Machine learning
Responsible(s): Mahdi Khoramshahi, Laura Cohen