### Source code

The Matlab/Octave programs on this page are free for academic use.

All of the source code provided here is documented in scientific publications, a complete list of which you can find here.

**Please acknowledge the authors and refer to their publications in any work you publish that uses this code. Also let us know if you find bugs, or if you are pleased with the results you obtain. Thank you!**

Source Code by topic:

##### Machine Learning and Density Modeling

##### Robot Control

You can find more of our code in our ROS repository: https://github.com/LASA-ros-pkg/

**Author:** Basilio Noris (2010)

**Instructions:** Unzip the file and compile using the .sln file on Windows, or the Makefile on Linux/Mac.

*MLDemos:*

MLDemos is a graphical interface for the visualization and the study of various algorithms for classification, regression and clustering.

MLDemos has its own website HERE, where you are encouraged to go to get the latest version of the source code, precompiled binaries for Windows and Apple platforms, and more detailed information about how to use it.

**Download:** git clone https://bitbucket.org/khansari/seds SEDS

**Author:** Mohammad Khansari

**Instructions:** Unzip the file and read ‘Readme.txt’ for the instructions.

*Reference:*

#### Learning Stable Non-Linear Dynamical Systems with Gaussian Mixture Models

*IEEE Transactions on Robotics*. 2011. Vol. 27, num. 5, p. 943-957. DOI : 10.1109/TRO.2011.2159412.

*General Scope:*

We consider robot tasks that can be decomposed into sequences of point-to-point motions, i.e. movements in space stopping at a given target. Modeling point-to-point motions thus provides basic components, so-called motion primitives, for robot control. We model these motion primitives based on non-linear time-independent Dynamical Systems (DS). Use of time-independent DS is advantageous in that:

1) It ensures the convergence of all trajectories to the goal from any point in the space.

2) It is inherently robust to external perturbations.

3) It enables the robot to instantly react in the face of perturbations.

All three properties are crucial when modeling robot motions. The first property guarantees task accomplishment from any point in space. The second compensates for uncertainties in the model (e.g., estimation error in the vision system, inaccuracies in the robot’s controller, unexpected changes in the environment). The last enables a robot to safely and robustly perform highly agile tasks (e.g., a tennis swing).
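These properties can be illustrated with a minimal, self-contained sketch (all names and parameter values below are hypothetical, not part of the released code): a linear time-independent DS with a negative-definite gain matrix is integrated toward a target and perturbed mid-trajectory, yet still converges.

```python
import numpy as np

def linear_ds(x, target, A):
    """Time-independent dynamics: the velocity depends only on the current state."""
    return A @ (x - target)

# Hypothetical 2D point-to-point motion toward a target.
target = np.array([1.0, 0.5])
A = -2.0 * np.eye(2)  # negative definite -> globally asymptotically stable

# Integrate with Euler steps; push the state mid-way to mimic an external perturbation.
x = np.array([-1.0, -1.0])
dt = 0.01
for step in range(2000):
    if step == 500:
        x = x + np.array([0.4, -0.3])  # external perturbation
    x = x + dt * linear_ds(x, target, A)

# The trajectory still converges to the target despite the perturbation.
dist = np.linalg.norm(x - target)
```

Because the velocity field depends only on the state (not on time), the perturbed state simply becomes a new initial condition, which is exactly why replanning is unnecessary.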

*Stable Estimator of Dynamical Systems (SEDS):*

Stable Estimator of Dynamical Systems (SEDS) is a powerful method for tackling a key challenge in using DS: guaranteeing stability while accurately reproducing demonstrated motions. SEDS learns the parameters of the DS so that all motions closely follow the demonstrations while ultimately reaching and stopping at the target. More precisely, SEDS is a constrained optimization algorithm that formulates any arbitrary motion as a mixture of Gaussian functions. The objective function of SEDS can be either mean square error or likelihood. The constraints in SEDS guarantee the global asymptotic stability of a non-linear time-independent DS.
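The following sketch illustrates the *form* of the learned model, not the SEDS optimization itself: a mixture of linear systems whose components satisfy a sufficient stability constraint (each A_k + A_kᵀ negative definite). In SEDS these parameters come out of the constrained optimization; here they are hand-picked for illustration.

```python
import numpy as np

def is_stable(As):
    """SEDS-style sufficient stability check (sketch): each A_k + A_k^T
    must be negative definite for asymptotic stability at the target."""
    return all(np.all(np.linalg.eigvalsh(A + A.T) < 0) for A in As)

def seds_velocity(x, target, priors, mus, sigmas, As):
    """GMM-parameterized DS: xdot = sum_k h_k(x) * A_k (x - target),
    where h_k are the Gaussian mixture responsibilities at x."""
    w = []
    for pi_k, mu, S in zip(priors, mus, sigmas):
        d = x - mu
        w.append(pi_k * np.exp(-0.5 * d @ np.linalg.solve(S, d))
                 / np.sqrt(np.linalg.det(2 * np.pi * S)))
    h = np.array(w) / np.sum(w)
    return sum(h_k * (A_k @ (x - target)) for h_k, A_k in zip(h, As))

# Two hypothetical components, both satisfying the constraint.
As = [np.array([[-1.0, 0.5], [-0.5, -1.0]]), -0.5 * np.eye(2)]
priors = [0.5, 0.5]
mus = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
sigmas = [np.eye(2), np.eye(2)]
target = np.zeros(2)

# Any start point converges to the target under these constrained dynamics.
x = np.array([2.0, -1.5])
for _ in range(5000):
    x = x + 0.01 * seds_velocity(x, target, priors, mus, sigmas, As)
```

The convex mixture of stable linear fields shares the Lyapunov function ‖x − x*‖², which is the intuition behind the SEDS constraints.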

**Fig. 1** An example of learning an arbitrary nonlinear function from 3 demonstrations using SEDS.

*To get more details about Dynamical Systems approach please visit here.*

*Last updated on March 24th, 2015.*

**Download:** git clone https://bitbucket.org/khansari/obstacleavoidance ObstacleAvoidance

**Author:** Mohammad Khansari

**Instructions:** Unzip the file and read ‘Readme.txt’ for the instructions.

*Reference:*

#### A Dynamical System Approach to Realtime Obstacle Avoidance

*Autonomous Robots*. 2012. Vol. 32, num. 4, p. 433-454. DOI : 10.1007/s10514-012-9287-y.

*General Scope:*

Obstacle avoidance is a classical problem in robotics and many approaches have been proposed to solve it. The above source code provides a novel approach to real-time obstacle avoidance based on dynamical systems (DS) that ensures impenetrability of multiple convex-shaped objects. In the presented method, we assume that the robot motion is driven by a continuous and differentiable DS in the absence of obstacle(s). This DS is provided by the user, and henceforth we call it the original DS. Given the original DS and an analytical formulation describing the surface of the obstacles, our algorithm can instantly modify the robot’s trajectory to avoid collisions by modulating the original dynamics. The modulation is parameterizable, allowing one to set a safety margin and to increase the robot’s reactiveness in the face of uncertainty in the localization of the obstacle. Our approach has the following main features:

1) It *guarantees* safe collision avoidance.

2) As it only requires the differentiability of the original DS, it can be applied to a large set of DS including locally and globally asymptotically stable DS, autonomous and non-autonomous DS, limit cycles, unstable DS, etc.

3) It does not modify the critical points of the original DS. Thus the attractors of the original DS are also the attractors of the modulated DS.

4) It can be applied to *multiple obstacles*.

5) It can be applied to perform obstacle avoidance in both Cartesian and joint spaces.

If the original DS is modeled with SEDS, the modulated DS is inherently robust to perturbations, and can *instantly* adapt its motion to a dynamically changing environment.
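As a rough illustration of the modulation idea — a hand-written 2D sketch with a single circular obstacle, not the released implementation; the eigenvalue choice below is one simple variant — the original dynamics are multiplied by a state-dependent matrix whose normal eigenvalue vanishes on the obstacle boundary:

```python
import numpy as np

def modulate(xdot, x, center, radius):
    """Sketch of DS modulation around one circular obstacle in 2D.
    Gamma(x) equals 1 on the obstacle boundary and grows with distance;
    the normal eigenvalue vanishes on the boundary (impenetrability)."""
    r = x - center
    dist = np.linalg.norm(r)
    gamma = (dist / radius) ** 2
    n = r / dist                          # outward normal direction
    t = np.array([-n[1], n[0]])           # tangent direction
    E = np.column_stack([n, t])
    D = np.diag([1.0 - 1.0 / gamma,       # damp motion toward the obstacle
                 1.0 + 1.0 / gamma])      # deflect motion around it
    return E @ D @ E.T @ xdot             # E is orthonormal, so E^{-1} = E^T

# Hypothetical setup: linear original DS whose straight path crosses the obstacle.
target = np.array([2.0, 0.0])
center, radius = np.zeros(2), 0.5
x = np.array([-2.0, 0.05])

min_dist = np.inf
for _ in range(8000):
    xdot = target - x                     # original (unmodulated) DS
    x = x + 0.01 * modulate(xdot, x, center, radius)
    min_dist = min(min_dist, np.linalg.norm(x - center))
```

Note that the modulation matrix is full rank away from the boundary, so the attractor of the original DS is preserved, matching property 3 above.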

**Fig. 1** In the experiment presented in this figure, the robot is required to put a glass on the desk in front of the person, while avoiding several objects including a desk lamp, a pile of books, a Wall-E toy, a pencil sharpener, an open book, a (red) glass, and the desk. All the objects except the red glass are fixed, and their convex envelopes are shown in green. The trajectory of the red glass is indicated by red diamonds (for clarity, we do not display the envelope of the red glass).

*To get more details about DS-based obstacle avoidance approach please visit here.*

**Author:** Elena Gribovskaya (2010)

**Instructions:** Unrar the source code and run the file ‘ManipulationPlanning.m’ in Matlab.

*References:*

#### Learning Nonlinear Multi-Variate Motion Dynamics for Real-Time Position and Orientation Control of Robotic Manipulators

2009. 9th IEEE-RAS International Conference on Humanoid Robots.

**Authors:** Micha Hersch and Eric Sauser (2008)

**Instructions:** In order to compile, this code requires some dependencies that are resolved through the use of the iCub software.

*References:*

#### Online learning of the body schema

*International Journal of Humanoid Robotics*. 2008. Vol. 5, num. 2, p. 161-181. DOI : 10.1142/S0219843608001376.

#### Reaching with Multi-Referential Dynamical Systems

*Autonomous Robots*. 2008. Vol. 25, num. 1-2, p. 71-83. DOI : 10.1007/s10514-007-9070-7.

*Important Note:*

This work was part of the E.U. project RobotCub and was applied on the iCub robot. Therefore, in order to get the full source code, we suggest following the installation instructions on the iCub manual page.

*Generalized Inverse Kinematics:*

This specific inverse kinematic solver is part of the iKin library of the iCub software source, and is documented here.

Additional online documentation for this software can be found here.

*Online learning of the body schema:*

Online documentation for this specific software is provided here.

**Authors:** Sylvain Calinon, Micha Hersch (2008)

##### List of source codes available

**Instructions:** Unzip the file and run ‘demo1’, ‘demo2’ or ‘demo3’ in Matlab.

*References:*

- Hersch, M., Guenter, F., Calinon, S. and Billard, A. (2008) **Dynamical System Modulation for Robot Adaptive Learning via Kinesthetic Demonstrations.** IEEE Transactions on Robotics.

#### On Learning, Representing and Generalizing a Task in a Humanoid Robot

*IEEE transactions on systems, man and cybernetics, Part B*. 2007. Vol. 37, num. 2, p. 286-298. DOI : 10.1109/TSMCB.2006.886952.

*Demo1*

Demonstration of the generalization process using Gaussian Mixture Regression (GMR).

The program loads a 3D dataset, trains a Gaussian Mixture Model (GMM), and retrieves a generalized version of the dataset with associated constraints through Gaussian Mixture Regression (GMR). Each datapoint has 3 dimensions, consisting of 1 temporal value and 2 spatial values (e.g., drawing on a 2D Cartesian plane). A sequence of temporal values is used as query points to retrieve a sequence of expected spatial distribution through Gaussian Mixture Regression (GMR).
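The conditioning step behind this demo can be sketched as follows, with hand-picked GMM parameters rather than parameters trained from data (the released code performs the training); given a joint GMM over (t, x), GMR returns the expected spatial values for a temporal query point:

```python
import numpy as np

def gmr(t_query, priors, mus, sigmas):
    """Gaussian Mixture Regression: condition a joint GMM p(t, x) on the
    scalar query t and return the expected x. mus[k] = [mu_t, mu_x...],
    sigmas[k] is partitioned accordingly (1D temporal input here)."""
    conds, w = [], []
    for pi_k, mu, S in zip(priors, mus, sigmas):
        s_tt, s_tx = S[0, 0], S[0, 1:]
        # Responsibility of component k for this query point.
        w.append(pi_k * np.exp(-0.5 * (t_query - mu[0]) ** 2 / s_tt)
                 / np.sqrt(2 * np.pi * s_tt))
        # Conditional mean of x given t within component k.
        conds.append(mu[1:] + s_tx / s_tt * (t_query - mu[0]))
    w = np.array(w) / np.sum(w)
    return sum(w_k * m for w_k, m in zip(w, conds))

# Two hypothetical components: 1 temporal dimension + 2 spatial dimensions.
priors = [0.5, 0.5]
mus = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])]
sigmas = [np.diag([0.02, 0.1, 0.1]), np.diag([0.02, 0.1, 0.1])]
```

Sweeping `t_query` over a sequence of temporal values yields the generalized trajectory; the same conditioning also yields a conditional covariance, which is the "associated constraint" mentioned above.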

*Demo2*

Demonstration of Gaussian Mixture Regression (GMR) using spatial components as query points of arbitrary dimensions.

The program loads a 4D dataset, trains a Gaussian Mixture Model (GMM), and uses query points of 2 dimensions to retrieve a generalized version of the data for the remaining 2 dimensions, with associated constraints, through Gaussian Mixture Regression (GMR). Each datapoint has 4 dimensions, consisting of 2×2 spatial values (e.g., drawing on a 2D Cartesian plane simultaneously with the right and left hand). A new sequence of 2D spatial values (data for the left hand) is loaded and used as query points to retrieve a sequence of expected spatial distributions for the remaining dimensions (data for the right hand) through GMR.

*Demo3*

Demonstration of the smooth transitions properties of data retrieved by Gaussian Mixture Regression (GMR).

This program loads two 3D datasets, trains two separate Gaussian Mixture Models (GMMs), and retrieves a generalized version of the two datasets concatenated in time, with associated constraints, through Gaussian Mixture Regression (GMR). Each datapoint has 3 dimensions, consisting of 1 temporal value and 2 spatial values (e.g., drawing on a 2D Cartesian plane). A sequence of temporal values is used as query points to retrieve a sequence of expected spatial distributions through GMR. The position of the last datapoint in the first dataset is not consistent with the first datapoint of the second dataset. However, by encoding the two datasets in separate GMMs and concatenating the components in a single model, a smooth signal with a smooth transition between the two datasets is retrieved through regression.

*Demo1*

Demonstration of a probabilistic encoding through Gaussian Mixture Model (GMM) in a latent space of motion extracted by Principal Component Analysis (PCA).

This program loads a dataset, finds a latent space of lower dimensionality encapsulating the important characteristics of the motion using Principal Component Analysis (PCA), trains a Gaussian Mixture Model (GMM) using the data projected into this latent space, and projects the Gaussian distributions back into the original data space. Training a GMM with the EM algorithm usually fails to find a good local optimum when the data are high-dimensional. By projecting the original dataset into a latent space as a pre-processing step, GMM training can be performed robustly, and the Gaussian parameters can then be projected back into the original data space.
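A minimal sketch of this pre-processing idea, using synthetic data and a single Gaussian standing in for the full GMM (all variable names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional motion data that really lives on a 2D subspace.
latent = rng.normal(size=(200, 2))
W_true = rng.normal(size=(2, 6))
data = latent @ W_true + 0.01 * rng.normal(size=(200, 6))

# PCA: center the data and keep the leading principal directions.
mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
n_latent = 2
V = Vt[:n_latent].T                      # projection basis (6 x 2)
Z = (data - mean) @ V                    # latent-space data for GMM training

# Gaussian parameters estimated in latent space ...
mu_z = Z.mean(axis=0)
cov_z = np.cov(Z.T)

# ... then projected back to the original data space.
mu_x = mean + V @ mu_z
cov_x = V @ cov_z @ V.T

# Fraction of variance captured by the retained components.
explained = s[:n_latent] ** 2 / np.sum(s ** 2)
```

In the actual demo, each GMM component's mean and covariance is back-projected this way after EM has been run in the low-dimensional space.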

**Instructions:** Unzip the file and run ‘demo1’ in Matlab.

*References:*

- Calinon, S. (2009) **Robot Programming by Demonstration: A Probabilistic Approach**. EPFL/CRC Press.

#### What is the Teacher’s Role in Robot Programming by Demonstration? – Toward Benchmarks for Improved Learning

*Interaction Studies*. 2007. Vol. 8, num. 3, p. 441-464. DOI : 10.1075/is.8.3.08cal.

*Demo1*

Demonstration of the reproduction of a generalized trajectory through Gaussian Mixture Regression (GMR), when considering two independent constraints represented separately in two Gaussian Mixture Models (GMMs). Through regression, a smooth generalized trajectory satisfying the constraints encapsulated in both GMMs is extracted, with associated constraints represented as covariance matrices.

The program loads two datasets, which are encoded separately in two GMMs. GMR is then performed separately on the two datasets, and the resulting Gaussian distributions at each time step are multiplied to find an optimal controller satisfying both constraints, producing a smooth generalized trajectory across the two datasets.
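The per-time-step combination is a product of Gaussians, which is itself Gaussian; a tight constraint (small covariance) dominates a loose one. A minimal sketch with hypothetical values:

```python
import numpy as np

def gaussian_product(mu1, S1, mu2, S2):
    """Combine two constraint estimates (e.g., the two GMR outputs at one
    time step). The product of the two Gaussian densities is Gaussian with
    precision-weighted mean and combined precision."""
    P1, P2 = np.linalg.inv(S1), np.linalg.inv(S2)
    S = np.linalg.inv(P1 + P2)
    mu = S @ (P1 @ mu1 + P2 @ mu2)
    return mu, S

# Hypothetical constraints: one tight (covariance 0.01*I), one loose (I).
mu, S = gaussian_product(np.array([0.0, 0.0]), 0.01 * np.eye(2),
                         np.array([1.0, 1.0]), 1.0 * np.eye(2))
```

Here the combined mean lands very close to the tight constraint's mean, which is exactly the arbitration behavior the demo illustrates.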

**Instructions:** Unzip the file and run ‘demo1’ or ‘demo2’ in Matlab.

*References:*

- Calinon, S. (2009) **Robot Programming by Demonstration: A Probabilistic Approach**. EPFL/CRC Press.

#### Incremental Learning of Gestures by Imitation in a Humanoid Robot

2007. ACM/IEEE International Conference on Human-Robot Interaction (HRI), Arlington, VA, USA, March 9-11. p. 255-262.

*Demo1*

Demonstration of an incremental learning process of Gaussian Mixture Model (GMM) using a **direct update method**.

The demonstration loads a dataset consisting of several trajectories which are presented one-by-one to update the GMM parameters by using an incremental version of the Expectation-Maximization (EM) algorithm (direct update method). The learning mechanism only uses the latest observed trajectory to update the models (no historical data is used).
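The direct-update idea can be sketched for a single Gaussian component via sufficient statistics: the accumulated count, sum, and second moment are enough to update the mean and covariance without storing past trajectories. (The actual demo updates a full GMM, weighting each point by the component responsibilities; this single-component simplification is only illustrative.)

```python
import numpy as np

class IncrementalGaussian:
    """Sufficient-statistics sketch of the direct update for one component:
    past trajectories need not be stored to incorporate new data exactly."""
    def __init__(self, dim):
        self.n = 0
        self.sum_x = np.zeros(dim)
        self.sum_xx = np.zeros((dim, dim))

    def update(self, X):
        # Accumulate count, first moment, and second moment of the new batch.
        self.n += len(X)
        self.sum_x += X.sum(axis=0)
        self.sum_xx += X.T @ X

    @property
    def mean(self):
        return self.sum_x / self.n

    @property
    def cov(self):
        m = self.mean
        return self.sum_xx / self.n - np.outer(m, m)

# Two hypothetical demonstrations, presented one after the other.
rng = np.random.default_rng(2)
traj1 = rng.normal(size=(50, 3))
traj2 = rng.normal(loc=1.0, size=(80, 3))
g = IncrementalGaussian(3)
g.update(traj1)   # first demonstration
g.update(traj2)   # later demonstration; traj1 is no longer needed
```

The incremental estimate matches what batch training on all the data would give, which is the appeal of the direct-update method.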

*Demo2*

Demonstration of an incremental learning process of Gaussian Mixture Model (GMM) using a **generative method**.

The demonstration loads a dataset consisting of several trajectories, which are presented one by one. The GMM parameters are updated by stochastically generating a new dataset from the current model, adding the newly observed trajectory to this dataset, and re-estimating the GMM parameters on the combined dataset through a standard Expectation-Maximization (EM) algorithm (generative method). The learning mechanism only uses the latest observed trajectory to update the model (no historical data is stored).
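A minimal sketch of the generative update, with a single Gaussian standing in for the full GMM and with hypothetical sizes and values:

```python
import numpy as np

rng = np.random.default_rng(1)

def refit_gaussian(X):
    """Stand-in for a full EM refit: a single Gaussian, for brevity."""
    return X.mean(axis=0), np.cov(X.T, bias=True)

# Current model (one component for the sketch; the demo uses a full GMM).
mu, cov = np.array([0.0, 0.0]), np.eye(2)
n_seen = 400                        # how many points the model has absorbed so far

# Generative update: sample a surrogate dataset from the current model,
# append the newly observed trajectory, and refit on the combined data.
new_traj = rng.normal(loc=[2.0, 2.0], scale=0.1, size=(100, 2))
surrogate = rng.multivariate_normal(mu, cov, size=n_seen)
mu, cov = refit_gaussian(np.vstack([surrogate, new_traj]))
```

The surrogate samples play the role of the forgotten historical data, so the updated model drifts toward the new trajectory in proportion to its weight relative to `n_seen`.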

*Demo1*

**Instructions:** Unzip the file and run ‘demo1’ in Matlab.

*References:*

- Calinon, S. (2009) **Robot Programming by Demonstration: A Probabilistic Approach**. EPFL/CRC Press.

#### Teaching a Humanoid Robot to Recognize and Reproduce Social Cues

2006. IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Hatfield, UK, 6-8 September. p. 346-351.

*Demo1*

Demonstration of a cone-plane intersection interpreted in terms of Gaussian Probability Density Function (PDF).

This program computes the intersection between a cone and a plane, represented as a Gaussian Probability Density Function (PDF). The algorithm can be used to probabilistically extract information concerning gazing or pointing direction. Indeed, by representing a visual field as a cone and a table as a plane, the Gaussian distribution can be used to compute the probability that an object on the table is being observed or pointed at by the user.

**Instructions:** Unzip the file and run ‘demo1’ in Matlab.

*References:*

- Calinon, S. (2009) **Robot Programming by Demonstration: A Probabilistic Approach**. EPFL/CRC Press.

#### A Probabilistic Programming by Demonstration Framework Handling Constraints in Joint Space and Task Space

2008. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS), Nice, France, 22-26 September, 2008. DOI : 10.1109/IROS.2008.4650593.

*Demo1*

Demonstration of the use of Gaussian Mixture Regression (GMR) and inverse kinematics to reproduce a task by considering constraints both in joint space and in task space. A two-link arm moving in 2D space is considered. Several demonstrations of a skill are provided, starting from different initial positions. The skill consists of moving each joint sequentially and then writing the alphabet letter ‘N’ at a specific position in the 2D space.

Constraints in joint space and in task space are represented through Gaussian Mixture Models (GMMs) and Gaussian Mixture Regression (GMR). By using an inverse kinematics process based on a pseudo-inverse Jacobian, the constraints in task space are then projected into joint space. By combining the projected constraints with the ones originally encoded in joint space, an optimal controller is found for the reproduction of the task. We see through this example that the system is able to generalize the learned skill to new robotic arms (different link lengths) and to new initial positions of the robot.
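The pseudo-inverse Jacobian step can be sketched for a planar two-link arm (link lengths, gains and target below are hypothetical; the released demo additionally blends the joint-space and task-space constraints):

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm: joint angles -> end-effector."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian d(end-effector)/d(joint angles)."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Iterative IK: project a task-space displacement into joint space with J^+.
q = np.array([0.3, 0.5])
target = np.array([1.2, 0.8])
for _ in range(200):
    err = target - fk(q)
    q = q + np.linalg.pinv(jacobian(q)) @ (0.1 * err)
```

In the demo, the task-space constraint covariances are projected through the same Jacobian, so that tight task-space constraints translate into tight joint-space constraints.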

**Instructions:** Unzip the file and run ‘demo1’ in Matlab.

*References:*

- Hersch, M., Guenter, F., Calinon, S. and Billard, A. (2008) **Dynamical System Modulation for Robot Adaptive Learning via Kinesthetic Demonstrations.** IEEE Transactions on Robotics.

*Demo1*

Demonstration of a trajectory learning system robust to perturbation based on Gaussian Mixture Regression (GMR).

This program first encodes a trajectory represented through time ‘t’, position ‘x’ and velocity ‘dx’ in a joint distribution P(t,x,dx) through Gaussian Mixture Model (GMM) by using Expectation-Maximization (EM) algorithm. Gaussian Mixture Regression (GMR) is then used to estimate P(x,dx|t), which retrieves another GMM refining the joint distribution model of position and velocity.

The learned skill can then be reproduced by combining an estimation of P(dx|x) with an attractor to the demonstrated trajectories.

**Author:** Eric Sauser (2011)

*RobotToolKit:*

RobotToolKit is an open-source robot simulator developed for researchers and robot hackers. RobotToolKit has its own website HERE, where you are encouraged to go to get the latest version of the source code and more detailed information about how to use it.

**Authors:** Eric Sauser, Brenna Argall (2011)

*Reference:*

**Iterative Grasp Adaptation Learning with Tactile Corrections.** E. Sauser, B. Argall, G. Metta, and A. Billard. *Submitted to Robotics and Autonomous Systems, 2011.*

*Learning Grasp Adaptation:*

The source code is part of the iCub code repository, which can be installed following the instructions here.

The full documentation of this application can be found in the contrib section of the iCub code documentation, or, more quickly, here.

**Author:** Eric Sauser (2011)

*iCub tools for Skin technology and more:*

A set of tools for programming the iCub has been developed; it can be used for obtaining a 3D representation of the iCub’s skin, extracting contact locations, and more.

The source code is part of the iCub code repository, which can be installed following the instructions here.

The full documentation of this application can be found in the contrib section of the iCub code documentation, or, more quickly, here.