Design and Assessment of a Robot Curriculum based on the E-puck Robot and Webots

Master project report
Nicolas Heiniger

Fall 2008-2009, IC-SIN
Start: 15.09.2008
Finish: 13.03.2009

Professor: Dario Floreano
Assistant: Adam Klaptocz
External supervisor: Olivier Michel

FACULTE SCIENCES ET TECHNIQUES DE L'INGENIEUR
LABORATORY OF INTELLIGENT SYSTEMS (LIS)
Station 11
CH-1015 LAUSANNE

MASTER PROJECT — FALL 2008 / 2009
Title: Design and Assessment of a Robot Curriculum based on the E-puck Robot and Webots
Candidate: Nicolas Heiniger
Section: Computer science
Professor: Dario Floreano
Assistant 1: Adam Klaptocz
Assistant 2: Olivier Michel
Project Summary

Robotics is a very interesting topic at the crossing of mechanics, electronics, microtechnology and computer science. But learning robotics is not easy and requires prior knowledge. At EPFL the e-puck robot is used as an educational tool. Cyberbotics develops the Webots software, a mobile robot simulator that can simulate the e-puck. To make the learning easier, the writing of a curriculum based on Webots and the e-puck robot was started. The goal of this curriculum is to give theory and exercises to the reader such that he can grow from a beginner to an advanced level in robotics. A secondary goal is to prepare the reader to take part in a robotic benchmark such as Rat's Life.

This master project takes place in this context: the curriculum was already started and had to be finished, tested and widely distributed. These were the three main goals of the project:

- Finish the curriculum
- Publish a first stable version on the web
- Test the exercises through in-class experiments with high-school and/or university students

The first goal was achieved by writing four new exercises for the curriculum. The subjects of the exercises were chosen in order to give clues to the reader for a possible participation in the Rat's Life contest. The topics were the following:

- Odometry: estimate the position of the robot relative to the initial position
- Path planning: move to a goal avoiding known obstacles
- Particle swarm optimization: optimization to introduce unsupervised machine learning
- Simultaneous localization and mapping: create a map of an unknown environment and use it to navigate

The publication of a version of the curriculum on the web was done using a wikibook on wikibooks.org. This solution was chosen for the ease of modification of the document by a community and for the possibility to create a clean PDF version of the wikibook. The existing LyX document has been converted to wiki markup and is now available on http://en.wikibooks.org/Cyberbotics' Robot Curriculum. A PDF version is downloadable on the same page.

All the beginner exercises were tested with high-school students using a graphical programming interface called BotStudio. Those tests were made with two sets of exercises for a total of 64 students. A survey was completed by the students at the end of the exercise session and, using their answers, the exercises were improved.

The results were very satisfying, showing that the exercises were interesting and useful to learn. The figure below shows the answers to the question "Did you learn something thanks to these exercises?" The graph shows that every student learned something and most of them learned a lot.

[Figure: Agreement for question 4 and both sets of exercises]

The curriculum is now included in the Webots distribution (since version 6.1.0). The diffusion of the document will be accelerated and hopefully users will help write new exercises and improve the existing ones. This evolution of the document is possible thanks to its collaborative (wiki) style and to its open source license: anyone can contribute.
Table of Contents

Project proposal
Project summary
Table of Contents
List of Figures
List of Tables
About this document

1 Introduction
  1.1 Context
  1.2 Cyberbotics' Robot Curriculum
    1.2.1 State of the Curriculum
  1.3 Goals of this Project

2 Finish the Curriculum
  2.1 Getting in touch with the document
    2.1.1 Full review
    2.1.2 Choice of the topics
  2.2 New exercises
    2.2.1 Odometry
    2.2.2 Path planning
    2.2.3 Particle swarm optimization
    2.2.4 Simultaneous localization and mapping
  2.3 Discussion of the results

3 Distribute the Curriculum
  3.1 Form of the document
  3.2 Choice of the solution
    3.2.1 Which wiki
  3.3 Transposing from LyX to wikibooks
    3.3.1 javaLatex: how to get a clean PDF from a wikibook
  3.4 Discussion of the results

4 Evaluate the Curriculum
  4.1 What do we want to evaluate?
    4.1.1 Choosing the exercises
  4.2 Get in contact with high-school teachers
  4.3 TP with EPFL students
  4.4 Discussion of the results
    4.4.1 Evaluation and comparison
    4.4.2 Synthesis

5 Conclusion
  5.1 Summary of the results
  5.2 Personal conclusion
  5.3 Further work

Acknowledgments
Bibliography
Appendices
  A Glossary of technologies
  B PDF document generation
  C High-school survey
  D Full results
    D.1 Exercise set 1
    D.2 Exercise set 2
  E Cyberbotics' Robot Curriculum

List of Figures

2.1 World configuration for the exercise on odometry
2.2 World configuration for the exercise on path planning
2.3 World representation for the potential field technique
2.4 World representation in the NF1 algorithm
2.5 Flowchart of the main evolutionary loop of PSO
2.6 4 resulting maps of the simple mapping algorithm
2.7 The resulting map for our simple grid localization algorithm
3.1 Current layout of the SVN repository on sourceforge
3.2 Screenshot of the wikibook's home page
3.3 Process to convert a wikibook to a PDF document
4.1 Interface of BotStudio, on the left the finite state machine, on the right the sensors and actuators
4.2 Results for question 1
4.3 Results for question 2
4.4 Results for question 3
4.5 Results for question 4
4.6 Results for question 5
4.7 Results for question 6
4.8 Results for question 7
4.9 Results for question 8
4.10 Results for question 9
4.11 Results for question 10

List of Tables

3.1 Comparison of three wiki solutions
4.1 List of all introductions to robotics
About this document
The Project
As mentioned on the title page, this project started on the 15th of September 2008 and finished on the 13th of March 2009. It was conducted outside EPFL, at the company Cyberbotics Ltd. During the whole project I was supervised at EPFL by the Laboratory of Intelligent Systems, its professor, Dario Floreano, and an assistant, Adam Klaptocz. In the company I was supervised by the CEO, Olivier Michel, and a collaborator, Fabien Rohrer.
The Document
This document is the final report of my Master's Thesis. It is required as a final diploma work to obtain the title of "EPFL Engineer in Computer Science". It was written using the LaTeX typesetting system, with either the Kile editor under Linux or the TeXnicCenter editor when running Windows.
Chapter 1
Introduction
1.1 Context
Robotics is an attractive topic at the crossing of mechanics, microtechnology and computer science. However, the notions required to build and fully understand a robot are such that robotics is almost only present in master courses. There exist some robotic platforms that can be used to learn the fundamentals of robotics in high-school, see [1] or [2], but they are often limited to simple applications.
The e-puck robot was started at the École Polytechnique Fédérale de Lausanne (EPFL) as a collaborative project between the Autonomous Systems Lab (ASL), the Swarm Intelligence System Group¹ (SWIS) and the Laboratory of Intelligent Systems (LIS) [3]. Its first aim is to be an educational robot at university level. To fulfill this goal, its main feature is a good design that makes it robust and user friendly. This structure allows simple maintenance and flexibility for a reasonable price.
Cyberbotics Ltd. develops Webots, a professional mobile robotics simulation software [4]. This software offers the possibility to prototype, model and simulate mobile robots, thus limiting the amount of time and hardware spent in developing robotic applications. Webots has yet another asset: thanks to its intuitive interface it can be used as an educational tool to learn mobile robotics.
With these two tools in hand, Cyberbotics Ltd. planned to improve the integration of the e-puck in Webots, creating a tight relationship between the two products. The need for a documentation covering the interactions between the e-puck and Webots was addressed by developing a curriculum of exercises (see subsection 1.2). This is the part we were interested in during this project.
Finally, Rat's Life was developed. Rat's Life is a robot programming contest developed by Cyberbotics for the ICEA project (Integrating Cognition, Emotion, Autonomy) [5]. Two rat robots compete in an unknown environment where resources are limited. The rats are actually two e-puck robots and the resource is presented as energy. The robots can reload their energy when finding a source and the one who survives the longest is the winner. The basic knowledge needed to take part in Rat's Life is brought by the curriculum (see subsection 2.1.2).
¹ Now renamed DISAL: Distributed Intelligent Systems and Algorithm Laboratory
1.2 Cyberbotics' Robot Curriculum
A curriculum, generally speaking, is a document that gathers information, exercises or courses about a specific topic. The goal of such a document is to help the reader grow from a beginner to an advanced level in the chosen topic. In education, a curriculum is a set of courses offered at a school or university. In our case the curriculum is aimed at discovering mobile robots with a high level approach. It collects many exercises covering robotics material from a complete beginner level up to a master course level for the advanced exercises. Since Cyberbotics develops Webots and sells the e-puck robot, this pair was chosen as a base to develop this curriculum. In the rest of the report I will refer to the curriculum built by Cyberbotics as "the curriculum".
At the beginning, the curriculum had the goal of teaching users the following topics [6]:

- Introduction to mobile robotics
- Setup of Webots and the e-puck for combined use
- Use of Webots
- Use of the e-puck devices
- Utility of the e-puck devices
- Robot behavior programming
- Rat's Life contest
1.2.1 State of the Curriculum
Before my Master project, another student worked on the curriculum for his Master project [6]. At the end of his project the curriculum was already well advanced. It had the form of a LyX² document and the following was already completed:

- Theoretical part: a quick review of robotics and artificial intelligence with an introduction to Webots and the e-puck
- Getting started: a tutorial on how to set up a running environment with Webots and the e-puck
- Beginner exercises: 9 introductory exercises
- Novice exercises: 8 not so simple exercises
- Intermediate exercises: 6 medium exercises
- Advanced exercises: 1 hard exercise
- Cognitive benchmarks: presentation of some robotic benchmarks, especially the Rat's Life contest [5]
² LyX is a document processor based on LaTeX
1.3 Goals of this Project
Given the state of the project and the goals of the company, three main goals were proposed in the project description:

- Finish the curriculum
  This was the first goal to come up in the discussion when defining the project more precisely. It had to be completed by designing several new exercises in the advanced section.

- Publish a first stable version on the web
  Cyberbotics wants anyone to have access to the curriculum, and the Internet is a great way of improving the diffusion of documents. Another advantage of this solution is the possible feedback of every single user. Thus, a clean PDF file was to be produced and released on the web. Moreover, a wiki solution was also chosen to allow a constant and interactive improvement of the curriculum by any contributor.

- Test the exercises through in-class experiments with high-school and/or grad-school students
  To assess the utility and the usability of the curriculum and to further improve the exercises, several practical work sessions were planned. These had to be evaluated by a survey to gather the opinions and remarks of the students.
The remainder of this report is organized in three chapters corresponding to these goals, chapters 2, 3 and 4. Each of them explains in detail what was done to achieve the goal and shows the results we obtained. We conclude in chapter 5; some appendices are then attached. The most important is the Glossary of technologies in appendix A. It briefly explains all the acronyms and every technology used during this project. You might find it useful during your reading to understand what PSO, the NetBeans IDE or an ANN are.
Chapter 2
Finish the Curriculum
2.1 Getting in touch with the document

2.1.1 Full review
Before starting to write new exercises, a full review of the text was performed. Many small mistakes were spotted and corrected during this review, improving the quality of the whole work. This process was necessary to get a good idea of how the curriculum was written and of how to write an exercise while keeping the same style.
During this review we also found out that the text, the figures and the code would have to be adapted to the new version of Webots which was about to be released. This adaptation was completed as quickly as possible during the beta testing of Webots 6. It implied changes in the code since the API was changed; some screenshots had to be updated too because of the changes in the interface. Currently only the latest version of Webots (version 6) is documented in the curriculum. This is a choice of Cyberbotics, taken to reduce the maintenance work and to encourage Webots users to use the latest version of Webots.
2.1.2 Choice of the topics
The logical progression of the exercises in the curriculum leads to the final part presenting robotic benchmarks and especially the Rat's Life project [5]. Rat's Life is a robotics contest in which two e-pucks, called rats, compete for survival in a maze. To be able to take part in such a contest, several robotic techniques must be known. For Rat's Life a robot controller needs at least to be able to move in an unknown environment, to recognize resources and to deal with the other robot.
Covering all these domains extensively is not realistic, so we focus on navigation with the exercises on odometry, path planning and simultaneous localization and mapping (SLAM). Machine learning is also addressed with a first exercise on supervised learning using an artificial neural network (ANN) and a second on unsupervised learning using particle swarm optimization (PSO). These exercises only provide a basis; for the interested reader we give pointers to different and advanced resources in each exercise. This gives the opportunity to get further knowledge in advanced topics.
2.2 New exercises
Several new exercises were designed for the curriculum. We will give here a description of
each of them and explain why it is important to have it in the curriculum.
2.2.1 Odometry
Odometry is a technique used by some robots to estimate their position relative to a starting location. It can be considered as a building block because it is useful in several advanced techniques. Path planning and SLAM use odometry; it is a necessary basis to understand those algorithms. The exercise itself is made of a theoretical part followed by a practical part in which we give a calibration procedure for the e-puck robot using our code.
The theory on odometry is based on section 5.2.4 of [7]. The computation of the new estimated position p′ is done using the data provided by the actuators: at each step we compute the change in position, p′ = p + ∆p. This can be seen as estimating the trajectory of the robot by small straight segments. The length and direction of a segment is the vector

∆p = (∆x, ∆y, ∆θ)^T

which is expressed using the change of the motor encoder positions, ∆s_l and ∆s_r. The position estimation is done as often as possible to achieve the best fit between the estimation and the real trajectory.
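To make this concrete, the update can be written in a few lines of C. This is only a minimal sketch of the computation described above, not the actual code of the curriculum's odometry module; the function name and the axle length constant are illustrative.

#include <math.h>

#define AXLE_LENGTH 0.053  /* distance between the e-puck wheels [m], approximate */

typedef struct {
  double x, y;   /* estimated position [m] */
  double theta;  /* estimated heading [rad] */
} Pose;

/* dsl, dsr: distances covered by the left and right wheels since the
   last update [m], derived from the motor encoder increments */
void odometry_step(Pose *p, double dsl, double dsr) {
  double ds     = (dsr + dsl) / 2.0;          /* displacement of the robot center */
  double dtheta = (dsr - dsl) / AXLE_LENGTH;  /* change of heading */
  /* approximate the trajectory by a straight segment along the mean heading */
  p->x     += ds * cos(p->theta + dtheta / 2.0);
  p->y     += ds * sin(p->theta + dtheta / 2.0);
  p->theta += dtheta;
}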
The major drawback of this procedure is that the error on the estimated position is additive: the longer the algorithm runs, the bigger the error will be. To limit the error we can calibrate the robot, which allows a better precision in the position estimation. To give a better understanding of what calibration is, a calibration procedure is proposed. The reader is asked to follow the procedure step by step to understand what benefits he can get from a well calibrated robot. The code for position estimation has been written and placed in a module to be easily reusable for the exercises on path planning and SLAM. The code and the calibration procedure have been adapted from an open-source toolbox for the Khepera III robot [8]. The Khepera III is a differential drive robot, like the e-puck. The motor encoders are different and the code had to be adapted because of the difference in the size of the robots, but on the whole the computation of odometry is the same on both robots.
At the end of the exercise a simple application is proposed. In the Webots world a hexagon is drawn on the floor and the goal is to estimate the area of this polygon using odometry. The idea is to move the robot in order to measure the side of the hexagon, t. Then the area is given by the formula A = (3√3/2) t² (for example, a side of t = 0.1 m gives A ≈ 0.026 m²). The world configuration for this exercise is presented in figure 2.1, notice the blue hexagon on the floor.

Figure 2.1: World configuration for the exercise on odometry
2.2.2 Path planning
The path planning problem is to produce a continuous motion that connects a start configuration and a goal configuration while avoiding collisions with known obstacles. The robot and obstacle geometry is described in 2D or 3D, while the motion is represented as a path in configuration space or in a higher dimensional space if needed. Path planning is
an important notion in mobile robotics because it simply endows a robot with the capacity
to move in its environment. The goal of this exercise is to implement and experiment with
two methods of path planning: potential field and NF1.
The theory for both methods was provided by [7] in section 6.2.1 and by the slides of the Mobile Robots course of Jean-Christophe Zufferey at EPFL. Actually the whole exercise was aimed at replacing a TP of the Mobile Robots course, but in the end this was not done; more details are given in section 4.3. The exercise is a modified version of the exercise proposed in the TP E of the course. As such, the code of the controller could be transposed from MATLAB to Webots to keep exactly the same content in the exercise; this was a requirement of Jean-Christophe Zufferey.
Both algorithms run on the same Webots world. This world is presented in figure 2.2. You can see the robot, two circles (red and green) and a blue triangle sketched on the floor. The circles are the start/goal positions and the triangle is the obstacle. The goal of the robot is to move from the red to the green circle while avoiding the obstacle.

Figure 2.2: World configuration for the exercise on path planning
The idea behind the potential field method is to create an attractive goal and repulsive obstacles. The robot is then guided to the goal like a ball rolling down a hill. This can be seen in figure 2.3, where the goal is the red cross and the big black triangle is the obstacle. In practice this is done using the mathematical representation of a field U(x, y) and computing the resulting force at each point as

F(x, y) = −∇U(x, y) = −(∂U/∂x, ∂U/∂y)^T

Figure 2.3: World representation for the potential field technique
In the exercise the reader has to implement the attractive and repulsive force and the
motion control algorithm.
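As an illustration, the force computation could look like the following C sketch. This is our own simplified version (a single point obstacle and illustrative gain values), not the code of the exercise, which works with the triangle obstacle of figure 2.2:

#include <math.h>

#define K_ATT 1.0   /* attractive gain (illustrative) */
#define K_REP 0.05  /* repulsive gain (illustrative) */
#define RHO_0 0.2   /* influence radius of the obstacle [m] (illustrative) */

/* Resulting force at (x, y) for a goal at (gx, gy) and a single point
   obstacle at (ox, oy); the result is returned in (*fx, *fy). */
void resulting_force(double x, double y, double gx, double gy,
                     double ox, double oy, double *fx, double *fy) {
  /* attractive part: F_att = -k_att * (q - q_goal) */
  *fx = -K_ATT * (x - gx);
  *fy = -K_ATT * (y - gy);

  /* repulsive part, active only within the influence radius rho_0 */
  double dx = x - ox, dy = y - oy;
  double rho = sqrt(dx * dx + dy * dy);  /* distance to the obstacle */
  if (rho > 1e-6 && rho <= RHO_0) {
    double c = K_REP * (1.0 / rho - 1.0 / RHO_0) / (rho * rho);
    *fx += c * dx / rho;  /* push away from the obstacle */
    *fy += c * dy / rho;
  }
}

The motion control then simply steers the robot in the direction of F evaluated at its current (odometry) position.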
The NF1 algorithm, also called grassfire, is based on a discretized representation of the workspace. For this exercise a rectangular grid is used, but one could use any kind of grid. Each cell of the grid is initialized with a value called the distance. The goal has distance 0 and the obstacles have distance ∞. The other cells are then initialized recursively as in algorithm 1 by calling Cell_initialize(x0, y0, 0), where (x0, y0) are the coordinates of the goal. After this initialization we obtain a map that is represented in figure 2.4 for the same environment as before. The shades of gray represent the distance to the goal. The control algorithm is the simplest possible: move to a cell with a distance inferior to that of the current cell until the distance to the goal is 0.

Figure 2.4: World representation in the NF1 algorithm
Algorithm 1 Cell_initialize(x, y, dist)
  Grid(x, y) ← dist
  if (x + 1, y) ∈ Grid ∧ Grid(x + 1, y) > dist + 1 ∧ Grid(x + 1, y) ≠ ∞ then
    Cell_initialize(x + 1, y, dist + 1)
  end if
  if (x, y + 1) ∈ Grid ∧ Grid(x, y + 1) > dist + 1 ∧ Grid(x, y + 1) ≠ ∞ then
    Cell_initialize(x, y + 1, dist + 1)
  end if
  if (x − 1, y) ∈ Grid ∧ Grid(x − 1, y) > dist + 1 ∧ Grid(x − 1, y) ≠ ∞ then
    Cell_initialize(x − 1, y, dist + 1)
  end if
  if (x, y − 1) ∈ Grid ∧ Grid(x, y − 1) > dist + 1 ∧ Grid(x, y − 1) ≠ ∞ then
    Cell_initialize(x, y − 1, dist + 1)
  end if
  return
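For large grids the recursion of algorithm 1 can be replaced by an equivalent breadth-first traversal, sketched here in C (the grid size and the encoding of unvisited cells are our own assumptions, not taken from the exercise's code):

#define W 50  /* grid width (illustrative) */
#define H 50  /* grid height (illustrative) */

/* grid is pre-filled by the caller: -2 for obstacles (the "infinite"
   distance), -1 for free cells that have not been visited yet */
int grid[W][H];

void nf1_initialize(int gx, int gy) {  /* (gx, gy): goal cell */
  static int qx[W * H], qy[W * H];     /* FIFO queue of cells to expand */
  int head = 0, tail = 0;
  grid[gx][gy] = 0;                    /* the goal has distance 0 */
  qx[tail] = gx; qy[tail] = gy; tail++;
  while (head < tail) {
    int x = qx[head], y = qy[head]; head++;
    int d = grid[x][y] + 1;            /* distance of the neighbors */
    const int nx[4] = { x + 1, x - 1, x, x };
    const int ny[4] = { y, y, y + 1, y - 1 };
    for (int i = 0; i < 4; i++) {
      if (nx[i] < 0 || nx[i] >= W || ny[i] < 0 || ny[i] >= H)
        continue;                      /* outside the grid */
      if (grid[nx[i]][ny[i]] != -1)
        continue;                      /* obstacle or already initialized */
      grid[nx[i]][ny[i]] = d;
      qx[tail] = nx[i]; qy[tail] = ny[i]; tail++;
    }
  }
}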
In the third phase of the exercise the reader is asked to test both algorithms with the real e-puck using a sample world given in a separate PDF file. Then he has to design a world where the potential field algorithm will fail but NF1 will succeed. Finally he should compare both algorithms based on the whole exercise. The idea is to notice the following points:

- Potential field may lead to local minima whereas NF1 will always return a path (if it exists)
- Potential field is convenient because the motion control algorithm is provided by the force field; NF1 needs a lower-level motion control algorithm to move between the cells
- As all grid-based algorithms, NF1 has to deal with a big memory space if the resolution is fine and with limited precision if the resolution is coarse
2.2.3 Particle swarm optimization
Particle swarm optimization (PSO) is a general optimization technique based on the idea of exploring the search space with a team of particles. In our case we chose to use this algorithm to perform unsupervised learning, to contrast with the exercise on supervised learning. PSO was chosen among optimization techniques (genetic algorithms (GA), simulated annealing and others) for its novelty and for the promising results of the approach.
The key idea of PSO is to explore a search space using a population of particles. Each of these particles represents a candidate solution and is characterized by a position x_i and a velocity vector v_i in the n-dimensional search space. To evaluate the performance of a solution we design a specific fitness function for the problem: f : R^n → [0; 1]. The personal best solution and the global best solution can then be computed; knowing them, we can also update the position and speed of the particles. The main PSO loop is presented in figure 2.5 and the update equations are the following:

v_{i,j} = w · (v_{i,j} + pw · rand() · (x*_{i,j} − x_{i,j}) + nw · rand() · (x*_{i′,j} − x_{i,j}))
x_{i,j} = x_{i,j} + v_{i,j}

where x*_{i,j} and x*_{i′,j} denote the personal and the global best solutions. w, pw and nw are parameters and the rand() function gives a random and uniformly distributed value between 0 and 1. The stop condition can either be based on performance (stop when the solution is better than a given value) or on time (stop after m iterations).

Figure 2.5: Flowchart of the main evolutionary loop of PSO
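Translated into C, one iteration of these update equations for a single particle could be sketched as follows (the dimension, the parameter values and the rand01() helper are illustrative assumptions, not values from the exercise):

#include <stdlib.h>

#define N 16           /* dimension of the search space (illustrative) */
#define W_INERTIA 0.6  /* parameter w (illustrative) */
#define PW 2.0         /* personal weight pw (illustrative) */
#define NW 2.0         /* neighborhood weight nw (illustrative) */

/* uniformly distributed random value between 0 and 1 */
static double rand01(void) { return (double)rand() / RAND_MAX; }

/* One update step for one particle: x and v are its position and
   velocity, xp its personal best and xg the global best solution. */
void pso_update(double x[N], double v[N],
                const double xp[N], const double xg[N]) {
  for (int j = 0; j < N; j++) {
    v[j] = W_INERTIA * (v[j] + PW * rand01() * (xp[j] - x[j])
                             + NW * rand01() * (xg[j] - x[j]));
    x[j] += v[j];
  }
}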
This exercise is based on a research paper by Pugh et al. [9]. The paper presents an experiment where the behavior of robots is optimized using PSO and an artificial neural network to obtain an obstacle avoidance behavior. This paper was chosen because of the simplicity of the goal (obstacle avoidance was introduced in the curriculum at the beginner level) and because of the use of an artificial neural network, which is explained in the exercise on supervised learning. In the exercise we reproduce this experiment in a simplified manner; the goal is thus to use PSO to optimize the weights of an ANN.
The exercise itself starts with a theoretical part which explains the basics of PSO (just what was explained in the paragraphs above). Then an implementation phase asks the user to design his own fitness function to get the desired behavior. The function we want to obtain is the one given in [10].
The test of this function is done in simulation with ten robots representing ten particles in the search space. The number of evolutionary loops and the length of those runs can be easily adapted in the code. Of course this is not optimal, but this solution was chosen to explain the principles without sacrificing clarity and simplicity. A reflection on improvements that can be made to this simple PSO algorithm concludes this exercise and further references are given, see [11, 12, 13].
2.2.4 Simultaneous localization and mapping
The SLAM problem is an open topic in robotics research. Several papers are published each year about new ways of solving this problem. This difficulty is acceptable because this is the last exercise of the curriculum. After this exercise the reader should be able to find information by himself to participate in the Rat's Life contest. The use of localization and mapping is evident in the Rat's Life context: the e-puck should be able to build a map and remember where the sources of energy are in order to survive as long as possible.
The approach used in this exercise is different from the other exercises. This time we don't give a full solution at the end of the exercise. This solution was chosen after a discussion with Olivier Michel and Fabien Rohrer. I had already started to implement a vision based solution to the SLAM problem using the Rat's Life framework when we agreed to shape the exercise as a trial and error process. This has been done this way for one good reason: SLAM is a hard problem and a full solution could be hard to understand even for a good student. In this exercise we put the emphasis on the understanding of the problem, not on the solution.
The exercise is divided in three parts. In the first one we try to create a map using the IR sensors and the odometry of the robot. We use a grid and when a sensor value is above a threshold we declare the corresponding cell as being an obstacle. The limitations of this method are shown and some possible solutions are presented. The resulting maps are shown in figure 2.6. The black points are the obstacles as detected by the robot, the red point is the current robot position. On the top two maps we see the runs of an e-puck which is not well calibrated. The third map shows a run for a calibrated e-puck and the last one presents the same run but after several minutes to an hour of simulation. This shows that the simple method used here is not sufficient, and the rest of this part is directed towards possible solutions for this mapping task.

Figure 2.6: 4 resulting maps of the simple mapping algorithm
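A schematic version of this naive mapping step in C (the grid resolution, threshold and sensor geometry are illustrative assumptions; the code of the exercise differs in its details):

#include <math.h>

#define MAP_W 100           /* map width in cells (illustrative) */
#define MAP_H 100           /* map height in cells (illustrative) */
#define CELL_SIZE 0.01      /* cell side [m] (illustrative) */
#define IR_THRESHOLD 300    /* raw IR value above which we assume an obstacle */
#define OBSTACLE_DIST 0.04  /* assumed obstacle distance from the robot center [m] */

unsigned char map_grid[MAP_W][MAP_H];  /* 0 = unknown/free, 1 = obstacle */

/* (x, y, theta): odometry pose of the robot; sensor_angle: orientation
   of one IR sensor relative to the heading; value: raw sensor reading */
void map_update(double x, double y, double theta,
                double sensor_angle, int value) {
  if (value < IR_THRESHOLD)
    return;  /* nothing detected by this sensor */
  /* world coordinates of the presumed obstacle */
  double ox = x + OBSTACLE_DIST * cos(theta + sensor_angle);
  double oy = y + OBSTACLE_DIST * sin(theta + sensor_angle);
  int cx = (int)(ox / CELL_SIZE);
  int cy = (int)(oy / CELL_SIZE);
  if (cx >= 0 && cx < MAP_W && cy >= 0 && cy < MAP_H)
    map_grid[cx][cy] = 1;  /* declare the corresponding cell an obstacle */
}

Since the pose comes from odometry, the accumulated error directly distorts the map, which is exactly what the top two maps of figure 2.6 show.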
The second part is about localization: we provide a map to the robot and the robot has to localize itself on the map. We used a grid localization algorithm; the idea is to compute the most likely position given the sensor measurements. This probability is called the belief and is computed for each cell of a grid at each iteration. More details are given in the exercise itself, but you can see the result in figure 2.7. You see the robot in its Webots world (in green). On the minimap the light-grey lines are the given representation of the environment; in shades of red you have, for each cell, the probability that the robot is located there at the current time, the belief; and the cyan point is the most probable position. The grey area next to the map is just a fill color because we use the same display for the first part of the exercise with a bigger map.

Figure 2.7: The resulting map for our simple grid localization algorithm
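The heart of such a grid localization is the measurement update of the belief: each cell's probability is multiplied by the likelihood of the current sensor readings given that the robot is in that cell, and the grid is then normalized. A schematic C version under our own assumptions (the likelihood model used in the exercise is more involved):

#define GRID_W 50  /* illustrative grid dimensions */
#define GRID_H 50

double belief[GRID_W][GRID_H];  /* probability of being in each cell */

/* p(z | robot in cell (x, y)) for the current measurement z, computed
   from the given map; defining it is the core of the exercise */
extern double likelihood(int x, int y);

void belief_update(void) {
  double sum = 0.0;
  for (int x = 0; x < GRID_W; x++)
    for (int y = 0; y < GRID_H; y++) {
      belief[x][y] *= likelihood(x, y);  /* Bayes rule: prior times likelihood */
      sum += belief[x][y];
    }
  if (sum > 0.0)  /* normalize so the beliefs sum to 1 again */
    for (int x = 0; x < GRID_W; x++)
      for (int y = 0; y < GRID_H; y++)
        belief[x][y] /= sum;
}

The cell with the highest belief is then taken as the most probable position, the cyan point of figure 2.7.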
The last part is a list of the key challenges in SLAM: we present what has to be overcome in order to have a SLAM algorithm, and some pointers to further resources are given. The following papers can be cited [14, 15, 16, 17], but also the book of Siegwart [7] and the slides of the Mobile Robots course.
2.3 Discussion of the results
The exercises presented in this chapter are a result in themselves. The goal was to finish the curriculum, i.e. to present a first "stable" version of the text. The exercises were a way of completing this goal and now the curriculum is a full document (shown in appendix E): the exercises are here, written and coded. This work was done according to the objectives and guidelines fixed in chapter 2 of [6]; the objectives given in the project description were identical but summarized. Thus I claim that the curriculum is now finished and that this goal is fulfilled.
However it does not stop here: this document is not meant to be frozen in its current state. As you will see in chapter 3, the text is now available on wikibooks.org and this allows anyone to contribute and improve the document. Chapter 4 will present tests that were performed in order to assess the content and the utility of the exercises.
Chapter 3
Distribute the Curriculum
As stated in section 1.3, the second goal of this project was to publish a first version of the curriculum on the Internet. This chapter explains how this task was completed and discusses the reasons for this choice.
3.1 Form of the document
In the project description the form of the document was explicitly stated as "a clean PDF version and an interactive wiki version". The PDF version will be included in further releases of Webots as a part of the documentation. This is why it needs to be as clean as possible. This requirement was already met by the LyX version of the curriculum, which produced a good quality print version, but as we will see later this possibility was not chosen.
We had a clean print version, so why add this wiki requirement? Cyberbotics wants this document to be useful to as many people as possible. We could have proposed the curriculum as a succession of HTML pages or simply offered the PDF version as a download on the Cyberbotics website. However, a wiki has some advantages over a website or a PDF version:
- Anyone on the Internet can contribute and improve the content of the curriculum on a wiki
- The syntax is simple and does not require any knowledge of LyX, LaTeX or HTML
- The older versions of a page are stored and a rollback to a previous version can be performed if needed
- The update process is simple and can be done online; any user can then see the modifications immediately without downloading a heavy PDF file
- Several open source wiki software packages exist and can be customized to the needs of the user. Some of them can even be used without bothering about the hosting (especially those of the Wikimedia foundation)
There are also drawbacks of course, but they are less important and some of them are already solved:

- Anyone can edit means that the wiki is open to vandalism
  Actually this is not completely true: one can give the editing rights only to registered users and selectively accept registrations. One could also just revert the pages to their last clean version.
- Internet access is required to read the content
  This drawback is not a problem for us since we have both versions, PDF and wiki.
- The flexibility of the structure can quickly become disorganized
  Here the argument is basically the same as for vandalism: this can be limited by moderators and/or community work.
Besides the document itself, we had to decide how to distribute the files to the potential users. To deal with the updates during the development process and for the maintenance, a version control system was necessary. Instead of postponing this decision to the end of the project and the distribution phase, one of my first tasks was to set up such an architecture. Since all the files and the documentation are released under the GNU Free Documentation License¹ (GFDL), we decided to create a project on http://sourceforge.net. With this solution we have access to a free Subversion (SVN) repository to manage the source code. As a start, the whole LyX document with its source code was uploaded to this repository with the code of the existing exercises. Then the new exercises were added step by step. Now anyone can download the files at this address:

http://robotcurriculum.svn.sourceforge.net/svnroot/robotcurriculum

In addition to the repository, all the files can be found in the Webots installation since version 6.1.0. They are located here:

(WEBOTS_HOME)/projects/samples/curriculum
The layout of the repository is shown in figure 3.1. We have the doc folder with additional files needed for some exercises, the misc folder with the javaLatex program (see section 3.3.1) and the project folder with all the Webots files.

Figure 3.1: Current layout of the SVN repository on sourceforge
¹ More information and the full text can be found at http://www.gnu.org/copyleft/fdl.html
3.2 Choice of the solution
Since we needed a PDF and a wiki version, we had to find a way of synchronizing them. We have at least 3 possibilities:

- Edit the content twice (once in the wiki and once in the PDF version)
- Find an automated way to convert the LyX version into wiki formatting
- Find an automated way to convert the wiki version into a clean PDF

The first possibility was immediately abandoned because it would need twice as much work as the other ones and increase the redundancy of the information, which is not desirable. Both of the other solutions were possible, but we chose to convert the wiki to a PDF version. The reason is that the wiki is editable by anyone whereas the LyX or LaTeX file is not. This argument was the most important, and even if we had to convert the LyX file into wiki pages manually it was our final decision. The process of converting the wiki pages into a single PDF file is explained in section 3.3.1.
3.2.1 Which wiki
After choosing the wiki solution, we now have to decide which wiki we will use. We consider three possible solutions:

- Customized wiki hosted on Cyberbotics' server
- Sourceforge built-in wiki
- Create a new book on Wikibooks

We will now compare the wikis based on the following criteria:

PDF: Since we need a clean PDF output this was an important criterion
Community: If a community already exists it could improve the diffusion of the document and help in its evolution
Setup & maintenance: This is the simplicity of the setup and configuration; the maintenance cost should also be minimal
Backup: We want to be sure to keep the data in case of a failure; we need to be able to back up everything
Control: Cyberbotics needs a kind of control over what is done with this document since it will be included in its own documentation
As you can see in table 3.1, the choice is not so easy. In particular the question of the control over the text is hard. If we put the text on a public wiki such as Wikibooks, there is a risk for the text to become unusable by Cyberbotics after several user modifications. This is the main advantage of the custom wiki: it is hosted on Cyberbotics' server and fully controlled. It can also be backed up very easily; depending on the wiki you can either copy the files or dump the database.
Table 3.1: Comparison of three wiki solutions

                       Custom   Sourceforge   Wikibooks
PDF                      -          -            ++
Community                -         +/-           +
Setup & maintenance      -          +             +
Backup                  ++          +             +
Control                 ++          +             -
The solutions on sourceforge and Wikibooks are similar: the setup is easy, there is no cost, you just start the wiki. Maintenance is also simple since the wiki updates are done by the community. The differences are in the PDF output, which is simple on Wikibooks, and in the control: the wiki on sourceforge is controlled by the project leaders.
The choice was based upon these criteria. The custom wiki solution was dropped because of the setup and maintenance aspects. Then the choice between sourceforge and Wikibooks was based upon the PDF output, which was already possible via some scripts programmed by the wikibook community. The control over the text was the main issue, but the reasoning was that even in the worst case (the text becoming really unusable by Cyberbotics because of the community of users) we could retrieve an older version and export it to a custom wiki if needed. The choice was also based on our personal impressions of each possibility. We have done some tests on sourceforge and Wikibooks, and we finally decided on Wikibooks.
3.3 Transposing from LyX to wikibooks
Once the solution was chosen, the next step was to come up with a structured wikibook which would be close to the original LyX version. The process is not complex but it deserves some explanation. The simple way of doing it is to copy/paste the text from LyX to the editing text area on wikibooks. We obtain a text without images or tables and no formatting. We then need to structure the text with wiki markup. The syntax is simple; as an example

= Header 1 =

would produce a header of level 1 and

* item 1
* item 2
* item 3

would create an unnumbered list. The whole syntax can be learned on the different help pages of the wikibooks website.
The tables only have to be reformatted to correspond to the wikibooks standard, but for the images the solution is different. First we must check that the image is legally usable for our purpose (under a suitable license or in the public domain). Then this image has to be uploaded to wikibooks or to wikimedia commons. Only then can we include the wiki markup to display the image. This forced us to find new images to be sure of their source. The pictures created by Cyberbotics were uploaded and double licensed under the Creative Commons Attribution-Share Alike 3.0 Unported: CC-BY-SA 3.0² and the GNU Free Documentation License: GFDL³.

² See this website for more information: http://creativecommons.org/licenses/by-sa/3.0/
³ See this website for more information: http://www.gnu.org/copyleft/fdl.html
The wikibook is now completed and it can be accessed here [18]:

http://en.wikibooks.org/wiki/Cyberbotics'_Robot_Curriculum

The structure is simple: there is a main page with some explanation about the curriculum and a table of contents (TOC). Then one can follow the links to get to the pages. Each new page is a chapter of the curriculum. This will be useful when converting to the PDF, as each page of the TOC is a LaTeX chapter. Figure 3.2 gives a visual preview of the wikibook's home page. To access the source code of the exercises the user can either get it from the Webots installation (since Webots 6.1.0) or download it from the SVN repository for an up-to-date version of the code. This is explained in a chapter of the curriculum (Getting started).

Figure 3.2: Screenshot of the wikibook's home page
3.3.1 javaLatex: how to get a clean PDF from a wikibook
As explained before, the PDF output is important for Cyberbotics because it will be included in the official Webots documentation. The wikibooks solution was chosen partly because of the possibility to generate a PDF easily. To do that, there exists more than one script/program converting the wiki markup to OpenOffice documents, to HTML pages or to LaTeX source code. We wanted the cleanest PDF possible, so we chose a tool performing a conversion to LaTeX: javaLatex [19].
javaLatex is a Java program released under the MIT license, which permits us to use and modify the program for our needs. Its purpose is to fetch all the chapters and pictures of a wikibook from the Internet and to convert them to several LaTeX files. One can then compile the generated code to obtain a single PDF file. This operation is detailed in appendix B. An overview of the process is shown in figure 3.3. We start from the wikibook, downloading all necessary text and pictures from the Internet and converting the text to LaTeX source code. This operation is done by javaLatex. Then we compile the source code to obtain the PDF output; this can be done with any LaTeX distribution. At this step we can finish the work by uploading the PDF file as a new version of the file on wikibooks, but this is not mandatory.

Figure 3.3: Process to convert a wikibook to a PDF document
3.4 Discussion of the results
The wikibook is now complete and can be used directly from the wikibooks website [18].
We have shown how to generate a PDF version of the document. This version is already
available as a download on the wikibook’s main page. This PDF shall be updated when
the text is modified. We claim that the goal of publishing an interactive version is achieved
by the wiki. The clean PDF version which was required is on the same page and meets
the requirements of the project description. Therefore the objectives are fulfilled.
To give some details, we would like to say that the curriculum is already read by some users. During the project we received e-mails from people interested in the e-pucks and asking for documentation on how to use them with Webots. We directed them to the SVN repository on sourceforge. We also had some people who had found the curriculum and were searching for the files. This interest shows one thing: the curriculum answers a demand. Although it was only in the development stage, people were already interested in using it and we had only positive feedback.
Chapter 4
Evaluate the Curriculum
This chapter deals with the test of the curriculum. To be more precise, we will explain how we tested the beginner exercises with high-school students. We will show the survey which served to gather the feedback of the students, analyze the results and draw conclusions.
4.1 What do we want to evaluate?
The main question is in the title, "What do we want to evaluate?" or "What do we want to know?". It is important for us to know what we are searching for before beginning the research. Ideally we would like to know if the curriculum is usable to learn robotics, but this can't be answered unless someone who does not know anything about robotics learns with the help of the curriculum. Thus, instead of focusing on the whole curriculum, we narrow the question to the exercises. We will try to find out if the exercises are useful (in learning robotics), if they are interesting, if the difficulty level is adapted to the students, and so on.
To be able to answer the questions about the exercises we designed a survey which was distributed to each student at the end of an exercise session. This survey is included in appendix C. We will take the questions one by one to explain their purpose. The first ten questions are multiple choice questions. The student has to tell whether he agrees or not with the statement on a scale from 1 to 6.
1. The instructions were easy to understand.
This question is to be sure that the exercises were understandable. If the result is 6 it means that the instructions were easily understandable. Note that it does not mean the exercises were easy.

2. The exercises were interesting.
More than useful, we would like to provide interesting exercises; this is the sense of this question.

3. I enjoyed the exercises.
Once again, teaching is not limited to learning a subject. We believe that a student completing exercises with pleasure will be far more efficient than another doing it just by necessity.

4. I have learned something thanks to these exercises.
This is the first question about learning. With this one we would like to know if the student learned something (in a general sense).

5. The exercises gave me a better understanding of the robots.
This time we narrow the question to the robotics topic. This is a crucial point because we want a reader of the curriculum to learn something about robotics, not only in a general sense.

6. I had enough time to complete every exercise.
This is a teaching question to know if the length of the exercises is adapted to the available time.

7. Working with the real robot was more interesting than with the simulation.
With this question we want to know what the contribution of the real robots is in the process. Do we really need them?

8. Using Webots was easy.
This question is for Cyberbotics: is the interface easy to understand, is it possible to use the software with only limited explanation?

9. This introduction made me feel like learning more about robotics (program a robot in C for instance).
Did we manage to create an interest in robotics with a short introduction?

10. On the whole the exercises were...
For this question the student had to grade the difficulty of the exercises between 1 (much too easy) and 6 (far too hard). This is to adapt the difficulty of the exercises to the level of the students.
After these multiple choice questions, 4 more general questions are asked. These questions are intended to give indications on what was good and what should be improved in the exercises. This is also the place where the students can express themselves freely.

11. Which was the best exercise? And why?
12. Which was the worst exercise? And why?
13. What did you like in this introduction to robotics?
14. What should be improved? And how?

Thanks to this survey, improvements of the exercises will be possible, but a significant part of the learning process is provided by the teacher. That's why in-class experiments are valuable. They are a good help to corroborate information gathered with the survey and to gather new information that could not be seen otherwise.
Figure 4.1: Interface of BotStudio, on the left the finite state machine, on the right the sensors and actuators
4.1.1 Choosing the exercises
Two sets of exercises were designed to test all of the exercises that don't require C programming, because it is unrealistic to teach C programming and robotics to high-school students in a couple of hours. This is why the students used the graphical programming interface BotStudio (see figure 4.1). BotStudio is a simpler way to program a robot controller for the e-puck. It is based on a finite state machine approach that is understandable even for someone who has no idea of programming. In BotStudio the following devices can be used: distance sensors, light sensors, LEDs, motors, linear camera and accelerometer.
The first session was designed carefully to be feasible in the given time. The remaining exercises were gathered to form the second session. This caused the two sets to be unbalanced: the second was harder and required more time. The exercises were directly taken from the curriculum and translated into French; there was no other adaptation of the exercises. To give the basic information about the e-puck and Webots, we presented a slideshow at the beginning of the exercise session. In this presentation some information on the robot was provided and the basic usage of BotStudio was explained too.
The exercises in the first set are the following:
- Simple behavior: Finite State Machine (FSM) [Beginner]
- The blinking e-puck [Beginner]
- Line following [Beginner]
- Rally [Beginner] [Challenge]
The second set contains the following exercises:

- Robot Controller [Beginner]
- Move your e-puck [Beginner]
- *E-puck Dance* [Beginner]
- *A train of e-pucks* [Novice]
- Remain in Shadow [Novice]

You can find the full text of the exercises either in appendix E or in [18].
Table 4.1: List of all introductions to robotics

Date            School                                   # of students   Set #
7th nov. 2008   Gymnase de Chamblandes (Pully)           12              1
8th dec. 2008   Lycée Blaise-Cendrars (Chaux-de-Fonds)   11              1
15th dec. 2008  Lycée Blaise-Cendrars (Chaux-de-Fonds)   8               1
15th jan. 2009  Gymnase cantonal (Porrentruy)            9               2
19th jan. 2009  Gymnase des Alpes (Bienne)               13              2
23rd jan. 2009  Gymnase cantonal (Porrentruy)            11              2
2nd feb. 2009   Gymnase des Alpes (Bienne)                               2
Total:                                                   64
4.2 Get in contact with high-school teachers
This section could be named "Find experiment subjects" because that was the reality! We had to find class teachers who were interested in spending 3-4 class hours on an introduction to robotics. We already had some contacts with my former teachers, but this would only account for around 24 students. We needed more than that.
The solution came a bit by chance: a new computer science option is being built up in the Swiss-French high-schools. The teachers for this option have a class here at EPFL. The professor responsible for this course is M. Petitpierre, and he let us present this project to the teachers. We have had some contacts thanks to this presentation.
Table 4.1 lists the high-school practical sessions that were finally conducted. There were two different sets of exercises, named 1 and 2. This way more exercises can be tested, but the number of students testing each exercise is divided by 2. This drawback is not a real problem: the goal is not to make a statistical study but to get some feedback and improve upon it.
In the end those 7 exercise sessions were done. More could have been done, there was a demand, but around 60 students was enough for our goal. During the project and after planning the 7 sessions we received e-mails from interested teachers for at least 8 more classes. Unfortunately we had to turn down their requests because the planned sessions were enough for our purpose.
4.3 TP with EPFL students
An exercise session was planned with a class of EPFL master students within the "Mobile Robots" course of Jean-Christophe Zufferey. To stay exactly in the field of the course, I took the existing exercise using MATLAB and the ePic toolbox and adapted it for Webots. The text and the questions were just adapted to have exactly the same exercise. This is now the exercise on path planning with NF1 and potential fields. The day before the session, the decision was taken not to use the Webots exercise because of the lack of display. The argument was that the students could not visualize what they were programming in real-time. To be honest, I am disappointed to have missed this opportunity of testing an exercise with students of EPFL. However, the exercise on path planning would not exist without this aborted TP, so on the whole it is an enhancement to the curriculum.
4.4 Discussion of the results
The complete results of the survey with all comments of the students can be found in appendix D. The results for the first ten questions asked during the two sets of exercises will be analyzed concurrently to allow an easy comparison between the two sets. Finally we will examine the most frequent remarks and explain which improvements were made thanks to them.
In all this part you will see many graphs. Each graph represents the answers of the students to one question. The numbering of the questions is the same as presented in section 4.1. On the X axis is the agreement (or difficulty for question 10), which is a mark between 1 and 6. On the Y axis is the number of students who answered with that particular mark. We will refer to the mean value and the standard deviation; both can be found in appendix D.
4.4.1 Evaluation and comparison
For each question from 1 to 10 we analyze two graphs containing the answers to the question for both sets of exercises. The figure presenting those two graphs is given at the beginning of the text as (Fig. #). For example, (Fig. 4.2) means that the results of the current question for the two sets of exercises are presented in figure 4.2.
Question 1: The instructions were easy to understand
(Fig. 4.2) Looking at the graphs we could think that the exercises are easier to understand in the first set. Actually this is true, but not because of the writing or the style. This is just a question of difficulty: the problems are more complex to solve and to understand in themselves (see question 10). In the two exercises of novice level the reader is asked to build something on his own. There are fewer guidelines or hints, which means that it may be harder to fully grasp the goal of the exercise.
On the whole the results are still very satisfying; the mean values are 5.16 and 4.64, respectively. We have there a good starting point: the exercises are clear and understandable. What can be observed in the classes is that the high-school students are not used to working alone with a document. It is hard for them to read the text carefully and to find the solutions in the document. Their reaction is to ask questions as soon as they don't understand something. Thus learning with the curriculum is possible but requires the presence of a teacher.
Figure 4.2: Results for question 1 (a: exercise set 1, b: exercise set 2)
Question 2: The exercises were interesting
(Fig. 4.3) This question is purely subjective, but nonetheless the results are good (mean values of 5.23 and 4.82). We were expecting this result because none of the students had worked with a real robot before the exercises. This novelty ensures their interest and we believe that the exercises are well designed to keep this interest intact. It might even increase their eagerness to learn about robotics (see question 9). By computing the correlation between question 2 and question 3 we find high values (0.61 and 0.71). This is perfectly normal: an interesting exercise is more pleasant to complete than a boring one.
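The correlations quoted in this chapter are Pearson coefficients computed over the per-student answers. The report does not say which tool was used to compute them; the following C sketch is only meant to make the definition explicit:

#include <math.h>

/* Pearson correlation between two series of answers a and b of length n */
double pearson(const double *a, const double *b, int n) {
  double ma = 0.0, mb = 0.0;
  for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
  ma /= n; mb /= n;  /* mean of each series */
  double cov = 0.0, va = 0.0, vb = 0.0;
  for (int i = 0; i < n; i++) {
    cov += (a[i] - ma) * (b[i] - mb);  /* unnormalized covariance */
    va  += (a[i] - ma) * (a[i] - ma);  /* unnormalized variances */
    vb  += (b[i] - mb) * (b[i] - mb);
  }
  return cov / sqrt(va * vb);          /* value between -1 and 1 */
}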
Question 3: I enjoyed the exercises
(Fig. 4.4) The question of the pleasure felt while doing an exercise is even more subjective than the interest of an exercise. However, this notion of enjoying the exercises is important in the learning process. The results are more than good once again, with mean values of 5.35 and 4.97 respectively, and we have a low dispersion (the standard deviations are 0.86 and 0.94). The most important result for me with this question is that out of the 64 students only 3 did not enjoy the exercises (answer 1, 2 or 3). So even if they didn't consider the exercises interesting, they had a good time doing them and I am convinced that this pleasure has a positive influence on their learning. If we look at the correlations between question 3 and the learning questions 4 and 5, we observe a medium correlation between 3 and 4 and between 3 and 5 (between 0.39 and 0.58). Though not a proof, this is an indication that my conviction is correct: pleasure is a positive factor in the learning process.
Figure 4.3: Results for question 2 ((a) exercise set 1, (b) exercise set 2)
Figure 4.4: Results for question 3 ((a) exercise set 1, (b) exercise set 2)
Figure 4.5: Results for question 4 ((a) exercise set 1, (b) exercise set 2)
Question 4: I have learned something thanks to these exercises
(Fig. 4.5) This question and the following one are closely related, as can be seen in the
correlation between them, which is 0.66. This is expected because the present question
contains the next one: if you learned something about robotics, you must have learned
something in general. However, as we will see, the converse is not true. The average
values for this question are 5.19 and 4.88, so the students learned something in a general
sense. This is already a good result but it is not sufficient: the curriculum is about
robotics and the students should above all acquire knowledge on this topic. Question 5 is
more precise and will shed more light on this aspect.
An interesting difference between the two exercise sets is that the exercises of the first set
were globally easier (see question 10), yet the students found that they learned more in a
general sense. This can be explained by the content of the first set, which contains
everything needed to get started with the curriculum. Much of this information is not
directly related to robotics but is nonetheless seen as new knowledge by the students.
This information is less present in the second set of exercises.
Question 5: The exercises gave me a better understanding of the robots
(Fig. 4.6) This question is maybe the most important of all: if the students can learn
something about robots using the curriculum, then the main goal of the curriculum is
fulfilled. First note that the results are very similar in both sets, the mean values (4.74
and 4.79) and the standard deviations (1.34 and 1.2) being almost equal. One can also
notice that the students gave lower marks to this affirmation. However, these results are
sufficient, since only one student thinks that he did not learn anything; this means that
all the others have learned something about the robots.
Figure 4.6: Results for question 5 ((a) exercise set 1, (b) exercise set 2)
This result alone would be sufficient, but moreover the large majority of the students
(about 5 out of 6) believe that they really learned something about robotics (mark
between 4 and 6). With this result the main goal of the curriculum is achieved: one can
get a better understanding of robotics using this document. However we need to be
careful with this result: it is only valid for students who had no previous knowledge in
robotics. It should be generalized by testing harder exercises with students having a
background in programming and/or robotics.
Question 6: I had enough time to complete every exercise
(Fig. 4.7) This question reveals the main difference between the two exercise sets: the
time needed to complete the first one was clearly shorter than for the second one. This is
obvious on the graphs and from the mean values (5.23 and 3.15). A majority of the
students had time to complete every exercise of the first set, whereas only one student
finished the second set. To my mind this means that the distribution of the exercises was
suboptimal: I should have divided them into two sets of equivalent length. However this
is a teaching error and it does not affect the results of the other questions.
Figure 4.7: Results for question 6 ((a) exercise set 1, (b) exercise set 2)
Question 7: Working with the real robot was more interesting than with the
simulation
(Fig. 4.8) This question showed no difference between the two sets of exercises: the mean
values are almost equal (5.39 and 5.33) and the distributions are similar. The result is
clear, the robots are appreciated by the students; 70% of them agreed fully (answer = 6)
that using the robot was more interesting than simulating it! This shows that the robots
are an asset that makes the exercises more interesting. Nevertheless it does not mean
that the real robots are mandatory for successful learning: one could imagine the same
introduction to robotics without the questions on the real robot.
Question 8: Using Webots was easy
(Fig. 4.9) This question was interesting for Cyberbotics, but for the curriculum too, since
it uses Webots as a tool. What we see first on the graphs is that the students managed
to use Webots: they agreed that Webots was user-friendly up to a certain point (mean
values of 4.77 and 4.24).
An important thing to note is that during two sessions the computers had a specific
graphics card. This card triggers a bug in Webots which makes the objects of the
simulation difficult to move (one needs to select them from the bottom of the ground).
This was a problem when the students needed to put the robot back in a given position,
for example. Even if the results are good, they would have been better without this bug.
Considering this, I can say that the use of Webots and the BotStudio interface is easy
and completely accessible to any high-school student after a minimal explanation.
Figure 4.8: Results for question 7 ((a) exercise set 1, (b) exercise set 2)
Figure 4.9: Results for question 8 ((a) exercise set 1, (b) exercise set 2)
Figure 4.10: Results for question 9 ((a) exercise set 1, (b) exercise set 2)
Question 9: This introduction made me feel like learning more about robotics
(program a robot in C for instance)
(Fig. 4.10) On this question the opinions of the students were very scattered. This can be
explained by the personal interests of the students. On one hand, some of them were
already interested in computer science or robotics and the exercises reinforced their
interest. On the other hand, some knew that science is not their main interest and this introduction
did not change their mind. This is corroborated by the mean values, which are just above
the middle of the scale, and by the strong dispersion of the answers: there is no clear
trend here.
When looking at the correlations, one can notice a medium correlation between
questions 2, 3 and 9. This can be explained by the questions themselves: question 2 asks
about the interest of the exercises, question 3 about the pleasure in doing them and
question 9 about the interest in going further. They are linked: if you think that
something is interesting, you will probably take more pleasure in it, and if you have
pleasure and interest in doing something, you might want to go further in that direction.
So these correlations are normal and confirm that our results are consistent.
Question 10: On the whole the exercises were...
(Fig. 4.11) The difficulty of the exercises was evaluated by the students at 3.17 and 3.47
respectively, so the exercises were rather too easy for them. This is true for the first set
of exercises (which was also too short in terms of time). But the second set was harder,
and I would have marked its difficulty around 4.5. I explain this difference by a simple
fact: as we saw in question 6, most of the students did not have enough time to finish
the exercises. This means that they skipped the last one or two exercises. Since those
exercises were the most difficult, their difficulty estimation is based on the easy part of
the set, which leads to the lower value we observed.
Figure 4.11: Results for question 10 ((a) exercise set 1, (b) exercise set 2)
Question 11: Which was the best exercise? And why?
This is the first open question; the goal of these questions was to get indications on how
to improve the exercises. The preferred exercise of the first set was the rally (line
following, 12 points), followed by the exercise on the LEDs (9 points) and the finite state
machine (8 points). Some students did not answer, saying that each exercise was good at
explaining something but that none stood above the others. More interesting are the
reasons given for choosing these exercises; they are of course diverse and cannot be
exhaustively listed here. The frequent arguments concern the difficulty ("It was the most
complex.") or, on the contrary, the fundamentals ("It presented the basics."). Another
frequent reason is "It was the funniest one.".
In the second set, the best score goes to the dancing e-puck (17 points), followed by the
wall following (10 points), the robot controller (4 points) and the train of e-pucks (2
points). The reasons are similar: the simplicity and the fun of the dance were cited most
frequently, then the completeness and the complexity of the wall following were the other
main arguments.
What can we extract from this? First, there is no clear winner in the first set, as the gaps
between the scores are small. In the second set we have a winner; this can be explained
by the difficulty of the other exercises and, as said by the students, it was the funniest.
With the comments we see that there is no magic recipe that would make an exercise
better than another.
Question 12: Which was the worst exercise? And why?
This question was intended to give hints to improve the exercises, and as you will see it
was useful. In the first set the LED exercise was considered the worst by the majority of
the students (14 points); the two other exercises got 5 points each. Hence this exercise
was a perfect candidate for improvement. A look at the comments shows that the
exercise was too long and that the result was not interesting. I actually found out that
the order in which the questions were asked was really not optimal, leading to a
significant loss of time. This has now been corrected.
In the second set the results are the mirror image of the answers to question 11. The
train of e-pucks gets 12 points, closely followed by the robot controller (9 points). Then
comes the wall following with 3 points, and the dance exercise has no points: none of the
students considered it the worst exercise. The main arguments against the train of
e-pucks were its complexity, the impossibility of testing it on the real robot and the
difficulty of understanding what was expected. The complexity was intended, and testing
this exercise on real robots would require 4 robots per student, which is impossible. Thus
the explanation of the problem and the presentation of the exercise had to be improved.
Thanks to the remarks of the students I have been able to find out which exercises were
the worst and to improve them. This is why it was so important to ask this question.
Question 13: What did you like in this introduction to robotics?
This question gives an idea of the strengths of the curriculum so that we can take
advantage of them. The most frequent comment is that working with the real robot is
very interesting. Two other appreciated points are the simplicity of the programming
interface and the possibility to try things by oneself. Some of the students wrote that
they had learned something new, which is encouraging since it was one of the main goals
of my project.
Question 14: What should be improved? And how?
The answers to this question were sometimes useful, sometimes just funny or unrealistic.
At the top of the list of improvements we find the bugs of Webots. This is because of the
bug due to a specific graphics card (as mentioned in question 8). Actually, on the whole
the students found that Webots was pretty easy to use, even if they did not have any
other tool to compare it with.
A second comment that came up often was the need for a help section in the text, so
that the students would not need to ask their questions aloud. This is a good request.
The curriculum is built to be read in a given way, but in the exercise sessions the text
was limited to the minimum. This means that none of the students knew how to
configure the e-puck to work together with Webots: they had not read the chapter called
"Getting started" of the curriculum. This is a problem, but it was not possible to do
otherwise with so little time for the exercises. In my opinion a reader who reads the
curriculum from beginning to end will find all the information he needs to use the e-puck
robot with Webots. If this is not sufficient, the manual of Webots can be a good source
of information, as well as the website of the e-puck. And if none of this is enough, there
is the Webots support, which is provided by Cyberbotics; so this help section actually
already exists.
In the first sessions some of the students asked for a demonstration or a correction of the
exercises. Once again, because of the lack of time I chose not to correct the exercises
with the whole class; I preferred to give more time to the students to try by themselves.
To replace this correction I created videos showing what should be obtained in each
exercise, and I presented them at the beginning of the sessions, after the introduction.
The second set of exercises led to a frequent remark: there was not enough time to
complete the exercises. This is again a fair comment, and the problem comes from my
choice of the exercises, as explained in question 6. To solve this point, one could simply
choose the first set of exercises for a short introduction and add some exercises of the
second set for a longer one.
The last remark worth a paragraph here is more of a complaint: some students think
that there was too much text to read alone. They would prefer a longer introductory
presentation rather than information in the text. I understand this; reading an
explanation is not as interesting as listening to a presentation, and you cannot ask
questions to a text. However the curriculum is a support, either for enthusiasts who will
find further information through the references given all along the text, or for a teacher
who will be able to complete this information for his students. I chose not to give a long
introduction and to give considerable freedom to the students during the exercises. What
I noticed is that they are not used to working on their own with only supporting
documentation; more than once a question was asked whose answer was in the text. I
was probably expecting too much from the students; if I had to do the experiment again
I would start with lower expectations.
There were also less serious remarks: according to one of the students I should have
dreads, and according to another we should improve his teacher! To summarize, I would
say that the comments were much more interesting than I thought, and I know that each
of them helped improve the next exercises. This is why I am satisfied with this last
question.
4.4.2 Synthesis
All the questions of the survey were designed to help improve the form, the content or
another aspect of the curriculum. The answers were better than expected and helped
adjust the exercises to the needs of the students; we thus improved the usability of the
exercises. Thanks to the survey and the contact with the students, I can tell that they
learned something, which was the main objective. On the whole, this third goal of
testing the curriculum is fulfilled.
Chapter 5
Conclusion
This chapter is much shorter than the others: the work achieved during this project is
summed up, then a few personal words on how the project went and some ideas for
further work close this report.
5.1 Summary of the results
To start with the results, we repeat the three main goals of the project as stated in
section 1.3, briefly recall how we dealt with them and explain why we consider them
fulfilled. More detailed explanations for each point can be found at the end of the
corresponding chapters (2, 3 and 4).
- Finish the curriculum
  Several new advanced exercises were written and coded for the curriculum. The
  subjects were odometry, path planning, particle swarm optimization and simultaneous
  localization and mapping. They were written according to the style of the curriculum,
  with its educational idea in mind. The curriculum could of course be enriched with new
  exercises, but we believe that any person working through to the end of the document
  will already obtain a good knowledge of robotics.
- Publish a first stable version on the web
  As you can see in [18], the curriculum of exercises is now available on the English
  version of wikibooks. This solution was chosen among three possibilities: a dedicated
  wiki, the simple wiki offered by sourceforge.net, and wikibooks. The conversion from
  the old LyX document to the new wikibook was done, and a way of generating a PDF
  version of the wiki was found and adapted to our needs. With this structure we are
  confident that the curriculum will be welcomed by Webots and e-puck users.
- Test the exercises through in-class experiments with high-school and/or university
  students
  The goal of this point was to assess the utility and the usability of the curriculum.
  Only time and user feedback will tell to which extent the curriculum is useful.
  Nevertheless, the experiments made with high-school students showed that a student
  is able to learn about robotics using this document. The feedback of the students also
  helped to improve the shortcomings that showed up during the project.
5.2 Personal conclusion
As a first conclusion I would stress the good relationship I have had with Cyberbotics
through Fabien Rohrer and Olivier Michel. They have always been frank and honest
with me; we disagreed on some points during the project, but we found a satisfying
solution for each of them. This project was also my first contact with a company;
however the job was quite similar to a master project at EPFL thanks to the links
between Cyberbotics and the LIS.
During these six months I have learned a lot. I discovered new fields in robotics through
the writing of new exercises, I had to interact with the wiki community to build up the
final interactive version of the curriculum and the stable PDF version, and I also had the
pleasure of teaching high-school students.
During the project I was unsure of its academic value: my work involved many contacts
with the high-school world and few with the academic world, and the whole project was
based upon transmitting knowledge to somebody else. I realized that this work of
simplifying the notions to make them accessible to a student required fully understanding
the notions to be transmitted. I also had to analyze the results of the survey; even if this
leans more toward the human sciences, the results I obtained were interesting from a
scientific point of view.
At the time of writing, version 6.1.0 of Webots has just been released. This new version
includes the curriculum as part of the documentation. This required some adaptations
but is a great reward for me.
5.3 Further work
The curriculum itself can now be improved by the community. If I had to give some
suggestions, I would ask for exercises on genetic algorithms (in contrast with particle
swarm optimization), on low-level robot programming (a detailed explanation of the
Braitenberg controller would be a good example) or on robot communication. The wiki
structure can be modified by the wikibooks community, and the generation of the PDF
can be improved by modifying the javaLatex source code.
Acknowledgments
During this project many people helped me, either by giving advice or simply by supporting me. Here is a non-exhaustive list:
- Olivier Michel, Adam Klaptocz and Prof. Dario Floreano, who were responsible for
  this project
- Fabien Rohrer, who was not officially my assistant but who did a lot of proofreading
  of the exercises and gave help when needed
- Prof. Claude Petitpierre, for giving me the opportunity to present my project to
  several high-school teachers
- Patrick Türtschy, Didier Müller, Arnaud Le Gourriérec and Grégoire Favre, who
  welcomed me in their classes for some hours
- Cédric Lavanchy, my flatmate, who gave me pertinent advice each and every week
- All the high-school students who took part in the exercise sessions, it was great to
  teach you
- Myriam, my fiancée, she will know why ☺
Bibliography

[1] LEGO, "Lego.com Mindstorms NXT home." Website. http://mindstorms.lego.com/.

[2] Innovation First Inc., "VEX Robotics Design System." Website.
http://www.vexrobotics.com/.

[3] M. Bonani, "e-puck education robot." Website. http://www.e-puck.org/.

[4] O. Michel, "Webots: Professional Mobile Robot Simulation," International Journal
of Advanced Robotic Systems, vol. 1, pp. 39–42, 2004.

[5] ICEA project, "Rat's life - robot programming contest." Website.
http://www.ratslife.org/.

[6] F. Rohrer, "Curriculum of exercises for e-puck and Webots - Report of the Master
Project," tech. rep., March 2008.

[7] R. Siegwart and I. R. Nourbakhsh, Introduction to Autonomous Mobile Robots. MIT
Press, 2004.

[8] Wikibooks, "Khepera III Toolbox." Website.
http://en.wikibooks.org/wiki/Khepera_III_Toolbox.

[9] J. Pugh, A. Martinoli, and Y. Zhang, "Particle swarm optimization for unsupervised
robotic learning," Swarm Intelligence Symposium, 2005. SIS 2005. Proceedings 2005
IEEE, pp. 92–99, June 2005.

[10] D. Floreano and F. Mondada, "Evolution of homing navigation in a real mobile
robot," Systems, Man, and Cybernetics, Part B, IEEE Transactions on, vol. 26,
pp. 396–407, June 1996.

[11] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," Micro
Machine and Human Science, 1995. MHS '95., Proceedings of the Sixth International
Symposium on, pp. 39–43, Oct 1995.

[12] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," Evolutionary
Computation Proceedings, 1998. IEEE World Congress on Computational
Intelligence., The 1998 IEEE International Conference on, pp. 69–73, May 1998.

[13] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm
Intelligence, vol. 1, pp. 33–57, August 2007.

[14] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: part I,"
Robotics & Automation Magazine, IEEE, vol. 13, pp. 99–110, June 2006.

[15] T. Bailey and H. Durrant-Whyte, "Simultaneous localization and mapping (SLAM):
part II," Robotics & Automation Magazine, IEEE, vol. 13, pp. 108–117, Sept. 2006.

[16] M. Dissanayake, P. Newman, S. Clark, H. Durrant-Whyte, and M. Csorba, "A
solution to the simultaneous localization and map building (SLAM) problem,"
Robotics and Automation, IEEE Transactions on, vol. 17, pp. 229–241, Jun 2001.

[17] K. Beevers and W. Huang, "SLAM with sparse sensing," Robotics and Automation,
2006. ICRA 2006. Proceedings 2006 IEEE International Conference on,
pp. 2285–2290, May 2006.

[18] Cyberbotics Ltd. and wikibooks contributors, "Cyberbotics' Robot Curriculum."
Wikibook. http://en.wikibooks.org/wiki/File:Cyberbotics%27_Robot_Curriculum.pdf.

[19] Derbeth, "User:Derbeth/javaLatex - Wikibooks, collection of open-content
textbooks." Website. http://en.wikibooks.org/wiki/User:Derbeth/javaLatex.
Appendix A
Glossary of technologies
A brief overview of all the technologies used during this project is presented here. This
might be useful when you read the report.
ANN (Artificial Neural Network) is a mathematical model or computational model based
on biological neural networks. It consists of an interconnected group of artificial neurons
and processes information using a connectionist approach to computation. It can be used
for machine learning.
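As an illustration of the computational model (the standard textbook formulation, not
something specific to this project), each artificial neuron computes a weighted sum of its
inputs and passes it through an activation function $\varphi$:

$$y = \varphi\left(\sum_{i=1}^{n} w_i x_i + b\right)$$

where the $x_i$ are the inputs, the $w_i$ the connection weights and $b$ a bias term;
learning amounts to adjusting the $w_i$ and $b$.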
Apache Ant is a software tool for automating software build processes. It is similar to
make but is implemented using the Java language, requires the Java platform, and is best
suited to building Java projects. Ant uses XML to describe the build process and its
dependencies. By default the XML file is named build.xml.
BotStudio is the graphical programming interface of Webots for the e-puck robot. It
is based on a finite state machine paradigm and can be used to develop simple robot
controllers.
Cyberbotics Ltd. is a company developing mobile robot prototyping and simulation
software. In particular it develops and markets Webots, the award-winning fast prototyping
and simulation software for mobile robotics.
EPFL (École Polytechnique Fédérale de Lausanne) is the school in which this project was
completed.
e-puck is a miniature mobile robot for educational purposes at university level. It was
developed at EPFL and can be simulated using Webots.
IR sensors (Infrared sensors) are active sensors that send an infrared ray and measure the
environment's response; this information can be used to estimate distances.
LaTeX is a document markup language and document preparation system for the TeX
typesetting program. It is widely used in the scientific community for the quality of the
typesetting and the extensive support of automated typesetting features (table of contents,
cross-references, bibliography, etc.).
LIS (Laboratory of Intelligent Systems) is the laboratory responsible for this master
project.
MATLAB is a numerical computing environment and programming language. It allows
easy matrix manipulation, plotting of functions and data, implementation of algorithms,
creation of user interfaces, and interfacing with programs in other languages. With the
ePic toolbox one can control the e-puck robot using MATLAB.
NetBeans IDE is an open-source integrated development environment written entirely
in Java using the NetBeans Platform. NetBeans IDE supports development of all Java
application types (Java SE, web, EJB and mobile applications) out of the box. Among
other features are an Ant-based project system, version control and refactoring.
Odometry is the estimation of a vehicle's position during navigation, using data from the
movement of its actuators to estimate the change in position over time. It is used by some
robots to estimate (not determine) their position relative to a starting location.
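For a differential-drive robot such as the e-puck, a standard formulation (following the
textbook of Siegwart and Nourbakhsh [7]; the notation below is ours) updates the pose
$(x, y, \theta)$ from the left and right wheel displacements $\Delta s_l$ and $\Delta s_r$
measured between two time steps, $b$ being the distance between the wheels:

$$\Delta s = \frac{\Delta s_r + \Delta s_l}{2}, \qquad \Delta\theta = \frac{\Delta s_r - \Delta s_l}{b}$$

$$x' = x + \Delta s \cos\left(\theta + \frac{\Delta\theta}{2}\right), \qquad y' = y + \Delta s \sin\left(\theta + \frac{\Delta\theta}{2}\right), \qquad \theta' = \theta + \Delta\theta$$

Because wheel slip and measurement noise accumulate at each step, the estimated pose
drifts over time, which is why odometry estimates rather than determines the position.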
PDF (Portable Document Format) is a file format created by Adobe Systems for document exchange. PDF is used for representing two-dimensional documents in a manner
independent of the application software, hardware, and operating system.
PSO (Particle Swarm Optimization) is a stochastic, population-based computer algorithm
for problem solving. It is a kind of swarm intelligence that is based on social-psychological
principles. It can be used to optimize a mathematical function (find local maxima or
minima).
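In the common formulation with an inertia weight (as introduced in [11] and [12]; the
notation here is ours), each particle $i$ keeps a position $x_i$, a velocity $v_i$ and the
best position $p_i$ it has found so far, and is also attracted toward the best position $g$
found by the whole swarm:

$$v_i \leftarrow \omega v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i), \qquad x_i \leftarrow x_i + v_i$$

where $\omega$ is the inertia weight, $c_1$ and $c_2$ are acceleration constants, and $r_1$,
$r_2$ are random numbers drawn uniformly in $[0, 1]$ at each iteration.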
SLAM (Simultaneous Localization And Mapping) is an open problem in robotics. The
robot has to build up a map within an unknown environment while at the same time
keeping track of its current position.
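In probabilistic terms (the standard formulation used for instance in [14], not something
specific to the curriculum's exercise), SLAM consists in estimating at each time step $t$
the joint posterior over the robot pose $x_t$ and the map $m$, given all observations
$z_{1:t}$ and control inputs $u_{1:t}$:

$$p(x_t, m \mid z_{1:t}, u_{1:t})$$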
SVN (Subversion) is a version control system. It is used to maintain current and historical
versions of files such as source code, web pages, and documentation.
TOC (Table Of Contents) is the list of all chapters, sections, subsections and so on in a
document.
Webots is a fast prototyping and simulation software for mobile robotics. It is used by
many universities around the world and it can simulate the e-puck robot.
Appendix B
PDF document generation
The PDF generation is done using javaLatex, as explained in 3.3.1. The source code of a
customized version of the program is available on the Curriculum SVN repository:
http://robotcurriculum.svn.sourceforge.net/svnroot...
.../robotcurriculum/misc/javaLatex
Step 1: Download and compile javaLatex
Download the code; we will suppose you did not move it, so it should be in the folder
javaLatex. Once you have the source code of the program you will want to generate the
PDF file. First you need to compile the program; doing it with the NetBeans IDE is the
simplest way because the code was written with it. NetBeans is an open-source program
which can be downloaded from http://www.netbeans.org. When you have downloaded and
installed NetBeans you can simply open the project from NetBeans and build it (the
shortcut is F11). After that you will have a javaLatex/dist/ folder containing the result
of the build: a jar file, javaLatex/dist/javaLatex.jar, and some configuration files. The
config files should be fine directly from the repository, but one can modify them if needed.
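If you prefer the command line, a project created with NetBeans normally ships with an
Ant build script (build.xml); assuming this is the case for this copy of javaLatex, the same
jar can be produced with:

ant -f javaLatex/build.xml jar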
Step 2: Generate LaTeX code
Once the jar is generated, open a terminal (to open a terminal under Windows, go to the
Start menu, select "Run...", and type cmd). Then, in the folder of the jar file, type the
following command:
java -jar javaLatex.jar --title="Cyberbotics' Robot Curriculum"
    --genall --cll="Cyberbotics' Robot Curriculum/Contents"
The process can take a while because it needs to download all the texts and images from
the wikibooks server. It will generate a javaLatex/dist/LaTeX/Cyberbotics' Robot
Curriculum LaTeX/ folder containing the LaTeX code for the book.
Step 3: Compile LaTeX code
Now that we have the LaTeX source code, we will compile it to get a PDF file. Use your
favorite LaTeX distribution to compile the file javaLatex/dist/LaTeX/Cyberbotics'
Robot Curriculum LaTeX/main.tex. Under Windows this can be done using the MiKTeX
distribution, for instance. Under Linux you will have to run the following command (and
have pdflatex installed):

pdflatex main.tex
You will have to run it twice to get the references right: after the first pass you will get a
warning telling you that cross-references might be wrong. This is because on the first pass
the labels are collected, and only on a second pass are the references to the labels resolved.
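Concretely, this simply means running the same command twice (standard pdflatex
behavior, nothing specific to javaLatex):

pdflatex main.tex
pdflatex main.tex

The second run reads the auxiliary file written by the first one and fills in the resolved
cross-references.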
Troubleshooting
If there are any errors you can read the file javaLatex/dist/log.txt to get information
about the problem. Further information about the javaLatex program can be found in
[19].
The most frequent error encountered was a failure to create a file (namely gfdl.tex). It
can be fixed by copying the file from "dist/appendices/gfdl.tex" to
"dist/LaTeX/Cyberbotics' Robot Curriculum LaTeX/appendices/gfdl.tex".
Appendix C
High-school survey
On the next two pages you can see the original survey that was filled in by each student
after the exercise session. It is written in French since all the students were French speakers.
Questionnaire d'évaluation
Donnez votre avis à propos des affirmations suivantes :
1. La donnée des exercices était facile à comprendre.
Pas d'accord
Tout à fait d'accord
2. Les exercices proposés étaient intéressants.
Pas d'accord
Tout à fait d'accord
3. J'ai eu du plaisir à faire ces exercices.
Pas d'accord
Tout à fait d'accord
4. J'ai appris quelque chose grâce à ces exercices.
Pas d'accord
Tout à fait d'accord
5. Les exercices m'ont permis de mieux comprendre les robots.
Pas d'accord
Tout à fait d'accord
6. J'ai eu le temps de faire tous les exercices.
Pas d'accord
Tout à fait d'accord
7. Travailler avec le vrai robot était plus intéressant qu'avec la simulation.
Pas d'accord
Tout à fait d'accord
8. Utiliser Webots était facile.
Pas d'accord
Tout à fait d'accord
9. Cette introduction me donne envie d'en découvrir plus sur la robotique
(programmer un robot en C par exemple).
Pas d'accord
Tout à fait d'accord
10.Dans l'ensemble, les exercices étaient
Trop faciles
Trop difficiles
N'oubliez pas de tourner la feuille !
Questions ouvertes
11.Quel était le meilleur exercice ?
Et pourquoi ?
12.Quel était le moins bon exercice ?
Et pourquoi ?
13.Qu'est-ce que vous avez aimé dans cette introduction à la robotique ?
14.Qu'est-ce qu'il faudrait améliorer ? Et comment ?
Appendix D
Full results
In the following pages you will find the complete results of the exercise sessions. For both
sessions we show first the answers to the questions and the comments, then a summary of
the answers, the correlations between the questions, and a histogram for each question.
D.1 Exercise set 1
Results for questions 1 to 10
(Answers per question; each value is the mark, between 1 and 6, given by one of the 31
students. Questions 6 and 10 received 30 answers, the other questions 31.)
Question 1:  5 5 5 5 6 6 6 6 5 5 5 5 5 5 4 6 5 5 4 5 5 4 5 6 5 5 6 5 5 6 5
Question 2:  6 6 6 6 5 6 6 6 6 4 6 6 6 5 6 5 4 5 3 6 5 5 5 5 4 6 4 5 5 5 4
Question 3:  5 6 6 6 6 5 6 6 6 4 6 5 6 6 4 6 4 5 4 6 6 6 5 6 3 6 4 6 5 6 5
Question 4:  6 6 6 6 6 6 6 6 6 2 6 2 6 6 6 5 5 5 6 6 4 6 5 5 3 5 6 4 4 5 5
Question 5:  5 6 6 5 5 4 5 6 6 1 5 3 5 5 5 4 4 6 3 6 5 6 6 6 2 6 6 4 4 5 2
Question 6:  6 6 6 6 6 6 6 4 6 5 2 4 5 5 5 6 6 6 1 6 6 6 5 6 5 4 6 6 4 6
Question 7:  6 6 6 6 6 6 4 6 1 6 6 4 4 6 6 6 6 6 6 6 6 6 6 4 3 6 6 6 6 6 3
Question 8:  4 5 6 6 4 5 4 5 4 5 3 4 5 6 4 6 5 3 3 4 6 5 5 6 5 4 6 5 5 4 6
Question 9:  5 5 6 6 4 4 5 6 5 4 6 5 5 6 2 6 2 3 1 4 2 3 4 5 1 2 5 5 6 4 3
Question 10: 3 5 4 2 4 4 4 4 3 1 3 4 2 5 3 1 3 4 4 4 2 3 4 3 3 2 3 3 3 2
Results for question 11
Rallye :
- Plus de paramètres
- Plus de problèmes à gérer
Rallye :
- Expérimenter soi-même en s'amusant
Rallye :
- Plus amusant
- Plus à expérimenter
Rallye :
- Le plus complexe
LED :
- On voit bien la relation cause => effet, quand on approche la main ça s'allume)
LED (allumer la LED du côté de l'obstacle) :
- Le “rendu” était très bien
Rallye :
- Met tout en pratique
LED :
- On commence à créer nous-mêmes
FSM :
- C'est la base
Suivi de ligne :
- Me fait penser à des voitures sans conducteurs
- Pourrait être à la base d'un système utilisés tous les jours
LED :
- Je me suis rendu compte des possibilités du programme et de la robotique en général
FSM:
- On fait presque tout
Aucun:
- Ils sont dans l'ordre, c'est constructif et l'intérêt augmente
- La simulation me paraît moins intéressante mais nécessaire
Les derniers (Rallye):
- Plus complexe donc plus intéressant
Rallye
LED:
- Bon effet et on peut ajouter des déplacements
FSM (U-turn):
- Donne envie d'en faire plus
Rallye:
- Intéressant de voir comment le robot se déplace sur un “plan” (de jeu)
LED (début):
- Il offre plus de liberté
Rallye:
- On confronte nos robots, c'est amusant
- Le plus drôle c'est d'essayer de nouvelles choses sur le robot (le faire danser)
LED et FSM:
- Les plus amusants
- On constate mieux ce qu'on peut demander aux robots
FSM:
- On a un résultat “réel” de l'exercice
FSM:
- Il donne une base pour tout faire au niveau des obstacles, du mouvement, …
FSM:
- Découverte du robot avec un exercice très simple
LED:
- Grande liberté dans la programmation
- Le troisième aussi mais plus difficile
Rallye:
- Nouvelles notions (caméra)
- Test en vrai et comparaison des méthodes
LED:
- C'était le plus fiable (par rapport au suivi de ligne)
Rallye:
- Le plus difficile, en plus de programmer il fallait ajuster les paramètres
Results for question 12
LED :
- Simple
- Peu d'applications possibles
LED :
- Trop de manipulations
- Résultats peu surprenants
Aucun :
- Tous permettent d'apprendre un concept
LED :
- Trop long
Rallye :
- Prend beaucoup de temps
FSM :
- “Rendu” très basique
Aucun :
- Il faut bien commencer par la base
Rallye :
- Le plus dur
Rallye :
- Il n'y avait qu'un seul parcours
Les limites de BotStudio :
- Trop compliquée, la consigne m'a effrayé
Suivi de ligne :
- Pas très fun
LED (obstacle et LED):
- Trop vague
- Trop long
- N'apporte rien de plus que les autres
LED:
- Manque de mouvement
- Ennuyeux de tout écrire 2 fois
Aucun:
- Rien ne se répète
Les premiers (FSM):
- Un peu trop facile
Les premiers (FSM + LED):
- Trop ludiques
LED (obstacle et LED):
- On a déjà les bases
- Pas “impressionnant”
LED (obstacle et LED):
- Trop dur
LED:
- J'ai pas compris comment faire
LED (obstacle et LED):
- Moins intéressant parce qu'en simulation
Aucun:
- Ils m'ont semblés tous utiles
Rallye:
- Le programme était déjà fait
- C'était le plus difficile
LED:
- Long et pas très excitant
LED:
- Pas si facile et pas intéressant au niveau du résultat
FSM:
- Défauts du robots qui rendaient le demi-tour parfait impossible (timer limité à 0.2 s)
LED:
- Le programme est peu pratique avec beaucoup de transitions
- C'est long !
FSM:
- Le moins élaboré, le plus basique
Results for question 13
- Simplicité d'utilisation
- Découverte des possibilités
- Travailler avec un vrai robot
- Expérimenter soi-même
- L'aspect pratique
- Travailler avec un vrai robot
- Le logiciel
- Tout, en particulier la connexion entre le robot et l'ordinateur
- Explications orales
- Fonctionnement en live de notre programme sur l'e-puck
- Pouvoir faire bouger et réagir un robot complexe de manière simple
- La nouveauté
- Venir à l'EPFL
- S'amuser sur les ordis
- Pouvoir avancer à son rythme
- Progression écrite bien faite
- Travailler avec un vrai robot
- Me rendre compte des possibilités technologiques en robotique
- La logique de programmation
- Webots
- Voir que ça “tourne”
- L'équivalent matériel
- Avoir chacun son robot et pouvoir l'utiliser comme on veut
- Travailler avec du matériel nouveau et hors du cursus scolaire
- L'introduction (présentation + discours) est importante
- Le vocabulaire utilisé n'est pas trop spécifique
- Travailler avec des robots
- Tester les simulation pour de vrai
- Travailler avec des robots
- Découvrir le fonctionnement d'un robot
- Tester par nous-mêmes
- Les robots
- La vidéo sur youtube avec les e-pucks
- Les e-pucks
- Voir comment un vrai robot fonctionne
- L'aspect ludique
- Voir ce que font les robots, comment on les programme
- Avoir de vrais robots entre les mains
- Avant j'avais une mauvaise image de la robotique, merci pour la présentation
- L'approche intuitive et sans programmation
- Le changement par rapport aux leçons habituelles
- Faire fonctionner le robot (et voir si il fonctionne)
- La simplicité
- Pouvoir voir ce qu'on a programmé se concrétiser
- Facile, ludique, amusante
- Possibilité de travailler avec un vrai robot
- Les petits robots
- Le côté pratique, pouvoir jouer avec les robots
- C'était bien mais on ne rentre pas assez dans le vif du sujet
Results for question 14
- Webots (fenêtre de simulation)
- Plantage avec le bluetooth
- Déplacement des robots difficile
- Donner plus d'explications sur les paramètres des senseurs
- Faire une démo de ce qu'on peut faire avec ce robot (en allant plus loin)
Note : j'ai fait une démo au début...
- Donner une vue d'ensemble du fonctionnement d'un robot
- Les petits bugs du programme
- Des bugs sur certains robots
Note : les firmwares n'étaient pas tous identiques
- L'introduction aux robotiques existantes
- Dans quels milieux sont utilisées les robots (quelles industries)
- Expliquer les bras robotisés
- Expliquer comment les informations circulent
- Expliquer la fabrication (matériaux)
- Mettre plus d'informations sur le programme (connecter Bluetooth, déplacer les objets simulés)
- Heureusement que vous étiez là
- La pédagogie (pas assez d'explication claires)
- Ajouter une section d'aide/théorie/information en fin du polycopié pour pouvoir avancer sans aide
- Plus de mouvement, avec d'autres exercices, peut-être avec la caméra
- Rien, vous avez su nous aider et répondre à nos questions
- Plus d'exercices "évolués"
- Peu de choses
- Avoir plus de suivi, passer d'exercice en exercice ensemble en montrant le résultat avec le robot réel
- Plus d'explications orales
- Les explications pour les LEDs mais je sais pas comment
- Ajouter de nouveaux exercices (chorégraphie avec l'e-puck, celui qui fait le virage le plus rapide)
- Pouvoir ajouter de la musique sur les e-pucks
- Éventuellement donner des consignes plus attractives (où on peut faire plein de choses)
- En dire plus sur le robot lui-même, maintenant je sais mettre des carrés et des flèches mais pas ce que ça fait
dans le robot
- Faire une ou deux pauses
- Des meilleurs robots =) ... non, rien à redire
- Ajouter un concours, un jeu ou d'autres trucs marrants
- Le programme, pour simplifier l'addition de plusieurs états et transitions
- La vitesse, mettre qqch de réel pour pouvoir coordonner avec le temps et être plus précis
- Donner plus de possibilités (boucle p.ex.) ou la commande vocale serait cool
Summary of the answers to questions 1 to 10

Question    1     2     3     4     5     6     7     8     9     10
nb. of 1    0     0     0     0     1     1     1     0     2     2
nb. of 2    0     0     0     2     2     1     0     0     4     5
nb. of 3    0     1     1     1     2     0     2     3     3     11
nb. of 4    3     5     5     3     5     4     4     9     6     10
nb. of 5    20    11    7     8     10    6     0     11    9     2
nb. of 6    8     14    18    17    11    18    24    8     7     0
Average     5.16  5.23  5.35  5.19  4.74  5.23  5.39  4.77  4.19  3.17
Std. Dev.   0.57  0.83  0.86  1.15  1.34  1.23  1.24  0.94  1.53  1
Correlation between the questions

      1     2     3     4     5     6     7     8     9     10
1     1     0.13  0.28  0.05  0.14  0.24  -0.04 0.25  0.44  0.02
2           1     0.61  0.33  0.54  0.2   -0.02 -0.14 0.5   0.19
3                 1     0.39  0.58  0.2   0.05  0.1   0.56  0.08
4                       1     0.66  0.02  0.15  -0.08 0.2   0.35
5                             1     0.18  0.2   -0.05 0.29  0.38
6                                   1     -0.12 0.52  0.19  -0.09
7                                         1     -0.01 0.05  0.11
8                                               1     0.28  -0.27
9                                                     1     0.11
10                                                          1
Histograms for each question: for each of the ten questions, the number of answers is
plotted against the agreement mark (difficulty mark for question 10), from 1 to 6.
D.2 Exercise set 2
Results for questions 1 to 10
(Answers per question; each value is the mark, between 1 and 6, given by one of the 33
students. Question 10 received 32 answers, the other questions 33.)
Question 1:  6 3 5 4 6 4 5 5 5 5 6 5 5 5 5 4 6 4 5 3 5 3 4 5 5 4 5 5 3 3 5 5 5
Question 2:  5 6 5 6 3 6 5 4 3 5 5 5 4 5 5 5 5 5 5 6 5 2 5 5 6 5 5 4 3 4 6 6 5
Question 3:  5 6 5 5 5 6 6 5 4 6 5 6 5 5 6 6 5 5 4 5 4 2 5 5 6 5 4 3 4 4 6 6 5
Question 4:  5 6 6 6 4 6 4 6 4 4 6 4 6 6 5 5 5 6 4 6 6 2 4 6 6 2 5 4 3 3 5 5 6
Question 5:  5 5 6 6 5 6 3 6 4 5 6 6 6 4 4 5 4 6 5 6 4 3 4 5 5 2 3 5 2 4 6 6 6
Question 6:  4 5 4 4 6 4 5 2 5 4 2 5 3 2 4 1 4 4 2 2 1 1 4 3 1 1 3 4 5 1 3 2 3
Question 7:  6 6 6 6 6 5 6 3 6 6 6 4 6 6 6 5 6 4 6 6 6 5 3 5 4 6 6 3 4 6 6 6 5
Question 8:  5 5 2 5 6 3 4 4 6 5 5 5 4 5 5 4 5 4 5 3 3 3 3 3 5 4 5 2 5 3 5 5 4
Question 9:  4 4 2 4 6 3 5 6 6 5 5 5 6 3 3 4 4 5 3 1 4 3 2 3 3 6 5 5 3 4 1 5 6
Question 10: 3 3 4 3 3 3 3 3 3 3 3 3 3 4 3 5 3 4 4 4 4 4 4 5 3 3 5 2 2 4 4 4
Results for question 11
Ceux qui utilisent le robot réel, parce que c'est plus fun
Dance:
- avec le vrai robot
- pas trop facile
- intéressant
Contrôleur et suivi de mur:
- premier essais
- le plus intéressant
Suivi de mur:
- tous les éléments sont regroupés
Contrôleur:
- pouvoir contrôler l'e-puck et lui faire faire différents mouvements
Dance:
- on apprend à utiliser les états
- c'est intéressant d'inventer une danse
Suivi de mur:
- plus complexe
- plus de matière pour réfléchir
Move (2.1):
- il fait appel à la réflexion
Tous
Dance et suivi de mur:
On a pu créer des vraies marches à suivre pour le robot puis les modifier
Dance:
- c'est une bonne idée pour comprendre et utiliser un robot
Dance:
On programme le robot dans la simulation et ensuite on peut l'appliquer au robot réel avec le même résultat.
Dance:
- il est amusant et créatif
Suivi de mur:
- très instructif quantà la perception de l'environnement par le robot et les réactions en conséquence
- le meilleure pour comprendre ces mouvements robotiques
Dance:
- facile à programmer
- ludique
- le vrai robot peut le faire
Dance:
- l'exercice était original,
- intéressant,
- rigolo
Suivi de mur:
- nous apprend à programmer une algorithme complexe
- on utilise tout ce qu'on a vu de la programmation d'un robot
Dance:
- original et créatif, j'ai eu beaucoup de plaisir
Dance:
- pourquoi pas ?
Contrôleur:
- la simulation devient concrète
Dance:
- c'était marrant
Dance:
- c'était rigolo de le voir tourner
Train:
- il était plutôt difficile et il fallait bien raisonner pour le réussir
Dance:
- cool
- facile
Dance:
- aussi amusant en simulation qu'en vrai
Dance:
- plus marrant à faire les effets
Dance:
- c'était drôle de pouvoir faire danser le robot
Suivi de mur:
- le plus intéressant à faire
- on peut l'essayer avec le vrai robot
Train:
- il était intéressant
Suivi de mur:
- intéressant
- on pouvait avancer au fur et à mesure
Dance:
- il était marrant
- on pouvait facilement manier notre robot
Suivi de mur:
- permet de voir la complexité de définir les transitions sur des capteurs IR pour faire évoluer le robot
Suivi de mur:
- exercice complet
Results for question 12
Contrôleur (1.2):
- il ne sert à rien
Contrôleur (1.4):
- trop “théorique”
Note: c'est le contrôle à l'aveugle... rien de moins théorique.
Chaîne:
- parce que j'ai rien capté
Contrôleur:
- un peu long (mais bon faut bien commencer)
Aucun
Suivi de mur:
- assez compliqueé, je n'ai pas compris comment il marchait
Move (2.4):
- les capteurs IR sont assez aléatoires, le robot ne détecte un paroi qu'une fois qu'il est rentré dedans
Aucun
Suivi de mur:
- parce que je n'ai pas réussi à la faire
Train:
- Impossible de le faire avec le vrai robot
Contrôleur:
- les questions de théorie sont moins intlressantes
Train:
- trop compliqué comparé au reste
Train:
- on n'a pas pu le tester avec les vrais robots
Contrôleur:
- le contrôle avec les touches 'S'/'X' et 'D'/'C' est difficile
- je n'arrivais pas à le faire tourner comme je voulais
Train:
- difficile d'être précise avec les vitesses des chariots et de la locomotive → collisions fréquentes
Contrôleur:
- difficile d'utiliser les touches 'S', 'X', 'D' et 'C' pour le contrôler
Train:
- difficile avec les informations et indications données
Train:
- problèmes de sauvegarde => perte de temps
Première séance:
- pas clair
Train:
- march pas bien avec 2 programmes différents (sauvegardes)
Aucun
Train:
- il faut trop réfléchir
Train:
- il est assez difficile
Train:
- trop compliqué
Suivi de mur:
- pas simple de trouver un objet carré puis de le faire tourner de 90°
Train:
- je l'ai trouvé difficile
Results for question 13
- les vrais robots
- la facilité d'accès au logiciel
- le fait de pouvoir le faire soi-même
- on peut avancer assez vite
- travailler avec des vrais robots
- le programme est bien fait
- Tout, surtout les petits robots.
- on comprend mieux le lien entre robot et commande
- c'est très intéressant de pouvoir contrôler un robot
- Travailler avec un vrai robot et le voir interagir avec nous.
- apprendre comment fonctionne un robot, c'était passionant de savoir un tout petit peu les utiliser
- ça donne envie d'en savoir plus
- le fait de pouvoir faire soi-même
- enfin une présentation où on ne reste pas assis à écouter
- Le fait d'apprendre le fonctionnement d'un robot.
- C'était convivial.
- Tout, programmer un robot puis faire bouger le vrai
- la bonne présentation
- l'interaction avec le robot
- on comprend comment fonctionne le robot
- contrôle avec le bluetooth
- utilisation d'un vrai robot
- Pouvoir utiliser le robot depuis l'ordi, c'était classe.
- Le concept interactif avec le robot permet d'aller insctinctivement à la découverte de leur fonctionnement
- Le vrai robot
- Utiliser le robot et essayer des trucs avec lui
- l'aspect utilisation concrète de la robotique
- pouvoir faire joujou avec ce robot
- l'aspect interactif
- pouvoir utiliser un vrai robot et s'amuser un peu
- Maintenant j'ai une brève notion de la robotique.
- faire les simulations puis concrétiser
- facilité de compréhension
- L'aspect technologique
- Les couleurs
- Pouvoir contrôler le robot et lui faire faire ce qu'on veut
- Faire fonctionner le robot en vrai
- Oui :)
- Les LEDs qui s'allument et s'éteignent
- J'ai découvert qqch que je ne connaissais pas avant. J'aime bien faire marcher le robot en le programmant
sur l'ordinateur
- Le fait de pouvoir programmer de vrais robots
- Avoir de nouvelles connaissances
- Avoir de nouvelles connaissances
- le fait de pouvoir programmer un robot
- exercices en difficulté croissante
- manipulation des robots rend très concrète la programmation
- Exercices progressifs et concrets
Results for question 14
- M. Müller (le prof.)
- simplifier les termes employés (ou donner des définitions à la fin)
- des fois il y a beaucoup de texte d'un coup
- Il faudrait moins de texte à lire seul. On se perd facilement et on comprend pas toujours.
- Donner plus de détails dans le cours, quand on est perdu ou qu'on a mal compris une donnée.
- présenter quelques vidéos de robots en action
- donner plus de détails sur les robot (où sont-ils fabriqués, leur utilité, leur programmation, …)
- l'introduction peut-être
- la clarté des énoncés
- Je ne vois pas de trucs à améliorer.
- Parler plus et mettre moins de texte sur les feuilles.
- rien
- Avoir plus de temps (en ayant p.ex. Plusieurs fois ce cours)
- Rien, c'est sympa
- La difficulté entre chaque exercice augmente trop rapidement.
- Faire plus de représentation de l'univers robot et pas seulement du robot lui-même.
- plus de temps
- (nous permettre de ramener les robots à la maison)
- Peut-être un peu plus de temps
- manque de temps pour tout terminer
- difficile d'entrer des valeurs précises avec un curseur...
- fluidité du robot réel (Note: à cause de l'image caméra probablement)
- plus de temps à disposition
- pas si évident de connecter le robot, ouvrir les bons fichiers, etc...
- le cours sur la pause de midi, manger en classe c'est pas top
- prendre 4 périodes -> plus de temps
- faire des démos (vision du PC du prof sur tous les écrans)
Programme:
- sauvegardes plus faciles
- possibilité d'entrer les valeurs au clavier (gain de temps)
- Il faudrait que le programme soit plus simple et que le robot puisse faire plus que avant-arrière-tourner-clignoter
- L'utilisation de Webots est assez difficile => faire plus simple (en français)
- Montrer le résultat pour ceux qui n'ont pas réussi
- Le bug de la première leçon, on n'a pas pu faire tous les exercices nous-mêmes, sinon rien.
- Les sauvegardes, sur l'e-puck train ça prend beaucoup de temps
- Un peu plus d'explications pour les exercices
- Montrer les réponses aux exercices
- La liste d'exercices
- Peut-être plus d'explications pour les exercices
- Plus expliquer l'interface avec un exemple projeté (au beamer) qui utilise les capteurs (IR)
- corriger les exercices en plénum
- présentation à l'écran d'un exemple complet
Summary of the answers to questions 1 to 10

Question    1     2     3     4     5     6     7     8     9     10
nb. of 1    0     0     0     0     0     6     0     0     2     0
nb. of 2    0     1     1     2     2     6     0     2     2     2
nb. of 3    5     3     1     2     3     5     3     7     8     16
nb. of 4    6     4     6     8     7     10    4     7     6     11
nb. of 5    18    18    15    7     9     5     5     15    9     3
nb. of 6    4     7     10    14    12    1     21    2     6     0
Average     4.64  4.82  4.97  4.88  4.79  3.15  5.33  4.24  4.09  3.47
Std. Dev.   0.88  0.97  0.94  1.2   1.2   1.44  1.01  1.05  1.42  0.75
Correlation between the questions

      1     2     3     4     5     6     7     8     9     10
1     1     0.06  0.21  0.27  0.24  0.19  0.17  0.36  0.34  0.31
2           1     0.7   0.58  0.44  -0.11 0.19  0.01  0.34  0.25
3                 1     0.43  0.4   0.21  0.14  0.32  0.5   -0.05
4                       1     0.68  -0.01 0.06  -0.02 0.2   0.19
5                             1     0.05  -0.07 -0.06 0.17  0.15
6                                   1     -0.06 0.3   0.14  -0.16
7                                         1     0.33  -0.06 -0.15
8                                               1     0.31  -0.27
9                                                     1     0.08
10                                                          1
Histograms for each question: for each of the ten questions, the number of answers is
plotted against the agreement mark (difficulty mark for question 10), from 1 to 6.
Appendix E
Cyberbotics’ Robot Curriculum
This appendix is presented as a separate report since it is the main realization of this
project. If it is not included with your copy of this document, you can easily download it
from wikibooks [18]. The version reproduced here was generated on March 2, 2009; for
environmental reasons it is reduced to two pages per sheet.
Cyberbotics' Robot Curriculum
Cyberbotics Ltd., Olivier Michel, Fabien Rohrer, Nicolas Heiniger and
wikibooks contributors
Created on Wikibooks, the open content textbooks collection.
PDF generated on March 2, 2009
Copyright © 2009 Wikibooks contributors. Permission is granted to copy, distribute
and/or modify this document under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled "GNU Free Documentation License".
Contents

1 About this book                                                           5
  Further reading                                                           6
2 What is Artificial Intelligence?                                          7
  GOFAI versus New AI                                                       7
  History                                                                   8
  The Turing test                                                           9
  Cognitive Benchmarks                                                     12
  Further reading                                                          13
3 What are Robots?                                                         15
  Robots in our every Day's Life                                           15
  Robots as Artificial Animals                                             17
4 E-puck and Webots                                                        19
  E-puck                                                                   19
  Webots                                                                   21
5 Getting started                                                          27
  Explanations about the Practical Part                                    27
  Get Webots and install it                                                28
  Get the exercise files                                                   28
  Bluetooth Installation and Configuration                                 29
  Open Webots                                                              33
  E-puck Prerequisites                                                     33
6 Beginner programming Exercises                                           35
  Discovery of the e-puck [Beginner]                                       35
  Robot Controller [Beginner]                                              36
  Move your e-puck [Beginner]                                              40
  Simple Behavior: Finite State Machine (FSM) [Beginner]                   42
  Better Collision avoidance Algorithm [Beginner]                          46
  The blinking e-puck [Beginner]                                           46
  *E-puck Dance* [Beginner]                                                47
  Line following [Beginner]                                                48
  Rally [Beginner] [Challenge]                                             49
7 Novice programming Exercises                                             51
  *A train of e-pucks* [Novice]                                            51
  Remain in Shadow [Novice]                                                52
  Introduction to the C Programming                                        53
  K-2000 [Novice]                                                          55
  Motors [Novice]                                                          57
  The IR Sensors [Novice]                                                  58
  Accelerometer [Novice]                                                   60
  Camera [Novice]                                                          62
8 Intermediate programming Exercises                                       69
  Program an Automaton [Intermediate]                                      69
  *Lawn mower* [Intermediate]                                              70
  Behavior-based artificial Intelligence                                   73
  Behavioral Modules [Intermediate]                                        74
  Create a line following Module [Intermediate]                            76
  Mix of several Modules [Intermediate]                                    78
9 Advanced programming Exercises                                           81
  Odometry [Advanced]                                                      81
  Path planning [Advanced]                                                 85
  Pattern Recognition using the Backpropagation Algorithm [Advanced]       89
  Unsupervised Learning using Particle Swarm Optimization (PSO) [Advanced] 97
  SLAM [Advanced]                                                         102
10 Cognitive Benchmarks                                                   109
  Introduction                                                            109
  Rat's Life Benchmark                                                    110
  Other Robotic Cognitive Benchmarks                                      114
A Document Information                                                    117
  History                                                                 117
  PDF Information & History                                               117
  Authors                                                                 117
B GNU Free Documentation License                                          119
Chapter 1

About this book

Learning about Intelligent Robots

This book is intended for students, teachers, hobbyists and researchers interested in intelligent robots. It will help you understand what robots are, what they can do for you and, most interestingly, how to program them. It includes two parts: a short theoretical part and a longer practical part. The practical part is divided into one chapter about the computer configuration and five chapters of exercises corresponding to five levels of difficulty (see the next section). After reading this book, you should be able to design your own intelligent robots.

From Beginners to Robotics Experts

Even if you have never written a computer program before, you will easily learn how to graphically program the behavior of a simple robot. From this first experience, you will be smoothly introduced to higher-level computer programming and discover more possibilities of intelligent robots. This practical investigation is organized in projects, each with an associated difficulty level. You are free to stop at any level if the projects suddenly become too difficult to handle, but if you complete the last levels successfully, you should consider yourself a genuine robotics researcher! Here are the levels of difficulty:

• beginner: no prior knowledge needed, suitable for children from 8 years old and people without a scientific background (see Beginner programming Exercises)

• novice: scientific or technological interest needed, suitable for children from 8 years old (see Novice programming Exercises)

• intermediate: general computer science background needed, intended for students from 12 years old with some interest in computer science (see Intermediate programming Exercises)

• advanced: programming skills needed, intended for post-graduate students and researchers (see Advanced Programming Exercises)

• expert: research spirit needed, intended for post-graduate students and researchers (see Cognitive Benchmarks)

Easy-to-use robotics Tools

The practical part of this book relies on a couple of software and hardware tools that will allow you to practice intelligent robot programming for real. These tools are the e-puck robot and the Webots software. Both are widely used for education and research in universities worldwide, and both are commercially available and well supported. These tools are described in the chapter E-puck and Webots.

Enjoy Robot Competitions

Several exercises are provided along this book. Starting from very simple introductory exercises in the chapter Beginner programming Exercises, the reader will progressively learn how to create more and more advanced robot controllers throughout the following chapters. Finally, the chapter Cognitive Benchmarks will introduce the reader to the realm of robot competitions through a cognitive benchmark: Rat's Life¹.

Further reading

• Cyberbotics Official Webpage
• e-puck website

¹ See their website, Rat's Life Programming Contest
Chapter 2

What is Artificial Intelligence?

Artificial Intelligence (AI) is an interdisciplinary field of study that includes computer science, engineering, philosophy and psychology. There is no widely accepted precise definition of Artificial Intelligence, because intelligence is very difficult to define. John McCarthy defined Artificial Intelligence as "the science and engineering of making intelligent machines"¹, which does not explain what intelligent machines are. Hence, it does not help either to answer the question "Is a chess-playing program an intelligent machine?".

GOFAI versus New AI

AI divides roughly into two schools of thought: GOFAI (Good Old Fashioned Artificial Intelligence) and New AI. GOFAI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as conventional AI, symbolic AI, logical AI or neat AI. Methods include:

• Expert Systems apply reasoning capabilities to reach a conclusion. An Expert System can process large amounts of known information and provide conclusions based on them.

• Case Based Reasoning stores a set of problems and answers in an organized data structure called cases. A Case Based Reasoning system, upon being presented with a problem, finds the case in its knowledge base that is most closely related to the new problem and presents its solutions as an output, with suitable modifications.

• Bayesian Networks are probabilistic graphical models that represent a set of variables and their probabilistic dependencies.

• Behavior Based AI is a modular method of building AI systems by hand.

New AI involves iterative development or learning. It is often bio-inspired and provides models of biological intelligence, like Artificial Neural Networks. Learning is based on empirical data and is associated with non-symbolic AI. Methods mainly include:

• Artificial Neural Networks are bio-inspired systems with very strong pattern recognition capabilities.

• Fuzzy Systems are techniques for reasoning under uncertainty; they have been widely used in modern industrial and consumer product control systems.

• Evolutionary computation applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to a problem. These methods most notably divide into Evolutionary Algorithms (including Genetic Algorithms) and Swarm Intelligence (including Ant Algorithms).

Hybrid Intelligent Systems attempt to combine these two groups. Expert inference rules can be generated through Artificial Neural Networks, or production rules from statistical learning.

History

Early in the 17th century, René Descartes envisioned the bodies of animals as complex but reducible machines, thus formulating the mechanistic theory, also known as the "clockwork paradigm". Wilhelm Schickard created the first mechanical digital calculating machine in 1623, followed by machines of Blaise Pascal (1643) and Gottfried Wilhelm von Leibniz (1671), who also invented the binary system. In the 19th century, Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.

Bertrand Russell and Alfred North Whitehead published Principia Mathematica in 1910-1913, which revolutionized formal logic. In 1931, Kurt Gödel showed that sufficiently powerful consistent formal systems contain true theorems unprovable by any theorem-proving AI that systematically derives all possible theorems from the axioms. In 1941, Konrad Zuse built the first working mechanical program-controlled computers. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), laying the foundations for neural networks. Norbert Wiener's Cybernetics or Control and Communication in the Animal and the Machine (MIT Press, 1948) popularized the term "cybernetics".

Game theory, which would prove invaluable in the progress of AI, was introduced with the paper Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern².

1950's

The 1950s were a period of active efforts in AI. In 1950, Alan Turing introduced the "Turing test" as a way of creating a test of intelligent behavior. The first working AI programs were written in 1951 to run on the Ferranti Mark I machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. John McCarthy coined the term "artificial intelligence" at the first conference devoted to the subject, in 1956. He also invented the Lisp programming language. Joseph Weizenbaum built ELIZA, a chatter-bot implementing Rogerian psychotherapy. The birth date of AI is generally considered to be July 1956, at the Dartmouth Conference, where many of these people met and exchanged ideas.

¹ See John McCarthy, What is Artificial Intelligence?
² Von Neumann, J.; Morgenstern, O. (1953), "Theory of Games and Economic Behavior", New York
1960s-1970s
During the 1960s and 1970s, Joel Moses demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics. Leonard Uhr and Charles Vossler published “A Pattern Recognition Program That Generates,
Evaluates, and Adjusts Its Own Operators” in 1963, which described one of the first machine
learning programs that could adaptively acquire and modify features and thereby overcome the
limitations of simple perceptrons of Rosenblatt. Marvin Minsky and Seymour Papert published
Perceptrons, which demonstrated the limits of simple Artificial Neural Networks. Alain Colmerauer developed the Prolog computer language. Ted Shortliffe demonstrated the power of rule-based
systems for knowledge representation and inference in medical diagnosis and therapy in what is
sometimes called the first expert system. Hans Moravec developed the first computer-controlled
vehicle to autonomously negotiate cluttered obstacle courses.
1980s
In the 1980s, Artificial Neural Networks became widely used due to the back-propagation algorithm,
first described by Paul Werbos in 1974. The team of Ernst Dickmanns built the first robot cars,
driving up to 55 mph on empty streets.
1990s & Turn of the Millennium
The 1990s marked major achievements in many areas of AI and demonstrations of various applications. In 1995, one of Ernst Dickmanns’ robot cars drove more than 1000 miles in traffic at up
to 110 mph, tracking and passing other cars (simultaneously Dean Pomerleau of Carnegie Mellon
tested a semi-autonomous car with human-controlled throttle and brakes). Deep Blue, a chessplaying computer, beat Garry Kasparov in a famous six-game match in 1997. Honda built the first
prototypes of humanoid robots (see picture of the Asimo Robot).
During the 1990s and 2000s AI has become very influenced by probability theory and statistics.
Bayesian networks are the focus of this movement, providing links to more rigorous topics in statistics and engineering such as Markov models and Kalman filters, and bridging the divide between
GOFAI and New AI. This new school of AI is sometimes called ‘machine learning’. The last few
years have also seen a big interest in game theory applied to AI decision making.
The Turing test

Artificial Intelligence is implemented in machines (i.e., computers or robots) that are observed by "Natural Intelligence" beings (i.e., humans). These human beings question whether or not these machines are intelligent. To give an answer to this question, they naturally compare the behavior of the machine to the behavior of another intelligent being they know. If both are similar, then they can conclude that the machine appears to be intelligent.

Alan Turing developed a very interesting test that allows the observer to formally say whether or not a machine is intelligent. To understand this test, it is first necessary to understand that intelligence, just like beauty, is a concept relative to an observer. There is no absolute intelligence, just as there is no absolute beauty. Hence it is not correct to say that a machine is more or less intelligent. Rather, we should say that a machine is more or less intelligent for a given observer.

Figure 2.1: Asimo: Honda's humanoid robot

Starting from this point of view, the Turing test makes it possible to evaluate whether or not a machine qualifies as artificially intelligent relative to an observer.

The test consists of a simple setup where the observer is facing a machine. The machine could be a computer or a robot, it does not matter. The machine, however, should have the possibility to be remote controlled by a human being (the remote controller) who is not visible to the observer. The remote controller may be in a different room from the observer. He should be able to communicate with the observer through the machine, using the available inputs and outputs of the machine. In the case of a computer, the inputs and outputs may be a keyboard, a mouse and a computer screen. In the case of a robot, they may be a camera, a speaker (with synthetic voice), a microphone, motors, etc. The observer doesn't know whether the machine is remote controlled by someone else or behaves on its own. He has to guess. Hence, he will interact with the machine, for example by chatting using the keyboard and the screen, trying to understand whether or not there is a human intelligence behind this machine writing the answers to his questions. He will typically ask very complicated questions, see what the machine answers and try to determine whether the answers are generated by an AI program or come from a real human being. If the observer believes he is interacting with a human being while he is actually interacting with a computer program, then the machine is intelligent for him. He was bluffed by the machine. The table below summarizes all the possible results coming out of a Turing test.

The Turing test helps a lot in answering the question "can we build intelligent machines?". It demonstrates that some machines are indeed already intelligent for some people. Although these people are currently a minority, including mostly children but also adults, this minority is growing as AI programs improve.

Although the original Turing test is often described as a computer chat session (see picture), the interaction between the observer and the machine may take many different forms, including a chess game, playing a virtual reality video game, interacting with a mobile robot, etc.

                                The machine is remote          The machine runs an Artificial
                                controlled by a human          Intelligence program

The observer believes he        undetermined: the observer     successful: the machine is
faces a human intelligence      is good at recognizing         intelligent for this observer
                                human intelligence

The observer believes he        undetermined: the observer     failed: the machine is not
faces a computer program        has trouble recognizing        intelligent for this observer
                                human intelligence

Table 2.1: All possible outcomes of a Turing test

Similar experiments involve children observing two mobile robots performing a prey-predator game and describing what is happening. Unlike adults, who will generally say that the robots were programmed in some way to perform this behavior, possibly mentioning the sensors, actuators and micro-processor of the robot, the children will describe the behavior of the robots using the same words they would use to describe the behavior of a cat running after a mouse. They will attribute feelings to the robots like "he is afraid of", "he is angry", "he is excited", "he is quiet", "he wants to...", etc. This leads us to think that for a child, there is little difference between the intelligence of such robots and animal intelligence.

Figure 2.2: The Turing test
Cognitive Benchmarks

Another way to measure whether or not a machine is intelligent is to establish cognitive (or intelligence) benchmarks. A benchmark is a problem definition associated with a performance metric that allows evaluating the performance of a system. For example, in the car industry, some benchmarks measure the time necessary for a car to accelerate from 0 km/h to 100 km/h. Cognitive benchmarks address problems where intelligence is necessary to achieve a good performance.

Again, since intelligence is relative to an observer, the cognitive aspect of a benchmark is also relative to an observer. For example, if a benchmark consists of playing chess against the Deep Blue program, some observers may think that this requires some intelligence, and hence that it is a cognitive benchmark, whereas other observers may object that it doesn't require intelligence, and hence that it is not a cognitive benchmark.

Some cognitive benchmarks have been established by people outside computer science and robotics. They include IQ tests developed by psychologists, as well as animal intelligence tests developed by biologists to evaluate, for example, how well rats remember the path to a food source in a maze, or how monkeys learn to press a lever to get food.

AI and robotics benchmarks have been established mostly through programming or robotics competitions. The most famous examples are the AAAI Robot Competition, the FIRST Robot Competition, the DARPA Grand Challenge, the Eurobot Competition, the RoboCup competition (see picture) and the Roboka Programming Contest. All these competitions define a precise scenario and a performance metric based either on an absolute individual performance evaluation or on a ranking between the different competitors. They are very well referenced on the Internet, so it should be easy to reach their official web sites for more information.

Figure 2.3: Aibo RoboCup competition

The last chapter of this book will introduce you to a series of robotics cognitive benchmarks (especially the Rat's Life benchmark) for which you will be able to design your own intelligent systems and compare them to others.

Further reading

• Artificial Intelligence
• Embedded Control Systems Design/RoboCup
Chapter 3

What are Robots?

Robots are electro-mechanical machines interacting autonomously with their environment. They include sensors allowing them to perceive the environment. They also include actuators allowing them to modify their environment. Finally, they include a micro-processor allowing them to process the sensory information and control their actuators accordingly.

Robots in our every Day's Life

There are still few applications of robots in our everyday life. The most well-known applications are probably toys and autonomous vacuum cleaners (see the figure with toy robots), but there are also grass-mower robots, mobile robots in factories, robots for space exploration, surveillance robots, etc. These devices are becoming increasingly complex in terms of sensors, actuators and information processing.

Figure 3.1: Two Pleo robots

Figure 3.2: Roomba of first generation: a vacuum cleaner

Figure 3.3: Aibo: Sony's dog robot

Robots as Artificial Animals

Like animals, robots can move, perceive their environment and act. Like animals, they need energy to be able to operate. This is probably why several examples of animal robots were developed for toy applications, including the Sony Aibo dog robot (see figure), the Furby toy and, later, the Pleo dinosaur robot. From the mechanical and electronic points of view, these robots are very advanced. They are equipped with many sensors (distance sensors, cameras, touch sensors, position sensors, temperature sensors, battery level sensors, accelerometers, microphones, wireless communication, etc.) and actuators (motors, speakers, LEDs, etc.). They also include significant processing power, with powerful onboard micro-controllers or micro-processors. Moreover, the latest Aibo robots and several vacuum cleaner robots are able to search for their recharging station, dock on it, recharge their batteries and move on once the battery is charged. This makes them even more autonomous.

However, their learning capabilities and their ability to adapt to unknown situations are often still very limited, which hurts the comparison with real animals in terms of intelligence. When observing an Aibo robot and a real dog, there is no doubt for most observers that the dog is more intelligent than the robot. The same probably applies if you compare the Pleo toy robot with a real reptile. However, since reptiles appear to be more primitive than dogs, the difference of intelligence in the Pleo / reptile case may not be as evident as in the Aibo / dog case.

The conclusion we can draw from the above paragraphs is that the hardware technology for intelligent robots is currently available. However, we still need to invent a better software technology to drive these robots. In other words, we currently have the bodies of our intelligent robots, but we lack their minds. This is probably the reason why most of the toy and vacuum cleaner robots described here are still provided with a remote control...

Hence this book will not focus on robot hardware, but rather on robot software, because robot software is the greatest research challenge to overcome in order to design more and more intelligent robots.
Chapter 4

E-puck and Webots

This chapter introduces you to a couple of useful robotics tools: e-puck, a mini mobile robot, and Webots, a robotics CAD software. In the rest of this book, you will use both of them to practice hands-on robotics. Hopefully, this practical approach will make you understand what robots are and what you can do with them.

E-puck

Introduction

The e-puck robot was designed by Dr. Francesco Mondada and Michael Bonani in 2006 at EPFL, the Swiss Federal Institute of Technology in Lausanne (see figure). It was intended to be a tool for university education, but it is actually also used for research. To help the creation of a community inside and outside EPFL, the project is based on an open hardware concept, where all documents are distributed under a license allowing everyone to use them and develop for the robot. Similarly, the e-puck software is fully open source, providing low-level access to every electronic device and offering unlimited extension possibilities. The e-puck robots are now produced industrially by GCTronic S.à.r.l. (Switzerland) and Applied AI, Inc. (Japan), and are available for purchase from various distributors. You can order your own e-puck robot for about 950 Swiss Francs (CHF) from Cyberbotics Ltd.

The e-puck robot was designed to meet a number of requirements:

• Neat Design: the simple mechanical structure, electronics design and software of e-puck are an example of a clean and modern system.

• Flexibility: e-puck covers a wide range of educational activities, offering many possibilities with its sensors, processing power and extensions.

• Simulation software: e-puck is integrated in the Webots simulation software for easy programming, simulation and remote control of the real robot.

• User friendly: e-puck is small and easy to set up on a table top next to a computer. It doesn't need any cable (it relies on Bluetooth) and provides optimal working comfort.

• Robustness and maintenance: e-puck resists student use and is simple to repair.

• Affordable: the price tag of e-puck is friendly to university budgets.

The e-puck robot has already been used in a wide range of applications, including mobile robotics engineering, real-time programming, embedded systems, signal processing, image processing, sound and image feature extraction, human-machine interaction, inter-robot communication, collective systems, evolutionary robotics, bio-inspired robotics, etc.

Figure 4.1: The e-puck mobile robot

Overview

The e-puck robot is powered by a dsPIC processor, i.e., a Digital Signal Programmable Integrated Circuit. It is a micro-controller produced by the Microchip company which is able to perform efficient signal processing. This feature is very useful in the case of a mobile robot, because extensive signal processing is often needed to extract useful information from the raw values measured by the sensors.

The e-puck robot also features a large number of sensors and actuators, as depicted in the pictures with devices and described in the table. The electronic layout can be obtained at this address: e-puck electronic layout. Each of these sensors will be studied in detail during the practical investigations later in this book.
Figure 4.2: Sensors and actuators of the e-puck robot
Webots

Introduction

Webots is a software for fast prototyping and simulation of mobile robots. It has been developed since 1996 and was originally designed by Dr. Olivier Michel at EPFL, the Swiss Federal Institute of Technology in Lausanne, Switzerland, in the lab of Prof. Jean-Daniel Nicoud. Since 1998, Webots has been a commercial product developed by Cyberbotics Ltd. User licenses of this software have been sold to over 400 universities and research centers worldwide. It is mostly used for research and education in robotics. Besides universities, Webots is also used by research organizations and corporate research centers, including Toyota, Honda, Sony, Panasonic, Pioneer, NTT, Samsung, NASA, Stanford Research Institute, Tanner Research, BAE Systems, Vorwerk, etc.

The use of fast prototyping and simulation software is really useful for the development of most advanced robotics projects. It allows the designers to rapidly visualize their ideas, to check whether they meet the requirements of the application, to develop the intelligent control of the robots and, eventually, to transfer the simulation results into a real robot. Using such software tools saves a lot of time while developing new robotics projects and allows the designers to explore more possibilities than they would if they were limited to using only hardware. Hence both the development time and the quality of the results are improved by using rapid prototyping and simulation software.
Overview

Webots allows you to perform 4 basic stages in the development of a robotics project, as depicted in the figure.

Figure 4.3: Webots development stages

The first stage is the modeling stage. It consists of designing the physical body of the robots, including their sensors and actuators, and also the physical model of the environment of the robots. It is a bit like a virtual LEGO set where you can assemble building blocks and configure them by changing their properties (color, shape, technical properties of sensors and actuators, etc.). This way, any kind of robot can be created, including wheeled robots, four-legged robots, humanoid robots, even swimming and flying robots! The environment of the robots is created the same way, by populating the space with objects like walls, doors, steps, balls, obstacles, etc. All the physical parameters of the objects can be defined, like the mass distribution, the bounding objects, the friction, the bounce parameters, etc., so that the simulation engine in Webots can simulate their physics. The figure with the simulation illustrates the model of an e-puck robot exploring an environment populated with stones. Once the virtual robots and virtual environment are created, you can move on to the second stage.

The second stage is the programming stage. You will have to program the behavior of each robot. In order to achieve this, different programming tools are available. They include graphical programming tools, which are easy to use for beginners, and programming languages (like C, C++ or Java), which are more powerful and enable the development of more complex behaviors. The program controlling a robot is generally an endless loop which is divided into three parts: (1) read the values measured by the sensors of the robot, (2) compute what the next action(s) of the robot should be, and (3) send commands to the actuators to perform these actions. The easiest parts are (1) and (3). The most difficult one is part (2), as this is where all the Artificial Intelligence lies. Part (2) can be divided into sub-parts such as sensor data processing, learning, motor pattern generation, etc.
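To make this three-part loop more concrete, here is a minimal sketch of such a controller written in C against the Webots C API of this period. The differential-wheels functions are the ones used with the e-puck model at the time; the device names ("ps0", "ps7"), base speed and gain values are illustrative assumptions rather than values taken from the exercise files:

#include <webots/robot.h>
#include <webots/differential_wheels.h>
#include <webots/distance_sensor.h>

#define TIME_STEP 64  /* duration of one control step, in milliseconds */

int main() {
  wb_robot_init();  /* initialize the link between the controller and Webots */

  /* get the two front IR sensors and enable their measurements
     (device names are assumed; check the robot model) */
  WbDeviceTag right_ir = wb_robot_get_device("ps0");
  WbDeviceTag left_ir = wb_robot_get_device("ps7");
  wb_distance_sensor_enable(right_ir, TIME_STEP);
  wb_distance_sensor_enable(left_ir, TIME_STEP);

  while (wb_robot_step(TIME_STEP) != -1) {
    /* (1) read the values measured by the sensors */
    double right_value = wb_distance_sensor_get_value(right_ir);
    double left_value = wb_distance_sensor_get_value(left_ir);

    /* (2) compute the next action: slow down the wheel opposite to an
       obstacle so that the robot steers away from it (illustrative gains) */
    double left_speed = 300.0 - 0.5 * right_value;
    double right_speed = 300.0 - 0.5 * left_value;

    /* (3) send the commands to the actuators */
    wb_differential_wheels_set_speed(left_speed, right_speed);
  }

  wb_robot_cleanup();  /* clean up when the simulation quits */
  return 0;
}

Even in this tiny sketch, all the intelligence is concentrated in the two lines of part (2); parts (1) and (3) are plain bookkeeping, which is exactly the point made above.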
The third stage is the simulation stage. It allows you to test whether your program behaves correctly. By running the simulation, you will see your robot executing your program. You will be able to play interactively with your robot, by moving obstacles using the mouse, moving the robot itself, etc. You will also be able to visualize the values measured by the sensors, the results of the processing of your program, etc. You will likely return several times to the second stage to fix or improve your program and test it again in the simulation stage.

Finally, the fourth stage is the transfer to a real robot. Your control program will be transferred into the real robot running in the real world. You can then see whether your control program behaves the same as in simulation. If the simulation model of your robot was designed carefully and was calibrated against its real counterpart, the real robot should behave roughly the same as the simulated robot. If the real robot doesn't behave the same, then it is necessary to come back to the first stage and refine the model of the robot, so that the simulated robot will behave like the real one. In this case, you will have to go through the second and third stages again, but mostly for some fine tuning rather than redesigning your program. The figure with two windows shows the e-puck control window allowing the transfer from the simulation to the real robot. On the left-hand side, you can see the point of view of the simulated camera of the e-puck robot. On the right-hand side, you can see the point of view of the real camera of the robot.

Figure 4.4: Model of an e-puck robot in Webots

Figure 4.5: Transfer from the simulation to the real robot
Features                Technical information

Size, weight            70 mm diameter, 55 mm height, 150 g
Battery autonomy        5 Wh Li-ION rechargeable and removable battery providing about 3 hours autonomy
Processor               dsPIC 30F6014A @ 60 MHz (~15 MIPS), 16-bit microcontroller with DSP core
Memory                  RAM: 8 KB; FLASH: 144 KB
Motors                  2 stepper motors with a 50:1 reduction gear, resolution: 0.13 mm
Speed                   Max: 15 cm/s
Mechanical structure    Transparent plastic body supporting PCBs, battery and motors
IR sensors              8 infra-red sensors measuring ambient light and proximity of objects up to 6 cm
Camera                  VGA color camera with a resolution of 480x640 (typical use: 52x39 or 480x1)
Microphones             3 omni-directional microphones for sound localization
Accelerometer           3D accelerometer along the X, Y and Z axes
LEDs                    8 independent red LEDs on the ring, green LEDs in the body, 1 strong red LED in front
Speaker                 On-board speaker capable of WAV and tone sound playback
Switch                  16-position rotating switch on the top of the robot
PC connection           Standard serial port up to 115 kbps
Wireless                Bluetooth for robot-computer and robot-robot wireless communication
Remote control          Infra-red receiver for standard remote control commands
Expansion bus           Large expansion bus designed to add new capabilities
Programming             C programming with the free GNU GCC compiler. A graphical IDE (integrated development environment) is provided in Webots
Simulation              Webots facilitates the use of the e-puck robot: powerful simulation, remote control, graphical and C programming systems

Table 4.1: Features of the e-puck robot
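As a quick consistency check, the 0.13 mm resolution quoted above follows from the wheel geometry used later in the beginner exercises: with a wheel radius of about 2.1 cm, one wheel revolution covers 2 x pi x 21 mm, i.e. roughly 132 mm, and dividing by the 1000 steps of a stepper motor gives approximately 0.13 mm per step.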
Chapter 5

Getting started

The first section of this chapter (section Explanations about the Practical Part) explains how to use this document. It presents the formalism of the practical part, i.e., the terminology, the icons used, etc.

The following sections will help you configure your environment. To profit as much as possible from this document, you need Webots, an e-puck and a Bluetooth connection between the two. Nevertheless, if you don't have any e-puck, you can still practice a lot of the exercises. Before starting the exercises you need to set up these systems. So please refer to the following sections:

• Section Get Webots and install it describes how to install Webots on your computer.

• Section Bluetooth Installation and Configuration describes how to create a Bluetooth connection between your computer and your e-puck.

• Section Open Webots describes how to launch Webots.

• Section E-puck Prerequisites describes how to update your e-puck's firmware.

• Chapter 4 of the User Guide describes how to model your own world with Webots.

If you want to go further with Webots you can consult the online User Guide or the Reference Manual.

Explanations about the Practical Part

Throughout the practical part, you will find different symbols. They have the following meanings:

: When this symbol occurs, you are invited to answer a question. The questions are related either to the current exercise or to a more general topic. They are referenced by a number of the following form: "[Q." + question number + "]". For example, the third question of an exercise will have the number Q.3.

: When this symbol occurs, you will be invited to practice. For example, you will have to program your robot to obtain a specific behavior. These parts are referenced by a number of the following form: "[P." + practical part number + "]".

: When this symbol occurs, only users who work with a Linux operating system are invited to read what follows. Note that this curriculum was written using Ubuntu Linux.

: Ibid for the Windows operating system. Note that this curriculum was also written using Windows XP.

: Ibid for the Mac OS X operating system.

Each section of this document corresponds to an exercise. Each exercise title finishes with its level between square brackets (for example: [Novice]). When an exercise title, a question number or a practical part number is bounded by the star character (for example: *[Q.5]*), it means that this part is optional, i.e., not essential for the global understanding of the problem but recommended for deepening your knowledge. These parts can also be followed by the Challenge tag. This tag means that the part is more difficult than the others, and that it is optional.

Get Webots and install it

The easiest way to obtain Webots is to visit the following website:

http://www.cyberbotics.com

There, you will find all the information about Webots and its installation.

Get the exercise files

All the files necessary for the exercises (Webots world files, controllers and prototypes) are hosted at sourceforge.net and can be downloaded directly from the Subversion (SVN) repository at this address:

http://robotcurriculum.svn.sourceforge.net/svnroot/robotcurriculum

The SVN contains only the exercise files; you can download the whole SVN tree. The repository is organized in three folders; the project folder is divided in the same way as a Webots project folder. It contains three subdirectories called controllers, protos and worlds. In addition, there is a lib directory for the files reused in more than one exercise. The misc directory contains only a javaLatex directory for the program which generates the PDF version of this wikibook, and two PDF files which are needed in an exercise. Finally, the doc directory contains documents which are used in exercises. The SVN structure is shown below.

robotcurriculum
|
|- doc
|- misc
   |- javaLatex
|- project
   |- controllers
   |- lib
   |- protos
   |- worlds

If you don't know how to use SVN you can look at the Subversion website: http://subversion.tigris.org/ If you are using Windows you might also want to look at TortoiseSVN, an SVN client: http://tortoisesvn.tigris.org/ Wikipedia also has an interesting article about Subversion.
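For example, assuming a standard Subversion command-line client is installed, the whole tree can be fetched with a single checkout command (the name of the target directory is free):

> svn checkout http://robotcurriculum.svn.sourceforge.net/svnroot/robotcurriculum robotcurriculum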
Bluetooth Installation and Configuration

First of all, your computer needs a Bluetooth device to communicate with your e-puck. This kind of device is often integrated in modern laptops. The installation of this device is beyond the scope of this document; however, its correct installation is required. So, refer to its installation manual or to the website of its manufacturer. This document explains only the configuration of the Bluetooth connection between your computer and the e-puck. This connection emulates a serial connection. Refer to your corresponding operating system:

First of all, your Linux operating system needs a recent kernel. Moreover, the following packages have to be installed: bluez-firmware, bluez-pin and bluez-utils¹

The commands lsusb (or lspci, according to your Bluetooth hardware) and hciconfig inform you about the success of the installation.

Switch on your e-puck (with the ON-OFF switch) and execute the following command:

> hcitool scan
Scanning ...
00:13:11:52:DE:A8 PowerBook G4 12"
08:00:17:2C:E0:88 e-puck_0202

The last line corresponds to your e-puck. It shows its MAC address (08:00:17:2C:E0:88) and its name (e-puck 0202). The number of the e-puck (0202) should correspond to its sticker.

Edit /etc/bluetooth/hcid.conf and change the security parameter from "auto" to "user". Edit the /etc/bluetooth/rfcomm.conf configuration file and add the following entry (or modify the existing rfcomm0 entry):

rfcomm0 {
    bind yes;
    device 08:00:17:2C:E0:88;
    channel 1;
    comment "e-puck_0202";
}

rfcomm0 is the name of the connection. If more than one e-puck is used, add as many entries (rfcomm0, rfcomm1, etc.) as there are robots. The device tag must correspond to the e-puck's MAC address and the comment tag must correspond to the e-puck name.

Execute the following commands:

> /etc/init.d/bluez-utils restart
> rfcomm bind rfcomm0

A PIN (Personal Identification Number) will be asked of you (by bluez-pin) when trying to establish the connection. This PIN is a 4-digit number corresponding to the name (or ID) of your e-puck, i.e., if your e-puck is called "e-puck 0202", then the PIN is 0202.

Your connection will be named "rfcomm0" in Webots.

This part² was written using Windows XP. There are probably some differences with other versions of Windows.

After the installation of your Bluetooth device, an icon named "My Bluetooth Places" appears on your desktop. If it is not the case, right-click on the Bluetooth icon in the system tray and select "Start using Bluetooth". Double-click on the "My Bluetooth Places" icon. If you use "My Bluetooth Places" for the first time, this action will open a wizard. Follow the instructions of this wizard until you arrive at the window depicted in the first figure of the wizard. If you have already used "My Bluetooth Places", click on the "Bluetooth Setup Wizard" item. This action will open this window.

Figure 5.1: The first window of the wizard

¹ This part is inspired by the "Bluetooth and e-puck" article written by Bonani Michael on the official e-puck website.
² This part is inspired by the third practical work of the EPFL's Microinformatique course.
In this first window, select the second item: "I want to find a specific Bluetooth device and configure how this computer will use its services.". Switch on your e-puck by using the ON/OFF switch. A green LED on the e-puck should light up. Click on the Next button.

The second window searches for all the visible Bluetooth devices. After a while, an icon representing your e-puck should appear. Select it and click on the Next button.

Figure 5.2: Searching for Bluetooth devices

This action opens the security window. Here you have to choose four digits for securing the connection. Choose the same number as your e-puck (if your e-puck is called "e-puck 0202", choose 0202 as PIN) and click on the Initiate Pairing button.

Figure 5.3: The security window

The window that opens (also shown in a figure) enables you to choose which service you want to use. Select COM1 (add a tick). If there isn't any service, it may be because the battery is too low. This action opens a new window (see the next figure). Here you can select which port is used for the communication. Select for example "COM6".

Figure 5.4: Selection of the services

Figure 5.5: Configure the COM port

To finish, click on the Finish button.

Finally, in the "My Bluetooth Places" window (also shown on a figure), right-click on the "e-puck_0202 COM1" icon and select the "Connect" item.

Figure 5.6: My Bluetooth Places

Your connection will be named "COM6" in Webots.

If your Bluetooth device is correctly installed, a Bluetooth icon should appear in your System Preferences. Click on this icon and on the Paired Devices tab. Switch on your e-puck. A green LED on the e-puck should light up. Then, click on the New... button. It should open a new window which scans for visible Bluetooth devices. After a while, the name of your e-puck should appear in this list. Select the e-puck in the list and click on the Pair button. A passkey is asked. It is the number of your e-puck coded on 4 digits. For example, if your e-puck has the number 43 on its stickers, the passkey is 0043. Enter the passkey and click on the OK button.

Once pairing is completed, you need to specify a serial port to use in order to communicate with Webots. So, click the Serial Ports tab. Using the New... button, create an outgoing port called COM1. Finally, quit the Bluetooth window.

Your connection will be named "COM1" in Webots.
Open Webots

This section explains how to launch Webots. Naturally, it depends on your environment. So please refer to your corresponding operating system:

Open a terminal and execute the following command:

> webots &

You should see the simulation window appear on the screen.

From the Start menu, go to the Program Files | Cyberbotics menu and click on the Webots (+ version) menu item. You should see the simulation window appear on the screen.

Open the directory in which you uncompressed the Webots package and double-click on the webots icon. You should see the simulation window appear on the screen.

E-puck Prerequisites

An e-puck has a computer program (called firmware) embedded in its hardware. This program defines the behavior of the robot at startup.

There are three possible ways to use Webots and an e-puck:

• The simulation: By using the Webots libraries, you can write a program, compile it and run it in a virtual 3D environment.

• The remote-control session: You can write the same program, compile it as before and run it on the real e-puck through a Bluetooth connection.

• The cross-compilation: You can write the same program, cross-compile it for the e-puck processor and upload it on the real robot. In this case, the previous firmware is replaced by your program, and your program no longer depends on Webots: it survives the rebooting of the e-puck.

In the case of a remote-control session, your robot needs a specific firmware in order to talk to Webots.

To upload the latest firmware (or other programs) on the real e-puck, select the menu Tool | Upload to e-puck robot... as depicted in the figure. Then, a message box asks you to choose which Bluetooth connection you want to use. Select the connection which is linked to the e-puck and click on the Ok button. The orange LED on the e-puck will switch on. Then, a new message box asks you to choose which file you want to upload. Select the following file and click on the Ok button:

...webots_root/transfer/e-puck/firmware/firmware-X.Y.Z.hex

Where X.Y.Z is the version number of the firmware. Then, if the firmware (or an older version) isn't already installed on the e-puck, the e-puck must be reset when the window depicted in the figure is displayed.

Figure 5.7: The location of the tool for uploading a program on the e-puck

Figure 5.8: When this window occurs, the e-puck must be reset by pushing the blue button on its top
Chapter 6

Beginner programming Exercises

This chapter is composed of a series of exercises for beginners. You don't need any prior knowledge to go through these exercises. The aim is to learn the basics of mobile robotics by manipulating both your e-puck and Webots. First, you will discover some e-puck devices and their utility. Then, you will acquire the concept of a robot controller. And finally, you will program a simple robot behavior by using a Webots module: BotStudio. This module enables you to program an e-puck robot using a graphical interface. You will discover how to use it and the notions related to it.

Discovery of the e-puck [Beginner]

As explained in the chapter E-puck and Webots, an e-puck has different devices. Throughout this document, you will use some of them: the stepper motors, the LEDs, the accelerometer, the infrared sensors and the camera. In this exercise, you will discover the utility of each of them. The following list gives you a quick definition of these devices. You will see all these devices in more detail in the next chapters.

• Stepper motor: A stepper motor¹ is an electrical motor which breaks up a full rotation into a large number of steps. An e-puck possesses two stepper motors of 1000 steps. They can achieve a speed of about one rotation per second. The wheels of the e-puck are fixed to these motors. They are used to move the robot and can move independently. Moreover, to know the position of the wheels, an incremental encoder can be used. The e-puck encoder returns the number of steps since the last reset of the encoder. For example, this "device" can be used to turn a wheel by precisely one revolution (see the code sketch following this list).

• LED: A LED (Light-Emitting Diode)² is a small device which can emit light using little energy. An e-puck possesses several LEDs: notably, 8 around it, 4 in the e-puck body and 1 in front of it. The front LED is more powerful than the others. The aim of these LEDs is mainly to give feedback on the state of the robot. They can also be used to illuminate the environment.

• Accelerometer: An accelerometer³ is a device which measures the total force applied on it as a 3D vector. An e-puck has a single accelerometer. If your e-puck is at rest, the accelerometer indicates at least the gravitational vector. The accelerometer can be used to detect a collision with a wall or to detect a fall of the robot.

• Infrared (IR) sensor: An e-puck possesses 8 infrared (IR) sensors. An IR sensor is a device which can produce infrared light (light which is outside the range of visible light) and which can measure the amount of received light. It has two kinds of use. First, only the received light is measured. In this configuration, the IR sensor measures the light of the nearby environment; the e-puck can detect, for example, from where a light illuminates it. Second, the IR sensor emits infrared light and measures the received light. If there is an obstacle in front of the IR sensor, the light bounces off it and the measured light difference is bigger. So, the e-puck can estimate the distance between its IR sensors and an obstacle.

• Camera: In front of the e-puck, there is also a VGA camera. The e-puck uses it to discover its direct front environment. It can, for example, follow a line, detect a blob, recognize objects, etc.

Note that the stepper motors and the LEDs are actuators: these devices have an effect on the environment. On the contrary, the IR sensors and the camera are sensors: they measure specific information about the environment. On the following page you can see photos of the mechanical design.
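As a concrete example of using the encoder described in the stepper-motor entry above, the following sketch drives the wheels until the left encoder has counted one full revolution, then stops. It is written against the differential-wheels C API shipped with the Webots version this book targets, and it assumes that one encoder increment corresponds to one motor step; on some setups the encoder resolution is configurable, so check your documentation:

#include <webots/robot.h>
#include <webots/differential_wheels.h>

#define TIME_STEP 64           /* control step in milliseconds */
#define STEPS_PER_TURN 1000.0  /* the e-puck stepper motors make 1000 steps per revolution */

int main() {
  wb_robot_init();

  /* enable and reset the incremental encoders */
  wb_differential_wheels_enable_encoders(TIME_STEP);
  wb_differential_wheels_set_encoders(0.0, 0.0);

  /* start both wheels slowly forward */
  wb_differential_wheels_set_speed(100.0, 100.0);

  /* stop once the left encoder has counted one full wheel revolution */
  while (wb_robot_step(TIME_STEP) != -1) {
    if (wb_differential_wheels_get_left_encoder() >= STEPS_PER_TURN) {
      wb_differential_wheels_set_speed(0.0, 0.0);
      break;
    }
  }

  wb_robot_cleanup();
  return 0;
}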
To successfully go through the following exercises, you have to know about the existence of other devices. The e-puck is powered by a Li-ION battery. It has a running life of about 3 hours. You can switch your e-puck on or off with the ON/OFF switch, which is located near the right wheel. The robot also has a Bluetooth interface which allows communication with your computer or with other e-pucks.

Finally, the e-puck has other devices (like the microphones and the speaker) that you will not use in this document, because the current version of Webots doesn't support them yet.

[Q.1] What is the maximal speed of an e-puck? Give your answer in cm/s. (Hint: the wheel radius is about 2.1 cm. Look at the definition of a stepper motor above.)

[Q.2] Compare your e-puck with a current mobile phone. Which e-puck devices were influenced by this industry?

[Q.3] Sort the following devices either into the actuator category or into the sensor category: a LED, a stepper motor, an IR sensor, a camera, a microphone, an accelerometer and a speaker.

[P.1] Find where these devices are located on your real e-puck. (Hint: look at the figure Epuck devices.png)

¹ More information on: Stepper motor
² More information on: Led
³ More information on: Accelerometer
Robot Controller [Beginner]

In order to understand the concept of a robot controller, you will play the role of the robot controller. You will perceive the sensory information coming from the sensors of the robot and you will be able to control the actuators of the robot. In this exercise, you will not actually program the behavior of the robot, but you will nevertheless control the robot.

Open the World File

First of all, you need to open the world file of this exercise. A world file contains the entire environment of the simulation, i.e., the robot shape, the ground shape, the obstacle shapes and some general information, like the position of the camera and even the direction of the gravitational vector. In the simulation window (window (1) in the figure below), click on the File | Open menu and open:

.../worlds/beginner_robot_controller.wbt

You can also open the world file by clicking on the open button in the toolbar of the simulation window. The e-puck model and its environment are loaded in Webots. In the simulation window, you can see an e-puck on a green board.

The Webots Windows and the simulation Camera

Webots can display several windows. Some of them were already introduced. You will focus especially on two of them (which are depicted in a figure):

• The simulation window (1): This window is probably the most important one. It shows a 3D representation of the simulation. In our case, you can see a virtual e-puck and its virtual environment. If you want to modify the camera orientation, just click and drag with the left button of the mouse anywhere in the panel. Similarly, you can modify the position of the camera by using the right button. (Note for Mac OS X users: if you have a mouse with a single button, hold down the Ctrl key and click to emulate the right click.) Finally, you can also set the zoom by moving the mouse wheel. There are also two important buttons in this window: the play/stop button and the revert button. With the first one, the simulation can be either played or stopped, and with the second one, the entire simulation can be reset.

• The robot window (2): This window shows a 2D representation of the e-puck. The purpose of this window is to visualize the sensor and actuator values in real time during a simulation. The figure with the robot window shows the meaning of the values that can be seen. The red integers correspond to the speeds of the motors; they should initially be null. The green values below correspond to the encoders. The light measured by the IR sensors is represented by green integers, while the distance between an IR sensor and an obstacle is represented by blue integers. So, note that the green and the blue values represent the same device. The red or black rectangles correspond to the LEDs, which are respectively switched on or off. Finally, the accelerometer is represented both by a 2D vector, which corresponds to the inclination of the e-puck, and by a slider, which represents the norm of the acceleration. This window also contains a drop-down menu to configure the Bluetooth connection.

[P.1] By using the camera, identify the front and the back of your virtual e-puck. (Hint: the camera is placed at the front of the e-puck.)

[P.2] Try to place the camera of the simulation window on the e-puck roof in order to see in front of it. Then, use the revert button.

Figure 6.1: The simulation window (1) and the robot window (2)

Figure 6.2: A description of the robot window

The e-puck Movements

Check that the simulation is running by clicking on the start/stop button. Then, click on the virtual e-puck in order to select it. When your e-puck is selected, white lines appear. They represent the bounds of your object for the physical simulation. You will also notice red lines: they represent the directions of the IR sensors, while the magenta lines correspond to the field of view of the camera. Moreover, you can observe the camera values in a little window in the top left part of the simulation window.

On your keyboard, press the "S" key and the "X" key to respectively increase or decrease the speed value of the left motor. Try to press the "D" key and the "C" key to modify the speed of the right motor. Now you can move the virtual robot like a remote-controlled toy. Note that only one key can be pressed at a time.

[P.3] Try to follow the black band around the board by using these four buttons.

[Q.1] Is it easy? What are the difficulties?

[Q.2] There are different kinds of movements with an e-puck. Can you list them? (Ex: the e-puck can go forwards)

[Q.3] Try to use the keyboard arrows and the "R" key. What is the utility of these commands? Explain the difference with the first ones. Are they more practical? Why?

Blinded Movement [Challenge]

The aim of this subsection is to play the role of the robot controller. A robot controller perceives only the values measured by the robot sensors, processes them and sends some commands to the robot actuators, as depicted in the figure. Note that the sensor values are modified by the environment, and that a robot can modify the environment with its actuators.

Figure 6.3: The robot controller receives sensor values (ex: IR sensor, camera, etc.) and sends actuator commands (motors, LEDs, etc.)

[P.4] Hide the simulation window (however, this window has to remain selected so that the keyboard strokes keep working; a way to hide it is to move it partially off-screen) and just look at the sensor values. Try now to follow the wall as before, using only the IR sensor information.

[Q.4] What information is useful? From which threshold value do you observe that a wall is close to the robot?

Let's move your real Robot

You probably have a real e-puck in front of you and you would like to see it moving! Webots can communicate with an e-puck via a Bluetooth connection. It can receive values from the e-puck sensors and send values to command the e-puck actuators. So, Webots can play the role of the controller. This mode of operation is called a remote-control session.

In order to proceed, first configure your Bluetooth connection as explained in the section Bluetooth Installation and Configuration. Stop the simulation with the start/stop button. Switch on your e-puck with the ON/OFF switch. Then, in the robot window, select your Bluetooth connection in the drop-down menu. Behind the e-puck, an orange LED should switch on. Finally, press the start/stop button in order to run the program. Your e-puck should behave the same as in simulation.

[Q.5] Observe the sensor values from the real e-puck. Are they similar to the virtual ones?

[Q.6] Set the motor speeds to 10|10. When the real e-puck moves slowly, it vibrates. That does not occur in simulation. Could you explain this phenomenon?

Your Progression

Congratulations! You finished the first exercise and stepped into the world of robotics. You have already learned a lot:

• What a sensor, an actuator and a robot controller are.

• What kind of problems a robot controller must be able to solve.

• What the basic devices of the e-puck are: in particular, the stepper motors, the LEDs, the IR sensors, the accelerometer and the camera.

• How to run your mobile robot both in simulation and in reality, and what a remote-control session is.

• How to perform some basic operations with Webots.

Move your e-puck [Beginner]

You already learned what a robot controller is. In the following exercises you will create simple behaviors by using a graphical programming interface: BotStudio. This module is integrated in Webots. The aim of this exercise is to introduce BotStudio by discovering the e-puck's movement possibilities.
100
MOVE YOUR E-PUCK [BEGINNER]
41
Open the World File

Similarly to the first exercise, open the following world file:

.../worlds/beginner_move_your_epuck.wbt

Two windows are opened. The first one is the simulation window that you already know. You should observe a world similar to the previous one, except that the board is twice as big, because an e-puck needs room for moving. The second window is the BotStudio window (see the figure).

Figure 6.4: The BotStudio interface

The "forward" State

A BotStudio window is composed of two main parts. The left part is a graphical representation of an automaton. You will learn to use this part and understand the automaton concept in the next exercise. The right part represents an e-puck in two dimensions. On this representation, you can observe the e-puck sensor values in real time. Moreover, you can set the actuator commands. This interface also has a drop-down menu for choosing a Bluetooth connection in order to create a remote-control session. This menu is similar to the two drop-down menus of the robot window that you saw above. At the top, there is a tool menu. This menu enables you to create, load, save or modify an automaton. The last button (the upload button) executes your automaton on the e-puck.

In the BotStudio window, select the "forward" state (the blue rectangle in the middle of the white area) just by clicking on it. A selected rectangle becomes yellow. In the right part of the BotStudio window, you can modify the actuator commands, i.e., the motor speeds and the LED states. If you want to change the motor speeds, click and drag the two yellow sliders. You can set this value between -100 and 100; 0 corresponds to a null speed, i.e., the wheel won't turn. A positive value should turn the wheel forward, and a negative one backwards. If you want to change the state of an LED, click on its corresponding gray circle (red -> on, black -> off, gray -> no modification).

Configure the "forward" state as follows: all the LEDs are lit, and the motor speeds are -30|30. Upload it on the virtual e-puck by clicking on the upload button. If the simulation is running, the virtual e-puck should change its actuator values accordingly. Note that when the simulation is launched, the right part of BotStudio displays the IR sensor values.

[P.1] Set the actuators of your virtual e-puck in order to go forward, to go backwards, to follow a curve and to spin on itself.

[Q.1] For each of these moves, what are the links between the two speeds? (Example: forward: right speed = left speed and right speed > 0 and left speed > 0)

[Q.2] There are 17 LEDs on an e-puck: 9 red LEDs around the e-puck (the back LED is doubled), 1 front red LED, 4 internal green LEDs, 2 LEDs (green and red) for the power supply and 1 orange LED for the Bluetooth connection. Find where they are and with which button or operation you can switch them on. (Hint: some of them are not under your control, and some of them are linked together, i.e., they cannot be switched on or off independently)

The real e-puck's IR Sensors

The aim of this subsection is to create a remote-control session with your real e-puck. This part is similar to the subsection of the previous exercise where you used the real robot. There are just two differences, due to the fact that the BotStudio window is used instead of the robot window. For choosing the Bluetooth connection, there is just one drop-down menu instead of two; please select your Bluetooth connection instead of the simulation item in the top right part of the window. Then, you have to click on the upload button to start the remote-control session.

[P.2] Set the actuators such that the e-puck doesn't move. Try this configuration on your real e-puck by creating a remote-control session. Put your hands around your real e-puck and observe the modifications of the IR sensor values in the BotStudio window.

[Q.3] What are the values of the front left IR sensor when there is an obstacle (for example, a white piece of paper) at 1 cm? At 3 cm? At 5 cm? At 10 cm? Starting from which distance is it difficult to distinguish an obstacle from the noise⁴?
Simple Behavior: Finite State Machine (FSM) [Beginner]
In the previous exercise, you learned to configure a single state. One cannot speak about behavior
yet, because your robot doesn't interact with its environment, i.e., it moves but it doesn't react.
The goal of this exercise is to create a simple behavior. You will discover what an automaton is, how
it is related to the robot controller concept and how to construct an automaton using BotStudio.
Finite State Automaton
A finite state automaton (FSM)⁵ is a model of behavior. It's a possible way to program a robot controller. It's composed of a finite number of states and of transitions between them. In our case, the states correspond to a configuration of the robot actuators (the wheel speeds and the LED states), while the transitions correspond to a condition over the sensor values (the IR sensors and the camera), i.e., under which condition the automaton can pass from one state to another. One state is particularly important: the initial state. It's the state from which the simulation begins. Throughout this curriculum, an automaton will have the same meaning as an FSM.

⁴ Noise is an unwanted perturbation. More information on: Noise

⁵ Source and more information on: Finite state automaton
BotStudio enables you to create an automaton graphically. Once an automaton is created, you
can test it on your virtual or real e-puck. You will observe in the following exercises that this simple
way of programming enables you to create a large range of behaviors.
Open the World File and move Objects
Open the following world file:
.../worlds/beginner_finite_state_machine.wbt
This time, there are two obstacles. You can move an object (an obstacle, the e-puck or even a wall) by
selecting it and dragging and dropping it while pressing the shift key. The reverse button can
be pressed when you want to reset the simulation.
Creation of a Transition
In the BotStudio window, create two states with the new state button. Name the first state
"forward" and the second one "stop" by using the text box on the right. You can change the position
of a state by dragging and dropping the corresponding rectangle. Change the motor speeds of these
states (forward state -> motor speeds: 45|45, stop state -> motor speeds: 0|0). Now, you will
create your first transition. Click on the new transition button. Create a link from the "forward"
state to the "stop" state (the direction is important!). In this transition, you can specify under
which condition the automaton can pass from the "forward" state to the "stop" state. Select this
transition by clicking on its text field. It becomes yellow. Rename it to "front obstacle". By
dragging the two highest red sliders, change the condition values over the front IR sensors to have
">5" for each of them. You should obtain an automaton like the one depicted in the figure
called "First automaton". Select the initial state (the "forward" state) and test this automaton on
your virtual e-puck by clicking on the upload button.
Figure 6.5: First automaton

[Q.1] What is the e-puck behavior?

[Q.2] In the "forward" state, which actuator command is used? Which conditions over the IR sensor values are tested in the "front obstacle" transition?

[P.1] Execute the same automaton on the real e-puck.

You have finished your first collision avoidance algorithm, i.e., your e-puck doesn't touch any wall. This kind of algorithm is a good alternative to a collision detection algorithm, because a collision can damage the robot. Of course it isn't perfect; you can think of a lot of situations where your robot would still touch something.

U-turn

In this subsection, you will extend your automaton in order to perform a U-turn (a spin on itself of 180 degrees) after an obstacle's detection. Add a new state called "U-turn" to your automaton. In this state, set the motor speeds to 30|-30. Add a transition called "timer1" from the "stop" state to the "U-turn" state. Select this transition. This time, don't change the conditions on the IR sensors, but add a delay (1 s) to this condition by moving the yellow slider in the middle of the green circle. The figure called "The timer condition" depicts what you should obtain.

Figure 6.6: The timer condition

[P.2] Run the simulation (always with the "forward" state as initial state) both on the virtual and on the real e-puck.

To perform a perfect U-turn, you still have to stop the e-puck when it has turned enough. Add a new timer ("timer2") transition from the "U-turn" state to the "forward" state (see next figure).

[Q.3] With which delay for the "timer2" transition does the robot perform a perfect U-turn? Is it the same for the real robot? Why?
Figure 6.7: A loop in the automaton

[Q.4] You created an automaton which contains a loop. What are the advantages of this kind of structure?

[Q.5] Imagine the two following states: the "forward" state and the "stop" state. If you want to test the front IR sensor values for passing from the "forward" state to the "stop" state, you have two possibilities: either you create one transition in which you test the two front IR sensor values together, or you create two transitions in which you test the two IR sensor values independently. What is the difference between these two solutions?

During this exercise you created an automaton step by step. But there are still several BotStudio tricks that were not mentioned above:

• You may have to increase the size of the BotStudio window to see the entire automaton.

• You can store your automaton by clicking on the save as button in the BotStudio window. You can also load it by clicking on the load button.

• For switching from a "bigger than" to a "smaller than" condition (or inversely) for an IR sensor, click on the gray part of the IR sensor slider.

• If you don't want to change the speed of a motor in a state, click on the yellow rectangle of the motor slider. The rectangle should disappear and the motor speed will keep its previous value.

• If you want to remove a condition from a transition, set it to 0. The value should disappear.

• There is a slider which wasn't mentioned. This slider is related to the camera. You will learn more about this topic in the Line following exercise.

Your Progression

Thanks to the two previous exercises, you learned:

• What an FSM is and how it relates to a robot behavior

• How to use BotStudio

• How to design a simple FSM

The following exercises will train you to create an FSM by yourself.

Better Collision avoidance Algorithm [Beginner]

At this step, you know what an automaton is. Now, you will reinforce this knowledge through an exercise of parameter estimation. The structure of the automaton is given (the states and the transitions), but it doesn't contain any parameters, i.e., the actuator commands and the conditions over the sensor values aren't set. You will set these parameters empirically.

Open the World File

Open the following world file:

.../beginner_better_collision_avoidance_algorithm.wbt

Collision Avoidance Automaton

Figure 6.8: A better collision avoidance automaton

[P.1] Start with the given automaton (see the figure). Only its structure is given, i.e., there are states and transitions but their parameters aren't set. Find the parameters of each state and each transition such that the e-puck avoids obstacles.

[P.2] Repeat the operation for the real e-puck.

[Q.1] Describe your research method.
The blinking e-puck [Beginner]

Until now, you have only modified existing automata. It is time for you to create your own. In this
practical exercise, you will design your own automaton by manipulating the LEDs.
Open the World File

Open the following world file:

.../worlds/beginner_blinking_epuck.wbt
You may have to increase the size of the BotStudio window to see the entire automaton.
In the simulation window, if you don't see all the LEDs, you can move the camera around the robot
by left-clicking. For this exercise, working directly on the real e-puck is more convenient.
Modify an Automaton
[Q.1] Without launching the automaton, describe the e-puck behavior. Verify your theory by
running the simulation.
[P.1] Modify the current automaton in order to add the following behavior: the 8 LEDs
switch on and off clockwise, each staying on for 0.2 s.

[P.2] Modify the current automaton in order to add the following behavior: when you cover
up the real e-puck with your hands, the LEDs turn in the other direction. Use only 4 LEDs for this
one (front, back, left and right).
Create your own Automaton

A way to design an automaton is first to identify the possible actuator configurations, to create a
state for each of these configurations, to set the parameters of these states, to establish the conditions
to pass from one state to another, to create a transition for each of these conditions and finally to
set the parameters of these conditions. Unfortunately, it is not always so easy. For example, in an
automaton, it's possible to have two states with identical actuator commands. Indeed, if you see
somebody running across a street without any context, you don't know if he's running to
catch a bus or leaving a building on fire. He looks the same, but his internal state is different.
[P.3] Create a new automaton (press the new graph button in BotStudio). Use only
the four following LEDs: the front LED, the back one, the left one and the right one. The goal is
to switch on the LED corresponding to the side of the obstacle. Note that if there are obstacles on
two sides of the robot, two LEDs should be on! You don't need to handle the cases with obstacles on
three or four sides.
[Q.2] If you were asked to repeat the exercise using the 8 LEDs around the e-puck and with
all the cases up to 8 obstacles, would you do it? Why? Do you see a limitation of BotStudio?
*E-puck Dance* [Beginner]
The goal of this exercise is to set the dance floor on fire with your virtual e-puck by creating
the e-puck dance. You can imagine a dance as a succession of movements with the same rhythm.
You can model that easily with a finite state automaton.
Open the World File

Open the following world file:

.../worlds/beginner_epuck_dance.wbt
This opens a disco dance floor. Moreover, there is already a very small example of what you can
achieve. Hopefully you will find a better e-puck dance than the existing one.
Imagine your Dance

[P.1] Observe the existing dance. The automaton has a loop shape. The time of every transition
is identical. Create a new automaton or modify the existing one. First of all, choose a rhythm. The
rhythm chosen in the example is a movement every second, which implies that every timer is set to 1 s.
Then, you should create a state for each movement you want to see during the global loop (note
that if you want to have the same movement twice during the main loop, you have to create two
states). Then, you have to set the state parameters according to your rhythm. Finally, link each
state with a timer transition. (Hints: for producing a beautiful dance, LEDs are welcome. You can
also perform semi-movements.)
Line following [Beginner]
The goal of this exercise is to explore the last device available in BotStudio: the camera. With the
e-puck camera, you can obtain information about the ground in front of the robot. BotStudio computes
"in real time" the center of the black line in front of the e-puck. The camera is another e-puck sensor,
and the center of the front line is the sensor value of this camera.
A linear Camera
An e-puck has a camera at its front. Its resolution is 480x640. For technical reasons, it has a width
of 480 and a height of 640. The most important information for following a line is the last line of
the camera image (see the figure called "The field of view of the linear camera"). For this reason, only
the last line of the camera image is sent from the e-puck to Webots. Finally, an algorithm is applied on
this last line in order to find the center of the black line.

The problem is that the e-puck sees only about 5.5 cm in front of it, and sees a line of about
4.5 cm in width. Moreover, this information is refreshed only about 2 times per second. That is very
little information!
[Q.1] Imagine that, every 5 s, at 4 m in front of you, you can only see a line 3 m wide,
always at the same angle. What would your strategy be for following the line? Imagine that the
line is intricate. What is the most important factor for following the line?
Open the World File
Open the following world file:
.../worlds/beginner_linear_camera.wbt
Figure 6.9: The field of view of the linear camera
This opens a long world. A black line is drawn on the ground. Note that there is some grain
(Gaussian noise) on the e-puck camera values in order to be more realistic. Indeed, a real camera
doesn't acquire perfect values; some noise is always present on a real camera, and it comes from several
factors.
Line following Automaton
In the BotStudio interface, you will also find a condition over the camera (see the figure called "The linear
camera condition in BotStudio"), represented by a slider. The value represents the center of the
black line in front of the robot. When a transition is selected, you can change the condition over
the camera by dragging the slider, and you can change the direction of the test by clicking on the
text field (e.g. "<5" becomes ">5"). Note that if there is no line in front of the robot, the center
value can be wrong.
[P.1] Run the given automaton both in simulation and in reality. For creating a
real environment, you can draw a line with a large black pen on a big white piece of paper (e.g. A2
format). The black line must stay far from the paper's edges.
[P.2] Observe the direction (bigger than or smaller than) of the two conditions.
[P.3] Try to make the e-puck go as fast as possible by changing the parameters of the states and
of the transitions.
Figure 6.10: The linear camera condition in BotStudio

Rally [Beginner] [Challenge]

Open the following world file:

.../worlds/beginner_rally.wbt
[P.1] Create an automaton which can perform a complete turn of this path. (Hint: adapt
your speed!)
Chapter 7

Novice programming Exercises

This chapter is composed of a series of exercises for novices. We assume that the exercises of
the previous chapter have been mastered. BotStudio is still used for the first exercises; more complex
automata will be created using this module. Then, C programming will be introduced, and you will
discover the e-puck devices in detail.

*A train of e-pucks* [Novice]

The aim of this exercise is to create a more complex automaton. Moreover, you will manipulate
several virtual e-pucks at the same time. Since this simulation uses several e-pucks, your computer
should be fairly recent to avoid lag and glitches.

Open the World File

Open the following world file:

.../worlds/novice_train.wbt

The e-puck which is the closest to the simulation camera is the last of the queue. Each e-puck
has its own BotStudio window. The order of the BotStudio windows is the same as the order of the
e-pucks, i.e., the first e-puck of the queue is linked to the upper BotStudio window.

Upload the Robot Controller on several e-pucks

Stopping the simulation before uploading is recommended. Choose a BotStudio window (say
the lowest one). You can modify its automaton as usual. You can use either the same robot
controller for every e-puck or a different robot controller for every e-puck. If you want to use the
same controller for every e-puck, save the desired automaton and load the saved file on the other
e-pucks. This way is recommended.

[P.1] Create an automaton so that the e-pucks form a chain, i.e., the first e-puck goes
somewhere, the second e-puck follows the first one, the third one follows the second one, etc. (Hints:
Create two automata: one for the first e-puck of the chain (the "locomotive") and one for the others
(the "chariots"). The locomotive should go significantly slower than the chariots.)

If you don't succeed, you can open the following automaton and try to improve it:

.../novice_train/novice_train_corr.bsg

Note that there are two automata in a single file. The chariots must have the "chariot init"
state as initial state, and the locomotive must have the "locomotive init" state as initial state.

Remain in Shadow [Novice]

The purpose of this exercise is to create a wall following automaton. You will see that it isn't
so easy. This exercise still uses BotStudio, but is more difficult than the previous ones.

Open the World File

Open the following world file:

.../worlds/novice_remain_in_shadow.wbt

You will observe that the world has changed. The board is larger, there are more obstacles and there
is a doubled wall. The purpose of the doubled wall is to perform either an inner or an outer wall
following, and the purpose of the obstacles is to turn around them. So, don't hesitate to move the
e-puck or the obstacles by shift-clicking.

Wall following Algorithm

Let's think about a wall following algorithm. There are a lot of different ways to approach this
problem. The proposed solution can be implemented with an FSM. First of all, the e-puck has to go
forward until it meets an obstacle, then it spins on itself to the left or to the right (let's say to the right) to
be perpendicular to the wall. Then, it has to follow the wall. Of course, it doesn't know the wall's
shape. So, if it is too close to the wall, it has to rectify its trajectory to the left; on the contrary, if it
is too far from the wall, it has to rectify to the right.

[P.1] Create the automaton which corresponds to the preceding description. Test it only on
the virtual e-puck. (Hints: Setting the parameters of the conditions is a difficult task. If you can't
find a solution, run your e-puck close to a wall, observe the sensor values and think about the
conditions. Don't hesitate to add more states (like "easy rectification — left" and "hard rectification
— left").)

If you can't find a solution, open the following automaton in BotStudio (it's a good
beginning but it's not a perfect solution):

.../controllers/novice_remain_in_shadow/novice_remain_in_shadow_corr.bsg

[P.2] If you tested your automaton on an inner wall, modify it to work on an outer wall,
and inversely.
[P.3] Until now, the environment hasn't changed. Add a transition in order to find a wall
again if you lose contact with an obstacle. If the e-puck is turning around an obstacle which is
removed, it should find another wall.

[P.4] Modify the parameters of the automaton in order to reproduce the same behavior on a
real e-puck.

Your Progression

With the two previous exercises, you learned:

• How to construct a complex FSM using BotStudio

• What the limitations of BotStudio are

The following exercises will teach you another way to program your e-puck: the C language.

Introduction to the C Programming

Until now, you have programmed the robot behavior by using BotStudio. This tool enables you to
program simple behaviors quickly and intuitively. You probably also noticed the limitations of
this tool, in particular that the freedom of expression is limited and that some complex programs
quickly become unreadable. For this reason, a more powerful programming tool is needed: the C
programming language. This language offers much greater freedom of expression. With it
and the Webots libraries, you can program the robot controller by writing programs. The counterpart
is that you have to know its syntax. Learning the C language is outside the focus of the present
document; please refer to a book about C programming. There are also very
useful tutorials on the web about this subject, like the wikibooks on C programming.

You don't need to know every subtlety of this language to successfully go through the following
exercises. If you are a total beginner, focus first on: variables, arrays, functions and control structures
(Boolean tests, if ... else ..., switch, for, while, etc.).

In the five following exercises, you will discover the e-puck devices independently. The exercises
about the devices are sorted according to their difficulty, i.e., respectively: the LEDs, the stepper
motors, the IR sensors, the accelerometer and the camera.

The Structure of a Webots Simulation

To create a simulation in Webots, two kinds of files are required:

• The world file: It defines the virtual environment, i.e., the shape, the physical bounds, the
position and the orientation of every object (including the robot), and some other global
parameters like the position of the windows, the parameters of the simulation camera, the
parameters of the light sources, the gravitational vector, etc. This file is written in the VRML¹
language.

• The controller file: It is the program used by the robot. It defines the behavior of the robot.
You already saw that a controller file can be a BotStudio automaton, and you will see that it
can also be a C program which uses the Webots libraries.

¹ The Virtual Reality Markup Language (VRML) is a standard file format for representing 3D objects.

Note that almost all the world files of this document use the same definition of the e-puck, located at:

webots_root/project/default/protos/EPuck.proto

This file contains the description of a standard e-puck, including notably the names of the devices.
This will be useful for getting the device tags.

The simplest Program

The simplest Webots program is written here. This script uses a Webots library (refer to the first
line) to obtain the basic robot functionality.

// included libraries
#include <webots/robot.h>

// global defines
#define TIME_STEP 64 // [ms]

// global variables...

int main() {
  // Webots init
  wb_robot_init();

  // put here your initialization code

  // main loop
  while (wb_robot_step(TIME_STEP) != -1) {
  }

  // put here your cleanup code

  // Webots cleanup
  wb_robot_cleanup();

  return 0;
}

[P.1] Read the programming code carefully.

[Q.1] What is the meaning of the TIME_STEP variable?
[Q.2] What could be defined as a global variable?

[Q.3] What should we put in the main loop? And in the initialization section?
K-2000 [Novice]

This exercise uses C programming for the first time. You will create a loop of LEDs around the
robot, as you did before in the exercise The blinking e-puck.
Open the World File and the Text Editor Window
Open the following world file:
.../worlds/curriculum_novice_k2000.wbt
This opens a small board. Indeed, when playing with the LEDs, one doesn't need a lot of
room. You will also notice a new window: the text editor window (see the figure).
In this window you can write C programs, and load, save and compile² them (to compile, click on the
compile button).

Figure 7.1: The text editor window
[P.1] Run the program on the virtual e-puck (normally, it should already be running after the
opening of the world file; if it is not the case, compile the controller and revert the
simulation). Observe the e-puck behavior.
[P.2] Observe carefully the C programming code of this exercise. Note the second include
statement.
[Q.1] With which function is the state of an LED changed? In which library can you find
this function? Explain the utility of the i global variable in the main function.
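As an illustration of these LED functions, here is a minimal sketch of a LED chaser. It is not the actual controller of this exercise; the device names "led0" to "led7" follow the EPuck.proto definition mentioned earlier, and the rotation period is an arbitrary choice.

// A minimal LED chaser sketch: one lit LED rotates around the robot.
#include <webots/robot.h>
#include <webots/led.h>

#define TIME_STEP 64 // [ms]
#define NB_LEDS 8

int main() {
  wb_robot_init();

  // get a device tag for each of the 8 LEDs around the robot
  WbDeviceTag leds[NB_LEDS];
  char name[5] = "led0";
  int i;
  for (i = 0; i < NB_LEDS; i++) {
    leds[i] = wb_robot_get_device(name);
    name[3]++; // "led0" -> "led1" -> ... -> "led7"
  }

  int lit = 0; // index of the currently lit LED
  while (wb_robot_step(TIME_STEP) != -1) {
    wb_led_set(leds[lit], 0);  // switch off the previous LED
    lit = (lit + 1) % NB_LEDS; // move to the next one
    wb_led_set(leds[lit], 1);  // switch it on
  }

  wb_robot_cleanup();
  return 0;
}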
Simulation, Remote-Control Session and Cross-Compilation

There are three utilization modes for the e-puck:

• The simulation: By using the Webots libraries, you can write a robot controller, compile it
and run it in a virtual 3D environment. That is what you did in the previous subsection.

• The remote-control session: You can write the same program, compile it as before and
run it on the real e-puck through a Bluetooth connection.

• The cross-compilation: You can write the same program, cross-compile it for the e-puck
processor and upload it on the real robot. In this case, the original e-puck program (the firmware)
is replaced by your program; your program then no longer depends on Webots and
survives a reboot of the e-puck.
² Compiling is the act of transforming source code into object code.
If you want to create a remote-control session, you just have to select your Bluetooth connection instead
of simulation in the robot window. Your robot must have the right firmware, as explained in the
section E-puck prerequisites.

For the cross-compilation, first select the Build | Cross-compile... menu in the text editor window
(or click on the corresponding icon in the tool bar). This action will create a .hex file which can be
executed on the e-puck. When the cross-compilation is done, Webots asks you to upload the
generated file (located in the directory of the e-puck). Click on the Yes button and select which
Bluetooth connection you want to use. The file should be uploaded on the e-puck. You can also
upload a file by selecting the Tool | Upload to e-puck robot... menu in the simulation window.
To know in which mode the robot is running, you can call the wb_robot_get_mode() function. It
returns 0 in a simulation, 1 in a cross-compilation and 2 in a remote-control session.
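For example, a controller can branch on this value. The following sketch is an illustration, not part of the exercise code:

// A minimal sketch: adapt the controller to the current utilization mode.
#include <webots/robot.h>

void setup() {
  int mode = wb_robot_get_mode();
  if (mode == 0) {
    // simulation: e.g. use settings suited to the simulated sensors
  } else if (mode == 1) {
    // cross-compilation: running directly on the real e-puck
  } else if (mode == 2) {
    // remote-control session: Webots drives the real e-puck
  }
}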
[P.3] Cross-compile the program and upload it on the e-puck.
[P.4] Upload the firmware on the e-puck (see the section E-puck prerequisites). Compile
the program and launch a remote-control session.
Modifications
[P.5] Modify the given programming code in order to change the direction of the LED rotation.

[P.6] Modify the given programming code in order to blink all the LEDs in a synchronous manner.
[P.7] Determine by testing which LED corresponds to each device name. For example, the
"led0" device name corresponds to the front-most LED of the e-puck.
Motors [Novice]

The goal of this exercise is to use some other e-puck devices: the two stepper motors.

Open the World File

Open the following world file:

.../worlds/novice_motors.wbt

The Whirligig

"A stepper motor is an electromechanical device which converts electrical pulses into discrete mechanical movements"³. It can divide a full rotation into a large number of steps; an e-puck stepper motor has 1000 steps per full rotation. This kind of motor has a precision of ±1 step and is directly compatible with digital technologies. You can set the motor speeds by using the wb_differential_wheels_set_speed(...) function. This function takes two arguments: the left motor speed and the right motor speed. The e-puck accepts speed values between -1000 and 1000; the maximum speed corresponds to about one rotation per second.

³ Eriksson, Fredrik (1998), Stepper Motor Basics

To know the position of a wheel, the encoder device can be used. Note that the e-puck doesn't have a physical encoder device which measures the position of the wheel, like some other robots do. Instead, a counter is incremented when a motor step is performed, or decremented when the wheel turns the other way. This is a good approximation of the wheel position. Unfortunately, if the e-puck is blocked while the motor keeps stepping, the encoder counter is still incremented even though the robot doesn't actually move.

The encoders are not implemented yet in the remote-control mode. So, if you want to use them on a real e-puck, use the cross-compilation.

[Q.1] Without running the simulation, describe what the e-puck behavior will be.

[P.1] Run the simulation on the virtual e-puck.

Modifications

[P.2] Modify the given programming code to obtain the following behavior: the e-puck goes forward and stops after exactly one full rotation of its wheels. Try your program both on the real and on the virtual e-puck. (A sketch of this idea follows the [P.4] challenge below.)

[P.3] Modify the given programming code to obtain the following behavior: the e-puck goes forward and stops after exactly 10 cm. (Hint: the radius of an e-puck wheel is 2.1 cm)

[P.4] [Challenge] Modify the given programming code to obtain the following behavior: the e-puck moves to specific XZ coordinates (relative to the initial position of the e-puck).
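As announced in [P.2], here is a minimal sketch of the encoder idea under the assumptions stated in the text (1000 steps per wheel rotation). It is one possible approach, not the exercise's official solution:

// A minimal sketch: go forward and stop after one full wheel rotation,
// measured with the wheel encoders.
#include <webots/robot.h>
#include <webots/differential_wheels.h>

#define TIME_STEP 64            // [ms]
#define STEPS_PER_ROTATION 1000 // encoder steps for a full wheel rotation

int main() {
  wb_robot_init();

  wb_differential_wheels_enable_encoders(TIME_STEP);
  wb_differential_wheels_set_encoders(0.0, 0.0); // reset the counters
  wb_differential_wheels_set_speed(500, 500);    // go forward

  while (wb_robot_step(TIME_STEP) != -1) {
    if (wb_differential_wheels_get_left_encoder() >= STEPS_PER_ROTATION) {
      wb_differential_wheels_set_speed(0, 0); // stop after one rotation
      break;
    }
  }

  wb_robot_cleanup();
  return 0;
}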
The IR Sensors [Novice]

In this exercise, you will manipulate the IR sensors. This device is less intuitive than the previous
ones. Indeed, an IR sensor can have different uses. In this exercise, you will see what information
is provided by this device and how to use it. The log window will also be introduced.
The IR Sensors

Eight IR sensors are placed around the e-puck in an irregular way: there are more sensors at the front of the e-puck than at the back. An IR sensor is composed of two parts: an IR emitter and a photo-sensor. This configuration enables an IR sensor to play two roles.

Firstly, they can measure the distance between themselves and an obstacle. Indeed, the IR emitter emits infrared light which bounces off a potential obstacle. The received light is measured by the photo-sensor. The intensity of this light directly gives the distance to the object. This first use is probably the most interesting one, because it gives knowledge of the nearby environment of the e-puck. In Webots, an IR sensor in this mode of operation is modeled by a distance sensor.

Note that the values measured by an IR sensor behave in a non-linear way. To illustrate this fact, the following experiment was performed. An e-puck is placed in front of a wall. When the experiment begins, the e-puck moves backwards. The values of the front right IR sensor are stored in a file. These values are plotted in the figure. Note that the time steps are not identical for the two curves. The distance between the e-puck and the wall grows linearly, but the measurements of the IR sensor are non-linear. Then, observe the offset value. This value depends principally on the ambient light, so this offset is often meaningless. Note also that the distance estimation depends on the obstacle (color, orientation, shape, material) and on the IR sensor (properties and calibration).

Figure 7.2: An e-puck is in front of a wall, and moves backwards. This figure plots the front right distance sensor value as a function of the simulation steps. The left plot depicts the real values and the right plot depicts the simulated values.

Secondly, the photo-sensor can be used alone. In that case, the IR sensors quantify the amount of received infrared light. A typical application is phototaxis, i.e., the robot follows or avoids a light stimulus. This behavior is inspired by biology (particularly by insects). In Webots, an IR sensor in this mode of operation is modeled by a light sensor. Note that in this exercise, only the distance sensors are manipulated, but the light sensors are also ready to use.

Open the World File

Open the following world file:

.../worlds/novice_ir_sensors.wbt

You should observe a small board, on which there is just one obstacle. To test the IR sensors, you can move either the e-puck or the obstacle. Please note also the window depicted in the figure. This window is called the log window. It displays text. Two entities can write in this window: either a robot controller or Webots.

Figure 7.3: The log window

Calibrate your IR Sensors

[P.1] Without running the simulation, observe carefully the programming code of this exercise.

[Q.1] Why is an offset subtracted from the distance sensor values? Describe a way to compute this offset.

[Q.2] What is the utility of the THRESHOLD_DIST variable?

[Q.3] Without running the simulation, describe what the e-puck behavior will be.

[P.2] Run the simulation both on the virtual and on the real e-puck, and observe the e-puck behavior when an object approaches it.

[Q.4] Describe the utility of the calibrate(...) function.

[P.3] Until now, the offset values were defined arbitrarily. The goal of this part is to calibrate the IR sensor offsets of your real e-puck by using the calibrate function. First of all, in the main function, uncomment the call to the calibrate function, and compile the program. Your real e-puck must be in a clear area. Run the program on your real e-puck. The time spent in the calibration function depends on the number n. Copy-paste the results of the calibrate function from the log window into your program, in the ps_offset_real array. Compile again. Your IR sensor offsets are now calibrated to your environment.

[P.4] Determine by testing which IR sensor corresponds to each device name. For example, the "ps2" device name corresponds to the right IR sensor of the e-puck.

Use the Symmetry of the IR Sensors

Fortunately, the IR sensors can be used more simply. To bypass the calibration of the IR sensors and the processing of their returned values, the e-puck's symmetry can be used. If you want to create a simple collision avoidance algorithm, the difference between the left and the right side (the delta variable of the example) matters more than the raw values of the IR sensors.

[P.5] Uncomment the last part of the run function and compile the program. Observe the code and the e-puck behavior. Note that only the values returned by the IR sensors are used. Try this part also on the real robot, either in remote-control mode or in cross-compilation.

[P.6] Get the same behavior as before, but backwards instead of forwards.

[P.7] Simplify the code as much as possible while keeping the same behavior.
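The following sketch illustrates this symmetry idea. It is an illustration rather than the exercise's run function; the gain, the base speed and the device names ("ps0" for the front right sensor, "ps7" for the front left one) are assumptions:

// A minimal sketch: steer according to the left/right IR difference,
// without any sensor calibration.
#include <webots/robot.h>
#include <webots/distance_sensor.h>
#include <webots/differential_wheels.h>

#define TIME_STEP 64 // [ms]

int main() {
  wb_robot_init();

  WbDeviceTag ps0 = wb_robot_get_device("ps0"); // front right IR sensor
  WbDeviceTag ps7 = wb_robot_get_device("ps7"); // front left IR sensor
  wb_distance_sensor_enable(ps0, TIME_STEP);
  wb_distance_sensor_enable(ps7, TIME_STEP);

  while (wb_robot_step(TIME_STEP) != -1) {
    // only the difference matters, so the offsets cancel out
    double delta = wb_distance_sensor_get_value(ps0) -
                   wb_distance_sensor_get_value(ps7);
    // an obstacle on the right (positive delta) makes the robot veer
    // left, and inversely; the 0.5 gain is an arbitrary assumption
    wb_differential_wheels_set_speed(200 - 0.5 * delta, 200 + 0.5 * delta);
  }

  wb_robot_cleanup();
  return 0;
}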
Accelerometer [Novice]

In this exercise, and for the first time in this curriculum, the e-puck accelerometer will be used. You
will learn the utility of this device and how to use it. The explanation of this device refers to some
notions that are out of the scope of this document, such as acceleration and vectors. For
more information about these topics, refer respectively to a physics book and to a mathematics
book.
Open the World File
Open the following world file:
.../worlds/novice_accelerometer.wbt
This opens a world containing a springboard. The utility of this object is to observe the
accelerometer behavior on an incline and during the fall of an e-puck. For your tests, don't hesitate
to move your e-puck (SHIFT + mouse buttons) and to use the Step button. For moving the e-puck
vertically, use SHIFT + the mouse wheel.
The Accelerometer

Acceleration can be defined as the change of the instantaneous speed. Its SI unit is m/s². The
accelerometer is a device which measures its own acceleration (and so the acceleration of the e-puck)
as a 3D vector. The axes of the accelerometer are depicted in the figure. At rest, the accelerometer
measures at least the gravitational acceleration. The modifications of the motor speeds, the robot's
rotation and external forces also influence the acceleration vector. The e-puck accelerometer is
mainly used for:

• Measuring the inclination and the orientation of the ground under the robot when it is at
rest. These values can be computed by using trigonometry on the three components of
the gravitational acceleration vector. In the controller code of this exercise, observe the
getInclination(float x, float y, float z) and the getOrientation(float x, float y,
float z) functions. (A sketch of this idea follows this list.)

• Detecting a collision with an obstacle. Indeed, if the norm of the acceleration (observe the
getAcceleration(float x, float y, float z) function of this exercise) changes abruptly without any change of the motor speeds, a collision can be assumed.

• Detecting the fall of the robot. Indeed, if the norm of the acceleration becomes too low, the
fall of the robot can be assumed.
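As announced in the first item of the list above, here is a minimal sketch of the inclination idea. It is not the getInclination(...) function of the exercise controller, and it assumes that the vertical axis of the accelerometer is y:

// A minimal sketch: at rest, the accelerometer measures only gravity,
// so the ground inclination can be recovered with trigonometry.
#include <webots/robot.h>
#include <webots/accelerometer.h>
#include <math.h>
#include <stdio.h>

#define TIME_STEP 64       // [ms]
#define RAD_TO_DEG 57.2958 // approximately 180/pi

int main() {
  wb_robot_init();

  WbDeviceTag acc = wb_robot_get_device("accelerometer");
  wb_accelerometer_enable(acc, TIME_STEP);

  while (wb_robot_step(TIME_STEP) != -1) {
    const double *a = wb_accelerometer_get_values(acc); // [ax, ay, az]
    double norm = sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
    if (norm > 1e-6) {
      // angle between the measured vector and the assumed vertical axis:
      // 0 degrees on flat ground, larger on an incline
      double inclination = acos(a[1] / norm) * RAD_TO_DEG;
      printf("inclination: %f degrees\n", inclination);
    }
  }

  wb_robot_cleanup();
  return 0;
}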
[Q.1] What should be the direction of the gravitational acceleration? What is the direction
of the vector measured by the accelerometer?

[P.1] Verify your previous answer, i.e., in the simulation, when the e-puck is at rest, observe
the direction of the gravitational acceleration in the log window. (Hint: use the Step
button.)
Figure 7.4: The axis orientation of the e-puck accelerometer
Practice

[P.2] Get the following behavior: when the e-puck falls, the body LED switches on. If you want
to try this in reality, remember that an e-puck is breakable.

[P.3] Get the following behavior: only the lowest LED switches on (relative to the vertical).
Camera [Novice]

In the Line following exercise, you already learned the basics of the e-puck camera. You used
the e-puck camera as a linear camera through the BotStudio interface. With that interface,
you are more or less limited to following a black line. You will observe in this exercise that you can use
the camera differently by using C programming. Firstly, your e-puck will use a linear camera:
it will be able to follow a line of a specific color. Secondly, your e-puck will follow a light source
symbolized by a light point, using the entire field of view of the camera.
Open the World File
Open the following world file:
.../worlds/novice_linear_camera.wbt
This opens a long board on which three colored lines are drawn.
Linear Camera — Follow a colored Line

On the board, there is a cyan line, a yellow line and a magenta line. These three colors are not chosen randomly: they have the particularity of being seen independently through a red, green or blue filter (the primary colors). The four figures depict the simulation first in colored mode (RGB) and then with a red, green or blue filter applied. As depicted in the table, if a red filter is used, the cyan color has no component in the red channel, so it appears black, while the magenta color has a component, so it appears white, like the ground.

Figure 7.5: The RGB channels of the current world

In the current configuration, the e-puck camera acquires its image in RGB mode. Webots also uses this mode of rendering. So, it's easy to separate these three channels and to see only one of these lines. In the code, this is done by using the wb_camera_image_get_red(...), wb_camera_image_get_green(...) and wb_camera_image_get_blue(...) functions of the webots/camera.h library.

Color     Red channel   Green channel   Blue channel
Cyan      0             1               1
Magenta   1             0               1
Yellow    1             1               0
Black     0             0               0
White     1             1               1

Table 7.1: RGB components of the used colors

[Q.1] According to the current robot controller, the e-puck follows the yellow line. What do you have to change in the controller code to follow the blue line?

[Q.2] What is the role of the find_middle(...) function? (Note: the same find_middle(...) function is used in BotStudio)

[Q.3] The speed of the motors depends twice on the delta integer. Why?

[Q.4] Explain the difference between the two macro variables TIME_STEP and TIME_STEP_CAM.
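To make these channel functions concrete, here is a minimal sketch that scans the last line of the image through the red channel and computes the center of the dark region, in the spirit of find_middle(...). It is an illustration, not the exercise controller; the 52x39 resolution matches the world file shown later, and the threshold of 128 is an arbitrary assumption:

// A minimal sketch: locate the center of a dark line on the last image
// line, using only the red channel.
#include <webots/robot.h>
#include <webots/camera.h>

#define TIME_STEP_CAM 64 // [ms]
#define WIDTH 52
#define HEIGHT 39

int main() {
  wb_robot_init();

  WbDeviceTag cam = wb_robot_get_device("camera");
  wb_camera_enable(cam, TIME_STEP_CAM);

  while (wb_robot_step(TIME_STEP_CAM) != -1) {
    const unsigned char *image = wb_camera_get_image(cam);
    int x, count = 0, weighted_sum = 0;
    // scan the last line of the image
    for (x = 0; x < WIDTH; x++) {
      // a pixel of the line appears dark in the red channel
      int dark = wb_camera_image_get_red(image, WIDTH, x, HEIGHT - 1) < 128;
      count += dark;
      weighted_sum += dark * x;
    }
    if (count > 0) {
      int middle = weighted_sum / count; // center of the line, 0..WIDTH-1
      // ... steer the robot according to (middle - WIDTH / 2)
    }
  }

  wb_robot_cleanup();
  return 0;
}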
[P.1] Try this robot controller on your real e-puck. For creating a real environment, use a big piece of paper and trace lines with fluorescent pens. (Hint: the results depend a lot on the room lighting.)

Open another World File

Open the following world file:

.../worlds/novice_camera.wbt

This opens a dark board with a white ball. To move the ball, use the arrow keys. The ball can also be made to move randomly by pressing the M key.

Follow a white Object — change the Camera Resolution

[P.2] Launch the simulation. Move the ball closer and further away.

[Q.5] What is the behavior of the e-puck?
The rest of this subsection will teach you to configure the e-puck camera. Indeed, according
to your goals, you don't always need the same resolution; you should minimize the resolution
according to your problem. The transfer of an image from the e-puck to Webots takes time: the
bigger an image is, the longer it takes to be transmitted. The e-puck has a camera resolution of
480x640 pixels, but the Bluetooth connection only supports the transmission of 2028 colored pixels.
For this reason, a resolution of 52x39 pixels makes the most of the Bluetooth connection while keeping a 4:3
ratio.
The figure with the field of view of the e-puck shows where the physical parameters of the e-puck
camera are. They correspond to the following values:
• a: about 6 cm
• b: about 4.5 cm
• c: about 5.5 cm
• α: about 0.47 rad
• β: about 0.7 rad
Figure 7.6: The physical parameters of the real camera

In the text editor window, open the world file:

.../worlds/novice_camera.wbt

This opens the description of the world of this exercise. At the end of the file, you will find the following node:

EPuck {
  translation -0.03 0 0.7
  rotation 0 1 0 0
  controller "curriculum_novice_camera"
  camera_fieldOfView 0.7
  camera_width 52
  camera_height 39
}

This part defines the virtual e-puck. The prototype⁴ of the e-puck is stored in an external file (see The Structure of a Webots Simulation). Only some fields can be modified, notably the e-puck position, its orientation and its robot controller. You will also find there the attributes of the camera. Firstly, you can modify the field of view. This field accepts values bigger than 0 and up to 0.7, because this is the range of the camera of your real e-puck. This field also defines the zoom attribute of the real e-puck camera. Secondly, you can modify the width and the height of the virtual camera. You can set them from 1 to 127, but their product must not exceed 2028. When you have modified the world file, save it and revert the simulation.

The current version of the e-puck firmware⁵ only permits obtaining a centered image. To obtain the last line of the real camera in order to simulate a linear camera, a trick is required: if the virtual e-puck camera is inclined and the height is 1, Webots calls another routine of the e-puck firmware to obtain the last line. So, if you want to use the linear camera, you have to use another e-puck prototype in which the virtual camera is inclined:

epuck_linear_camera {
  translation -0.03 0 0.7
  rotation 0 1 0 0
  controller "curriculum_novice_camera"
  camera_fieldOfView 0.7
  camera_width 60
}

Note that the height of the camera is not defined. This field is implicitly set to 1.

The table compares the values of the world file with the resulting values of the real e-puck camera. Note that the field of view (β) influences the zoom attribute. A zoom of 8 defines a sub-sampling (one pixel is kept out of 8). X and Y show the relative position of the real camera window (see the figure called "The window of the real camera").

⁴ A Webots prototype is the definition of a 3D model which appears frequently. For example, all the virtual e-pucks of this document come from only two prototypes. The advantage is to simplify the world file.

⁵ The firmware is the program which is embedded in the e-puck.

[P.3] Try each configuration of the table. Observe the results both on your virtual and on your real e-puck. (Hint: put the values from the left part of the table into the world file.)
Your Progression

In the five previous exercises, you saw in detail five devices of the e-puck: the LEDs, the stepper
motors, the IR sensors, the accelerometer and the camera. You saw their utility and how to use
them in C. Now you have all the technical knowledge needed to start programming the
e-puck behavior. This is the main topic of the rest of the curriculum.
Figure 7.7: The window of the real camera
width (sim)   height (sim)   β      angle   width   height   zoom   x     y
52            39             0.7    false   416     312      8      32    164
39            52             0.7    false   468     624      12     6     8
52            39             0.08   false   52      39       1      214   300
39            52             0.08   false   39      52       1      222   294
120           1              0.7    true    480     1        4      0     639
60            1              0.7    true    480     1        8      0     639
40            1              0.35   false   240     1        6      120   320

Table 7.2: The recommended camera configurations
Chapter 8

Intermediate programming Exercises

In the previous chapter, you learned to manipulate the e-puck devices in detail by using the C
programming language. You now have the technical background to start this new chapter about robot
behavior. Different techniques will be used to program the e-puck behavior. Notably, you will see
that your robot has a memory.

Program an Automaton [Intermediate]

During this exercise, you will convert an FSM of the chapter Beginner programming Exercises
into C. First, a simple example will be given: a new version of the exercise Simple
Behavior: Finite State Machine. Then, you will create your own FSM; it will be a new version of
the exercise Better Collision avoidance Algorithm.

Open the World File

Open the following world file:

.../worlds/intermediate_finite_state_machine.wbt

You should observe the same world as in the section Simple Behavior: Finite State Machine.

An Example of FSM

According to the figure Sensors to actuators loop.png, the main loop of your program should always
execute its instructions in the following order:

1. Obtain the sensor values

2. Process these values according to the expected behavior

3. Send commands to the actuators

In our case, the behavioral part of the robot is modeled using an FSM.

[Q.1] Locate these 3 points in the given C programming code. Explain what
operations are performed at each of these points.

[Q.2] Describe how the FSM is implemented, i.e., how the current state is modeled, which
control structure is used, what kind of instructions are performed in the states, what the
conditions to switch from one state to another are, etc.
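As a sketch of the implementation pattern that [Q.2] asks about, here is a minimal FSM in C: the current state is stored in a variable, and a switch statement selects, for each state, the transition tests and the actuator commands. The states, the threshold and the device names are illustrative assumptions, not those of the given controller:

// A minimal FSM sketch: go forward until a front obstacle is detected,
// then stop, following the 3-point loop described above.
#include <webots/robot.h>
#include <webots/distance_sensor.h>
#include <webots/differential_wheels.h>

#define TIME_STEP 64    // [ms]
#define THRESHOLD 300.0 // arbitrary front obstacle threshold

typedef enum { FORWARD, STOP } State;

int main() {
  wb_robot_init();

  WbDeviceTag ps0 = wb_robot_get_device("ps0"); // front right IR sensor
  WbDeviceTag ps7 = wb_robot_get_device("ps7"); // front left IR sensor
  wb_distance_sensor_enable(ps0, TIME_STEP);
  wb_distance_sensor_enable(ps7, TIME_STEP);

  State state = FORWARD; // the initial state

  while (wb_robot_step(TIME_STEP) != -1) {
    // 1. obtain the sensor values
    double front_right = wb_distance_sensor_get_value(ps0);
    double front_left = wb_distance_sensor_get_value(ps7);

    // 2. treat them according to the current state (the FSM)
    switch (state) {
      case FORWARD:
        if (front_right > THRESHOLD && front_left > THRESHOLD)
          state = STOP; // the "front obstacle" transition
        break;
      case STOP:
        // a final state: no outgoing transition in this sketch
        break;
    }

    // 3. send the commands to the actuators
    if (state == FORWARD)
      wb_differential_wheels_set_speed(500, 500);
    else
      wb_differential_wheels_set_speed(0, 0);
  }

  wb_robot_cleanup();
  return 0;
}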
Create your own FSM

[P.1] By using an FSM, implement a wall following algorithm. (Hint: start by designing the
structure of the automaton, then find how the transitions are fired, and finally set up the parameters
empirically.)

*Lawn mower* [Intermediate]

In this optional exercise, you will discover another topic of robotics: exhaustive search, i.e.,
your e-puck will move on a surface having an unknown shape, and will have to pass through every place.
A cleaning robot or an automatic lawn mower is a typical application of this kind of movement.
There are different ways to treat this topic. This exercise presents two of them: the random walk
and the walk "by scanning".

The random Walking

Open the following world file:

.../worlds/intermediate_lawn_mower.wbt
You should see a grassy board with a white hedge. For the moment, the robot controller is the
same as in the previous exercise, i.e., a simple FSM which makes your e-puck turn back when it meets
a wall. In this subsection, you will create a random walk.

When your e-puck meets a wall, it must spin on itself and go in another direction, as depicted
in the figure below. The next figure depicts a possible automaton for a random walk.
For generating a random integer between 0 and X-1, include the standard library (#include
<stdlib.h>) and the time library (#include <time.h>), and write the two following instructions:

// Seed the generator, so that every simulation uses a different
// series of random numbers
srand(time(0));

// replace X by the maximal bound; the 1.0 forces a floating-point
// division and avoids an integer overflow of RAND_MAX + 1
int random_value = (int)((double)X * rand() / (RAND_MAX + 1.0));
[Q.1] Which part of the random walk is impossible to realize using BotStudio?

[P.1] Implement the automaton given in the figure: Random Walk. For the stop conditions,
you can either use a timer or a condition over the encoders. (Hint: you can merge the "turn —
left" state and the "random turn — left" state; likewise for the right part.)

[Q.2] Generally speaking, is a random walk algorithm efficient in terms of surface coverage?
(Hints: Does the e-puck pass through every point? Are there places where the e-puck spends more time?)
A walk "by scanning"

Another solution for performing an exhaustive coverage is to "scan" the area, as depicted in the figure.
Firstly, a horizontal scan is performed; then, when the robot can no longer go
on, a vertical scan is performed. The corresponding automaton is depicted in the next figure.
This automaton is the biggest you have seen so far.
Figure 8.1: In a random walk, this figure shows the possible ways out when an e-puck detects
an obstacle.
Figure 8.2: A possible automaton for a random walk.

Figure 8.3: A possible trajectory of a walk "by scanning"
[P.2] Implement the automaton given in the figure "by scanning". (Hint: you can use the
symmetry of this automaton for a shorter implementation.)

[Q.3] Generally speaking, is the walk "by scanning" algorithm efficient in terms of surface
coverage? (Hints: Can you find another topology of the room such that there is a place where the
e-puck never passes? Are there places where the e-puck spends more time?)
Figure 8.4: A possible automaton for walking "by scanning". OF, OR and OL mean that an
obstacle is detected respectively in front, at the right or at the left of the robot. FR and FL mean that no
obstacle is detected respectively at the right and at the left of the robot.

Your Progression

At this point, you know how to use an FSM to create a robot behavior. An FSM
can be formalized mathematically. Moreover, there is a theory behind this concept: automata
theory. The FSM is only a part of this theory; you can find there other kinds of automata. For example,
a probabilistic automaton has probabilities on its transitions, so a state and fixed sensor values
may lead to different next states. For more information about this topic, you can refer to:

Automata theory

This is a good starting point. In the literature, you will find several related books.
Behavior-based artificial Intelligence

Until now, you have learned to program a robot behavior by using a finite state automaton.
The three following exercises will show you a completely different way to treat this subject:
behavior-based robotics, which was introduced by Professor Rodney Brooks¹ in a paper² of 1986.

Behavior-based robotics is a way to create a robot behavior (to program the robot controller). The idea is to separate a complex behavior into several smaller behavioral modules. These
modules are independent and semi-autonomous. They have a delimited role to play. They work
together without synchronization. Typical examples of these modules could be: "go forward", "stay
upright", "stop if there is an obstacle", "follow a wall", "search food", "be happy" or "help the
community". You may observe that these examples are placed hierarchically. Low in the hierarchy are the reflex modules (like "stay upright"); high in the hierarchy are
the goals of the robot (like "search food"). A reflex module can influence its hierarchical superiors.
Indeed, if you stumble, staying upright is what matters most: the signal coming from your
foot sensation dominates your ability to think.
¹ See Rodney Brooks for more information.

² Brooks, R. A., "A Robust Layered Control System for a Mobile Robot", IEEE Journal of Robotics and Automation, Vol. 2, No. 1, March 1986, pp. 14–23; also MIT AI Memo 864, September 1985.
The external structure of a module is shown in the figure. A module receives input values directly from the sensors or from another module. These values can be inhibited by another module. Similarly, the output values are directed either directly to an actuator or to another module. Finally, a module may have a reset function.

There are several ways to implement these modules. We decided to model them with C functions. Indeed, a function receives arguments as input and returns output values. To simplify the code, some of these values are stored in global variables in order to facilitate the communication between the modules.
Figure 8.5: The external shape of a behavioral module
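To make this concrete, here is a minimal sketch of a behavioral module written as a C function that communicates through global variables, as described above. The names, the threshold and the gain are illustrative assumptions, not the exercise's actual oam() implementation:

// A minimal sketch of a behavioral module: a C function whose outputs
// are shared with other modules through global variables.
#include <stdbool.h>

double oam_speed[2]; // output of the obstacle avoidance module
bool oam_active;     // whether the OAM currently reacts to an obstacle
int side;            // -1: obstacle on the left, 1: on the right, 0: none

// Obstacle avoidance module: reads the 8 IR sensor values and computes a
// speed difference that spins the robot away from a detected obstacle.
void oam(const double *ir_values) {
  double delta = ir_values[0] - ir_values[7]; // front right - front left
  oam_active = (ir_values[0] > 300.0 || ir_values[7] > 300.0);
  if (oam_active) {
    side = (delta > 0.0) ? 1 : -1; // memorize where the obstacle is
    oam_speed[0] = -delta;         // spin on itself: left = -right
    oam_speed[1] = delta;
  } else {
    oam_speed[0] = oam_speed[1] = 0.0;
  }
}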
Behavioral Modules [Intermediate]

In this exercise, you will observe behavior-based robotics in practice with two modules: an
obstacle avoidance module and a wall following module. First, you will observe the modules independently. Then, you will mix them together. This exercise is closely related to the two following ones.
Open the World File
Open the following world file:
.../worlds/intermediate_oam.wbt
You should observe an environment dotted with obstacles. Don't hesitate to move them or
to overlap them.
Obstacle avoidance Module (OAM)
The given robot controller currently uses only the obstacle avoidance module (OAM). This module is a reflex module; it runs all the time. It receives the IR sensor values as input. If an obstacle is detected in front of the e-puck, the OAM computes its own speed estimation in order to avoid the collision. It can only spin the robot on itself (left speed = −right speed). Finally, the OAM updates the side variable in order to memorize on which side the wall is. This information will help the other modules.
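One plausible way to compute such a speed difference is sketched below; the sensor weights are an assumption of this sketch (the provided controller may weight the sensors differently), and the indices follow the usual e-puck layout with ps0-ps2 on the front right and ps5-ps7 on the front left.

/* Hypothetical sketch of the OAM speed computation. The difference
   between the weighted front-left and front-right IR values gives a
   rotation speed: left speed = -right speed, i.e., a pure spin. */
extern int ps_value[8];  /* proximity values, refreshed by the main loop */
int oam_side = -1;       /* -1: no wall, 0: wall on the left, 1: on the right */

void oam_compute(int *left_speed, int *right_speed) {
  int right = 2 * ps_value[0] + ps_value[1] + ps_value[2];
  int left  = 2 * ps_value[7] + ps_value[6] + ps_value[5];
  int delta = (left - right) / 10;   /* rotation proportional to the asymmetry */
  *left_speed = delta;               /* turn away from the closest side ... */
  *right_speed = -delta;             /* ... while keeping a zero mean speed */
  oam_side = (left > right) ? 0 : 1; /* memorize on which side the wall is */
}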
[P.1] Run the simulation both on the virtual and on the real e-puck.
[Q.1] Describe the e-puck behavior. What is its reaction when it meets a wall? What is
its trajectory when there is no obstacle in front of it?
[Q.2] The OAM generates a motor speed difference, but the robot goes forward. Why?
[Q.3] In the OAM function, how is the motor speed difference (the delta variable) computed?
Wall following Module (WFM)
The second module, the wall following module (WFM), creates a constant motor speed difference according to the side variable. Its role is to attract the e-puck toward the wall. If there were only this module, the robot would collide with the wall. Fortunately, if the OAM is also enabled, it creates a repulsion. The combination of these two modules produces a more powerful behavior: the robot is able to follow a wall. In biology, this phenomenon is called emergence.
The figure depicts the interaction between these two modules. Horizontally, the schema separations are similar to the figure Sensors to actuators loop.png: it is the perception-to-action loop. Vertically, the modules are separated into hierarchical layers. The bottom layer is the reflex one. The black arrow from the OAM to the WFM symbolizes the side variable.
Figure 8.6: The interactions between the OAM and the WFM
[P.2] In the main loop of the main function, uncomment the wfm() function call and compile your program. Run the simulation both on the virtual and on the real e-puck. Observe the robot behavior. Try to move the obstacle to which the e-puck is "linked".
[Q.4] Describe the e-puck behavior.
[Q.5] Compare the figure and the given controller code. (Hints: how is the black arrow implemented? How are the sensor values sent to the modules?)
[Q.6] Explain the utility of each term of the sum in the wb_differential_wheels_set_speed(...) function call.
*[P.3]* In simulation, try to obtain an e-puck trajectory that is as smooth as possible by changing the macro variables (see the variables defined with the #define statement). The figure with the two arrows (green and red) depicts a smooth trajectory compared to a sinuous one. Note that a smooth trajectory depends a lot on the environment, i.e., if you obtain a smooth trajectory and then change the obstacle width, your robot will probably follow a sinuous trajectory again.
Figure 8.7: The green arrow represents a smooth trajectory, while the red arrow represents a sinuous trajectory.

Create a line following Module [Intermediate]
In the previous exercise, you observed the interactions between two modules. In this exercise, similarly, you will see some other modules and their interactions. The aim of this exercise is to observe how three modules can generate a powerful line following controller. At the end, you will create your own module.
Open the World File
Open the following world file:
.../worlds/intermediate_lfm.wbt
The e-puck is on the starting blocks for turning around the ring.
Three Modules for Line Following
The robot controller of this exercise uses three modules:
• Line following module (LFM): First, this module receives the linear camera values (the last line of the camera image). With the help of the find_middle(...) function, it finds the middle of the black line in front of the robot. Then, it computes its own estimation of the motor speeds. Similarly to the OAM, this module only creates a motor speed difference. This is a high-level module: its input values can be inhibited.
• Line entering module (LEM): This module also observes the linear camera values. It notices when a line appears in the robot field of view.
• Line leaving module (LLM): This module works together with the LEM. It notices when there is no longer a line in the robot field of view.
The utility of the LEM and the LLM appears when the e-puck enters or leaves a line: with these modules, these two events are handled. In this exercise, they are used to inhibit the LFM when there is no line to follow, which lets the robot move straight forward. In the next exercise, we will use these two events for a more useful purpose. The interactions between these modules are depicted in the schema, and a sketch of the inhibition mechanism is given below.
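In the sketch, lfm_active is a hypothetical flag raised by the LEM and cleared by the LLM, so that the LFM contribution is only added to the wheel speeds when a line is visible; all names are illustrative, not the exact identifiers of the provided controller.

/* Hypothetical sketch of the module combination in the main loop.
   lem() raises lfm_active when a line enters the field of view,
   llm() clears it when the line is lost, and lfm() computes its
   speed difference from the linear camera. */
#include <webots/differential_wheels.h>

#define BASE_SPEED 200

int lfm_active = 0; /* inhibition flag: 1 when a line is visible */
int lfm_speed[2];   /* LFM speed contribution (left, right) */

void lem(void);
void llm(void);
void lfm(void);

void run_step(void) {
  lem();
  llm();
  lfm();
  int left = BASE_SPEED, right = BASE_SPEED; /* default: go straight */
  if (lfm_active) { /* the LFM only acts when it is not inhibited */
    left += lfm_speed[0];
    right += lfm_speed[1];
  }
  wb_differential_wheels_set_speed(left, right);
}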
Figure 8.8: Interactions between the LFM, LEM and LLM
[P.1] Run the simulation both on the virtual and on the real e-puck.
[Q.1] Compare the figure and the given controller code. Don't look at the UTM yet. (Hints: what is the role of the LFM's inhibitor? How are the LEM and the LFM related in the code? Why is there no link between them in the figure? etc.)
[Q.2] Explain the algorithm of the LEM.
[P.2] Modify the given code in order to switch on the 8 LEDs when the e-puck detects a line and to switch them off when no line is detected.

Your own Module
[P.3] Implement a new module (called utm()) which lets the e-puck perform a U-turn when there is no more line in front of it. This module should work when the LFM is inhibited, and conversely. In this module, the robot should only generate a motor speed difference.

Mix of several Modules [Intermediate]
During the two previous exercises, you observed 5 different modules. Now, you will combine them to obtain a more complex behavior.
Open the World File
Open the following world file:
.../worlds/intermediate_behavior_based.wbt
You should observe the world depicted in the figure: a line with a "C" shape is painted on the ground and some obstacles are dispersed along the line.
Figure 8.9: The environment of this exercise
Combination of several Modules
The goal of this exercise is to obtain the following behavior: the e-puck follows the line, but, if it detects an obstacle, it must go round the obstacle until it finds the line again.
The given robot controller contains all the previous modules, but there are no links between them. The schema shows a possible way to link them. The most important point in this schema is to observe the interactions from one module to another:
• OAM: It is the only reflex module. If the OAM detects an obstacle, it must inhibit the LFM in order to avoid its influence. The OAM also has to inform the WFM where the wall is.
• LEM: Its role is to remove the inhibition on the LFM when it detects a black line. Moreover, it has to inform the WFM that it has to stop following the wall.
• LLM: It has to inhibit the LFM if the black line is lost.
Figure 8.10: Interactions between all the modules
[P.1] Implement the interactions between the modules to obtain the behavior described above. Refer to the schema for the interactions. (Hint: the code pieces that you have to modify are labeled with a TODO comment.)
[Q.1] What are the advantages of behavior-based robotics with respect to finite-state-machine-based robotics? And what are the disadvantages? Is it conceivable to use a combination of these two techniques? Give an example.
Chapter 9

Advanced Programming Exercises

This chapter requires advanced knowledge in computer science. It guides the user through several advanced robotics topics which can be explored using the e-puck robot. At this level, it becomes unfeasible to be exhaustive, because the subject is too wide and too specific to each treated problem. For this reason, only some topics are treated thoroughly and only one point of view is presented. The first exercise is about position estimation using odometry, the second is about path planning with NF1 and potential fields, the third one is about pattern recognition using artificial neural networks and supervised learning; we will then speak of unsupervised learning with particle swarm optimization (PSO), and finally the simultaneous localization and mapping problem (SLAM) will be explored.

Odometry [Advanced]
We saw in previous exercises that the e-puck robot is equipped with two DC motors with an encoder in each. These encoders count the number of steps done by the motor (1000 steps for one rotation of the wheel). By recording this information we can estimate the trajectory of the robot; this is called odometry. In this exercise we will learn the basics of odometry.
The major drawback of this procedure is error accumulation. At each step (each time you take an encoder measurement), the position update will involve some error. This error accumulates over time and therefore renders accurate tracking over large distances impossible. Over short distances, however, odometry can provide extremely precise results. To get these good results, it is crucial to calibrate the robot. Tiny differences in wheel diameter will result in important errors after a few meters if they are not properly taken into account. Note that the calibration must be done for each robot and preferably on the surface on which it will be used later on. The procedure also works with the simulated e-puck.

Some theory
For a differential drive robot, the position of the robot can be estimated by looking at the difference in the encoder values: ∆sr and ∆sl. With these values we can update the position of the robot

p = (x, y, θ)^T

to find its current position:

p' = p + (∆x, ∆y, ∆θ)^T

This update needs to be done as often as possible to maximize the precision of the whole method. We express ∆θ = (∆sr − ∆sl) / b and ∆s = (∆sr + ∆sl) / 2. Thus we have ∆x = ∆s · cos(θ + ∆θ/2) and ∆y = ∆s · sin(θ + ∆θ/2), where b is the distance between the wheels. As a summary we have:

p' = f(x, y, θ, ∆sr, ∆sl) = (x, y, θ)^T + ( ((∆sr + ∆sl)/2) · cos(θ + (∆sr − ∆sl)/(2b)),
                                            ((∆sr + ∆sl)/2) · sin(θ + (∆sr − ∆sl)/(2b)),
                                            (∆sr − ∆sl)/b )^T

These equations are given in section 5.2.4 of 1; you can also find more information about error propagation in the same reference.
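A direct transcription of these equations into C could look like the sketch below; the structure name and the function signature are assumptions, not the exact interface of the provided odometry module.

#include <math.h>

/* Hypothetical sketch of the odometry update equations above. dsl and
   dsr are the distances covered by the left and right wheels since the
   last update (encoder increments times the conversion factors), and b
   is the distance between the wheels. */
typedef struct { double x, y, theta; } pose_t;

void odometry_update(pose_t *p, double dsl, double dsr, double b) {
  double ds = (dsr + dsl) / 2.0;   /* distance covered by the robot center */
  double dtheta = (dsr - dsl) / b; /* change of heading */
  p->x += ds * cos(p->theta + dtheta / 2.0);
  p->y += ds * sin(p->theta + dtheta / 2.0);
  p->theta += dtheta;
}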
Calibrate your e-puck
Open the following world file:
.../worlds/advanced_odometry.wbt
This world has one robot and the odometry equations are already implemented. The module odometry (.h and .c) tracks the position of the robot; the module odometry_goto (.h and .c) is made to give the robot a destination and compute the needed motor speeds to lead it there.
The calibration procedure is adapted from 2 for the e-puck robot; its goal is to compute the following three parameters:
• Distance per increment for left wheel (left conversion factor)
• Distance per increment for right wheel (right conversion factor)
• Distance between the two wheels (axis length)
Those parameters can be estimated rather easily, but we cannot measure them directly with enough precision. Thus we will measure 4 other parameters instead and do some math to get the values we need. Each of these measurements is a step in the procedure:
1 R. Siegwart and I. R. Nourbakhsh. Introduction to Autonomous Mobile Robots. MIT Press, 2004
2 Wikibooks contributors. Khepera III toolbox, odometry calibration, 2008. Khepera III Toolbox/Examples/odometry calibration
1. Increments per tour: the number of increments per wheel rotation
2. Axis wheel ratio: the ratio between the axis length and the mean wheel diameter
3. Diameter left and diameter right: the two wheel diameters
4. Scaling factor: self-explanatory
Once we have those parameters we can compute the three parameters above as follows:

distancePerIncrementLeft = diameterLeft · scalingFactor · 2 · π / incrementsPerTour
distancePerIncrementRight = diameterRight · scalingFactor · 2 · π / incrementsPerTour
axisLength = axisWheelRatio · (diameterLeft + diameterRight) / (2 · scalingFactor)
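In C, this conversion could be written as in the following sketch; the structure and function names are illustrative, not the ones of the provided modules.

/* Hypothetical sketch of the conversion from the four measured
   quantities to the three calibration parameters defined above. */
typedef struct {
  double distance_per_increment_left;
  double distance_per_increment_right;
  double axis_length;
} calibration_t;

calibration_t compute_calibration(double increments_per_tour,
                                  double axis_wheel_ratio,
                                  double diameter_left,
                                  double diameter_right,
                                  double scaling_factor) {
  const double pi = 3.14159265358979;
  calibration_t c;
  c.distance_per_increment_left =
      diameter_left * scaling_factor * 2.0 * pi / increments_per_tour;
  c.distance_per_increment_right =
      diameter_right * scaling_factor * 2.0 * pi / increments_per_tour;
  c.axis_length =
      axis_wheel_ratio * (diameter_left + diameter_right) / (2.0 * scaling_factor);
  return c;
}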
Step 1 — Increments per tour
The number of increments per tour can be found experimentally. In this test, the robot accelerates, stabilizes its speed and does a given number of motor increments. This value can be modified through the constant INCREMENT_TEST in advanced_odometry.c. Of course, every change in the code must be followed by a build and a simulation revert.
[P.1] Run the first step with key '1' and the INCREMENT_TEST value set to 1000. Check that the wheel has done exactly one revolution by putting a marker on the wheel. If it is not the case, find a value such that the wheel does a perfect turn and check it by doing 10 turns instead of 1.
Step 2 — Axis wheel ratio
If you search in the documentation of the e-puck you will find that the wheel diameter should be 41 mm for each wheel and that the distance between the wheels is 53 mm, which gives a ratio of 1.293. To verify this, we will command the robot to rotate in place several times. To do a complete turn, each wheel needs 1000 · 1.293 = 1293 increments. The number of steps can be adjusted with the constant RATIO_TEST.
[P.2] Run the second step with key '2' and adjust the value as before to have one perfect turn. Then verify it with 10 turns or more. Once you have a good value, do the math backward to find out the axis wheel ratio.
Step 3 — Wheel diameters
To experimentally find the diameter difference between the two wheels, we load our current odometry model onto the robot and then let it move around in the arena while keeping track of its position. At the end, we let the robot move back to where it thinks it started and note the offset with the actual initial position. We give 4 waypoints to the robot such that it moves along a square. By default the side of the square measures 0.2 m, but if you have more space you can increase this size to get better results.
[P.3] Enter the value for the number of increments per tour and the axis wheel ratio in the file odometry.c. This will provide the accuracy we need to finish the calibration.
[P.4] Run the third test with key '3'; the robot will move and come back near its start position. If the square is not closed, as shown in the figure called "Open and closed square trajectories", increase the left diameter and decrease the right diameter in odometry.c, i.e., diameterLeft = diameterLeft + δ and diameterRight = diameterRight − δ. If on the contrary the trajectory is too closed, as shown in the same figure, decrease the left diameter and increase the right diameter.
Figure 9.1: Open and closed square trajectories
Step 4 — Scaling factor
The scaling factor is the easiest to find: we just run the robot on a straight line for a given distance (0.3 m in the code) and measure the actually covered distance.
[P.5] Run the fourth test with key '4'; the robot will move. Measure the covered distance and compute the scaling factor as follows:

scalingFactor = real distance / parameter distance

Again, report this value in odometry.c. Your robot is now calibrated.
[Q.1] We went through a whole procedure to limit the error in the odometry process, but where does this error come from? What are the causes?
Use the odometry
Now that we have a complete and calibrated odometry module, we will use it. The commands to move the robot are summarized in the table.
Keyboard key   Action
1              Start step 1 of calibration
2              Start step 2 of calibration
3              Start step 3 of calibration
4              Start step 4 of calibration
UP             Increase robot speed
DOWN           Decrease robot speed
LEFT           Turn left
RIGHT          Turn right
S              Stop the robot
R              Reset the encoders

Table 9.1: Command summary for this exercise
[P.6] Using odometry, can you compute the surface of the blue hexagon?
This was a simple use of odometry, but in the next exercises you will see that we heavily depend on odometry to keep track of the robot's position. We also use it to move the robot precisely. Exercises that make use of odometry are Path Planning and SLAM.
Figure 9.2: Equipotential contour plot of the potential field
Path planning [Advanced]
Overview
The goal of this exercise is to implement and experiment with two methods of path planning: potential field and NF1. We will compare them to find out the advantages of each method and what needs to be taken care of while using them. Path planning is heavily used in industry, for instance to move a robotic arm with 6 degrees of freedom. The techniques used in mobile robotics, especially in 2D as here, are much simpler.
Potential field
This exercise is based on section 6.2.1.3 of 3. The goal is to implement a simple potential field control for the e-puck through the provided world.
Theory
The idea behind the potential field method is to create a field (or gradient) across the map that will direct the robot to the goal. We consider the robot as a point under the influence of an artificial potential field U(q). The robot moves by following the field, as would a ball rolling downhill. This is achieved by giving the goal an attractive force (i.e., the minimum potential) and the obstacles very high repulsive forces (peaks in the field). The figures show the resulting contour plot for a single triangular obstacle (in blue) and a goal (in red); the same setup is plotted as a 3D field on the next picture.
The resulting force can be computed at each position q = (x, y) as:

F(q) = −∇U(q)

where ∇U(q) denotes the gradient of U at position q:

∇U(q) = (∂U/∂x, ∂U/∂y)^T

We can then separate the potential and the force into two different parts, an attractive part and a repulsive one, and compute them separately:

U_att(q) = (1/2) · k_att · ρ²_goal(q)
F_att(q) = −k_att · (q − q_goal)

U_rep(q) = (1/2) · k_rep · (1/ρ(q) − 1/ρ₀)²   if ρ(q) ≤ ρ₀
U_rep(q) = 0                                   if ρ(q) ≥ ρ₀

F_rep(q) = k_rep · (1/ρ(q) − 1/ρ₀) · (q − q_obstacle) / ρ³(q)   if ρ(q) ≤ ρ₀
F_rep(q) = 0                                                     if ρ(q) ≥ ρ₀

where ρ_goal(q) is the distance from q to the goal, ρ(q) the distance from q to the obstacle and ρ₀ the influence distance of the obstacle.

3 R. Siegwart and I. R. Nourbakhsh. Introduction to Autonomous Mobile Robots. MIT Press, 2004
Figure 9.3: Potential field shown in 3D
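For [P.1] and [P.2] below, the two forces could be computed as in this sketch; the vector type and the parameter names (k_att, k_rep, rho0) are assumptions, not the exact interface of the provided controller.

#include <math.h>

/* Hypothetical sketch of the attractive and repulsive forces above,
   for one goal and one obstacle; the gains must be tuned by hand. */
typedef struct { double x, y; } vec2;

/* F_att(q) = -k_att * (q - q_goal) */
vec2 attractive_force(vec2 q, vec2 goal, double k_att) {
  vec2 f = { -k_att * (q.x - goal.x), -k_att * (q.y - goal.y) };
  return f;
}

/* F_rep(q) = k_rep * (1/rho - 1/rho0) * (q - q_obst) / rho^3 inside the
   influence distance rho0, and zero outside of it. */
vec2 repulsive_force(vec2 q, vec2 obst, double k_rep, double rho0) {
  vec2 f = { 0.0, 0.0 };
  double dx = q.x - obst.x, dy = q.y - obst.y;
  double rho = sqrt(dx * dx + dy * dy);
  if (rho <= rho0 && rho > 1e-9) {
    double m = k_rep * (1.0 / rho - 1.0 / rho0) / (rho * rho * rho);
    f.x = m * dx;
    f.y = m * dy;
  }
  return f;
}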
Application
Open the following world file:
.../worlds/advanced_path_planning.wbt
[P.1] Implement the attractive goal potential and force according to equations U_att and F_att(q) in the controller advanced_path_planning_potential_field.c.
[P.2] Implement the repulsive obstacle potential and force according to equations U_rep and F_rep(q).
[P.3] Implement the control by calling the functions you just implemented with your current position as argument and by following the computed force. The missing code is clearly indicated in the provided structure.
[P.4] Test your code both in simulation and on the real e-puck. For the run on the real e-puck you can print a copy of the A3 sheet given in .../path_planning_obstacle.pdf
[Q.1] Do you notice any difference? Which one? And why does it happen?
[P.5] Test other functions for the potential (for example, a linear, instead of quadratic, attractive potential).
[Q.2] There are methods to improve the trajectory of the robot (extended potential field for instance); think of what you could do to obtain a shorter path.
NF1
Theory
The NF1 algorithm, also known as Grassfire, is an alternative approach to path planning. It is described in section 6.2.1.2 and in figure 6.15 of 4. This algorithm is already implemented in the provided code. The algorithm can be explained this way:
• Divide the plane into simple geometric areas called cells. In our case we do this by creating a grid with square cells, and we might change the resolution if necessary.
• Determine which cells are occupied (by obstacles) and which are free. Also find out in which cell the start point is and where the goal is.
• Find a path through the free cells to reach the goal from the start position. In NF1 this is done by assigning a distance to each cell: the goal has distance 0 and each movement increases the distance to the goal. This assignment can be done recursively on the whole grid; a sketch is given below.
The result of this process can be seen in the figure called "Cell decomposition and distance to goal with NF1". We get a discretized gradient which is similar to the potential field we had before.
Figure 9.4: Cell decomposition and distance to goal with NF1
Application
[P.7] Go through the NF1 controller code controller_advanced_path_planning_NF1.c and run it in simulation and on the real e-puck.
4 R. Siegwart and I. R. Nourbakhsh. Introduction to Autonomous Mobile Robots. MIT Press, 2004
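The wavefront assignment mentioned in the last bullet above can be sketched as a breadth-first traversal of the grid; the grid size, the cell encoding and the 4-connectivity are assumptions of this sketch, not the exact implementation of the provided controller.

/* Hypothetical sketch of the NF1 ("Grassfire") distance assignment.
   Starting from the goal cell (distance 0), each free neighbor gets
   the distance of the current cell plus one, in breadth-first order. */
#define GRID 50
#define FREE -1 /* unvisited free cell */
#define WALL -2 /* occupied cell */

void nf1(int grid[GRID][GRID], int goal_x, int goal_y) {
  static int queue[GRID * GRID][2];
  int head = 0, tail = 0, k;
  const int dx[4] = { 1, -1, 0, 0 };
  const int dy[4] = { 0, 0, 1, -1 };
  grid[goal_x][goal_y] = 0;
  queue[tail][0] = goal_x; queue[tail][1] = goal_y; tail++;
  while (head < tail) {
    int x = queue[head][0], y = queue[head][1];
    head++;
    for (k = 0; k < 4; k++) {
      int nx = x + dx[k], ny = y + dy[k];
      if (nx >= 0 && nx < GRID && ny >= 0 && ny < GRID && grid[nx][ny] == FREE) {
        grid[nx][ny] = grid[x][y] + 1; /* one more step away from the goal */
        queue[tail][0] = nx; queue[tail][1] = ny; tail++;
      }
    }
  }
}

The robot can then reach the goal by always moving to the neighboring cell with the smallest distance value.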
[Q.3] What do you notice? Is the behavior optimal? Notice how it is based on the motion control algorithm used in step 3 of the odometry calibration procedure.
[P.8] What happens if you increase the configuration values of the motion control algorithm (k_alpha, k_beta and k_rho) in the file .../lib/odometry_goto.c? How can you explain that? (Hint: Webots is a physically realistic simulator.)
[P.9] Can you modify the motion control algorithm to obtain a smoother behavior?
Comparison
Now we will compare the two approaches to path planning to highlight the advantages and drawbacks of each method.
[P.10] Using the blank A3 world sheet given in the file .../path_planning_blank.pdf, design a world where potential field should fail but not NF1.
[P.11] Enter the obstacle coordinates in both controllers and run them in simulation to confirm your prediction.
[Q.4] Based on these exercises, list the advantages and drawbacks of each method.
Figure 9.5: Maze with landmarks painted on the wall
Pattern Recognition using the Backpropagation Algorithm
[Advanced]
Robot learning is one of the most exciting topics in computer science and in robotics. The aim of this exercise is to show that your robot is able to learn. First, the basics of supervised machine learning will be introduced. Then, you will use in detail an advanced technique to recognize patterns in the e-puck camera image: the backpropagation algorithm. This exercise spans two topics: image processing and artificial neural networks.
Description of the Exercise
Generally speaking, pattern recognition in an image has many applications, such as the optical character recognition (OCR) used in many scanners or the face detection used in many webcams. In robotics, it can be useful to locate the robot in an environment.
The purpose of this exercise is simply to recognize some landmarks on the wall. Your e-puck should be able to move randomly in a maze (see the figure) and, each time its camera is refreshed, to recognize the landmarks that it sees. The figure called "4 landmarks to recognize" depicts the landmarks used in this exercise. Note that the camera values are complex to treat: there is a huge number of values to process (here: 52*39*3 = 6084 integers between 0 and 255), and these values are noisy.
Intuition about Pattern Recognition
[Q.1] Think about the problem described in the previous section. Describe a method to detect a blue landmark in the camera image values, i.e., to find the x-y coordinates and the size of the landmark in the image. (Hint: there may be several landmarks in one image.)
[Q.2] Once the position and the size of the landmarks are known, describe a method to recognize them, i.e., to determine to which class of landmarks they belong (either l1, l2, l3 or l4).
[Q.3] Is your recognition method easy to maintain? (Hint: if you modify the shape of the landmarks in order to get a horizontal cross, a diagonal cross, a circle and a checkerboard, is your method directly applicable?)
Figure 9.6: 4 landmarks to recognize
Machine Learning
You observed in the last section that pattern recognition is not trivial. Some results can be achieved by programming empirically, but the program is then specific to the given problem.
What about a way in which your robot would be able to learn any landmark? In the rest of this exercise, a method of supervised learning is introduced. Supervised learning is a subset of the machine learning topic. It mainly consists of a function which maps the inputs of the robot to its outputs. This function can be trained using a database filled by an expert; this database contains pairs of one input and the corresponding output as expected by the expert. In the method used in this exercise, this function is represented by an artificial neural network (ANN), and the backpropagation algorithm is used to train it. These features will be introduced in the following sections.
Neurons
The purpose is to model the capacity of learning. What better solution than to take inspiration from the best example of intelligence we know: the brain. Biologists have long tried to understand the mechanisms of the brain. To sum up their results briefly, the brain is composed of a huge number (about 10^11) of basic cells called neurons. These cells are specialized in the propagation of information (electrical impulses). Each of these neurons has on average 7000 connections (called synaptic connections) with other neurons. These connections are chemical. An adult has on average 10^15 of these connections in the brain. This incredible network maps our senses to our muscles and enables us to think, to feel, to memorize information, etc. In the figure below you see a representation of a biological neuron. The electrical information from the other neurons comes via the synaptic connections. According to this signal, the terminal buttons excite another neuron.
The next step is to model an artificial neuron. There exist several neuron models in the literature, but one thing is sure: the more biologically realistic the model is, the more complex it is. In this exercise, we will use a model often used in the literature: the McCulloch&Pitts neuron (see the figure). The explanation of the biological meaning of each term is out of the scope of this document; only the functioning of an artificial neuron is explained here. A neuron is modeled by several weighted inputs and a function to compute the output. The inputs (the x vector) represent the synaptic connections by floating-point values. The biological meaning of these values is the rate of the electrical spikes. They come either from another neuron or from the inputs of the system. The weights (the w vector) balance the inputs; thanks to these weights, a given input takes more or less importance in the output. You may have noticed in the picture that the first input is set to 1. Its corresponding weight is called the bias (b). The output y of the neuron is computed as a function of the weighted sum of the inputs:

y = ϕ( Σ_{j=0}^{m} w_j · x_j )

The ϕ function can be a sigmoid function:

ϕ(z) = 1 / (1 + e^{−z})

A single neuron can already classify inputs, but only inputs that are linearly separable. To achieve better results, artificial neurons are linked together like the neurons of our brain. The result is called an Artificial Neural Network (ANN).
Figure 9.7: A representation of a biological neuron
Figure 9.8: The McCulloch&Pitts neuron
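In C, such a neuron reduces to a few lines; the fixed input x[0] = 1 plays the role of the bias described above.

#include <math.h>

/* Hypothetical sketch of a McCulloch&Pitts neuron with a sigmoid
   activation; x[0] is expected to be 1 so that w[0] acts as the bias. */
double sigmoid(double z) {
  return 1.0 / (1.0 + exp(-z));
}

double neuron_output(const double *x, const double *w, int m) {
  double sum = 0.0;
  int j;
  for (j = 0; j <= m; j++) /* j = 0 is the bias input */
    sum += w[j] * x[j];
  return sigmoid(sum);
}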
The Backpropagation Algorithm
To train the ANN (i.e., to find the right weights) knowing the input and the expected output (supervised learning), there exists an algorithm called backpropagation. To apply it, a specific ANN shape is required: the ANN must be constructed in layers, and each neuron of a layer has to be linked with all the neurons of the previous and of the next layer. The algorithm is given below as a reference, but its explanation is out of the scope of this exercise. Nevertheless, information about this algorithm can be found on the Internet; 5 is a good starting point.
Backpropagation algorithm
1. Initialize the weights in the network (often randomly)
2. Repeat: for each example e in the training set do
   a. O = neural-net-output(network, e)  ; forward pass
   b. T = teacher output for e
   c. Calculate error (T - O) at the output units
   d. Compute delta_wi for all weights from hidden layer to output layer  ; backward pass
   e. Compute delta_wi for all weights from input layer to hidden layer  ; backward pass continued
   f. Update the weights in the network
3. Until all examples are classified correctly or a stopping criterion is satisfied
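As a concrete illustration of the pseudocode above, one training iteration for a network with a single hidden layer could be written as follows. The layer sizes, the learning rate and the use of the sigmoid derivative y(1 − y) are assumptions of this sketch, not the exact code of the provided controller.

#include <math.h>

#define N_IN 4
#define N_HID 3
#define N_OUT 2
#define RATE 0.1 /* learning rate */

double w_ih[N_HID][N_IN + 1];  /* input -> hidden weights (+1 for the bias) */
double w_ho[N_OUT][N_HID + 1]; /* hidden -> output weights */

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

void backprop_step(const double *x, const double *target) {
  double hid[N_HID], out[N_OUT];
  double delta_out[N_OUT], delta_hid[N_HID];
  int i, j, k;

  /* forward pass */
  for (j = 0; j < N_HID; j++) {
    double s = w_ih[j][N_IN]; /* bias weight */
    for (i = 0; i < N_IN; i++) s += w_ih[j][i] * x[i];
    hid[j] = sigmoid(s);
  }
  for (k = 0; k < N_OUT; k++) {
    double s = w_ho[k][N_HID];
    for (j = 0; j < N_HID; j++) s += w_ho[k][j] * hid[j];
    out[k] = sigmoid(s);
  }

  /* backward pass: error (T - O) times the sigmoid derivative */
  for (k = 0; k < N_OUT; k++)
    delta_out[k] = (target[k] - out[k]) * out[k] * (1.0 - out[k]);
  for (j = 0; j < N_HID; j++) {
    double s = 0.0;
    for (k = 0; k < N_OUT; k++) s += w_ho[k][j] * delta_out[k];
    delta_hid[j] = s * hid[j] * (1.0 - hid[j]);
  }

  /* weight updates */
  for (k = 0; k < N_OUT; k++) {
    for (j = 0; j < N_HID; j++) w_ho[k][j] += RATE * delta_out[k] * hid[j];
    w_ho[k][N_HID] += RATE * delta_out[k]; /* bias */
  }
  for (j = 0; j < N_HID; j++) {
    for (i = 0; i < N_IN; i++) w_ih[j][i] += RATE * delta_hid[j] * x[i];
    w_ih[j][N_IN] += RATE * delta_hid[j]; /* bias */
  }
}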
The proposed Method
As shown in the figure, the first step of the proposed method consists in extracting the samples from the camera image and preprocessing them in order to send them directly to the ANN. The result must be an array of floating-point values between 0 and 1 with a fixed size. So only the blue channel is kept, and the extracted pattern is resampled to another resolution. The method used to resample the image is nearest neighbor interpolation. This method can be implemented quickly, but the resulting samples are of rather poor quality; however, for the landmarks used in this exercise, this method is sufficient.
Figure 9.9: The sample extraction. First, the patterns (landmarks) are detected in the image, then the found patterns are sampled to a fixed size and their blue components are normalized between 0 and 1.
The second step of the method is to send the values of the sampled image as input of an ANN (see the figure). The input layer of the ANN has to have the same size as the sampled image. The hidden layer of the ANN can have an arbitrary size. The last layer has the same size as the number of landmarks.
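The preprocessing step could be sketched like this, assuming the blue channel has already been extracted into a byte array; the sample size is an arbitrary choice.

/* Hypothetical sketch of the sample preprocessing: resample the blue
   channel of the detected pattern to a fixed size with nearest neighbor
   interpolation and normalize the values between 0 and 1. */
#define SAMPLE_W 8
#define SAMPLE_H 8

void preprocess(const unsigned char *blue, int src_w, int src_h,
                double sample[SAMPLE_H][SAMPLE_W]) {
  int i, j;
  for (i = 0; i < SAMPLE_H; i++) {
    for (j = 0; j < SAMPLE_W; j++) {
      int si = i * src_h / SAMPLE_H; /* nearest neighbor: integer scaling */
      int sj = j * src_w / SAMPLE_W;
      sample[i][j] = blue[si * src_w + sj] / 255.0; /* normalize to [0,1] */
    }
  }
}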
5 W. Gerstner. Supervised learning for neural networks: A tutorial with java exercises, 1999
Figure 9.10: The internal structure of the ANN and its links with the inputs and the outputs. There is only one hidden layer. The inputs of the network are directly the sampled image, so the input layer has a size equal to the size of the preprocessed image. The output layer has 4 neurons; this number corresponds to the number of landmarks.
Train your Robot by yourself and test it
Open the following world file:
.../worlds/advanced_pattern_recognition.wbt
The table lists the commands of the exercise. The first group of commands enables the selection of a pattern to learn. The second one enables the learning of the selected pattern (one backpropagation iteration on the ANN with the sampled image as input and the selected pattern as expected output) and the testing of a sampled image (putting it as input of the ANN and observing its output). The third one enables saving and loading the weights of the ANN. Finally, the last group enables moving the e-puck. The e-puck dance is particularly useful to add noise during the training phase.

Keyboard key   Action
1              Move the robot in front of pattern 1 and select pattern 1 for learning
2              Move the robot in front of pattern 2 and select pattern 2 for learning
3              Move the robot in front of pattern 3 and select pattern 3 for learning
4              Move the robot in front of pattern 4 and select pattern 4 for learning
UP             Select the next pattern for learning
DOWN           Select the previous pattern for learning
L              Enable (or disable) the learning mode
T              Enable (or disable) the testing mode
O              Load the weights stored in the weights.w file
S              Save the weights into the weights.w file
B              Stop the motors of the e-puck
R              The e-puck performs rotations
W              The e-puck walks randomly
D              The e-puck dances

Table 9.2: Command summary for this exercise

To train the e-puck, choose a pattern to learn, place the e-puck in front of the pattern and enable the learning mode. To test the e-puck, place the e-puck anywhere and enable the testing mode.
[P.1] Press the "1" and the "B" keys, then enable the learning mode with the "L" key. Without moving the e-puck, press alternately the "1" to "4" keys in order to train the other landmarks until the error on each of these landmarks is less than 10% (<0.1). Press the numeric keys randomly to achieve a fair training of each landmark, otherwise the weights might be biased towards one of the outputs. Then press the "T" key to test the results.
[Q.4] Does your e-puck recognize the landmarks well when you place it using the "1" to "4" keys? And what about this recognition when it moves randomly in the big board? What is the problem? How can it be corrected?
[P.2] Retry the e-puck training taking the previous results into account. (Hint: revert the simulation in order to reset the weights of the ANN.)

Parameters of the ANN
The ann.h file defines the structure of the ANN (size of the hidden layer, number of layers, etc.), the size of the sampled images and the learning rate.
[P.3] What is the influence of the learning rate on the e-puck learning? What occurs if this rate is too high or too low?
[P.4] What is the influence of the hidden layer size?
[P.5] Add at least one other hidden layer. What is its influence on the e-puck learning?
[P.6] Improve the current image subsampling method. (Hint: there exist several methods to do interpolation. Search the web for: multivariate interpolation)
[P.7] Modify the resolution of the sampled image. What is its influence on the e-puck learning?
[P.8] Replace the four initial landmarks by some other ones (e.g., icons you find on the Internet, your own creations, digits or the Rat's Life landmarks 6). Modify the parameters of the ANN, of the backpropagation algorithm and of the image interpolation in order to recognize them with an error of less than 3%.
Other Utilization of the ANN [Challenge]
[Q.5] List some possible applications of an ANN and of the backpropagation algorithm on the
e-puck without using the camera.
[P.9] Implement one item of your list.
Going Further
This exercise was inspired by papers about OCR using the backpropagation algorithm; further information may be found in 7.
As a matter of fact, there exist other methods for pattern recognition. The proposed method has the advantage of being biologically inspired and rather intuitive. This method can be extended in two ways: the utilization of more realistic neuron models and the utilization of extensions of the training algorithm. On the other hand, some completely different methods can be used, such as statistically-based methods. The following Wikipedia page summarizes the existing solutions for supervised learning: Supervised learning
6 Rat's Life. Rat's life — robot programming contest, 2008. http://www.ratslife.org
7 W. Gerstner. Supervised learning for neural networks: A tutorial with java exercises, 1999
Unsupervised Learning using Particle Swarm Optimization
(PSO) [Advanced]
In the previous exercise we saw how to give a robot the capacity to learn something. This was done using an artificial neural network and the backpropagation algorithm; it was a supervised learning method. Now, with PSO, we will learn the basics of unsupervised learning. We want the robot to avoid obstacles as in the exercise on the Obstacle Avoidance Module, but this time we will let it learn by itself. This technique is used when we don't have a teaching set (a set of inputs and their expected outputs), i.e., when good models do not exist and when the design space is too big (infinite) to be systematically searched. The role of the engineer is to specify the performance requirements and the problem encoding; the algorithm will do the rest.
Some insights about PSO
PSO is a relatively young but promising technique; it was introduced by Eberhart and Kennedy in a 1995 paper 8. Since then the algorithm has undergone many evolutions; more information can be found in 9 and 10. The key idea behind the algorithm is to explore a search space using a population of particles. Each of these particles represents a candidate solution and is characterized by a position and a velocity vector in the n-dimensional search space. Both vectors are randomly initialized and the particles are then flown through the search space. To evaluate the performance of a candidate solution we use a fitness function. This fitness function has to be designed specifically for the given problem; it has to rank the performance of the solution. Typically we design a function returning values within the range [0,1], where 0 is the worst result and 1 is a perfect score. Each particle keeps track of its personal best solution, called pbest, and of the global best solution, called gbest. Then, at each iteration, the velocity is changed (accelerated) toward its pbest and gbest. The flowchart shows the main PSO evolutionary loop.
Since we work in a multidimensional space, we express the particle position as x_{i,j} and the velocity as v_{i,j}, where i is the index of the particle and j represents the dimension in the search space. x*_{i,j} is then the personal best solution achieved by particle i and x*_{i',j} is the global best solution. The main equations are simple to understand and are expressed in this form:

v_{i,j} = w · ( v_{i,j} + pw · rand() · (x*_{i,j} − x_{i,j}) + nw · rand() · (x*_{i',j} − x_{i,j}) )
x_{i,j} = x_{i,j} + v_{i,j}

We notice the rand() function, which generates a uniformly distributed random number between 0 and 1. We also notice three parameters that can be modified to tune the performance of the algorithm: w, pw and nw.
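These two update equations translate almost literally into C; the dimension and the parameter values below are only illustrative.

#include <stdlib.h>

/* Hypothetical sketch of the PSO update for one particle. rand01()
   returns a uniformly distributed random number in [0,1]. */
#define DIM 16 /* e.g., the number of ANN weights being optimized */

double x[DIM], v[DIM];              /* position and velocity of the particle */
double pbest[DIM], gbest[DIM];      /* personal and global best positions */
double w = 0.6, pw = 2.0, nw = 2.0; /* illustrative parameter values */

double rand01(void) { return (double)rand() / RAND_MAX; }

void pso_update(void) {
  int j;
  for (j = 0; j < DIM; j++) {
    v[j] = w * (v[j] + pw * rand01() * (pbest[j] - x[j])
                     + nw * rand01() * (gbest[j] - x[j]));
    x[j] += v[j];
  }
}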
8 R. Eberhart and J. Kennedy. A new optimizer using particle swarm theory. Micro Machine and Human Science, 1995. MHS '95., Proceedings of the Sixth International Symposium on, pages 39-43, Oct 1995
9 Y. Shi and R. Eberhart. A modified particle swarm optimizer. Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, pages 69-73, May 1998
10 R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. Swarm Intelligence, 1(1):33-57, August 2007
Implementation
Open the following world file:
.../worlds/advanced_particle_swarm_optimization.wbt
In this exercise the goal is to give some robots the ability to evolve on their own. To do that we use PSO and an artificial neural network (ANN, see Pattern Recognition using the Backpropagation Algorithm). Each robot runs the same controller, but with different weights for the ANN. PSO acts on the weights to evolve each controller; this means that the search space is an N-dimensional search space, where N is the number of weights. The figure shows a representation of the network with one hidden layer and its inputs and outputs.
[Q.1] How could we encode "the robot should avoid obstacles" as a fitness function?
[Q.2] Is the strategy "stay still while my sensors are not activated" successful with your function? How can this be avoided, i.e., how can we encourage the robot to move fast?
[Q.3] With your new function, what happens if the robot turns on itself at full speed? If this strategy gives good results, re-design your function to discourage this behavior.
[Q.4] Why do we use the previous speed as one of the inputs of the ANN? Same question for the bias.
If you didn't design a fitness function in the previous questions, you can use the one proposed in 11 and used again in 12. It is presented this way:

F = V · (1 − √∆v) · (1 − i)

with 0 ≤ V ≤ 1, 0 ≤ ∆v ≤ 1, 0 ≤ i ≤ 1,

where V is the average absolute wheel speed of both wheels, ∆v is the average difference between the wheel speeds in absolute value and i is the average value of the most active infrared sensor.
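Computed over one evaluation run, this fitness could be sketched as below; the three inputs are assumed to be already averaged and normalized to their [0,1] ranges.

#include <math.h>

/* Hypothetical sketch of the fitness function above. The three factors
   respectively reward moving fast, moving straight and staying away
   from obstacles. */
double fitness(double V,   /* average absolute wheel speed, in [0,1] */
               double dv,  /* average wheel speed difference, in [0,1] */
               double i) { /* average of the most active IR sensor, in [0,1] */
  return V * (1.0 - sqrt(dv)) * (1.0 - i);
}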
[Q.5] Why is this fitness function adapted to each point of questions 1-3?
[P.1] Implement your fitness function, or this one, in the robot controller. (Hint: look for the TODO in the code.)
11 D. Floreano and F. Mondada. Evolution of homing navigation in a real mobile robot. Systems, Man, and
Cybernetics, Part B, IEEE Transactions on, 26(3):396-407, June 1996
12 J. Pugh, A. Martinoli, and Y. Zhang. Particle swarm optimization for unsupervised robotic learning. Swarm
Intelligence Symposium, 2005. SIS 2005. Proceedings 2005 IEEE, pages 92-99, June 2005
Figure 9.11: Evolutionary optimization loop used by PSO
Figure 9.12: Neural network used for this exercise
[P.2] In the controller of the supervisor you have to set the number of runs and their duration; good values are between 30 and 100 runs with a duration of at least 30 seconds. Now you can run the simulation a first time in fast mode... and get a coffee; learning is a long process.
[Q.6] At the end of the simulation, the simulation enters DEMO mode, which means that all robots take the global best result as their controller. Analyse their behavior: is it a good obstacle avoidance algorithm? Is any robot stuck after a while? How could we improve the quality of the solution?
PSO modifications
We propose a way to improve the quality of the solution: we will modify the main PSO loop to give it more robustness against noise. In the step "Evaluate new particle position" we will also re-evaluate the personal best position of each particle. This helps to eliminate lucky runs where a bad solution gets a good fitness by chance. In the code this is done by evaluating all the personal best solutions together, once every three runs.
[P.3] To do this, in the code of the supervisor, define NOISE_RESISTANT to 1. Now build and run the simulation again.
[Q.7] Is the resulting solution better? Since we didn't change the total number of runs, enabling the noise-resistant version divides the number of evolution runs. Why does this make sense?
In the file pso.c you might have noticed the three parameters w, nw and pw. Those are the parameters used in the evolution function of PSO.
[Q.8] What is the use of the w parameter? What could happen if you set it to a value > 1?
Going further
To get further information about PSO you can have a look at the papers proposed in this exercise 13, 14, 15, 16. You might also want to look at Genetic Algorithms (GA), another family of algorithms performing unsupervised learning. Basic information and pointers can be found on Wikipedia.
13 Y. Shi and R. Eberhart. A modified particle swarm optimizer. Evolutionary Computation Proceedings, 1998.
IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, pages 69-73,
May 1998
14 R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. Swarm Intelligence, 1(1):33-57, August
2007
15 D. Floreano and F. Mondada. Evolution of homing navigation in a real mobile robot. Systems, Man, and
Cybernetics, Part B, IEEE Transactions on, 26(3):396-407, June 1996
16 J. Pugh, A. Martinoli, and Y. Zhang. Particle swarm optimization for unsupervised robotic learning. Swarm
Intelligence Symposium, 2005. SIS 2005. Proceedings 2005 IEEE, pages 92-99, June 2005
SLAM [Advanced]
SLAM is the acronym for Simultaneous Localization And Mapping. The SLAM problem consists, for a mobile robot, in concurrently estimating a map of its environment and its position with respect to this map. It is a current research topic, so we won't go into too many details, but we will try to understand the key ideas behind this problem and give references for those who are interested.
Map building, first attempt
To illustrate the difficulty of this problem, we will start with a naive approach and show its limitations. To do that, open the following world file:
.../worlds/advanced_slam.wbt
We will use a simple grid to model the map. The robot will start in the middle of the grid and will be able to modify the grid when it sees an obstacle. The grid is initialized to zero in each cell, which means that the map is initially empty. The robot will write a one in each cell where it finds an obstacle. Of course, this uses odometry to keep an estimate of the robot's position; we reuse the same module as in the exercise on odometry. The map is then displayed on the screen so that we can examine it.
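The naive map update could be sketched as follows; the grid resolution and the way the obstacle position is projected in front of the robot are assumptions of this sketch.

#include <math.h>

/* Hypothetical sketch of the naive occupancy grid update: when an IR
   sensor detects an obstacle, the cell in front of the robot is marked
   as occupied. The pose (x, y, theta) comes from the odometry module
   and the robot starts in the middle of the grid. */
#define GRID 200
#define CELL 0.01 /* cell size in meters (assumption) */

int map[GRID][GRID]; /* 0 = empty, 1 = obstacle */

void mark_obstacle(double x, double y, double theta, double obstacle_dist) {
  double ox = x + obstacle_dist * cos(theta); /* obstacle position ... */
  double oy = y + obstacle_dist * sin(theta); /* ... in world coordinates */
  int ci = GRID / 2 + (int)(ox / CELL);
  int cj = GRID / 2 + (int)(oy / CELL);
  if (ci >= 0 && ci < GRID && cj >= 0 && cj < GRID)
    map[ci][cj] = 1;
}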
[P.1] Run the simulation and observe the behavior of the robot: it is doing wall following. Let it do a whole turn of this simplified arena, stop the simulation and look at the map displayed on the screen. The black points are obstacles, the white ones are free space and the red one is the robot's current position.
[Q.1] What can you notice about this map? Look in particular at the width of the trace, at the point where the loop closes and at the corners.
[Q.2] What would happen in the map if you let the robot do 5 turns around the arena? Check your prediction in simulation. Why is this result an issue? Imagine what would happen with 10 or even 100 turns!
In the picture you can see four of these runs plotted. The top row shows what you might obtain if your odometry is not perfectly calibrated: the loop is not closed, or at least not properly. In the third image you see what is returned with a good calibration. It's better, but even now it is not a perfect rectangle like the arena is. Finally, the fourth picture shows the result of a very long run performed with the best calibration we found. Since the odometry cannot be perfect, there will always be a small increase of the error during the run. In the long term this causes a rotation of the plotted path, and in the very long term (several minutes to an hour) we obtain this circle produced by the rotation of the approximate rectangle.
This odometry error is a problem, but if you think about it you will certainly come up with a few other issues regarding this mapping technique. As a starting point, we could stress the fact that a big matrix is very greedy in terms of memory. There exist several ways to improve the map representation, starting with the data structures; you might want to search for topological representation, line feature extraction or geometrically constrained SLAM.
Figure 9.13: 4 runs of the mapping controller
Robot localization
This section is based on laboratory D of the course Mobile Robots at EPFL. Now that we have seen that map building is harder than it seems, we will explore the localization problem: the robot is given a map and it must find out where it is located on this map. This is done by comparing the IR sensor measurements with the pre-established map and using odometry. The robot has to integrate the sensor measurements over time, since a single measurement is usually not sufficient to determine the position.
We can distinguish several localization problems, so we will define some of them and limit ourselves to a single one. The first distinction that must be made is static vs. dynamic environment; in our case the environment is fixed and only the robot moves. Then the approach can be local or global: we can do only position tracking, meaning we know where the robot starts and just have to follow it, or localize it on the whole map "from scratch". We will do a global localization example in this exercise. A third opposition is active vs. passive algorithms: while passive algorithms just process the sensor data, active ones control the robot in order to minimize the position uncertainty. We will use a passive algorithm.
The basic idea of our approach is to assign a probability (or belief) to each cell of the grid and to use the Bayes rule to update this probability distribution. The Bayes rule states the following:
p(x | y) = p(y | x) · p(x) / p(y)

Since p(y) doesn't depend on x, we will only consider it as a normalizer and we can reformulate this rule as:

p(x | y) = η · p(y | x) · p(x)

where:
• p(x) is the prior probability distribution, which gives the information we have about X before incorporating the data y
• p(x | y) is called the posterior probability distribution over X; it is the information we have on X knowing the data y
• p(y | x) is usually called the likelihood or generative model, because it describes how the state variables X cause the sensor measurements Y
We will now skip the Markov assumption and just give a bit of terminology before giving the algorithm:
• x_t is the state of the robot and the environment at instant t. It will only include the robot position in our example, but it could include much more (robot speed, location of moving objects in the environment, etc.)
• u_t is the control action given before state x_t. It will be the motor command.
• z_t is the measurement data in state x_t.
• m is the map given to the robot; it takes no index since it does not vary.
Any of these quantities can be expressed as a set: z_{t1:t2} = z_{t1}, z_{t1+1}, z_{t1+2}, ..., z_{t2}. We can now
define the state transition probability which is the probabilistic law giving the evolution of state:
p(xt | xt−1 , ut )
We can also define the measurement probability which is the probability to measure value zt
knowing that we are in state xt :
p(zt | xt )
Finally, we have to define the belief, which is the posterior probability of the state variable conditioned on the available data:

bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

And if we compute it before including the latest measurement, we call it the prediction:

bel̄(x_t) = p(x_t | z_{1:t−1}, u_{1:t})
Computing bel(x_t) from bel̄(x_t) is called the measurement update. We can now give the generic algorithm for Markov localization (which is just an adaptation of the Bayes rule to the mobile robot localization problem).
This algorithm cannot be applied directly to our grid-based map representation, so we will also give it in another form, as a discrete Bayes filter. This will allow us to implement the algorithm.
Algorithm 1 Markov_localization(bel(x_{t−1}), u_t, z_t, m)
  for all x_t do
    bel̄(x_t) = ∫ p(x_t | u_t, x_{t−1}, m) · bel(x_{t−1}) dx_{t−1}
    bel(x_t) = η · p(z_t | x_t, m) · bel̄(x_t)
  end for
  return bel(x_t)

Algorithm 2 Discrete_Bayes_filter({p_{k,t−1}}, u_t, z_t)
  for all k do
    p̄_{k,t} = Σ_i p(X_t = x_k | u_t, X_{t−1} = x_i) · p_{i,t−1}
    p_{k,t} = η · p(z_t | X_t = x_k) · p̄_{k,t}
  end for
  return {p_{k,t}}
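For a one-dimensional belief array, Algorithm 2 could be implemented as in the following sketch; motion_model and sensor_model stand for p(X_t = x_k | u_t, X_{t−1} = x_i) and p(z_t | X_t = x_k), and both are assumptions of this sketch, not the exact functions of the provided controller.

/* Hypothetical sketch of the discrete Bayes filter over a 1-D belief
   array. The two model functions are problem-specific and left as
   prototypes here. */
#define N_CELLS 100

double motion_model(int k, int i, int u); /* p(X_t = x_k | u_t, X_{t-1} = x_i) */
double sensor_model(int k, int z);        /* p(z_t | X_t = x_k) */

void bayes_filter(double bel[N_CELLS], int u, int z) {
  double new_bel[N_CELLS];
  double eta = 0.0;
  int i, k;
  for (k = 0; k < N_CELLS; k++) {
    new_bel[k] = 0.0; /* prediction: sum over all previous states */
    for (i = 0; i < N_CELLS; i++)
      new_bel[k] += motion_model(k, i, u) * bel[i];
    new_bel[k] *= sensor_model(k, z); /* measurement update */
    eta += new_bel[k];
  }
  for (k = 0; k < N_CELLS; k++)
    bel[k] = new_bel[k] / eta; /* normalize so the beliefs sum to 1 */
}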
We can picture this as replacing a curve by a histogram, and this is exactly what happens if we think of it in one dimension: we have discretized the probability space. This algorithm is already implemented; you can use the same world file as for the mapping part, but you will have to change the controller of the e-puck.
[P.2] Look in the scene tree for the e-puck node, change the controller field to advanced_slam_2 instead of advanced_slam_1 and save the world file.
In the camera window we will plot the belief of the robot position as shades of red; the walls are shown in black and the most probable location is shown in cyan (there can be multiple best positions, especially at the beginning of the simulation). At the beginning, the probability distribution is uniform over the whole map, and it evolves over time according to the IR sensor measurements and to the odometry information.
[P.3] Run the simulation and let the robot localize itself. Why don't we have a single red point on the map but an area of higher values?
[Q.3] Can you explain why we only have a small part of the former grid for this algorithm?
You might have noticed that the robot has only one possible position once it has localized itself. How is this possible, since the rectangle is symmetric? This is because we further simplified the algorithm: we just have to localize the robot in 2 dimensions (x, y) instead of 3 (x, y, θ). This means the robot knows its initial orientation, and this removes the ambiguity.
[Q.4] Read the code; do you understand how the belief is updated?
[Q.5] Why do we apply a blur on the sensor model and on the odometry measurements? What would happen if we didn't?
[P.4] Try to move the robot before it localizes itself (after only one step of simulation); the robot should of course still be able to localize itself!
We have presented an algorithm to perform robot localization, but there are many others! If you are interested, you can look at Monte-Carlo localization, which is also based on the Bayes rule but represents the probability distributions by a set of particles instead of a histogram. Extended Kalman Filter (EKF) localization is also an interesting algorithm, but it is slightly more complicated; if you want to go further you can have a look at Rao-Blackwellized filters.
Key challenges in SLAM
As you have seen with the two previous examples, there are several challenges to overcome in order to solve the SLAM problem. We will list some of them here, but many others might arise. We take into account the fact that the e-puck evolves in a fixed environment; changes in the environment are irrelevant for us.
• The first problem we have seen is that the sensors are noisy. We know from the exercise on odometry that the odometry precision decreases over time. This means that the uncertainty over the position of the robot will increase. We have to find a way to lower this error at some point in the algorithm. But odometry is not the only sensor which generates noise: the IR sensors must be taken into account too. And if we used the camera to do vision-based SLAM, we would have to deal with the noise of the camera as well.
• One solution to the previous problem takes us to another problem: the correspondence problem, which is maybe the hardest one. The problem is to determine whether two measurements taken at different points in time correspond to the same object in the physical world. This happens when the robot has made a cycle and is again at the same point in space; this is also known as "closing the loop". If we can rely on the fact that two measurements correspond to the same physical object, we can dramatically diminish the uncertainty on the robot position. This problem includes the localization process, which is a challenge in itself.
• Another problem that might show up is the high dimensionality of the map. For a 2D grid map we already used 200x200 integers; imagine if we had to do the same with a 3D map over a whole building. This is the map representation problem. Since we also have a possibly unbounded space to explore, memory and computational power might be issues too.
• A last challenge is to explore the map, because the robot should not only build a map and localize itself, but it should also explore its environment as completely as possible. This means exploring the greatest surface in the least amount of time. This problem is related to the navigation and path planning problems; it is commonly referred to as autonomous exploration.
Now you have a better idea of the challenges that we have to face before being able to do SLAM!
But SLAM is not limited to IR sensors: one could use the camera to perform vision-based SLAM. We could place landmarks on the walls of the maze and reuse the artificial neural networks of the pattern recognition exercise to recognize them and improve our algorithm. This could be done by someone who would like to participate in the Rat's Life contest. Many researchers also use laser range sensors to get precise data scans from a single position. We can also find SLAM work on flying or underwater vehicles in 3D environments. Current research topics in SLAM include limiting the computational complexity, improving data association (between an observed landmark and the map) and environment representation.
Going further *[Challenge]*
If you want to go further you can read papers on SLAM; good references are 17 and 18. 19 presents a solution to the SLAM problem based on EKF. 20 presents SLAM with sparse sensing; you might also have a look at 21, which presents an advanced algorithm called FastSLAM. Other resources can be found on the Internet; http://www.openslam.org/ provides a platform for SLAM researchers, giving them the opportunity to publish their algorithms.
[P.5] Implement a full SLAM algorithm in the Rat’s Life framework.
17 H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine, 13(2):99-110, June 2006.
18 T. Bailey and H. Durrant-Whyte. Simultaneous localization and mapping (SLAM): part II. IEEE Robotics & Automation Magazine, 13(3):108-117, September 2006.
19 M.W.M.G. Dissanayake, P. Newman, S. Clark, H.F. Durrant-Whyte, and M. Csorba. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3):229-241, June 2001.
20 K.R. Beevers and W.H. Huang. SLAM with sparse sensing. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), pages 2285-2290, May 2006.
21 M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI National Conference on Artificial Intelligence, pages 593-598. AAAI, 2002.
Chapter 10
Cognitive Benchmarks
The cognitive benchmark topic was already introduced in the section Enjoy Robot Competition. This
chapter first provides a quick introduction to the topic. Then, it introduces one cognitive
benchmark in more detail: Rat’s Life. To avoid redundancy with the official Rat’s Life website, we
provide only a quick description of the contest and some clues for starting quickly with this benchmark.
The last part is dedicated to other active cognitive benchmarks.
Introduction
Generally speaking, a benchmark is a method for quantifying a specific attribute. For example,
in the computer field, if one wants to establish a list of CPUs sorted by their performance, a
CPU benchmark is performed on each CPU. A typical CPU benchmark consists of solving a long
computation: the quicker the CPU finishes this computation, the more efficient it is. Once every CPU’s
performance is quantified, it is easy to rank them according to this metric.
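As a toy illustration of this idea, the sketch below times a fixed computation with the standard C library; the workload is arbitrary, and the elapsed time is the metric used for ranking (smaller is better):

#include <stdio.h>
#include <time.h>

int main() {
  clock_t start = clock();
  double x = 0.0;
  for (long i = 1; i <= 100000000L; i++)  /* the fixed workload */
    x += 1.0 / ((double)i * (double)i);   /* converges toward pi^2/6 */
  double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
  printf("result = %f, time = %.2f s\n", x, elapsed);
  return 0;
}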
Similarly, a cognitive benchmark is a method for quantifying intelligence (and not only
artificial intelligence). For example, the mirror test1 measures self-awareness. An odorless mark
is placed on an animal without it noticing. Then, the animal is placed in front of a mirror.
The animals can be sorted into three classes: the animal takes no interest in the mark, the animal
takes interest in the mark on its reflection, or the animal takes interest in its own mark. If the
animal belongs to the last class, it is self-aware (the converse is not necessarily true). An animal
ranking can be established according to this criterion.

1 More information on Wikipedia: Mirror test
In robotics, cognitive benchmarks are used to compare the quality of the artificial intelligence
of several robot controllers. This comparison is performed on a measurable criterion such as the time
to solve a problem or the victory in a round of a game. The rules of the benchmark must be
unambiguous. To increase the motivation of the participants, cognitive benchmarks are often
shaped as contests. Thanks to this competitive spirit, by the end of the contest the participants
often arrive at very good solutions to the given problem.
A chess game can be given as an example of a cognitive benchmark. The rules of this game are limited
and unambiguous. Say that if a player wins a game he gets three points, if he loses he gets no point,
and if the two players tie, each of them gets one point. In a tournament where each player meets all
the others, the number of points precisely describes the chess level of each participant. Moreover,
this cognitive benchmark is a hybrid benchmark, i.e., completely different entities (for example,
humans and computer programs) can participate.
In robotics research, cognitive benchmarks are useful mainly for comparing research results. Indeed, some techniques advocated by researchers in scientific publications lack comparison
with other related research results. Thanks to this kind of event, the pros and cons of a technique can
be observed in comparison with another one. Generally speaking, cognitive benchmarks stimulate
research.
About the life span of a benchmark, the EURON website says: “A benchmark can only be
considered successful if the target community accepts it and uses it extensively in publications, conferences and reports as a way of measuring and comparing results. The most successful benchmarks
existing today are probably those used in robot competitions.”2
Finally, knowing the official long-term goal of the RoboCup3 (“By 2050, develop
a team of fully autonomous humanoid robots that can win against the human world champion team
in soccer.”), one can still expect a lot of surprises in the robotics field.
Rat’s Life Benchmark
This section presents the Rat’s Life contest. It gives only a quick overview of the
contest; on the other hand, you will find here clues about robot programming that will let you take
a quick start in the contest.
Presentation
Rat’s Life is a cognitive benchmark. In an unknown maze, two e-pucks compete for
resources: they have to find energy in feeders to survive. When a feeder has been used, it becomes
unavailable for a while. The contest can be run either in simulation or in reality. The real mazes
are composed of Lego© bricks. The figure depicts a simulation of the contest on the 6th maze.
Figure 10.1: A Webots simulation of the Rat’s Life contest
2 Source: http://www.euron.org/activities/benchmarks/index.html
3 More information on: http://www.robocup.org/
The aim of the contest is to implement an e-puck controller so that your e-puck survives
longer than its opponent. If you have done the previous exercises, you have all the background
needed to participate in this contest, except that the programming language is Java instead of C.
But since the Webots API is almost identical and Java is more user-friendly than C, the transition
will be easy.
The world file is available in Webots (even in its free version) and located in:
webots_root/projects/contests/ratslife/worlds/ratslife.wbt
where webots_root corresponds to the directory where Webots is installed.
On the official Rat’s Life website, you will find the precise contest rules, information about
participation, the current ranking, information about building the real maze, etc. At this
point, I suggest you refer to the official Rat’s Life website before going on with this curriculum:
http://www.ratslife.org
From now on, we expect you to know the rules and the specific vocabulary of Rat’s Life.
Main Interest Fields and Clues
The main robotic challenges of the Rat’s Life contest are the visual system, the navigation in an
unknown environment and the game strategy. The best solution for each of these topics is still
an open question, i.e., there is no well-defined solution. By the end of the contest, a solution may
distinguish itself from the others on some topics; so far this has not been the case. This section gives
a starting point for each of these topics, i.e., some standard techniques (or references to them) and
the difficulties of each topic, so that you can start the contest quickly. Obviously, these clues are not exhaustive and they
must not restrain your creativity and your research.
Visual-System — Pattern Recognition

In the Rat’s Life contest, two kinds of e-puck sensors are used: the IR sensors and the camera. The first
ones are simple to use: the information returned by an IR sensor is quite intuitive and its use is
more or less limited to detecting the walls. However, the information returned by the camera is
much more difficult to process. Moreover, the camera has to distinguish and estimate the position
and the orientation of several patterns: an alight feeder, an off feeder, another e-puck and especially
a landmark.

Pattern recognition is a huge subject in which a lot of techniques exist. The goal of this document
is not to give you a complete list of the existing solutions but to give you a way to start the contest
efficiently. A simple solution is to use a blob detection4 algorithm: if the blob selection
is calibrated on the appropriate set of pixels, the problem becomes easier.
The first figure depicts a possible workflow for detecting an alight feeder. The first operation is to
get the value channel (with a simple RGB to HSV color space5 conversion). Then, a blob detection
algorithm is applied in order to get the blob corresponding to the light source. Finally, implausible
blobs (too small, too high in the image, etc.) are removed. The resulting blob’s x-position can be
sent more or less directly to the motors. To improve the robustness of this workflow, one may
add another blob detection which searches for a red glow around the light point.

Figure 10.2: Example of an image processing workflow for an alight feeder
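To make this workflow concrete, here is a minimal sketch written against the Webots C API (the actual contest controllers are in Java, but the API is nearly identical). The device name, thresholds and wheel speeds are assumptions to adapt to your setup, and a real blob detection (connected components, extreme-blob removal) is simplified here to the centroid of the bright pixels:

#include <webots/robot.h>
#include <webots/camera.h>
#include <webots/differential_wheels.h>

#define TIME_STEP 64
#define V_THRESHOLD 230  /* hypothetical "very bright" value threshold */

int main() {
  wb_robot_init();
  WbDeviceTag cam = wb_robot_get_device("camera");  /* assumed device name */
  wb_camera_enable(cam, TIME_STEP);
  const int w = wb_camera_get_width(cam);
  const int h = wb_camera_get_height(cam);

  while (wb_robot_step(TIME_STEP) != -1) {
    const unsigned char *img = wb_camera_get_image(cam);
    long sum_x = 0, count = 0;
    for (int y = 0; y < h; y++)
      for (int x = 0; x < w; x++) {
        int r = wb_camera_image_get_red(img, w, x, y);
        int g = wb_camera_image_get_green(img, w, x, y);
        int b = wb_camera_image_get_blue(img, w, x, y);
        int v = r > g ? (r > b ? r : b) : (g > b ? g : b);  /* HSV value channel */
        if (v > V_THRESHOLD) { sum_x += x; count++; }
      }
    if (count > 5) {  /* reject tiny blobs, which are probably noise */
      /* blob x-position in [-0.5, 0.5] steers the robot toward the light */
      double cx = (double)sum_x / count / w - 0.5;
      wb_differential_wheels_set_speed(300 + 600 * cx, 300 - 600 * cx);
    } else {
      wb_differential_wheels_set_speed(200, -200);  /* turn in place to search */
    }
  }
  wb_robot_cleanup();
  return 0;
}

Note how the loop already follows the general acquisition, pre-processing, feature extraction and detection stages of the computer vision workflow described below.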
The second figure depicts a possible workflow for detecting a landmark. This problem is trickier
than the previous one. Here is a possible solution. First, the hue channel of the image is
computed; in this color space, it is easy to distinguish the four landmark colors. Then, a blob
detection is performed on this information. The size of the blob gives clues about the distance to
the landmark. The blob which seems to be a landmark is straightened out (for example with a
homography6) and normalized (for example onto a 4x3 matrix). It is then easy to compare the
result with a database to get the number of the landmark.

Figure 10.3: Example of an image processing workflow for a landmark
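The final comparison against the database could, for instance, be a nearest-neighbour match. The sketch below assumes the blob has already been straightened and normalized to a 4x3 patch of hue values; the template values are placeholders that would be measured offline:

#include <math.h>
#include <stdio.h>

#define ROWS 4
#define COLS 3
#define N_LANDMARKS 4

/* hue templates of the four landmarks, normalized to 4x3; in a real
   controller these would be measured offline (zeros are placeholders) */
static const float database[N_LANDMARKS][ROWS][COLS] = {{{0}}};

/* return the index of the closest template (squared Euclidean distance) */
int classify_landmark(const float patch[ROWS][COLS]) {
  int best = 0;
  float best_d = INFINITY;
  for (int k = 0; k < N_LANDMARKS; k++) {
    float d = 0.0f;
    for (int i = 0; i < ROWS; i++)
      for (int j = 0; j < COLS; j++) {
        float e = patch[i][j] - database[k][i][j];
        d += e * e;
      }
    if (d < best_d) { best_d = d; best = k; }
  }
  return best;
}

int main() {
  float patch[ROWS][COLS] = {{0}};  /* an observed, normalized blob */
  printf("landmark #%d\n", classify_landmark(patch));
  return 0;
}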
4 More information on: Blob detection
5 More information on: HSV color space
6 More information on Wikipedia: Homography
Generally speaking, a computer vision workflow is composed of:
• The image acquisition: This part is simply performed by using the camera_get_image()
function. It returns either the simulated or the real camera pixel values.
• The pre-processing: The aim of the pre-processing part is to increase the quality of the image. For
example, a Gaussian blur can be applied to the image in order to remove noise. In the
previous examples, the pre-processing corresponds to the conversion from the RGB to the
HSV color space.
• The feature extraction: This part is the most difficult to choose. In the previous examples,
a blob detection was used, but a completely different approach can be taken, such as edge detection
(to start with this topic, refer to the Canny edge detector7 and the Hough transform8). More
complex techniques may be related to texture, shape or motion.
• Detection/segmentation: The aim of the detection part is to remove the unusable information. For
example, to remove all the blobs that are too small, or to select the area of interest.
• High-level processing: At this step, the information is almost usable. But the landmark blob,
for example, must still be processed to determine which class it belongs to. This is often a machine
learning9 problem, itself a huge field of computer science. If only two
completely different methods had to be cited, they would be the bio-inspired back-propagation algorithm, which
uses an artificial neural network for the classification, and the statistically-inspired Support
Vector Machine (SVM)10.

7 More information on Wikipedia: Canny edge detector
8 More information on Wikipedia: Hough transform
9 More information in the exercises on Pattern recognition and Particle Swarm Optimization, and on Wikipedia: Machine learning
10 More information on Wikipedia: Support vector machine
Vision-based Navigation — Simultaneous Localization and Mapping (SLAM)
Vision-based navigation and mapping are without any doubt the most difficult part of
the Rat’s Life contest. To be competitive, the e-puck has to remember its previous actions: indeed,
if an e-puck finds the shortest path between the feeders, it will probably win the match. This
requires building a representation of the map. For example, a simple map representation could be
a graph linking the observations (landmark, feeder, dead end) with the actions
(turn left, go forward). We introduce here a powerful technique to solve this problem. Once the
map is created, there are techniques to find the shortest path in a maze (refer to Dijkstra’s or the A*
algorithm), as sketched below.
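As an illustration of the shortest-path step, here is a minimal sketch using breadth-first search, which is Dijkstra's algorithm for unit edge costs. The 8x8 maze and its encoding are made-up examples, not a Rat's Life maze:

#include <stdio.h>
#include <string.h>

#define N 8

static const char maze[N][N + 1] = {  /* '#' = wall, '.' = free cell */
  "........",
  ".######.",
  ".#....#.",
  ".#.##.#.",
  ".#.#..#.",
  ".#.#.##.",
  ".....#..",
  "####.#.."
};

/* length (in cells) of the shortest path from (sx,sy) to (gx,gy), or -1 */
int bfs(int sx, int sy, int gx, int gy) {
  int dist[N][N];
  memset(dist, -1, sizeof(dist));  /* -1 marks unvisited cells */
  int queue[N * N][2], head = 0, tail = 0;
  dist[sy][sx] = 0;
  queue[tail][0] = sx; queue[tail][1] = sy; tail++;
  const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
  while (head < tail) {
    int x = queue[head][0], y = queue[head][1]; head++;
    if (x == gx && y == gy) return dist[y][x];
    for (int k = 0; k < 4; k++) {
      int nx = x + dx[k], ny = y + dy[k];
      if (nx >= 0 && nx < N && ny >= 0 && ny < N &&
          maze[ny][nx] == '.' && dist[ny][nx] < 0) {
        dist[ny][nx] = dist[y][x] + 1;
        queue[tail][0] = nx; queue[tail][1] = ny; tail++;
      }
    }
  }
  return -1;  /* goal unreachable */
}

int main() {
  printf("shortest path length: %d\n", bfs(0, 0, 7, 7));
  return 0;
}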
Simultaneous Localization and Mapping (SLAM) is a technique for building a map of an unknown
environment while estimating the robot’s current position, by referring to static reference points. These
points can be either the maze walls (using the IR sensors: distance sensor-based navigation) or the
landmarks (using the camera: vision-based navigation). Refer to the exercise on SLAM for more
information.
Game Strategy

Rat’s Life is a contest based on competition. Establishing an efficient strategy is the first step toward
winning a match. The game strategy (and thus the e-puck behavior) must address the following key
points; a sketch of how such decisions might be organized follows the list.
• Maze exploration: How should the e-puck explore the maze (randomly, right-hand algorithm)?
When can it explore the maze?
• Coverage zone: How many feeders are necessary to survive? What should the e-puck behavior
be when it has enough feeders to survive?
• Reaction to the opponent strategy: What should the e-puck do if the opponent blocks an
alley? What should its behavior be if the opponent follows it?
• Energy management: At which battery level does the e-puck have to dash for a feeder? Is it
useful to deplete the energy of a feeder?
• Opponent perturbation: How can the e-puck block its opponent? Which feeder is being used by the
opponent?
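One hedged way to organize these decisions is a small finite state machine; the states, observations and thresholds below are assumptions for illustration, not a recommended strategy:

#include <stdio.h>

/* hypothetical high-level states for a Rat's Life controller */
typedef enum { EXPLORE, GO_TO_FEEDER, FEED, BLOCK_OPPONENT } State;

#define LOW_ENERGY 0.3  /* hypothetical fraction of a full battery */

/* decide the next behavior from the current state and a few observations */
State next_state(State s, double energy, int sees_feeder, int at_feeder,
                 int opponent_near) {
  if (energy < LOW_ENERGY && sees_feeder)
    return GO_TO_FEEDER;  /* dash for a feeder before running out of energy */
  switch (s) {
    case EXPLORE:        /* map the maze while the energy level allows it */
      return (sees_feeder && energy < 0.5) ? GO_TO_FEEDER : EXPLORE;
    case GO_TO_FEEDER:
      return at_feeder ? FEED : GO_TO_FEEDER;
    case FEED:
      return opponent_near ? BLOCK_OPPONENT : EXPLORE;
    case BLOCK_OPPONENT: /* perturb the opponent briefly, then resume */
      return EXPLORE;
  }
  return EXPLORE;
}

int main() {
  State s = EXPLORE;
  s = next_state(s, 0.2, 1, 0, 0);  /* low energy, feeder in sight */
  printf("state = %d\n", s);        /* 1 == GO_TO_FEEDER */
  return 0;
}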
Other Robotic Cognitive Benchmarks
Throughout the world, there exists a huge number of robotic contests. In most of them, the
contest starts by the design and the building of the robot. In this case, the comparison of the
robot cognitive part isn’t obvious because it is influenced by the robot physical skills. Therefore, a
standard platform is needed to have a cognitive benchmark, i.e., the same robot and the same maze
c
for each participant. Rat’s Life uses the e-puck as standard platform and unambiguous Lego
mazes. Similarly, the RoboCup and FIRA organization have a contest category where the used
robot is identical, but the environment is more ambiguous.
RoboCup — Standard Platform League
The RoboCup organizes contests of soccer game for robots. There are several categories. One of
them uses a standard platform: a biped robot called Nao (see the figure). This robot replaces the
quadruped Sony AIBO robot. The first contest using this platform will begin in 2008. Two teams
of four Naos will contest in a soccer game match. This contest possesses mainly two additional
topics in comparison to Rat’s Life: the collectivity and the biped moves.
Figure 10.4: Nao, the humanoid robot of Aldebaran Robotics
You will find more information on the RoboCup official website: http://www.robocup.org
The Federation of International Robot-soccer Association (FIRA) — Kheperasot
The FIRA also organizes soccer contests for robots. It also has a standard league, which
uses a differential-wheeled robot: the Khepera.
The FIRA official website is: http://www.fira.net
Appendix A
Document Information & History
History
This book was created on the Wikibooks project and developed on the project by the contributors listed in Appendix A. For convenience, this PDF was created for download from
the project. The latest Wikibooks version may be found at
http://en.wikibooks.org/wiki/Cyberbotics’_Robot_Curriculum.
PDF Information & History
This PDF was compiled from LaTeX on March 2, 2009, based on the 2 March 2009 Wikibooks
textbook. The latest version of the PDF may be found at
http://en.wikibooks.org/wiki/Image:Cyberbotics’_Robot_Curriculum.pdf.
Authors
Cyberbotics Ltd., Olivier Michel, Fabien Rohrer, Nicolas Heiniger, DavidCary, Trolli101, and
anonymous contributors.
Appendix B
GNU Free Documentation License
Version 1.2, November 2002
Copyright © 2000, 2001, 2002 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.
Preamble
The purpose of this License is to make a manual, textbook, or other functional and useful
document “free” in the sense of freedom: to assure everyone the effective freedom to copy and
redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily,
this License preserves for the author and publisher a way to get credit for their work, while not
being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document must
themselves be free in the same sense. It complements the GNU General Public License, which is a
copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free
software needs free documentation: a free program should come with manuals providing the same
freedoms that the software does. But this License is not limited to software manuals; it can be used
for any textual work, regardless of subject matter or whether it is published as a printed book. We
recommend this License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that contains a notice
placed by the copyright holder saying it can be distributed under the terms of this License. Such
a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under
the conditions stated herein. The “Document”, below, refers to any such manual or work. Any
member of the public is a licensee, and is addressed as “you”. You accept the license if you copy,
modify or distribute the work in a way requiring permission under copyright law.
119
APPENDIX B. GNU FREE DOCUMENTATION LICENSE
A “Modified Version” of the Document means any work containing the Document or a portion
of it, either copied verbatim, or with modifications and/or translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document
that deals exclusively with the relationship of the publishers or authors of the Document to the
Document’s overall subject (or to related matters) and contains nothing that could fall directly
within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a
Secondary Section may not explain any mathematics.) The relationship could be a matter of
historical connection with the subject or with related matters, or of legal, commercial, philosophical,
ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being
those of Invariant Sections, in the notice that says that the Document is released under this License.
If a section does not fit the above definition of Secondary then it is not allowed to be designated as
Invariant. The Document may contain zero Invariant Sections. If the Document does not identify
any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts
or Back-Cover Texts, in the notice that says that the Document is released under this License. A
Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text
formatters or for automatic translation to a variety of formats suitable for input to text formatters.
A copy made in an otherwise Transparent file format whose markup, or absence of markup, has
been arranged to thwart or discourage subsequent modification by readers is not Transparent. An
image format is not Transparent if used for any substantial amount of text. A copy that is not
“Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and
standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary
formats that can be read and edited only by proprietary word processors, SGML or XML for which
the DTD and/or processing tools are not generally available, and the machine-generated HTML,
PostScript or PDF produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following pages as
are needed to hold, legibly, the material this License requires to appear in the title page. For works
in formats which do not have any title page as such, “Title Page” means the text near the most
prominent appearance of the work’s title, preceding the beginning of the body of the text.
A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language.
(Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”,
“Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section
when you modify the Document means that it remains a section “Entitled XYZ” according to this
definition.
The Document may include Warranty Disclaimers next to the notice which states that this
License applies to the Document. These Warranty Disclaimers are considered to be included by
reference in this License, but only as regards disclaiming warranties: any other implication that
these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License
applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading
or further copying of the copies you make or distribute. However, you may accept compensation
in exchange for copies. If you distribute a large enough number of copies you must also follow the
conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display
copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the
Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you
must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover
Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly
and legibly identify you as the publisher of these copies. The front cover must present the full title
with all words of the title equally prominent and visible. You may add other material on the covers
in addition. Copying with changes limited to the covers, as long as they preserve the title of the
Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first
ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent
pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must
either include a machine-readable Transparent copy along with each Opaque copy, or state in or
with each Opaque copy a computer-network location from which the general network-using public
has access to download using public-standard network protocols a complete Transparent copy of the
Document, free of added material. If you use the latter option, you must take reasonably prudent
steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent
copy will remain thus accessible at the stated location until at least one year after the last time
you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the
public.
It is requested, but not required, that you contact the authors of the Document well before
redistributing any large number of copies, to give them a chance to provide you with an updated
version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections
2 and 3 above, provided that you release the Modified Version under precisely this License, with
the Modified Version filling the role of the Document, thus licensing distribution and modification
of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in
the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document,
and from those of previous versions (which should, if there were any, be listed in the History
section of the Document). You may use the same title as a previous version if the original
publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of
the modifications in the Modified Version, together with at least five of the principal authors
of the Document (all of its principal authors, if it has fewer than five), unless they release you
from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright
notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the
Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts
given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at
least the title, year, new authors, and publisher of the Modified Version as given on the Title
Page. If there is no section Entitled “History” in the Document, create one stating the title,
year, authors, and publisher of the Document as given on its Title Page, then add an item
describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent
copy of the Document, and likewise the network locations given in the Document for previous
versions it was based on. These may be placed in the “History” section. You may omit a
network location for a work that was published at least four years before the Document itself,
or if the original publisher of the version it refers to gives permission.
K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the
section, and preserve in the section all the substance and tone of each of the contributor
acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles.
Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled “Endorsements”. Such a section may not be included in the
Modified Version.
N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with
any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option
designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other
section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements
of your Modified Version by various parties–for example, statements of peer review or that the text
has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to
25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version.
Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through
arrangements made by) any one entity. If the Document already includes a cover text for the same
cover, previously added by you or by arrangement made by the same entity you are acting on behalf
of, you may not add another; but you may replace the old one, on explicit permission from the
previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use
their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the
terms defined in section 4 above for modified versions, provided that you include in the combination
all of the Invariant Sections of all of the original documents, unmodified, and list them all as
Invariant Sections of your combined work in its license notice, and that you preserve all their
Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant
Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same
name but different contents, make the title of each such section unique by adding at the end of
it, in parentheses, the name of the original author or publisher of that section if known, or else a
unique number. Make the same adjustment to the section titles in the list of Invariant Sections in
the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various original
documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled
“Endorsements”.
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this
License, and replace the individual copies of this License in the various documents with a single
copy that is included in the collection, provided that you follow the rules of this License for verbatim
copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under
this License, provided you insert a copy of this License into the extracted document, and follow this
License in all other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate and independent documents
or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the
copyright resulting from the compilation is not used to limit the legal rights of the compilation’s
users beyond what the individual works permit. When the Document is included in an aggregate,
this License does not apply to the other works in the aggregate which are not themselves derivative
works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then
if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be
placed on covers that bracket the Document within the aggregate, or the electronic equivalent of
covers if the Document is in electronic form. Otherwise they must appear on printed covers that
bracket the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include translations of some or all Invariant
Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers,
provided that you also include the original English version of this License and the original versions
of those notices and disclaimers. In case of a disagreement between the translation and the original
version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the
requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual
title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided
for under this License. Any other attempt to copy, modify, sublicense or distribute the Document
is void, and will automatically terminate your rights under this License. However, parties who have
received copies, or rights, from you under this License will not have their licenses terminated so
long as such parties remain in full compliance.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies
that a particular numbered version of this License “or any later version” applies to it, you have the
option of following the terms and conditions either of that specified version or of any later version
that has been published (not as a draft) by the Free Software Foundation. If the Document does
not specify a version number of this License, you may choose any version ever published (not as a
draft) by the Free Software Foundation.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of the License in the document
and put the following copyright and license notices just after the title page:
Copyright © YEAR YOUR NAME. Permission is granted to copy, distribute and/or
modify this document under the terms of the GNU Free Documentation License, Version
1.2 or any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is
included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with . . .
Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts
being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three,
merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these
examples in parallel under your choice of free software license, such as the GNU General Public
License, to permit their use in free software.
