Interactive Key Frame Motion Editor for Humanoid Robots

Bachelor Thesis
Simon Philipp Hohberg
02/27/2012
Freie Universität Berlin
Fachbereich Mathematik und Informatik
Adviser
Dipl.-Inf. Daniel Seifert
Supervisor
Prof. Dr. Raúl Rojas
Statutory Declaration
I declare that I have written this bachelor thesis independently, using only the cited sources and aids.
I further declare that this work has not yet been submitted as part of any other examination procedure.
Berlin, February 28, 2012
Abstract
Static motions enable humanoid robots to solve static problems where a dynamic
solution would not provide noteworthy benefits. The most popular technique to
create static motions for humanoid robots is keyframing. Since a key frame motion defines many joint angles, tools for the design of key frame motions should support the motion designer in handling them.
In this thesis, an already existing simple motion editor is analyzed regarding its
drawbacks and inconveniences to formulate requirements that increase the usability
of a newly implemented editor. With respect to these requirements, the implementation of a motion editor that uses a 3D robot model and time-joint angle diagrams to visualize trajectories is described. Also, an interpolation using cubic Bézier curves is realized to create smooth motions, in contrast to linearly interpolated motions. The 3D robot model included in the motion editor also enables users to create motions without a real robot.
Furthermore, the implemented motion editor is configured by a robot description file that allows the editor to work with different types of robots.
Contents

1 Introduction
  1.1 FUmanoids Team
  1.2 FUmanoid Platform and FUremote
  1.3 Related Work
  1.4 Structure of Thesis
2 Theory
  2.1 Motions
    2.1.1 Keyframing
  2.2 Software Patterns
    2.2.1 Observer
    2.2.2 Model View Controller/Adapter
    2.2.3 Blackboard
  2.3 Bézier Curves
3 Requirements Analysis
  3.1 Analysis of the former MotionEditor
  3.2 Requirements
4 Implementation
  4.1 Technologies
    4.1.1 Google Protocol Buffers
    4.1.2 Eclipse Rich Client Platform and SWT
    4.1.3 Java3D
  4.2 Robot Model
    4.2.1 Robot Description
    4.2.2 Robot Model with Java3D
  4.3 Motions
    4.3.1 Interpolation
    4.3.2 Motion Description
  4.4 Architecture
  4.5 FUmanoid Motion Player
  4.6 The new MotionEditor
5 Conclusion and Future Work
Bibliography
List of Figures
1 Introduction
The usefulness of a robot is largely determined by its ability to interact. Since motions are an essential part of interaction, they are very important for robots in general.
One main problem of humanoid robotics is the complex motion model. Not only
walking but also any other motion needs to be defined. To simplify this motion
model, key frame motions can be used to define certain motions statically. Creating
a key frame motion is very complex in most cases because many joints have to be
considered and defined over time. Therefore, it is mandatory for a key frame motion
editor to be intuitive, descriptive, interactive and easy to use.
Based on experience with a very simple motion editor that was frequently used for many years, a new motion editor was implemented with the goal of fulfilling these qualities.
This thesis analyzes the requirements for a motion editor and shows which design decisions were made to meet those requirements, as well as the implementation details for the most important aspects of an interactive key frame motion editor.
1.1 FUmanoids Team
FUmanoids is the humanoid robot soccer team of the Freie Universität (FU) Berlin.
The FUmanoids were founded in 2006 and took part in the RoboCup competition
for the first time in 2007. RoboCup is short for Robot World Cup Initiative and is
the most important competition in several fields of robot research. There is not only
a cup for soccer playing robots, but also for rescue robots (RoboCup Rescue) and
domestic robots (RoboCup @Home).
RoboCup Soccer is divided into leagues for different kinds of robots. These include
humanoid robots, small size non-humanoid robots, mid size non-humanoid robots,
standard platform robots as well as a simulation league. The humanoid league, then
again, is subdivided into different sizes (kid size, teen size and adult size), each with
different size restrictions for the overall robot size, arm length, leg length and so on.
The FUmanoid robots are human-like autonomous robots playing in the kid size
league.
The concept of a cup for soccer playing robots was mainly formed to foster the development of robotics and artificial intelligence in an entertaining and application-oriented way. Soccer is especially suitable for this because it comprises the main
issues for humanoid robots, i.e. localization, walking and motions in general, vision
and team play. To ease the development of soccer playing robots in the beginning
of RoboCup, each goal was marked with a different color and colored markers were
placed next to the soccer field. Some of these aids have since been removed; some are still used. In the future, all of them will be removed and the whole environment will be adapted to a real soccer field.
1.2 FUmanoid Platform and FUremote
The FUmanoid platform is the software running on the FUmanoid robots. It includes all the artificial intelligence components for vision, localization and behavior,
but also walking and playing motions.
FUremote is a set of tools for the work with the FUmanoid robots. It includes
tools for configuration, debugging, simulation and motion generation. FUremote
can connect to robots running the FUmanoid platform and send and receive data
from the robot to control it. FUremote is based on the Java Eclipse Rich Client
Platform (subsection 4.1.2) and is therefore plug-in based. Figure 1.1 shows the
FUremote with several views open. These views are defined by certain plug-ins of
FUremote and can be closed or moved. One of these plug-ins is the MotionEditor
which is displayed in an additional view when opened.
Figure 1.1: FUremote application
1.3 Related Work
Creating motions for robots is very similar to creating motion for animations. Thus, almost any 3D animation tool is related to an application like the MotionEditor.
Blender is a free and widely used tool for 3D rendering and is therefore used as
reference representing 3D animation tools in general.
3D rendering tools are much more complex than an editor for robot motion design because they are far more general, which also makes them complicated to use. Nevertheless, looking at the subset of functionality that is necessary
for creating animations, concepts have been taken over by motion editors. The
keyframing technique (subsection 2.1.1) which is used to generate motions for robots
is an animation technique.
Figure 1.2 shows the 3D rendering and animation tool Blender. In the center of
the picture there is the 3D model that will be animated. At the bottom there is
the time line for the animation. The time line defines the number of frames for the
animation. The user also specifies a frame rate, i.e. frames per second, defining the
actual duration of the motion. Clicking at a certain time in the time line allows the
user to define a key frame for some 3D object with its current position, rotation and
scale.
Figure 1.2: Blender with 3D model of the robot 2012
Another animation concept is bones. A bone defines a rigid body with a joint at one end. By attaching bones to each other, they can be used to create a skeleton for a 3D object. The mesh of this object is then connected to the bones. When creating an animation, this skeleton can be used to either rotate certain bones directly or automatically calculate the joint rotations by translating the end of a bone chain. The mesh connected to the bones is then automatically adapted.
File formats for animations either store the positions, rotations and scales of each
object for each key frame or only the translations, rotations and scalings performed.
A robot motion can possibly be recorded with a 3D animation tool like Blender.
Nevertheless, it would take great effort to play a motion recorded this way, since the
animation and its key frames are defined for 3D objects and not for joints or motors.
So either a plug-in for Blender is written that knows the real robot and calculates
the joint angles during the design process, so they can be stored in a format that
is understood by the robot, or the animation format is converted to such a format
after the design phase.
Aldebaran Robotics is a company specialized in robotics. It developed the humanoid NAO robot for commercial purposes. The software for the NAO robot includes functionality for designing and capturing motions as well as for creating and programming interactive behavior. This software is called Choregraphe (Figure 1.3).
Figure 1.3: Aldebaran Choregraphe user interface [12]
As shown in Figure 1.3, Choregraphe includes a 3D model of the NAO robot which visualizes its current pose.
4
1.3 Related Work
On the left, there is a panel displaying available behavioral components and predefined robot poses. The behavioral components can be dragged to the panel in the
middle where they can be connected to each other to create a behavior for the robot.
This has nothing to do with the creation of motions, but allows the user to connect
the behavior with motions.
When designing a motion, Choregraphe initially shows only the Timeline Panel (Figure 1.4). This panel is divided into three parts: motion, behavior layer and details
area. For motion design only the motion area is interesting. It depicts the key frames
of the motion along a time axis. The unit of this axis is deciseconds and a key frame
is visualized as a gray box. When the user clicks into the time line, the pose for this time is shown in the 3D view. The user can then also set any joint angles in the 3D view, which inserts a new key frame at this position. This is done by clicking on a part of the robot in the 3D view, which opens a window where the user can set the values for the joints of this part. A key frame does not have to set all joint angles but can define only a subset. Key frames are interpolated with Bézier curves by default.
Figure 1.4: Choregraphe Timeline Panel: 1. Motion Area, 2. Behavior Layer Area,
3. Details Area [12]
The Choregraphe Timeline Editor for a motion shown in Figure 1.5 can be opened
up in a new window by clicking on the pen in the motion area. This editor can be
used in two different modes: curve mode and worksheet mode. The curve mode is
very descriptive and therefore much more interesting. It shows the trajectory for
each joint selected on the left as a curve with a specific color along the time. The
y-axis of this diagram depicts the joint angle in degrees.
A key frame is visualized as a point on the joint's trajectory. These points can be moved along the time axis or the joint angle axis to edit the motion, but they are restricted to multiples of 100 milliseconds (one decisecond) along the time axis.
Furthermore, different interpolations can be used. Besides the default automatic Bézier interpolation, there is constant, linear and Bézier interpolation with adaptable control handles.
All in all, Aldebaran's Choregraphe uses some good concepts to make motion design easier and more intuitive. This is mainly due to the 3D model but also the
Timeline Editor in curve mode. Unfortunately, these two features do not share the same window, which is sometimes inconvenient.

Figure 1.5: Aldebaran Choregraphe Timeline Editor in curve mode[12]
The main drawback of Choregraphe is that it is NAO-specific and cannot be used with other robots at all.
Kouretes Motion Editor (KME) is the name of another editor for the design of motions.[5] It was originally designed for NAO robots but can be used with different robots as well. A robot is defined by an XML file, which is then used by the KME to create motions for this robot. This is a very good approach, since it decouples robot and editor and allows the editor to work with different robots.
Figure 1.6 shows the KME. The interface is very simple. It shows a slider for each
joint, status information on the left and the actual motion at the bottom. A motion
consists of a list of key frames. Here, a key frame is again a list of joint values and
a duration value that defines the time between the current and next key frame. A
key frame always defines all joint angles. When a key frame is selected from the
motion, the joint values for it can be adapted with the joint sliders. The type of
interpolation between the key frames cannot be changed.
The interface of the KME is very simple but not descriptive. It is not possible to get direct feedback when editing a motion. Nevertheless, the concept of a robot description loaded from a file is very useful.

Figure 1.6: Kouretes Motion Editor[5]

The concepts of this editor are very similar to the former FUmanoid MotionEditor (section 3.1).

1.4 Structure of Thesis
First, the theoretical basics of motions and implementation-related topics are discussed. Afterwards, the existing MotionEditor is analyzed and its problems and drawbacks are identified, leading to the requirements for the new motion editor. In chapter 4 the actual realization of the MotionEditor is described, including the most important implementation aspects, design decisions and the technologies used. Finally, the new MotionEditor is evaluated with regard to whether it fulfills the requirements, and achieved enhancements are pointed out. At last, after drawing a conclusion, an outlook on possible future features and improvements is given.
2 Theory
2.1 Motions
Motions of humanoid robots can be divided into two types: dynamic and static motions. For human beings it is normal to perform motions dynamically.
That means, each time such a dynamic move is performed its execution may slightly
vary, because the move is always performed with respect to feedback like remaining
balanced when walking over rough terrain or not crushing a cup when grabbing
it. Because developing such a dynamic motion behavior for robotic motions is very
complex, some motions are defined in a static way where feedback can be ignored.
That means that these motions are always performed in the same way and they are
not adapted to any influence from outside.
Assuming that a motion is made to solve some kind of problem, dynamic motions can be considered a generalized solution to this problem, whereas a static motion solves only one fixed problem. Nevertheless, a motion in general is always a mathematical function mapping time to joint angles. For static motions this mapping is predefined
and immutable, whereas the mapping of dynamic motions can vary each time the
motion is performed.
Static motions are also known as offline motions and dynamic motions as online
motions.[17]
The most common technique for creating static motions is keyframing.
2.1.1 Keyframing
The keyframing technique allows the creator of a static motion to specify a set of joint positions called key frames. Interpolating these key frames defines the joint positions for the entire duration of the motion. Various kinds of
interpolations are possible and eventually produce different motions as a result. The
simplest form of interpolation is linear interpolation, although cubic Bézier curves
or B-Splines can be used to interpolate between two adjacent points resulting in
smoother trajectories. However, a combination of these interpolation types can also
be useful.
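As an illustration of keyframing with the simplest interpolation, the following sketch linearly interpolates the joint angle of a single joint between key frames. The type and method names are hypothetical, not taken from the editors described later.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch (hypothetical names): the trajectory of one joint as a map from
// time in milliseconds to a joint angle in degrees; queries between two
// key frames are answered by linear interpolation.
class KeyFrameTrajectory {
    private final TreeMap<Integer, Double> keyFrames = new TreeMap<>();

    void addKeyFrame(int timeMs, double angle) {
        keyFrames.put(timeMs, angle);
    }

    // Linear interpolation between the two key frames surrounding timeMs.
    double angleAt(int timeMs) {
        Map.Entry<Integer, Double> floor = keyFrames.floorEntry(timeMs);
        Map.Entry<Integer, Double> ceil = keyFrames.ceilingEntry(timeMs);
        if (floor == null) return ceil.getValue();  // before the first key frame
        if (ceil == null) return floor.getValue();  // after the last key frame
        if (floor.getKey().equals(ceil.getKey())) return floor.getValue();
        double t = (timeMs - floor.getKey())
                / (double) (ceil.getKey() - floor.getKey());
        return (1 - t) * floor.getValue() + t * ceil.getValue();
    }
}
```

Swapping the body of `angleAt` for a spline or Bézier evaluation would yield the smoother interpolation variants mentioned above.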
2.2 Software Patterns
For interactive systems it is most important to separate the graphical user interface from the functional logic. This allows the user interface to be changed and the functionality to be expanded without rebuilding the whole application. That is why the development of front-end applications usually follows specific software patterns to achieve a flexible and extensible implementation.
2.2.1 Observer
The observer pattern is a very basic behavioral software pattern for interactive
software development. It defines two types of software components: observer and
observed. The observed component automatically notifies all its observers about any
change of state, which enables a loosely coupled one-to-many dependency between
the observing and the observed software components.[2] Figure 2.1 shows a class
diagram for the use of the observer pattern in the Model View Controller pattern.
Here, View and Controller assume the role of observers of the Model attached to it.
When the Model is altered, it will notify all attached observers by calling the update
method from the Observer interface.
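A minimal sketch of this pattern might look as follows; the class and method names (`Model`, `attach`, `update`) are illustrative, not taken from a specific implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Observer interface: observers are notified via update().
interface Observer {
    void update();
}

// The observed component: it keeps a list of attached observers and
// notifies all of them about every change of state.
class Model {
    private final List<Observer> observers = new ArrayList<>();
    private int state;

    void attach(Observer o) { observers.add(o); }

    void setState(int newState) {
        state = newState;
        // notify every attached observer about the change of state
        for (Observer o : observers) o.update();
    }

    int getState() { return state; }
}
```

In the MVC setting described above, both the view and the controller would implement `Observer` and attach themselves to the model.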
2.2.2 Model View Controller/Adapter
The most important aspect for front-end applications is the separation of data including core functionality (generally called the model) from the representation of this data for the user (called the view). The Model View Controller (MVC) and the Model View
Adapter (MVA) (also Model View Mediator) architectural software patterns are most
common for implementing this separation. Both patterns are very similar. Yet, for
reasons of completeness, they shall be discussed here.
As the view is a representation of the model, it needs to be synchronized with the model by updating it every time the model changes. This is usually realized with the observer pattern: when the model changes, it informs all its observers about these changes, which guarantees that the view remains synchronized.
Besides model and view a third type of software component is necessary which links
model and view so that the user input is processed and a certain functionality is
provided. In the MVC pattern, this logic is defined by the controller. The controller
listens for user input on the view component, processes this data regarding its logic
and updates the model, which will then update the view to display the changes.
In the MVA pattern this third component is called adapter or mediator. Unlike MVC, model and view are not linked directly; all communication passes through the adapter. As in MVC, the adapter is notified about user input by the view, processes this input and finally updates the model. However, the model now notifies the adapter about changes instead of triggering the view directly. The adapter then notifies the view.[1]

Figure 2.1: MVC class diagram[4]
Figure 2.2: Schematic MVA pattern[1]
2.2.3 Blackboard
The blackboard pattern is the generalization of the observer pattern and is therefore
a behavioral pattern. It is used to collect data at one central point, the blackboard.
Different software subsystems provide their results to the blackboard. That way,
the combination of this data can be used to solve a more complex problem.
Besides the blackboard itself, two further kinds of components make up the blackboard pattern: the producers, which are the source of data and write this data to the blackboard, and the consumers, which read and work on the data from the blackboard.
This pattern has its origin in artificial intelligence applications and is normally used to solve problems that do not have a deterministic solution.[4] Specialized software components working on some part of the data provide partial solutions to allow the calculation of an approximate solution to a more complex problem. Each component is specialized for solving a certain problem and works on it independently of the other components.
Although the blackboard pattern has its origin in non-deterministic artificial intelligence applications, it fits front-end applications as well, as they have many properties in common. In front-end applications there are also many different independent and specialized components that have to work together to provide complex functionality. The implementation of features requires several data sources and is mostly independent of other features.
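A minimal blackboard could be sketched as below; the names and the choice of a simple key-value store are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a blackboard (hypothetical names): producers publish partial
// results under a key; consumers read them to combine partial solutions.
class Blackboard {
    private final Map<String, Object> data = new ConcurrentHashMap<>();

    // called by producers to contribute a partial result
    void publish(String key, Object value) {
        data.put(key, value);
    }

    // called by consumers; returns null if no producer has written this key yet
    Object read(String key) {
        return data.get(key);
    }
}
```

A fuller implementation would combine this with the observer pattern so that consumers are notified when a key they depend on changes.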
2.3 Bézier Curves
Bézier curves are named after Pierre Bézier who, simultaneously with Paul de Casteljau, developed these curves.[6]
To understand Bézier curves, it is reasonable to start with linear interpolation.
Equation 2.1 defines a linear interpolation of the points a and b for 0 ≤ t ≤ 1 by mapping t to x as shown in Figure 2.3, where x(0) = a and x(1) = b.

x = x(t) = (1 − t) · a + t · b    (2.1)
Barycentric coordinates for three points a, b, x are defined by Equation 2.2.[3] α and β are then barycentric coordinates with respect to a and b. For linear interpolation, x has the barycentric coordinates α = 1 − t and β = t. The point x can therefore be called a barycentric combination of the two points a and b.

x = αa + βb,    α + β = 1    (2.2)
Equation 2.1 is called an affine mapping of the three points 0, t, 1 in one dimensional
space to a, x, b. [3]
Linear interpolations are affine invariant[3], which means that the relative position of t on the straight line between 0 and 1 is the same as the relative position of x on the line between a and b after mapping. For example, if t = 1/2, then the resulting point x is the midpoint between a and b on the straight line through these two points.
Figure 2.3: Linear interpolation[3]
For a set of points this linear interpolation can be used repeatedly to produce a Bézier curve. For example, given three points b_0, b_1 and b_2, the linear interpolation of these points creates two straight lines in the first place (Equation 2.3, Equation 2.4). Repeating the linear interpolation of these lines leads to Equation 2.5, which then results in Equation 2.6 by inserting Equation 2.3 and Equation 2.4 into Equation 2.5. [3]
b_0^1(t) = (1 − t) b_0 + t b_1    (2.3)
b_1^1(t) = (1 − t) b_1 + t b_2    (2.4)
b_0^2(t) = (1 − t) b_0^1(t) + t b_1^1(t)    (2.5)
b_0^2(t) = (1 − t)^2 b_0 + 2t(1 − t) b_1 + t^2 b_2    (2.6)
Equation 2.6 produces a parabolic curve, shown in Figure 2.4. First, t is mapped to the straight line between b_0 and b_1, resulting in the point b_0^1. Next, t is mapped to the second straight line between b_1 and b_2, producing the point b_1^1. Finally, t is mapped to the line through b_0^1 and b_1^1, giving the point b_0^2 on the parabola.
This repeated linear interpolation is called the de Casteljau algorithm. Its generalized form is denoted in Equation 2.7 as described in [3] for a set of points b_0, b_1, . . . , b_n.

b_i^r(t) = (1 − t) b_i^(r−1)(t) + t b_(i+1)^(r−1)(t)    (2.7)
The most important characteristic of Bézier curves is that they always run through
the first and last point. All the other points are called control points or control
handles. They are mostly not on the curve.
Figure 2.4: Second order Bézier curve[3]
For this thesis, cubic (third order) Bézier curves are most important. Besides the starting and ending points, these curves define two more control handles. Figure 2.5 shows such a cubic Bézier curve with starting point b_0, control handles b_1, b_2 and ending point b_3.
Figure 2.5: Cubic Bézier curve constructed using de Casteljau algorithm[3]
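The repeated linear interpolation of Equation 2.7 can be sketched for the cubic case as follows. This is an illustrative one-dimensional sketch (e.g. for a single joint angle), not code from the editor itself:

```java
// De Casteljau algorithm for a cubic Bézier curve in one dimension:
// repeated linear interpolation of the control points b0..b3 for t in [0, 1].
class CubicBezier {
    static double evaluate(double b0, double b1, double b2, double b3, double t) {
        // first round of linear interpolations (Equations 2.3/2.4 style)
        double b01 = (1 - t) * b0 + t * b1;
        double b11 = (1 - t) * b1 + t * b2;
        double b21 = (1 - t) * b2 + t * b3;
        // second round
        double b02 = (1 - t) * b01 + t * b11;
        double b12 = (1 - t) * b11 + t * b21;
        // final interpolation yields the point on the curve
        return (1 - t) * b02 + t * b12;
    }
}
```

Note that `evaluate` returns b0 at t = 0 and b3 at t = 1, reflecting the property that the curve always runs through the first and last point.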
3 Requirements Analysis
This section analyzes the existing MotionEditor and summarizes the requirements
for the implementation of a new one based on the results of this analysis.
3.1 Analysis of the former MotionEditor
Figure 3.1 shows a stand-up motion in the old MotionEditor. The editor displays a table where each column represents a motor. The motor ID is shown in the table head and each row represents a key frame (subsection 2.1.1). The rightmost column is special: it defines the time in milliseconds that passes between the previous key frame and the current one. Since a row represents a key frame, it defines
the motion is played, the robot will go through the rows and strike every pose by
calculating the speed each motor needs to reach the desired position in the specified
time. The single key frames are linearly interpolated.
Because the MotionEditor needs to know the motor IDs for the robot, the old MotionEditor uses a predefined set of robots. For example, the robot in Figure 3.1 has no motor with ID one. That is why this motor value cannot be set and always remains zero.
The old MotionEditor provides a few basic features. It can read single or multiple motor values and rows, and it can set a single motor value or all motor values of one row. It is also able to play a set of adjacent rows or the whole motion. For working with the table it provides features for inserting, adding, deleting and copying rows. In addition, the editor allows the user to enable or disable single or multiple motors. Usually a motor tries to hold its current position and therefore cannot be moved; this is the enabled state of a motor. When a motor is disabled it can be rotated freely, and when it is enabled again, it holds the new position it was moved to.
There are many obvious drawbacks to this kind of editor. First of all, it is impossible to see what the motion or the pose for a certain row looks like. Further, the user does not know how changing a motor value will affect the rotation of this motor, because (depending on the motor) increasing or decreasing the value can result in a movement to the left or right, or up or down. Therefore, the usual way of working with the old MotionEditor was to put the real robot in a certain position, then read the motor values for this pose, put the robot in the next position,
read the motors and so on. This always requires the availability of the real robot to create a motion, and it also required putting the real robot in the same pose again if a pose was not correct or had to be adapted.

Figure 3.1: Old MotionEditor
Another drawback is that the editor provides only linear interpolation. Nevertheless, different interpolation types using splines might be useful to create smoother and more natural motions.
Due to the table structure of the motions it is not possible to set the value for only a single motor or a subset of motors at a given time instead of all motors. This makes it difficult to define different motor motions independently, for example rotating one motor slowly from one position to another while rotating another motor back and forth quickly. This is indeed problematic, because the motion would have to have a new row each time the fast rotating motor changes its direction. In all these rows the position for the slowly rotating motor would also have to be set, although this is unwanted.
The motions are serialized by converting the table into comma-separated lists of motor values, with a line break after each row.
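Based on this description, a reader for the old format might look like the following sketch; the exact column layout and value types are assumptions, since the thesis only outlines the format:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a parser for the old motion format: one comma-separated row
// per line, each row holding motor values (with, per the description,
// the duration as the last column).
class OldMotionFormat {
    static List<int[]> parse(String text) {
        List<int[]> rows = new ArrayList<>();
        for (String line : text.split("\n")) {
            if (line.isBlank()) continue;          // skip empty lines
            String[] fields = line.split(",");
            int[] row = new int[fields.length];
            for (int i = 0; i < fields.length; i++)
                row[i] = Integer.parseInt(fields[i].trim());
            rows.add(row);
        }
        return rows;
    }
}
```

The requirement that old motions remain loadable (section 3.2) would be met by converting rows parsed this way into the new motion format.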
3.2 Requirements
Considering these drawbacks the main requirements for the new MotionEditor could
be deduced. A visualization of the robot is important for displaying the motion and
to directly show how changes to motor values affect the robot’s pose at a specific
time.
However, the MotionEditor should be independent of the robot it is used with,
instead of supporting only predefined robot types. Therefore, a robot description
format is required that describes the motors, kinematics and appearance of the robot
enabling the editor to visualize a model of the robot.
Another requirement is that each motor or joint has to have its own trajectory independent of the other motors. These trajectories need to be displayed as curves and
points over time mapping time to joint angles to provide an intuitive representation.
In addition, the interpolation for each part of a trajectory has to be adaptable and
different types of interpolations should be available. Besides linear interpolation,
there should be an interpolation producing round and smooth motions. This also
requires a new motion format for storing and sending motions.
The MotionEditor has to provide tools that are easy to use for working with the points of a trajectory. It has to provide functionality for adding, removing, copying, pasting and moving points. Because a humanoid robot is usually very symmetric regarding its joints, it would be useful to have a mirror tool that allows the user to copy and mirror a trajectory, for example from the left shoulder joint to the right shoulder joint, so both joints perform the same move.
Motions created with the old MotionEditor could subsequently be sped up by shortening the period between single key frames. As this is important for motion designers, this has to be possible in the new MotionEditor as well.
Motors should not only be identified by IDs but also by names describing the equivalent human joint. Further, when connected to a real robot, the editor should provide information about the temperature and load of each motor to prevent motors from overheating.
Generally, the goal for the MotionEditor should be to allow the creation of motions
without a real robot by visualizing the motion and the robot’s movement in 3D.
Finally, all motions created with the old MotionEditor have to be loadable with the
new editor.
4 Implementation
4.1 Technologies
To provide new features for the motion editor, decisions for and against certain
technologies were made. The most important technologies the motion editor uses
are the Google Protocol Buffers, the Eclipse Rich Client Platform (RCP) including
the Standard Widget Toolkit (SWT) and Java3D.
4.1.1 Google Protocol Buffers
Protocol buffers (protobuf) are a format for the serialization of structured data. Because the format is extensible, very efficient and can be applied in an object-oriented manner, it is used for all communication between the MotionEditor and the FUmanoid platform. It also allows code generation from the protobuf description, making it very comfortable to use.
A protocol buffer defines a message which can be thought of as a class without
methods or a struct in the C programming language. Such a message consists of
a set of typed single or repeated fields. The generated code of a message defines a
class for this message which can be instantiated. It also provides getter and setter
methods for all fields and methods for serialization.
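As an illustration, a message for a key frame might be defined as follows. This is a hypothetical sketch; the actual message definitions used by FUremote and the FUmanoid platform may differ.

```protobuf
// Hypothetical protobuf sketch of a key frame message; field names,
// numbers and units are assumptions for illustration only.
message JointAngle {
  required uint32 joint_id = 1;
  required sint32 angle = 2;       // joint angle, e.g. in centidegrees
}

message KeyFrame {
  required uint32 time = 1;        // time of the key frame in milliseconds
  repeated JointAngle angles = 2;  // one entry per joint set by this frame
}
```

The protobuf compiler would generate a class per message with getters, setters and serialization methods, as described above.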
4.1.2 Eclipse Rich Client Platform and SWT
The Eclipse Rich Client Platform (RCP) is a framework for the development of client
applications. It uses the Open Services Gateway initiative (OSGi) service platform
implementation Equinox. The OSGi service platform is a specification for hardware
independent, modularized platforms on top of a Java virtual machine (JVM). Eclipse
RCP uses this plug-in based architecture to provide basic features commonly used for
client applications while being extensible at the same time. A plug-in is a software
component which can be inserted or removed from the framework easily. When
using the Eclipse RCP framework to build a rich client application it is mandatory
to use some basic plug-ins providing core features. Besides these plug-ins, developers
create their own plug-in or plug-ins implementing all the features of their specific
application.
Eclipse RCP uses the Standard Widget Toolkit (SWT) for all user interface elements.
SWT is a library for creating graphical user interfaces. It uses native, i.e. platform-dependent,
widgets, which allows the creation of applications with the usual platform
look. For Java there are two other widely used user interface libraries: the Abstract
Window Toolkit (AWT) and Swing. Whereas Swing defines its own user interface
components, AWT provides native widgets like SWT.
Eclipse RCP allows the use of SWT only. SWT is generally not compatible with
AWT or Swing, but it provides an adapter for AWT components which allows AWT
elements to be embedded.
4.1.3 Java3D
Java3D is a high-level application programming interface (API) for rendering 3D
objects. It provides the usual Java platform independence and is object-oriented [15].
Multiple OpenGL bindings exist for SWT, like the Lightweight Java Game Library
(LWJGL) and the Java OpenGL bindings JOGL and gljava [13]. One advantage of
these bindings is their seamless integration into SWT, whereas their drawbacks are
their low-level functionality and their restriction to OpenGL. Java3D, in contrast,
abstracts from the underlying system graphics library and, therefore, can be used
with an OpenGL binding as well as with Direct3D. On the other hand, Java3D can be
used with the Abstract Window Toolkit (AWT) only. There is a simple solution for this
problem, though, as SWT provides an adapter which allows the integration of AWT
widgets into SWT. Because Java3D is very easy to use and abstracts from the actual
graphics library used, the decision was taken to use Java3D for rendering
the 3D model.
Java3D is a scene-graph-based API. That means that all rendered 3D objects are
leaves of a 3D scene tree. The inner nodes of this tree are called groups, so every 3D
object is assigned to a group. Figure 4.1 shows this tree structure for a 3D scene.
4.2 Robot Model
One goal for the motion editor was to make it independent of the robot’s specific
construction because there are often changes to the hardware or the whole robot
construction. Therefore, a robot description format was needed that describes the
robot and that is provided to the editor.
4.2.1 Robot Description
The Robot Description File (RDF) was originally defined by [7]. To meet the requirements of the newly implemented MotionEditor, this format was extended.
Figure 4.1: Java3D Scene Graph [15]
For the MotionEditor the IDs and names of the motors are most important because
they are needed to map a motion for one joint to a certain motor. Furthermore, the
editor needs information about each motor's working angle and the value range it
operates on. Moreover, to make the editor more user-friendly, the description also
contains all data necessary for creating a 3D model of the robot as well as the
kinematic information to solve the forward kinematics for a given set of joint angles,
enabling the editor to animate motions in 3D.
The root protobuf message of a robot description is figure, which is mainly made
up of a set of bodies, links and sensors. The bodies describe the appearance of the
robot, whereas the links specify the kinematics.
The sensor description defines not only pressure sensors and the like but also the
motors. The sensor description of a motor contains the motor ID, the values the
motor uses for specifying the motor position, and the allowed voltages and temperatures.
However, it does not carry any kinematic information or information about the
motor's appearance.
A body defines everything that is visible: its dimensions and appearance. A body can
be a simple box or sphere, or it can be defined by a 3D object stored in an external
file. The description also determines the position and rotation of the body and may
contain information about the material and density. Moreover, a body can be declared
an end-effector, which can be used for inverse kinematics in future versions of
the motion editor. To create more complex connected bodies, a body may have a
parent body, forming a tree structure of bodies. A RobotDescription was created for
the FUmanoid robot 2012. The 3D model of this robot is visualized in Figure 4.2.
Links define the kinematics as already mentioned, but they also combine all the
different parts of the robot description. A link determines a rotation axis and its
position. Also, a link can have multiple child links that are rotated by this link, leading
to a tree structure of links. This link tree is sufficient for describing any linear link
combination. Because parallel mechanics cannot be described with it, and the
current FUmanoid robot 2012 construction uses such parallel mechanics, links needed
to be able to define optional parallel links and opponent links.
Figure 4.2: 3D model of the FUmanoid robot 2012
A parallel mechanic is a construction of four links arranged in a parallelogram and
four rigid bodies each connecting two links and representing the sides of the parallelogram (see Figure 4.3). This construction guarantees that opposite sides always
remain parallel.
When a link defines a parallel link, this parallel link also has to be rotated every
time the link rotates. An opponent link, however, has to be rotated in the opposite
direction whenever the link rotates. As Figure 4.3 shows, the links l1 and l2 are
opponent links: when link l1 rotates clockwise, l2 has to rotate just as far in the
opposite direction due to the mechanical construction.
A link can have an associated body representing it and a motor driving it. The
motor is, of course, optional because there are links that do not have a motor.
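The rotation propagation to parallel and opponent links can be sketched as follows. This is a minimal stand-in with invented names; the editor's actual link handling additionally maintains child links and updates the Java3D scene graph:

```java
// Simplified link sketch: rotating a link forwards the same rotation to its
// parallel link and the negated rotation to its opponent link, mirroring the
// parallelogram construction of Figure 4.3. Child links are omitted here.
public class Link {
    final String name;
    double angle = 0.0;   // current rotation in degrees
    Link parallelLink;    // rotated identically (optional)
    Link opponentLink;    // rotated in the opposite direction (optional)

    Link(String name) { this.name = name; }

    void rotate(double delta) {
        angle += delta;
        if (parallelLink != null) parallelLink.angle += delta;
        if (opponentLink != null) opponentLink.angle -= delta;
    }
}
```

For the opponent links l1 and l2 of Figure 4.3, rotating l1 by some angle then rotates l2 by the negated angle, as the mechanics require.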
Figure 4.3: Parallel Mechanics straight (a) and rotated (b)
4.2.2 Robot Model with Java3D
In Java3D there are two important classes for translating, rotating and scaling
objects: TransformGroup and Transform3D. A TransformGroup is always associated
with a Transform3D, which wraps a 4 × 4 matrix defining the position, rotation
and scale of the TransformGroup's children, which can be TransformGroups again.
Hence, the final position and rotation of a 3D object is the product of all
transformations from the root node to this object [15].
The RevoluteJoint class of the motion editor is the equivalent of a revolute-type link
in the robot description, but it also provides functions for manipulating the 3D
model. A RevoluteJoint can be associated with a motor and a body, like a link in
the robot description. When using the Java3D API to rotate a 3D object around a
certain axis, the object is usually moved to the origin, rotated and afterwards moved
back, because rotation is always done with respect to the origin and not to the
object's position. To avoid this, the RevoluteJoint class defines two different
TransformGroups, separating rotation and translation. Because the final transformation
for a node is the product of all transform matrices from the root node to this node,
the separation has no effect on the absolute translation or rotation. The TransformGroup
for translation is the parent node of the rotation TransformGroup. All nodes
that should be rotated by the joint are attached to the rotation TransformGroup.
If the rotation TransformGroup is rotated, this happens as if the group were located
at the origin, resulting in an in-place rotation without any movement. The body of
the joint is either attached to the translation TransformGroup, so it is not rotated
when the joint rotates, or to the rotation TransformGroup, resulting in the body
rotating together with the joint.
In the robot description all positions and rotations are absolute. Therefore, it is
necessary to calculate the relative position and rotation of a node (body or link) with
respect to the absolute position and rotation of its parent node. Since the final
transformation of a node is given by the product of all transformations from the root
node down to that node (Equation 4.1), this relative transformation (Nrel) can be
calculated by multiplying the inverse of the parent's absolute transformation (Pabs)
with the absolute transformation of the node (Nabs) (Equation 4.2).

Pabs · Nrel = Nabs    (4.1)

Nrel = Pabs^-1 · Nabs    (4.2)
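Equation 4.2 can be sketched with plain 4 × 4 row-major matrices. This is a hypothetical helper (the editor itself would use Java3D's Transform3D.invert() and Transform3D.mul()); it assumes rigid transforms [R t; 0 0 0 1], whose inverse is [Rᵀ −Rᵀt; 0 0 0 1]:

```java
// Computing the relative transform N_rel = P_abs^-1 * N_abs for rigid
// 4x4 transforms, as in Equations 4.1/4.2.
public class RelativeTransform {

    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    // Inverse of a rigid transform [R t; 0 0 0 1]: [R^T  -R^T t; 0 0 0 1].
    static double[][] invertRigid(double[][] m) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i][j] = m[j][i];            // transpose the rotation part
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i][3] -= r[i][j] * m[j][3]; // -R^T * t
        r[3][3] = 1.0;
        return r;
    }

    // N_rel = P_abs^-1 * N_abs (Equation 4.2)
    static double[][] relative(double[][] pAbs, double[][] nAbs) {
        return mul(invertRigid(pAbs), nAbs);
    }

    static double[][] translation(double x, double y, double z) {
        return new double[][] {
            {1, 0, 0, x}, {0, 1, 0, y}, {0, 0, 1, z}, {0, 0, 0, 1}
        };
    }
}
```

For two translation-only transforms, the relative transform is simply the difference of the translations, which makes the helper easy to check.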
4.3 Motions
Motions define trajectories for each joint of a robot. These joint trajectories can
consist of several joint angles, the key frames for this joint. The actual motion of a
joint is created by interpolating over these key frames resulting in a function defining
joint angles for the whole time of the joint’s motion. Because the current pose of
a robot when starting a motion is unknown, a motion determines an initialization
time that is used to bring all joints into the starting position of the motion.
The interpolation used for a joint trajectory is very important as it has a strong
influence on the resulting motion. Hence, providing multiple interpolation types is
essential for a motion editor.
4.3.1 Interpolation
A motion for a single joint always consists of several points defined by the user and
a type of interpolation interpolating these points and creating the actual motion.
The simplest kind is linear interpolation. For smooth trajectories, interpolations
using Bézier curves, B-splines or NURBS (non-uniform rational B-splines) are very
common. Bézier curves have a simple mathematical background and can be implemented
easily. Therefore, Bézier curves were chosen as the alternative to linear
interpolation for producing smooth trajectories.
It is useful to combine different interpolation types in one trajectory, for example
to speed up certain parts by using linear interpolation. That is why the implementation
allows defining the interpolation type for each part between two points
in a trajectory. Figure 4.4 shows the implementation of this interpolation in the
new MotionEditor. ComplexInterpolation, here, is the resulting interpolation that
describes the trajectory of a joint. This interpolation consists of multiple
InterpolationParts. Each part holds an ObservablePoint, a point defined by the user,
and specifies the kind of interpolation used to interpolate from the previous part's
point to this part's point.
It is remarkable that ComplexInterpolation and InterpolationParts together form a
double-linked list: an InterpolationPart references a predecessor and a successor,
and ComplexInterpolation holds references to the first and last part of this list.
When a new point is added to the ComplexInterpolation, a new InterpolationPart
with some interpolation is created and inserted into this list, which is kept sorted
by the x-coordinates of the parts' points.
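The sorted insertion into this double-linked list can be sketched as follows. The names are simplified and hypothetical; the real InterpolationPart additionally carries a point and an interpolation instance:

```java
// Minimal sketch of ComplexInterpolation's sorted, doubly-linked part list.
public class PartList {
    static class Part {
        final double x;   // x-coordinate (time) of the part's point
        Part prev, next;
        Part(double x) { this.x = x; }
    }

    Part first, last;

    // Insert a new part so the list stays sorted by the points' x-coordinates.
    void insert(Part p) {
        Part cur = first;
        while (cur != null && cur.x < p.x) cur = cur.next;
        p.next = cur;
        p.prev = (cur == null) ? last : cur.prev;
        if (p.prev != null) p.prev.next = p; else first = p;
        if (p.next != null) p.next.prev = p; else last = p;
    }
}
```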
Figure 4.4: Simplified interpolation implementation class diagram
Until now, ComplexInterpolation would be a list of several parts, each with its own
interpolation. Each interpolation instance would therefore interpolate between two
points only, resulting in stiff trajectories for the BezierInterpolation. InterpolationParts
thus contain additional logic that enables them to check their neighbors and merge
their interpolation instances if they are of the same type. The interpolation then
contains more than only two points. Figure 4.5 illustrates this behavior for the Bézier
interpolation. In (a) InterpolationParts are not aware of their neighbor parts, while in
(b) the same trajectory is interpolated by a single BezierInterpolation instance that
adjusts the handles of a Bézier point if it lies vertically between two points.
The purpose of the Bézier interpolation is to produce more human-like, smooth
motions. To achieve this, the mathematical function of a trajectory that is generated
by an interpolation has to be continuously differentiable. That means a well-defined
speed exists at any time of the trajectory which can be applied to the motor.
Cubic Bézier curves are useful for interpolating between two points. So, when using
cubic Bézier curves, an interpolation over a set of points consists of several cubic
Bézier curves, each connecting one point with the next. Besides the starting and
ending point, a cubic Bézier curve also defines two control handles deforming the
curve.
A cubic Bézier curve approximates the control polygon defined by the starting, ending
and control points. Therefore, a cubic Bézier curve with the points P0 (x0, y0),
P1 (x1, y1), P2 (x2, y2), P3 (x3, y3) is continuously differentiable if x0 < x1 ≤ x2 < x3.
If this is guaranteed, the whole interpolation can only violate continuous
differentiability at the connection points, i.e. at the interpolation points themselves.
According to [8], given two cubic Bézier curves with the points
Figure 4.5: Bézier interpolation with multiple interpolation instances (a) and a
single instance (b)
P0, P1, P2, P3 and Q0, Q1, Q2, Q3, with durations ∆t1 and ∆t2, connected in the
point P3, which is equal to the point Q0, the curves are continuously differentiable
in the points P3 and Q0 if P2, P3 and Q1 are collinear and Equation 4.3 is fulfilled.

(P3 − P2) / ∆t1 = (Q1 − P3) / ∆t2    (4.3)
For a point that is an extremum, like P2 in Figure 4.6, the BezierInterpolation
defines the x-value of the left control handle as the point's x-coordinate minus one
third of the distance between the previous point's x-value and this point's x-value.
The right control handle is defined accordingly as the point's x-coordinate plus one
third of the distance between the point's x-value and the next point's x-value. Both
handles' y-values are set equal to the point's y-coordinate. Therefore, the first
criterion, collinearity, is fulfilled.
Given a point P that is an extremum, with a left handle P^L = P − (∆t1/3, 0) and
a right handle P^R = P + (∆t2/3, 0), where ∆t1 is the difference between P's
x-coordinate and the previous point's x-coordinate and ∆t2 is the difference between
the next point's x-coordinate and P's x-coordinate, the second criterion is also
fulfilled, as shown in Equations 4.4 to 4.6.
26
4.3 Motions
Differentiability proof for extrema.

(P − P^L) / ∆t1 = (P^R − P) / ∆t2    (4.4)

(P − (P − (∆t1/3, 0))) / ∆t1 = ((P + (∆t2/3, 0)) − P) / ∆t2    (4.5)

(∆t1/3, 0) · ∆t1^-1 = (∆t2/3, 0) · ∆t2^-1    (4.6)

Both sides reduce to (1/3, 0), so the condition holds.
Figure 4.6 shows a special case for P3. Since it lies vertically between its predecessor
P2 and its successor P4, the handles for this point are calculated differently to avoid
a step-like interpolation as illustrated in Figure 4.5. This type of point will be called
a saddle point in the following. The first idea for calculating the handles of such
a point was to use the average of the slopes of the linear interpolations between the
predecessor and the saddle point and between the successor and the saddle point as
the slope of the straight line through the control handles and the saddle point. This
can result in a curve that goes vertically beyond or beneath the predecessor or
successor when the saddle point is vertically very close to one of them, because the
average slope is always the same for a fixed predecessor and successor. Figure 4.7
shows such a case: the third point is a saddle point and vertically very close to the
second point. Using the average slope results in a Bézier curve that goes vertically
beyond the second point. This behavior may be unwanted, because the joint would
go beyond the maximum angle originally defined by the designer of the trajectory.
Hence, it has to be avoided.
Figure 4.6: Interpolation using cubic Bézier curves
To solve this problem, the ratio between the slopes of the linear interpolations
defined by the previous and the current point and by the next and the current point
has to be considered when calculating the handles for a saddle point. The nearer
this saddle point is vertically to one of its neighbors, the more weight the smaller
slope has to receive. Consequently, the slope of the straight line through both handles
and the saddle point was defined as shown in Equation 4.7. This definition adjusts
the slope so that it approaches zero when one of the slopes approaches zero and
becomes the average slope if both slopes are equal.
m = (m_prev / m_next) · (m_prev + m_next) / 2    for m_prev < m_next
m = (m_next / m_prev) · (m_prev + m_next) / 2    for m_prev > m_next    (4.7)
Because a control handle of a saddle point should have the same influence on the
curve as a control handle of a point that is an extremum, the distance was also
defined as ∆t/3, as shown in Figure 4.6 for P3 and P3^R. Since the distance of two
points on a straight line with slope m is given by d = √(∆x² + ∆y²) with ∆y = m · ∆x,
a point P on the straight line through S with slope m and at distance d from S is
given by P = S + (x, x · m) with x = √(d² / (1 + m²)). There are two solutions for P,
but only the solution with positive x-value is of interest here. Consequently, the
control handles of a saddle point S are defined as

S^L = S − ( ∆t1 / (3√(1 + m²)), ∆t1 · m / (3√(1 + m²)) )

and

S^R = S + ( ∆t2 / (3√(1 + m²)), ∆t2 · m / (3√(1 + m²)) ).

Equations 4.8 to 4.10 show that the criterion of continuous differentiability is
fulfilled for this definition.
Differentiability proof for saddle points.

(S − S^L) / ∆t1 = (S^R − S) / ∆t2    (4.8)

(S − (S − (∆t1 / (3√(1 + m²)), m · ∆t1 / (3√(1 + m²))))) / ∆t1
    = ((S + (∆t2 / (3√(1 + m²)), m · ∆t2 / (3√(1 + m²)))) − S) / ∆t2    (4.9)

(∆t1 / (3√(1 + m²)), m · ∆t1 / (3√(1 + m²))) · ∆t1^-1
    = (∆t2 / (3√(1 + m²)), m · ∆t2 / (3√(1 + m²))) · ∆t2^-1    (4.10)

Both sides reduce to (1 / (3√(1 + m²)), m / (3√(1 + m²))), so the condition holds.
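The slope blending of Equation 4.7 and the handle placement above can be sketched in code. This is a hypothetical helper; the names and structure are not taken from the actual BezierInterpolation:

```java
// Sketch of saddle-point handle placement: blend the neighbor slopes
// (Equation 4.7), then place both handles at distance dt/3 along the line
// with the blended slope m through the saddle point S = (sx, sy).
public class SaddleHandles {

    // Equation 4.7: ratio-weighted average of the two neighbor slopes.
    static double blendedSlope(double mPrev, double mNext) {
        double avg = (mPrev + mNext) / 2.0;
        if (mPrev < mNext) return mPrev / mNext * avg;
        if (mPrev > mNext) return mNext / mPrev * avg;
        return avg; // equal slopes: plain average slope
    }

    // Left handle S^L = S - (dt1, dt1 * m) / (3 * sqrt(1 + m^2))
    static double[] leftHandle(double sx, double sy, double m, double dt1) {
        double x = dt1 / (3.0 * Math.sqrt(1.0 + m * m));
        return new double[] { sx - x, sy - x * m };
    }

    // Right handle S^R = S + (dt2, dt2 * m) / (3 * sqrt(1 + m^2))
    static double[] rightHandle(double sx, double sy, double m, double dt2) {
        double x = dt2 / (3.0 * Math.sqrt(1.0 + m * m));
        return new double[] { sx + x, sy + x * m };
    }
}
```

Note that this direct transcription of Equation 4.7 assumes positive slopes; the real implementation presumably also handles signs and the case of a zero denominator.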
A Bézier curve is defined on the interval 0 ≤ t ≤ 1. When using Bézier curves to
interpolate key frames, it is mandatory to get the y-value for a given x-value, i.e.
the joint angle for a given time. This is not trivial, since a Bézier curve is an affine
mapping of the interval 0 ≤ t ≤ 1 to the actual curve (section 2.3). Therefore, an
x-value cannot simply be mapped linearly to t to get the corresponding y-value.
Rather, an exact
Figure 4.7: Using average slope for a point vertically between its neighbors
solution to this problem would use the inverse function of the Bézier curve. Because
the curves used are of third order, computing this inverse is very difficult.
The BezierInterpolation therefore searches for the value of t between 0 and 1 that
maps to a given x-value. First, it maps the input x-value linearly to t and calculates
the x-value given by this t. If this is not the same as the input value, t is adapted
by adding or subtracting a certain step-length, which is initially 0.1. Whenever the
calculated x-value oversteps the desired value, the step-length is halved. In practice,
this never required more than ten iterations to find t. The resulting t is then used
to calculate the y-value.
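This lookup can be sketched as follows. For clarity, the sketch uses a plain bisection on t instead of the adaptive step-halving search described above; it assumes x(t) is monotonically increasing, which the condition x0 < x1 ≤ x2 < x3 guarantees:

```java
// Evaluating a cubic Bézier curve and numerically solving x(t) = x
// to obtain the joint angle y for a given time x.
public class BezierLookup {

    // One component of a cubic Bézier curve (Bernstein form).
    static double bezier(double t, double p0, double p1, double p2, double p3) {
        double u = 1.0 - t;
        return u * u * u * p0 + 3 * u * u * t * p1
             + 3 * u * t * t * p2 + t * t * t * p3;
    }

    // Find the y-value for a given x-value by searching the matching t.
    static double yForX(double x, double[] px, double[] py) {
        double lo = 0.0, hi = 1.0;
        for (int i = 0; i < 50; i++) {   // bisection on the monotone x(t)
            double mid = (lo + hi) / 2.0;
            if (bezier(mid, px[0], px[1], px[2], px[3]) < x) lo = mid;
            else hi = mid;
        }
        double t = (lo + hi) / 2.0;
        return bezier(t, py[0], py[1], py[2], py[3]);
    }
}
```

Like the step-halving search, the bisection converges quickly: fifty halvings narrow t down far below any practically relevant resolution.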
4.3.2 Motion Description
Motions designed with the MotionEditor have to be serialized and deserialized for
saving, loading and playing them on the robot. The old motion format was self-made,
defining all joint positions for each time step of the motion and implying linear
interpolation. Since different interpolations have to be considered now, the new
motion format has to take this into account. It would have been possible to keep
the old format by splitting a motion created with the new MotionEditor into many
key frames with a short period, again implying linear interpolation, so Bézier
interpolations would have been approximated linearly. Because the period of these
key frames would have to be very short for the Bézier curve approximation, this would
produce a large amount of joint positions. Therefore, a new motion format based
on Google Protocol Buffers (subsection 4.1.1) was created that is more abstract and
also follows the interpolation implementation already discussed (subsection 4.3.1).
Accordingly, a motion consists of a set of joint motions or joint trajectories. The
motion of a joint is a list of motion parts, each part defining a point and an
interpolation type. This interpolation is used to interpolate between the part's point
and the point of the predecessor part. The interpolation of the first part of a joint
motion can be ignored since it has no predecessor. This motion format implies that
the entity using such a motion is capable of implementing the interpolations used.
4.4 Architecture
Front-end applications have to be highly extensible, flexible and loosely coupled
to allow the integration of new features and the adaptation of functionality based on
user feedback. Therefore, the architecture is most important, and design patterns
like MVC and MVA were developed to guarantee these qualities.
The new MotionEditor uses a combination of the MVA and blackboard pattern (see
Figure 4.8). While the separation of view and model can be achieved quite easily, the
adapter component has to be well structured, because it is implementing the actual
functionality and interacts with both view and model. Therefore, a blackboard
architecture is used for the adapter component to separate data producers and
consumers.
The blackboard is used to register producers and consumers for certain RegisteredData. When a producer informs the blackboard about new data, the blackboard
forwards this notification to all consumers registered for this data. The producer
that initiated this notification is also provided to the consumers. The consumer can
then access additional data or use functionality that is implemented by the producer.
Producers are mostly listeners attached to a view component, listening for user actions
and providing these events through the blackboard to any consumer that uses this
information to implement some functionality. All data that is produced is also
statically accessible via the DataRegistry and, therefore, can be produced anonymously,
i.e. by some component that is not a registered producer. It is very important for
consumers to provide data for other consumers, because consumers usually listen
for only one kind of data but also need further data to provide their functionality.
For example, the PointDragger is registered on mouse position data provided by
the MousePositionProducer. When it receives a new mouse position, it has to check
whether the mouse is pressed at the moment and if any points are selected. When
all these conditions are met, it drags the selected points.
Due to the blackboard architecture and statically accessible data, all components
are very loosely coupled and new features can be easily integrated.
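The core notification mechanism can be sketched as follows. The names are invented for this sketch; the real Blackboard, DataRegistry and tool filtering of the MotionEditor are more elaborate:

```java
import java.util.*;
import java.util.function.BiConsumer;

// Minimal blackboard sketch: consumers register for a data type; when a
// producer publishes data of that type, all registered consumers are
// notified and also receive the publishing producer.
public class Blackboard {
    private final Map<Class<?>, List<BiConsumer<Object, Object>>> consumers =
            new HashMap<>();

    public <T> void register(Class<T> dataType, BiConsumer<T, Object> consumer) {
        consumers.computeIfAbsent(dataType, k -> new ArrayList<>())
                 .add((data, producer) ->
                         consumer.accept(dataType.cast(data), producer));
    }

    public void produce(Object data, Object producer) {
        // Exact-type lookup keeps the sketch simple; subtype dispatch
        // and the tool-based filtering described below are omitted.
        for (BiConsumer<Object, Object> c :
                consumers.getOrDefault(data.getClass(), Collections.emptyList())) {
            c.accept(data, producer);
        }
    }
}
```

A consumer such as the PointDragger would register for mouse position data and, upon notification, query further data it needs before acting.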
A motion editor has to supply many different functionalities. Since there is only a
limited set of user input possibilities, features are usually grouped into tools, each
specialized in providing one kind of functionality. This means that, depending on the
currently activated tool, the same user input results in different behavior. This concept
was also realized in the new MotionEditor. Consumers are registered not only for
specific data but also for a tool. All producers provide their data as already
described, independent of the tool, but the blackboard checks which tool is currently
activated and notifies only those consumers that were registered for this tool.
Figure 4.8: Simplified MotionEditor architecture class diagram for selected classes
4.5 FUmanoid Motion Player
The FUmanoid motion player is the robot-side implementation for playing motions
that are sent to the robot or stored on it. The existing implementation used by the
old MotionEditor simply set a speed and a goal position for each motor for the
defined period, as specified by the old motion format. According to this old format,
all motors were set simultaneously. Because a motion of a motor can now start at
any time, this implementation had to be adapted. Furthermore, the implementation had
to be extended to play motions using Bézier interpolation, too. Since the motion
format only defines the interpolation type and not how to play it, it is up to the
robot-side implementation to deal with the different interpolations. Whereas the
implementation for playing motions using linear interpolation is straightforward,
the implementation for Bézier-interpolated motions is a little more difficult. The
purpose of using Bézier interpolation instead of linear interpolation is to generate
smoother motions by creating a continuously differentiable trajectory for a joint. To
realize smooth motions, the joint motors of a robot have to be controlled so that they
perform the motion as defined. Unfortunately, this cannot be completely accomplished,
since the motors currently used as joints in the FUmanoid robots can only be
instructed to reach a desired motor position or angle using a given speed. Consequently,
the motor can only linearly interpolate trajectories and it is necessary to
approximate motions using Bézier interpolation by subdividing the curve into small
steps. The smoothness of a motion therefore depends on the step-length used to
subdivide the trajectory. In the implementation for the FUmanoid robots, a value
of 10 ms was used for the step-length and delivered quite smooth results.
A static definition of the step-length may not be the best solution, since cubic
Bézier curves are usually almost linear in the middle and change their slope most
near the start and end. An improved solution would therefore use a dynamic
step-length chosen with regard to the slope changes.
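The subdivision into motor commands can be sketched like this. The helper is hypothetical (the motion player itself does not share code with the Java editor), and angleAt() stands for any trajectory evaluation, such as the Bézier lookup described above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

// Approximating an interpolated trajectory by piecewise-linear motor
// commands: sample the joint angle every stepMs milliseconds and derive
// the speed the motor needs to reach the next sample in time.
public class MotionSubdivider {

    static class Command {
        final int timeMs;
        final double targetAngle;
        final double speed;        // degrees per millisecond
        Command(int timeMs, double targetAngle, double speed) {
            this.timeMs = timeMs;
            this.targetAngle = targetAngle;
            this.speed = speed;
        }
    }

    static List<Command> subdivide(DoubleUnaryOperator angleAt,
                                   int durationMs, int stepMs) {
        List<Command> commands = new ArrayList<>();
        for (int t = stepMs; t <= durationMs; t += stepMs) {
            double from = angleAt.applyAsDouble(t - stepMs);
            double to = angleAt.applyAsDouble(t);
            commands.add(new Command(t, to, Math.abs(to - from) / stepMs));
        }
        return commands;
    }
}
```

With the 10 ms step-length used on the FUmanoid robots, a one-second motion yields one hundred such position/speed commands per joint.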
The described implementation runs on the main processing unit of the current
FUmanoid robot. Besides this processing unit, the robot also comes with a motor
board. Its main task is to communicate with the motors, i.e. setting and reading
motor values or status information. It has less computational power than the main
processing unit. Nevertheless, in future versions the motion player could run on this
unit instead of the main processing unit, so that motions and behavior are separated.
Two types of implementation are possible for this. Either the motor board calculates
the motor positions based on the motion format that was discussed (subsection 4.3.2),
or this motion format is evaluated by the main processing unit and converted into a
sequence of joint values that is then executed by the motor board. The first solution
would result in a higher load on the motor board, whereas the latter would require
very little computational power there.
4.6 The new MotionEditor
Figure 4.9 shows the new MotionEditor. On the left there is a list of robots which
can be connected and worked with. It is also possible to add dummy robots if a real
robot is not available. This list is part of FUremote and not MotionEditor specific.
On the right-hand side there is a view displaying a 3D model for the robot. This
model is based on the robot description provided on the startup of the MotionEditor.
The actual editor view is displayed in the center. It shows a diagram for each joint
of the robot that is selected in the 3D model. This is done by clicking on the joints
in the 3D model. Double-clicking on a joint will select all joints that are rotated by
this joint, i.e. the kinematic chain to the end-effector. The x-axis of a diagram
represents the time in milliseconds; the y-axis shows the joint angle in degrees or the
absolute motor values. Above each diagram there is a panel that displays the motor's
ID, name and status information, as well as buttons to enable or disable the motor
and to configure the diagram.
When hovering over the diagram of a motor, the motor is highlighted in the 3D
model, so the user instantly knows which motor it is.
Two different tools can be used to work on the motion. The initial tool is the Selector.
It can be used to insert points by clicking in a diagram of a motor. Furthermore,
multiple points can be selected, dragged, copied, pasted and removed.
The second tool is the Shifter. This tool is intended for speeding up parts of the
motion. It can be used in two different modes that select either all points left of the
cursor or all points right of it. When pressing the mouse button, all points on the
specified side of the cursor are selected. Keeping the mouse button pressed and moving
the mouse along the x-axis then moves all selected points along this axis only.
Figure 4.9: New MotionEditor
The 3D model always shows the robot pose for the current mouse position. When
a point is dragged, it visualizes the rotational changes providing instant feedback to
the user. Furthermore, when moving the mouse along the x-axis, the user is able to
see how the motion will be performed.
A cross-hair is displayed at the mouse position, too, which helps to align points.
By double-clicking on a point, a dialog opens that can be used to set the accurate
values of the point.
All points in the diagram are connected by using the interpolation that is selected
in the drop-down menu in the panel at the top. This panel also shows the two tools
that can be used to manipulate the points in the diagram.
A diagram of a joint can also display trajectories of other joints, so the joint’s
motion can be synchronized with other joints. However, the additionally displayed
trajectories cannot be modified.
The joint diagrams can be zoomed in and out, to go into detail or to get an overview
of the trajectories. Additionally, it is possible to stretch or shrink the time axis, to
easily move points over great distances or set their timing precisely.
Functionality for playing the whole motion or only selected joint motions, either on
the real robot or on the 3D model, is integrated, as well as setting only a single pose
of the motion.
For compatibility with the old motion format the new MotionEditor can load the
old motion format and adapt it to the new one.
5 Conclusion and Future Work
The MotionEditor that was used by the FUmanoid team until now had many drawbacks.
These were analyzed to formulate requirements for a user-friendly and efficient motion
design tool. The most important concepts for fulfilling the requirements of intuitiveness
and motion visualization are a 3D robot model and the illustration of trajectories as
time-joint angle diagrams. These were implemented in the new MotionEditor, giving
direct feedback when editing motions and enabling very easy motion creation. Another
important requirement is the independence of the motion editor with regard to the
robot that is used. It was shown that this can be achieved by using a robot description
format, and also how such a format can be defined, so the editor is able to create
motions for different robots.
The newly created MotionEditor was implemented using a well-modularized and
extensible architecture based on state-of-the-art design patterns such as MVC and
MVA. The concept of tools is common for design applications and allows a consequent
separation of functionalities for different purposes. The editor therefore also realizes
this concept and provides two tools for motion editing, which can be used to design
motions efficiently. Also, different kinds of interpolation can be used to connect
key frames. The possibility to choose between linear and Bézier interpolation
allows the creation of fast or smooth motions.
All in all, the new MotionEditor fulfills almost all analyzed requirements. It allows
fast and easy motion creation in a descriptive and user-friendly way and can be used
with different types of robots.
Although it simplifies the creation of motions in comparison with the old editor in
many ways, there are still some potential enhancements. In particular, it is still not
possible to create a completely working motion without a real robot. This requires
an improved 3D model that includes physics to give feedback on the robot's balance.
Moreover, physics would also allow the integration of dynamic constraints. This
means that if the user wants the robot to remain balanced, the robot's pose would
be automatically adjusted for each key frame to guarantee this constraint.
Furthermore, the integration of inverse kinematics could be very useful, since it would allow the user to position limbs or end-effectors directly in 3D space while all joint angles are calculated automatically.
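The core idea can be sketched for the simplest case, a planar two-link limb with an analytic solution; a full humanoid limb involves more degrees of freedom and typically numeric solvers, so this is only a minimal illustration with hypothetical names:

```java
public class TwoLinkIK {

    // Analytic inverse kinematics for a planar two-link arm with link
    // lengths l1 and l2 (elbow-down solution, law of cosines).
    // Returns {theta1, theta2} in radians, or null if (x, y) is out of reach.
    static double[] solve(double l1, double l2, double x, double y) {
        double d2 = x * x + y * y;
        double c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        if (c2 < -1 || c2 > 1) {
            return null; // target unreachable with these link lengths
        }
        double theta2 = Math.acos(c2);
        double theta1 = Math.atan2(y, x)
                      - Math.atan2(l2 * Math.sin(theta2), l1 + l2 * Math.cos(theta2));
        return new double[] { theta1, theta2 };
    }

    public static void main(String[] args) {
        // Unit link lengths, end-effector target at (1, 1).
        double[] q = solve(1.0, 1.0, 1.0, 1.0);
        System.out.printf("theta1=%.3f theta2=%.3f%n", q[0], q[1]);
    }
}
```

The user would only drag the end-effector to a target position; a solver like this then yields the joint angles for the key frame, instead of the user setting each angle by hand.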
It is also reasonable to move the motion player logic to the motor board of the FUmanoid robots in future versions in order to separate behavioral logic from motion execution.
Bibliography
[1] Robert Eckstein. Java SE Application Design With MVC, August 2008. URL http://blogs.oracle.com/JavaFundamentals/entry/java_se_application_design_with.
[2] Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 2003.
[3] Gerald E. Farin. Curves and Surfaces for CAGD. Morgan Kaufmann, 5th edition, 2002.
[4] Frank Buschmann, Regine Meunier, Hans Rohnert, Peter Sommerlad, Michael Stal. Pattern-Oriented Software Architecture: A System of Patterns. John Wiley & Sons, 1996.
[5] Georgios Pierris, Michail G. Lagoudakis. An Interactive Tool for Designing Complex Robot Motion Patterns. Technical report, Technical University of Crete, 2009.
[6] Hamid Fetouaki, Emma Skopin. Bézier-Kurven, January 2009. URL http://pool-serv1.mathematik.uni-kassel.de/~fetouaki/seminar/presentation_handout.pdf.
[7] Steffen Heinrich. Development of a Multi-Level Sensor Emulator for Humanoid Robots. Master's thesis, Freie Universität Berlin, January 2012.
[8] Judith Müller, Tim Laue, Thomas Röfer. Kicking a Ball - Modeling Complex Dynamic Motions for Humanoid Robots. Technical report, Universität Bremen, Fachbereich 3 - Mathematik und Informatik, 2011.
[9] Lizhuang Ma, Yan Gao, Xiaomao Wu, Zhihua Chen. From Keyframing to Motion Capture: The Evolution of Human Motion Synthesis. Technical report, Shanghai Jiao Tong University.
[10] Marco Antonelli, Fabio Dalla Libera, Emanuele Menegatti, Takashi Minato, Hiroshi Ishiguro. Intuitive Humanoid Motion Generation Joining User-Defined Key-Frames and Automatic Learning. Technical report, University of Padua and Osaka University.
[11] Marco Helmerich, Swen König, Franziska Wegner. Der Humanoide NAO - Betrachtung eines neuen Robotersystems. Technical report, Duale Hochschule Baden-Württemberg Karlsruhe, January 2010.
[12] Aldebaran Robotics. Choregraphe User Guide. URL http://www.aldebaran-robotics.com/documentation/software/choregraphe/index.html.
[13] Eclipse Foundation, Inc. Using OpenGL in SWT Applications. URL http://www.eclipse.org/swt/opengl/.
[14] Google Inc. Protobuf. URL http://code.google.com/p/protobuf/.
[15] Sun Microsystems. Java 3D API Specification, June 1999. URL http://docs.oracle.com/cd/E17802_01/j2se/javase/technologies/desktop/java3d/forDevelopers/j3dguide/j3dTOC.doc.html.
[16] The RoboCup Federation. RoboCup Homepage. URL http://www.robocup.org/.
[17] Stefan Czarnetzki, Sören Kerner, Daniel Klagges. Combining Key Frame based Motion Design with Controlled Movement Execution. Technical report, Dortmund University of Technology.
[18] Toshihiko Yanase, Hitoshi Iba. Evolutionary Motion Design for Humanoid Robots. Technical report, University of Tokyo.

All websites were last accessed on 26th of February 2012.
38
List of Figures
1.1 FUremote application
1.2 Blender with 3D model of the robot 2012
1.3 Aldebaran Choreograph user interface [12]
1.4 Choreograph Timeline Panel: 1. Motion Area, 2. Behavior Layer Area, 3. Details Area [12]
1.5 Aldebaran Choreograph Timeline Editor in curve mode [12]
1.6 Kouretes Motion Editor [5]
2.1 MVC class diagram [4]
2.2 Schematic MVA pattern [1]
2.3 Linear interpolation [3]
2.4 Second order Bézier curve [3]
2.5 Cubic Bézier curve constructed using de Casteljau algorithm [3]
3.1 Old MotionEditor
4.1 Java3D Scene Graph [15]
4.2 3D model of the FUmanoid robot 2012
4.3 Parallel Mechanics straight (a) and rotated (b)
4.4 Simplified interpolation implementation class diagram
4.5 Bézier interpolation with multiple interpolation instances (a) and a single instance (b)
4.6 Interpolation using cubic Bézier curves
4.7 Using average slope for a point vertically between its neighbors
4.8 Simplified MotionEditor architecture class diagram for selected classes
4.9 New MotionEditor