1, q2, q3, q4], and l = [l1, l2] are feet spring lengths used in the ground contact model. The diagram also illustrates the geometric shape of the limbs and body, information used for collision detection during planning.
6.3 Motion Planning Algorithm
6.3.1 Problem Formulation
Given the model described in the previous section, we now formulate the problem of finding a feasible trajectory from an initial condition of the robot to a goal region defined by a desired location of the center of mass. We describe the terrain as a simple height function, z = γ(x), parameterized by the horizontal position, x. We would like the planned trajectory to avoid disruptive contacts with the rough terrain; however, the notion of “collision-free” trajectories must be treated carefully, since legged locomotion requires contact with the ground in order to make forward progress.
To address this, we define a virtual obstacle function, Γ(x), which is safely below the terrain around candidate foothold regions, and safely above the ground in regions where we do not allow foot contact (cartooned in Figure 6-4). In our previous experience with planning walking gaits [Byl et al., 2008, Byl and Tedrake, 2009a], it was clear that challenging rough terrain could be separated into regions with useful candidate footholds, as opposed to regions where footholds would be more likely to cause a failure. We therefore developed algorithms to pre-process the terrain and identify these candidate foothold regions based on some simple heuristics, and we could potentially use the same algorithms here to construct Γ(x). However, in the current work, which makes heavy use of the motion primitive described in the following sections, we found it helpful to construct separate virtual obstacles, Γ_m(x), parameterized by the motion primitive, m, being performed. Once the virtual obstacles became motion primitive dependent, we had success with simple stereotyped virtual obstacles as illustrated in Figure 6-4. The collision function illustrated is defined relative to the current position of the feet. In the case shown in the figure, the virtual obstacle function forces the swing leg to lift up and over the terrain, and ensures that the back foot does not slip, which characterizes a successful portion of a bound. As soon as the front feet touch back down to the ground after completing this part of the bound, a new collision function is defined, which takes into account the new footholds and forces the back feet to make forward progress in the air.
Figure 6-4: Sketch of a virtual obstacle function, Γ_m(x), in relation to the ground, γ(x).
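As a rough illustration, one plausible way to construct such a motion-primitive-dependent virtual obstacle is sketched below. The terrain function, clearance margins, and foothold window are made-up placeholders, not the values used on the robot; the point is only that Γ_m(x) dips below the terrain where contact is allowed and sits above it elsewhere.

```python
import numpy as np

def terrain(x):
    """Hypothetical terrain height gamma(x): flat ground with a small step."""
    return np.where(np.asarray(x) > 0.3, 0.07, 0.0)

def virtual_obstacle(x, stance_x, foothold_x, margin=0.02, halfwidth=0.04):
    """Sketch of a motion-primitive-dependent virtual obstacle Gamma_m(x).

    Near the current stance foot and the candidate foothold region the function is
    safely *below* the terrain (contact allowed); elsewhere it is raised safely
    *above* the terrain, forcing the swing foot to lift up and over it.
    """
    gamma = terrain(x)
    allowed = (np.abs(x - stance_x) < halfwidth) | (np.abs(x - foothold_x) < halfwidth)
    return np.where(allowed, gamma - margin, gamma + margin)

def foot_collides(foot_x, foot_z, stance_x, foothold_x):
    """A foot point collides if it penetrates the virtual obstacle."""
    return foot_z < virtual_obstacle(np.array([foot_x]), stance_x, foothold_x)[0]

if __name__ == "__main__":
    xs = np.linspace(-0.2, 0.6, 9)
    print(np.round(virtual_obstacle(xs, stance_x=0.0, foothold_x=0.45), 3))
    print(foot_collides(0.2, 0.01, 0.0, 0.45))   # swing foot too low mid-stride -> True
```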
We are now ready to formulate the motion planning problem for LittleDog bounding:
find a feasible solution, {x[0], u[0], x[1], u[1], ..., x[N]}, which starts in the required initial conditions, satisfies the dynamics of the model, x[n + 1] = f(x[n], u[n]), avoids collisions
with the virtual obstacles, Γ(x), and doesn’t violate the bounds on joint positions, velocities,
accelerations, and torques.
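A rough sketch of this feasibility test is given below. The limit vectors, state layout, and dynamics function are placeholders rather than the actual LittleDog model, and acceleration bounds would be checked in the same way as the other limits.

```python
import numpy as np

def is_feasible(xs, us, f, in_virtual_collision, q_max, qd_max, u_max):
    """Rough feasibility test for a candidate trajectory {x[0], u[0], ..., x[N]}.

    xs: list of states (assumed here to stack [q; qdot]); us: list of actions.
    f(x, u): simulated dynamics, x[n+1] = f(x[n], u[n]).
    in_virtual_collision(x): True if the state penetrates the virtual obstacles.
    """
    for n, u in enumerate(us):
        if not np.allclose(f(xs[n], u), xs[n + 1], atol=1e-6):
            return False                                  # dynamics must be satisfied
        q, qd = np.split(np.asarray(xs[n + 1]), 2)        # positions / velocities
        if np.any(np.abs(q) > q_max) or np.any(np.abs(qd) > qd_max):
            return False                                  # joint position / velocity bounds
        if np.any(np.abs(u) > u_max):
            return False                                  # actuation (torque) bounds
        if in_virtual_collision(xs[n + 1]):
            return False                                  # collision with Gamma(x)
    return True
```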
Given this problem formulation, it is natural to consider a sample-based motion planning
algorithm such as Rapidly-Exploring Random Trees (RRTs), due to their success in high-
dimensional robotic planning problems involving complex geometric constraints [LaValle
and Branicky, 2002]. However, as discussed throughout this thesis, these algorithms perform
poorly when planning in state space (where the dynamics impose “differential constraints”)
[LaValle, 2006, Cheng, 2005], especially in high dimensions. When applied directly to building
a forward tree for this problem, the standard RRT takes a prohibitive amount of time and
fails to make substantial progress towards the goal. The Reachability-Guided RRT, presented
in the previous chapter, uses rejection sampling to bias the sampling towards regions in
which the tree is able to grow, while taking into account the constraints imposed by the
dynamics. This sampling strategy can vastly improve planning efficiency of the RRT for
problems with challenging dynamics. However, approximating the reachable set by evenly
discretizing the action space on a higher dimensional problem such as LittleDog would be
prohibitive. In the following subsections, we describe two additional modifications to the RG-
RRT algorithm. First, we describe a parameterized “half-bound” motion primitive which
reduces the dimensionality of the problem. Then, we describe a mechanism for sampling with
a Voronoi bias in the lower-dimensional task space defined by the motion primitive. Using
the RG-RRT algorithm, together with these additional modifications, produces a motion
planner that can find bounding trajectories in a reasonable amount of time.
6.3.2 Macro-Actions / Motion-Primitive
The choice of action space, i.e. how an action is defined for the RRT implementation, will affect the RRT search efficiency, the completeness guarantees, and, perhaps as importantly, the path quality. In the case of planning motions for a 5-link planar arm with 4 actuators, a typical approach may be to apply a constant torque (or some other simple action in joint space) for a short constant time duration, ∆T. One drawback of this method is that the resulting trajectory found by the RRT is likely to be jerky. A smoothing / optimization post-processing step may be performed, but this may require significant processing time, and there is no guarantee that the local minimum near the original trajectory is sufficiently smooth. Another drawback of using a constant time step with such an action space is that, in order to ensure completeness, ∆T should be relatively small (for LittleDog bounding, .1 seconds seems to be appropriate). Empirically, however, the search time increases approximately as 1/∆T, so this is a painful trade-off.
For a stiff PD-controlled robot, such as LittleDog, it makes sense to have the action
space correspond directly to position commands. To do this, we generate a joint trajectory
by using a smooth function, G, that is parameterized by the initial joint positions and
velocities, [q(0), q̇(0)], a time for the motion, ∆T_m, and the desired end joint positions and velocities, [q_d(∆T_m), q̇_d(∆T_m)]. This action set requires specifying two numbers for each
actuated DOF: one for the desired end position and one for the desired end velocity. A
smooth function generator which obeys the end point constraints, for example a cubic-spline
interpolation, produces a trajectory which can be sampled and sent to the PD controller.
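A minimal version of such a generator G, using a single cubic segment per joint that satisfies the endpoint position and velocity constraints, might look like the following sketch; the sampling interval and the example joint values are arbitrary placeholders.

```python
import numpy as np

def cubic_spline_segment(q0, qd0, qf, qdf, T, dt=0.01):
    """Cubic polynomial q(t) per joint with q(0)=q0, qdot(0)=qd0, q(T)=qf, qdot(T)=qdf.

    Returns sample times and joint position commands to stream to the PD controller.
    """
    q0, qd0, qf, qdf = map(np.asarray, (q0, qd0, qf, qdf))
    a0 = q0
    a1 = qd0
    a2 = (3 * (qf - q0) - (2 * qd0 + qdf) * T) / T**2
    a3 = (-2 * (qf - q0) + (qd0 + qdf) * T) / T**3
    t = np.arange(0.0, T + dt, dt)[:, None]          # column of sample times
    q = a0 + a1 * t + a2 * t**2 + a3 * t**3
    return t.ravel(), q

# Example: move 4 actuated joints over 0.5 s with zero start and end velocities.
times, q_cmd = cubic_spline_segment(q0=[0.1, -0.2, 0.3, 0.0], qd0=[0, 0, 0, 0],
                                    qf=[0.4,  0.1, 0.0, 0.2], qdf=[0, 0, 0, 0], T=0.5)
```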
If one considers bounding in particular and examines how animals, and even some robots
such as the Raibert hoppers, are able to achieve bounding behavior, it becomes apparent
that some simplifications can be made to the action space. We define a motion primitive that
uses a much longer ∆T and, therefore, a shorter search time, while also producing smooth, jerk-free joint trajectories.

Figure 6-5: First half of a double bound. Initial pose shown in red, final pose shown in magenta. Axes are in meters.

Figure 6-6: Second half of a double bound. Initial pose shown in red, final pose shown in magenta. Notice that in both of these figures, the swing foot “tucked in” in order to clear the step.

The insight is based on the observation that a bound consists
of two phases: (1) rocking up on the hind legs while moving the front legs forward and (2)
rocking up on the front legs, while the hind legs move forward. In the first motion primitive,
which addresses phase-1, the hind legs begin moving first, and the motion is monotonically
“opening” the hip angles. This produces a forward acceleration of the COM, which also
generates a rotational moment around the pseudo-pin joint at the hind foot-ground interface.
In this case, the front legs come off the ground, and they are free to move to a position as
desired for landing. In this formulation, the hind legs move directly from the starting pose to
the ending pose in a straight line. Because the front feet are not weight bearing, it is useful
to “tuck” the feet, while moving them from the start to the end pose, in order to help avoid
obstacles. To take advantage of the rotational moment produced by accelerating the stance legs, the back legs begin moving a short time before the front legs start moving. Once both
the hind and the front legs have reached their desired landing poses, the remaining trajectory
is held constant until the front legs impact the ground. Examples of such a trajectory are
shown in Figure 6-5 and the top image in Figure 6-8.
A similar approach is utilized to generate motions for the second phase of bounding, with
the difference that the hip angles are “contracting” instead of “opening up”. As the front
leg becomes the stance leg, it begins moving just before impact with the ground. The back
leg movement is delayed for a short period to allow the system to start rocking forward on
to the front legs. When both legs reach their final positions, the pose is held until the the
back leg touches down. The resulting motions are shown in Figure 6-6 and the second image
in Figure 6-8.
Note that the start and end joint velocities in this motion primitive are always zero,
a factor which reduces the action space further. Using these motion primitives requires
a variable time step, ∆T_bound, because this directly influences accelerations and, therefore, moments around the passive joints. However, for each phase, one only needs to specify 4 values, one per actuated DOF, corresponding to the pose of the system at the end of the phase.
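Concretely, a half-bound action reduces to the landing pose of the four actuated joints plus the primitive duration. The sketch below is a simplified stand-in for the actual primitive generator: the joint ordering, the stagger delay, and the smooth-step profile are illustrative assumptions (the real primitive uses spline segments as above, holds the final pose until touchdown, and reverses the stagger for the second phase).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HalfBoundAction:
    """One half-bound motion primitive: 4 actuated joint targets plus a duration."""
    q_end: np.ndarray       # desired [hind_hip, hind_knee, front_hip, front_knee] at landing
    dT_bound: float         # primitive duration; also sets accelerations and the pitch moment

def half_bound_trajectory(q_start, action, delay=0.1, dt=0.01):
    """Joint position commands for one half bound (zero start and end velocities).

    For phase 1 the hind joints start moving immediately; the front joints are delayed
    by `delay` seconds to help generate the rotational moment about the stance feet.
    Commands would then be held at q_end until touchdown (not shown).  Assumes
    action.dT_bound > delay.
    """
    T = action.dT_bound
    t = np.arange(0.0, T + dt, dt)
    q_cmd = np.empty((len(t), 4))
    for j in range(4):
        t0 = 0.0 if j < 2 else delay            # hind joints first, front joints delayed
        s = np.clip((t - t0) / (T - t0), 0.0, 1.0)
        blend = 3 * s**2 - 2 * s**3             # smooth step: zero velocity at both ends
        q_cmd[:, j] = q_start[j] + blend * (action.q_end[j] - q_start[j])
    return t, q_cmd
```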
The motion primitive framework used here is somewhat similar in approach to the work
in [Frazzoli et al., 2005]. In addition to formally defining motion primitives in the context of
motion planning, that work proposed planning in the framework of a Maneuver Automaton.
The automaton’s vertices consist of steady-state trajectories, or trim primitives (in the
context of helicopter flight), which are time invariant, with a zero-order hold on control
inputs. The edges of the automaton are maneuver primitives, which are constrained so that
both the start and the end of the maneuver are compatible with the trim primitives. This
is illustrated in the context of LittleDog bounding in Figure 6-7. Of course, feet collisions
in LittleDog act like impulses or hybrid dynamics, which negate the underlying invariance
assumptions of a trim primitive in [Frazzoli et al., 2005] and [Frazzoli et al., 2002], but the
idea is still representative.
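For concreteness, the structure of this automaton for bounding can be sketched as a tiny graph; the vertex and edge labels below are illustrative stand-ins rather than the exact primitives of Figure 6-7.

```python
# Minimal sketch of a bounding maneuver automaton (labels are illustrative).
# Vertices approximate trim conditions: both feet on the ground, joint velocities
# near zero.  Edges are the two half-bound maneuvers connecting those conditions.
maneuver_automaton = {
    "rest_after_back_touchdown": [("half_bound_phase_1", "rest_after_front_touchdown")],
    "rest_after_front_touchdown": [("half_bound_phase_2", "rest_after_back_touchdown")],
}

def bounding_sequence(n_half_bounds, start="rest_after_back_touchdown"):
    """Walk the automaton, alternating the two half-bound maneuvers."""
    state, plan = start, []
    for _ in range(n_half_bounds):
        maneuver, state = maneuver_automaton[state][0]
        plan.append(maneuver)
    return plan
```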
6.3.3 Approximating the Reachable Set
In the last chapter, I showed that reachability-guided sampling can help deal with issues imposed by dynamic constraints in systems such as the pendulum and Acrobot [Shkolnik et al., 2009]. In those systems, the reachable set was approximated by discretizing the action space. However, for the work described here, even the reduced 4-dimensional action space becomes too large to discretize efficiently. For example, using only 3 actions per dimension (and assuming a constant motion primitive time step), there would be 81 actions to apply and simulate in order to approximate the reachable set for each node.

Figure 6-7: Maneuver Automaton for LittleDog Bounding.
Instead of discretizing in such a high-dimensional action space, the reachable set can be
approximated by understanding the failure modes of bounding. Failures may occur primarily
in one of three ways: 1) the robot has too much energy and falls over; 2) the robot has too
little energy, so the stance foot never leaves the ground, violating our assumption that one
foot always leaves the ground in a bounding gait; or 3) a terrain collision occurs. In the
case when the system has a lot of kinetic energy, the best thing to do is to straighten all
of the limbs, which raises the center of mass and converts the kinetic energy into potential
energy. At the other extreme, if the robot does not have much kinetic energy, an action
that lowers the COM while accelerating the limbs inwards tends to produce a rotational
moment, if it is possible to do so. Thus, two extreme motions can be generated, for each
phase of bounding, which prevent the two common failure modes. The reachable set is then
generated by interpolating joint positions between the two extremes. This one-dimensional
discretization is usually rich enough to capture the energy related failure modes and generate
bounds which strike the right balance to continue bounding further.
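A hedged sketch of this one-dimensional reachable-set approximation follows; the two extreme poses, the number of interpolation points, and the helper callables are placeholders for the actual LittleDog primitives and simulator.

```python
import numpy as np

def approximate_reachable_set(q_raise_com, q_lower_com, n_samples=5):
    """Approximate a node's reachable set by interpolating joint positions between
    two extreme half-bound actions:

      q_raise_com -- straighten the limbs (sheds excess kinetic energy)
      q_lower_com -- crouch while accelerating the limbs inward (adds rotational moment)

    Returns a small set of candidate landing poses spanning the two extremes.
    """
    q_raise_com = np.asarray(q_raise_com, dtype=float)
    q_lower_com = np.asarray(q_lower_com, dtype=float)
    return [(1 - a) * q_raise_com + a * q_lower_com
            for a in np.linspace(0.0, 1.0, n_samples)]

def reachable_states(x, simulate_half_bound, extremes, collides):
    """Keep only interpolated actions whose simulated outcome is collision-free.
    An empty result marks the node as effectively in X_RIC for the RG-RRT."""
    candidates = approximate_reachable_set(*extremes)
    outcomes = [simulate_half_bound(x, q_end) for q_end in candidates]
    return [x_next for x_next in outcomes if not collides(x_next)]
```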
As mentioned previously, the reachable set can help the RG-RRT to plan around Regions
of Inevitable Collisions (RIC) [LaValle et al., 2001, Fraichard et al., 2003, Fraichard, 2007, Chan et al., 2008]. Nodes which have empty reachable sets, which may occur because the node has too much or too little energy in the case of LittleDog bounding, are in X_RIC, even if the node itself is collision free (in X_free). The RG-RRT takes advantage of this: when a node has an empty reachable set, the node serves as a “blocking” function for sampling. Because of the rejection sampling of the RG-RRT, such a node cannot be expanded upon, and any samples that map to this node will be discarded, encouraging sampling in other areas that can be expanded.
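The rejection step can be sketched as follows; the distance metric and data structures are simplified placeholders rather than the planner's actual implementation.

```python
import numpy as np

def rg_rrt_select(sample, tree):
    """RG-RRT node selection with 'blocking' nodes (a simplified sketch).

    tree: list of (node_state, reachable_states) pairs; reachable_states may be
    empty for nodes in X_RIC (too much or too little energy to keep bounding).
    Returns the node to expand, or None if the sample is rejected.
    """
    dist = lambda x: np.linalg.norm(np.asarray(sample) - np.asarray(x))
    best_node_d = min(dist(node) for node, _ in tree)           # nearest tree node
    best_reach_d, owner = np.inf, None
    for node, reachable in tree:
        for r in reachable:
            if dist(r) < best_reach_d:
                best_reach_d, owner = dist(r), node             # node owning nearest reachable point
    # Reject samples that lie closer to the tree than to its reachable set; a node with
    # an empty reachable set can never be selected and so "blocks" sampling near itself.
    if owner is None or best_node_d <= best_reach_d:
        return None
    return owner
```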
6.3.4 Sampling in Task Space
In Chapter 2, I showed that the Voronoi Bias can be implemented in a lower dimensional task space [Shkolnik and Tedrake, 2009]. This can be achieved by sampling in task space and finding the node in the configuration-space tree whose projection is closest to the task-space sample. Reducing the dimensionality of the search with a task-space Voronoi bias can significantly improve search efficiency and, if done carefully, does not impact the completeness of the algorithm.
As described in the previous section, our action space involves a half-bound (half a period of a complete bound). At the start and end of an action (i.e., the state at any given node of the tree), the robot is approximately touching the ground with both feet, and joint velocities are approximately zero. Samples are therefore similarly constrained. Additionally, samples are chosen such that they are not in collision and respect joint bounds, with some minimum and maximum stance width. The region in actuated joint space can be mapped to a region in Cartesian space for the back and front foot, corresponding to a 4-dimensional manifold. A sample is drawn by first choosing a horizontal coordinate, x, of the robot’s back foot, and then selecting 4 joint angles from the manifold, while checking to ensure that collision constraints are satisfied.
Given a sample described above, the y-coordinate of the robot’s back foot is set to the ground position at x, and the passive joint is computed using the constraint that both feet are on the ground. Thus, sampling the 5-dimensional space maps to a point q_s in the 7-dimensional configuration space of the robot. The 5-dimensional sampling space, which neglects velocities, is significantly smaller than the complete 16-dimensional state space, and produces a task-space Voronoi Bias. When a sample is drawn, the closest node is found by minimizing the Euclidean distance from the sample to the tree, as well as to the reachable region of the tree. The sampling is repeated until a sample that is closer to the reachable region than to the tree is found. An action, u, is then created by selecting the 4 actuated joint angles from the sample, q_s. An action time interval, ∆T_bound, is chosen by uniformly sampling from T ∈ [.3, .7] seconds.
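A hedged sketch of this sampling procedure is shown below; the kinematic helper, joint bounds, and configuration ordering are placeholders for the actual LittleDog kinematics.

```python
import numpy as np

def sample_task_space(terrain, joint_lims, fk_passive_joint, in_collision,
                      x_range=(0.0, 2.0), rng=np.random):
    """Draw one task-space sample: back-foot x plus 4 actuated joint angles, with both
    feet on the ground and (implicitly) zero joint velocities.

    terrain(x): ground height gamma(x).
    joint_lims: list of (lo, hi) bounds for the 4 actuated joints.
    fk_passive_joint(q_act, back_foot): placeholder kinematics returning the passive
        joint value that places both feet on the ground.
    """
    while True:
        x_back = rng.uniform(*x_range)                        # back-foot horizontal position
        back_foot = (x_back, terrain(x_back))                 # back foot placed on the ground
        q_act = np.array([rng.uniform(lo, hi) for lo, hi in joint_lims])
        q_passive = fk_passive_joint(q_act, back_foot)        # ground constraint on both feet
        q_s = np.concatenate(([back_foot[0], back_foot[1], q_passive], q_act))  # illustrative 7-dim ordering
        if not in_collision(q_s):
            return q_s

def sample_action_from(q_s, rng=np.random):
    """An action is the sample's 4 actuated joint targets plus a primitive duration."""
    dT_bound = rng.uniform(0.3, 0.7)                          # seconds, as in the planner
    return q_s[-4:], dT_bound
```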
6.3.5 Simulation Results
We have presented three modifications to the standard implementation of the RRT to plan
bounding motions with LittleDog: 1) reachability guidance; 2) a simple motion primitive;
and 3) task-space biasing. Each of these components could be implemented separately, but
they worked particularly well when combined. To show this, we qualitatively compare
results by applying various combinations of these modifications.
First, a standard RRT was implemented, without any modifications, with the task of
finding bounding motions on flat terrain. The state-space was sampled uniformly in the
regions near states that have already been explored. We experimented with several types
of action spaces, including a zero-order hold on accelerations in joint space, or a zero-order hold on velocities in joint space.

Figure 6-8: Bounding trajectory over logs. (a) t = 0 → .71; (b) t = .71 → 1.17; (c) t = 1.17 → 1.75; (d) t = 1.75 → 2.20.

We found the best results by specifying a change in joint positions, ∆q_a, and using a smooth function generator to create position splines with zero start and end velocities. This approach worked best using time intervals of .1 - .15 seconds,
but the resulting RRT was not able to find a complete bounding motion plan in a reasonable
amount of time (hours). By implementing reachability guidance and task-space sampling,
the planner was able to find a double-bound trajectory on flat terrain after 2-3 minutes of search on average. Some of these plans were successfully executed on flat terrain by the actual robot. In addition to the relatively slow planning, the trajectories returned by the RRT had high-frequency components which were difficult for the robot to execute without saturating the motors. The long planning time and sub-optimal gaits were not ideal
for this task. Plans might be smoothed and locally optimized, but this is a time-consuming
process given the complexity of the dynamics.
When we utilized the half-bound motion primitive, the trajectories produced were smooth by construction, with one hip movement and one knee movement per half-bound motion. Furthermore, the half bound has a relatively long duration, typically .5 seconds, which shortens search time. The complete planner described in this section is able to plan a double bound on flat terrain in a few seconds. A continuous bound over two meters is typically found in less than a minute. The complete planner was also able to plan over intermittent terrain, where footholds were not allowed in given regions. Additionally, the planner was successful in handling a series of 7cm steps, and was also able to plan bounds up onto a terrain of artificial logs with a maximum height difference of 8cm. The simulated log terrain corresponds to a laser scan of the real terrain board used in the Learning Locomotion program to test LittleDog performance. An example of a
bounding trajectory is shown in Figures 6-8 and 6-9.

Figure 6-9: Bounding trajectory over logs, continued. (e) t = 2.20 → 2.66; (f) t = 2.66 → 3.14; (g) t = 3.14 → 3.78; (h) t = 3.78 → 4.45; (i) t = 4.45 → 4.98.

The bottom link in the robot leg is 10cm, and the top link is 7.5cm; given that the bottom of the body is 3cm below the hip, variations of 7cm in terrain height represent approximately 50% of the maximum leg travel of
the robot.
Planning was also attempted using the motion primitive, but without reachability
guidance. To do so, samples were drawn in each iteration of the RRT algorithm, and the
nearest neighbor function returned the closest node in the tree. Because the closest node in
the tree and the sample often looked similar, it did not make sense to try to expand towards
the sample. Instead, once a sample was chosen, a motion primitive action was chosen at
random. Using random actions is an accepted approach for RRTs, and the sampling itself produces a Voronoi Bias which, on average, encourages the tree to expand into unexplored regions of space. In this case, however, the motion-primitive-based RRT was never able to find feasible plans over rough terrain without reachability guidance, even when given over 12 hours to do so. The combination of motion primitives, task-space biasing, and reachability guidance, however, is able to plan trajectories over the log terrain repeatedly, requiring 10-15 minutes of planning time on average. Of course, on simpler terrain, the planning time is much shorter.
Figure 6-10: Bounding over logs with LittleDog
6.3.6 Experimental Results
In experiments with the real robot, open-loop execution of the motion plan found by the RRT quickly diverged from the model predictions, even on flat terrain. Trajectories are unstable in the sense that, given the same initial conditions on the robot and a tape of position commands to execute, the robot trajectories often diverge, sometimes catastrophically. To demonstrate the feasibility of using the motion primitives described here, these motion primitives were used to achieve a short bound sequence over the log terrain. Unlike in the model, the actual logs are not planar. To address this, tracks corresponding to the stance width of the robot were laid out on the logs along the terrain. The planner was constrained to only allow footholds where the points on adjacent tracks were of the same height. With some tuning, the motion primitives produced bounding over the logs on the real robot, shown in Figure 6-10. This trajectory was successful approximately 20% of the time, even though care was taken to have the same initial position for each trial.
Trajectory optimization and feedback stabilization will be required to enable the
execution of the complete trajectory. Feedback will help to compensate for model errors
and the inherent instability of the system itself, as demonstrated by our experiments on
the actual robot (see Figure 6-11). Care was taken to ensure that the physics model is
smooth and continuous, so gradients can be used to help in these tasks. To implement
reliable bounding over rough terrain in experiment, sensor-based feedback will be essential.
Figure 6-12 shows the sensing and control environment for LittleDog, and also illustrates
the additional complexity introduced by various delays and the variety of potentially noisy
sensors in the system.
6.4