Pure 2D Visual Servo control for a class of under-actuated dynamic systems 1
T. Hamel† and R. Mahony‡
†I3S, UNSA-CNRS,
2000 route des Lucioles-Les algorithmes,
06903 Sophia Antipolis, France.
Abstract–A pure image-based strategy for visual servo
control of a class of dynamic systems is proposed. The
proposed design concerns the dynamics of unmanned
aerial vehicles capable of quasi-stationary flight (hover
and near hover flight). The visual servo control task
considered is to locate the camera relative to a stationary target. The paper extends earlier work, [6], by
weakening the assumption on the construction of the
visual error used. In prior work some inertial information was used in the error construction to guarantee
the passivity properties of the control design. In this
paper the visual error is defined purely in terms of the
2D image features derived from the camera signal.
1 Introduction
Visual servo control concerns the problem of using a
camera to provide sensor information to servo-position
a robotic system. Classical visual servo control was
developed for serial-link robotic manipulators with the
camera typically mounted on the end-effector [7]. More
recently applications involving mobile systems have
been considered [11]. Visual sensing will be a vital
technology for the host of low cost unmanned aerial
vehicle (UAV) applications that are already under development [4, 13, 6]. Visual servo systems may be divided into two main categories [7]: 3D (Position-Based
- involving reconstruction or pose estimation) and 2D
(Image-Based - working directly in the image data).
The first approach is a dual estimation and control
problem in which the state (camera pose) of the system
is estimated using visual information and the control
design is a classical state-space design [7]. In the second category, the robotic task is posed directly in terms
of image features rather than in Cartesian space. A
controller is designed to manoeuver the image features
towards a goal configuration that implicitly solves the
original Cartesian motion planning problem [12, 2, 7].
Classical Image-Based control design uses a simple
linearising control on the image kinematics [2] that
leads to complex non-linear dynamics and is not easily extended to dynamic system models. Most existing applications exploit a high-gain or feedback linearisation (computed torque) design to reduce the system to a controllable kinematic model, for which the Image-Based visual servo techniques were developed [7]. There are very few integrated IBVS control designs for fully dynamic system models, and even fewer that deal with under-actuated dynamic models. Prior work by the authors [6] proposes an IBVS control scheme for stabilization of a class of under-actuated dynamic systems that model certain types of UAV. In order to extend the control design to the full dynamic system, we exploited the fundamental passivity properties of the rigid-body motion of the UAV airframe to avoid cross coupling of dynamic states in the second and third order derivatives of the image kinematics. Working within this framework it was possible to apply backstepping control design techniques and derive a control Lyapunov function for the system. Unfortunately, in order to guarantee the passivity properties of the error dynamics it was necessary to use some inertial information in the error definition, and this clearly limits the applicability of the earlier work.

1 This research was supported in part by the "Ministère de la Jeunesse, de l'Education Nationale et de la Recherche" of the French government and by a CNRS ROBEA grant.
‡ Dep. of Eng., Australian Nat. Univ., ACT 0200, Australia.
In this paper we study the question of using a pure image based visual error and achieving IBVS control of a
class of dynamic systems associated with UAV systems
capable of quasi-stationary flight. The visual feature
considered is a simple weighted centroid feature and
the goal feature is fixed in the camera image. To understand the difficulty of the associated control problem, imagine a UAV (such as a helicopter) hovering
above a fixed target. Imagine that the observed target
is slightly to the left of the goal vector in the image.
Due to the dynamics of the vehicle, in order to move left the airframe must be inclined to the left, causing the observed landmark to move left in the image while the goal target remains fixed in the image. It follows
that to design a stable IBVS control algorithm it is
necessary to allow the image error to increase locally
in order to allow the system dynamics to act to move
the UAV to the desired position such that when the
attitude of the UAV is stabilized the image error will
be zero. The material presented in the present paper
is an extension of our previous work [6] where inertial
information was used to modify the goal target in order
to avoid the situation just described. In this paper, we
deal with the effects of a fixed goal target by treating
them as perturbations of the first order image kinematics bounded by error terms that can be derived from
the higher order dynamic stabilization problem. In this
manner we propose a decoupled control design followed
by a fully coupled stability and robustness analysis of
the system using a structured control Lyapunov function construction. Since the image error construction
relies purely on information derived from the 2D image
data, we refer to the approach as pure 2D visual servo control. It is still necessary to estimate and compensate
for the direction and magnitude of gravity using an
inertial measurement unit that also provides velocity
measurements.
The paper is organised as follows: Section 2 presents
the dynamic system model considered. Section 3 describes the visual features and defines the image based error used. Section 4 derives a control Lyapunov function for the positioning task and analyses the stability
of the closed-loop system. Section 5 applies the control strategy to a simplified model for the dynamics
of a four rotor vertical take-off and landing (VTOL)
vehicle known as an X4-flyer [5, 1] and presents some
simulation results. The final section provides a short
summary of conclusions.
2 Problem Formulation.
In this section, we present the fundamental equations of
motion of unmanned aerial vehicles capable of stationary hovering at one location. The system model considered in the sequel is equivalent to those introduced
in the literature to model the dynamics of helicopters
[13, 3]. Let I = {Ex , Ey , Ez } denote the world frame
and let A = {E1a , E2a , E3a } denote the body-fixed frame
of the rigid body. The position of the rigid body in the
world frame is denoted ξ = (x, y, z) ∈ I and its attitude (or orientation) is given by a rotation R : A → I,
where R ∈ SO(3) is an orthogonal rotation matrix.
Let V (resp. Ω) denote the linear (resp. angular) velocity of the body expressed in the body fixed frame.
Let m denote the total mass and I = diag(I1 , I2 , I3) a
diagonal matrix denoting the inertia of the body. The
dynamics of a rigid body are1:

ξ̇ = RV                                          (1)
mV̇ = −mΩ × V + F = −m sk(Ω)V + F                 (2)
Ṙ = R sk(Ω)                                      (3)
IΩ̇ = −Ω × IΩ + Γ = −sk(Ω)IΩ + Γ.                 (4)

The exogenous force and torque are denoted F and Γ respectively (cf. Fig. 1). The exogenous force and torque inputs considered correspond to a typical arrangement found on a VTOL aircraft (cf. Sec. 5). The exogenous inputs are written as a single translational force, denoted F in Figure 1, along with full torque control, shown by the torques Γ1, Γ2 and Γ3 around axes E1a, E2a and E3a respectively. The force F combines thrust, lift, gravity and drag components. It is convenient to separate the gravity component mgEz = mgR^T e3 from the thrust force2

F := −T e3 + mgR^T e3                            (5)

where T ∈ R is a scalar input representing the magnitude of the external force applied in direction e3. Control of the airframe is obtained by using the torque control Γ = (Γ1, Γ2, Γ3) to align the force F0 := T e3 as required to track the goal trajectory.

Figure 1: Reference frames, forces and torques for an Unmanned Aerial Vehicle (UAV).

1 The notation sk(Ω) denotes the skew-symmetric matrix such that sk(Ω)v = Ω × v for the vector cross-product × and any vector v ∈ R3.
2 This is a reasonable approach for the dynamics of a UAV in quasi-stationary flight. The implication of this decomposition is the key of our control strategy (Sec. 4).

3 Image Dynamics

In visual servo control problems, the image dynamics must be expressed in the frame of reference of the camera. Since we consider 'eye-in-hand' configurations, the motion of the camera frame inherits dynamics in the body fixed frame. In order to simplify the derivation in the sequel, it is assumed that the camera fixed frame coincides with the body fixed frame A. Let a set of n points Pi0 of a fixed target (Ṗi0 = 0), relative to the inertial frame I, be observed from the camera, and let Pi be the representation of Pi0 in the camera fixed frame. Then Pi is given by

Pi = R^T (Pi0 − ξ).

A backstepping control design has passivity-like properties from virtual input to the backstepping error [8]. In our previous work [6] it was shown that these structural passivity-like properties are present in the image space dynamics if and only if the spherical projection of an observed point is used. Denoting the spherical projection of an image point Pi by pi, the image dynamics are given by:

ṗi = −sk(Ω)pi − (1/ri) πp V                      (6)

Here ri = |Pi| (the focal length is assumed to be unity for simplicity). The matrix πp = (I − pi pi^T) is the projection onto the tangent space of the spherical image surface at the point pi (I is the 3 × 3 identity matrix).
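As a quick numerical sanity check of the spherical image kinematics (a sketch with made-up values, not taken from the paper), note that ṗi in Eq. 6 lies in the tangent space of the sphere at pi, so the unit norm of the spherical projection is preserved; this tangency holds for either sign convention on the translational term:

```python
import numpy as np

def sk(w):
    """Skew-symmetric matrix such that sk(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def pdot(p, r, Omega, V):
    """Image kinematics of a spherically projected point (Eq. 6):
    pdot = -sk(Omega) p - (1/r) pi_p V, with pi_p = I - p p^T."""
    pi_p = np.eye(3) - np.outer(p, p)
    return -sk(Omega) @ p - (pi_p @ V) / r

# hypothetical numerical values, for illustration only
P = np.array([1.0, -2.0, 4.0])        # target point in the camera frame
r = np.linalg.norm(P)
p = P / r                             # spherical projection, |p| = 1
Omega = np.array([0.1, -0.2, 0.05])   # body angular velocity
V = np.array([0.3, 0.1, -0.4])        # body linear velocity

v = pdot(p, r, Omega, V)
# v is tangent to the sphere at p, so |p| = 1 is preserved
print(abs(np.dot(v, p)))  # ~0 up to floating point
```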
For a given target the centroid of the target based on spherical information is defined to be

q := Σ_{i=1}^{n} pi ∈ R3                         (7)

As defined in [6], the above centroid contains more information than the classical image blob centroid. Here the norm of the image feature |q| encodes effective depth information. When the target points are far from the camera all the {pi} are co-linear and the norm is maximal. As the camera moves toward the target points they spread in the image and cancel each other in the summation, leading to a decrease in the norm |q|. Thus, the image feature q contains three effective degrees of information and can be used as the basis of a position stabilization control design.

Recalling Eq. 6, it may be verified that

q̇ = −Ω × q − QV,                                 (8)

where

Q = Σ_{i=1}^{n} πp_i / ri > 0.                    (9)

The visual servo control task considered is that of positioning a camera relative to a stationary (in the inertial frame) target. In our previous work [6], in addition to visual information, inertial information was explicitly used in the error formulation. In particular, the inertial direction of the goal target in the image was fixed. That is, a goal q∗ was chosen that has fixed inertial orientation and moves in the image plane. The advantage of such a choice is clear when one recalls the example mentioned in the introduction of a UAV (such as a helicopter) hovering above a fixed target. If the image vector has fixed inertial direction, then as the UAV tilts the image error does not change. The goal vector q∗(t) moves in the (spherical) image plane in exactly the same manner as the observed target. It is only as the vehicle changes its position that the image error changes. In prior work [6], this property was exploited to decouple the position stabilization from the attitude stabilization problem. If no inertial information for the goal vector is known then it is necessary to define a fixed goal vector and deal with coupled dynamic equations.

Define q∗ ∈ A to be the desired target vector expressed in the camera fixed frame,

q∗ := (q1∗, q2∗, q3∗)^T ∈ A.

The norm |q∗| encodes the effective depth information for the desired limit point while the direction of q∗ defines the camera attitude up to rotation around the axis q∗. It is necessary that q∗ defines a goal that is attainable by the vehicle considered, otherwise the dynamic model of the vehicle will not be in equilibrium when q = q∗.

The image based error considered is the difference between the measured centroid and the target vector expressed in the camera fixed frame

δ1 := q − q∗.                                     (10)

Deriving δ1 yields:

δ̇1 = −sk(Ω)δ1 − QV − sk(Ω)q∗                     (11)

The above equation (Eq. 11) defines the kinematics of the visual error δ1.
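The centroid feature and the matrix Q can be illustrated numerically. The sketch below (a hypothetical 1 m square target and illustrative camera heights, not values from the paper) computes q (Eq. 7) and Q (Eq. 9) and checks the two properties used above: |q| shrinks as the camera approaches, and Q is positive definite whatever the unknown depths ri:

```python
import numpy as np

def centroid_and_Q(points):
    """Spherical centroid q (Eq. 7) and matrix Q (Eq. 9) for target
    points expressed in the camera frame."""
    q = np.zeros(3)
    Q = np.zeros((3, 3))
    for P in points:
        r = np.linalg.norm(P)
        p = P / r
        q += p
        Q += (np.eye(3) - np.outer(p, p)) / r  # pi_p / r_i
    return q, Q

# hypothetical square target, 1 m on a side, viewed from two heights
square = [np.array([x, y, 0.0]) for x in (-0.5, 0.5) for y in (-0.5, 0.5)]
far = [P + np.array([0.0, 0.0, 10.0]) for P in square]
near = [P + np.array([0.0, 0.0, 1.0]) for P in square]

q_far, Q_far = centroid_and_Q(far)
q_near, Q_near = centroid_and_Q(near)

# |q| decreases as the camera approaches: effective depth information
print(np.linalg.norm(q_far) > np.linalg.norm(q_near))   # True
# Q is symmetric positive definite
print(np.all(np.linalg.eigvalsh(Q_near) > 0.0))          # True
```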
It is of interest to study the structural properties of Eq. 11. Consider the storage function |δ1|². The derivative of this function satisfies

(1/2) (d/dt) |δ1|² = −δ1^T sk(Ω)δ1 − δ1^T QV − δ1^T sk(Ω)q∗.
The first term is zero due to the skew symmetry of sk(Ω). Since the matrix Q > 0 is positive definite, the second term can be seen as an inner product between δ1 and V. In this sense we think of the second term as the supply function to the storage function. Choosing
V = δ1 acts to decrease |δ1 |2 . The last term is the perturbation due to the fixed goal assumption and was not
present in the prior work of the authors [6] due to the
choice of an inertial goal vector. When the goal vector
is fixed in the image plane it is impossible to avoid the
presence of the perturbation term. The term resembles
the perturbations due to small-body forces that lead
to zero-dynamics effects studied in recent works [4, 9].
This sort of term is particularly difficult to deal with
explicitly in the control design. It depends on the angular velocity Ω and destroys the triangular nature of
the system that is necessary to apply the backstepping
approach. The approach taken in this paper is to leave
this term as a perturbation of the stability analysis of
the first order image kinematics. By choosing the control gains governing the higher order backstepping errors correctly this error perturbation can be controlled
and dominated in the integrated stability analysis.
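The structural observations above are easy to verify numerically. In the sketch below (randomly generated illustrative values, not from the paper), the skew term of the storage-function derivative vanishes identically, the supply term is strictly negative for V = δ1, and the remaining term is the Ω-dependent perturbation discussed above:

```python
import numpy as np

def sk(w):
    """Skew-symmetric matrix: sk(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# hypothetical values for illustration
rng = np.random.default_rng(0)
delta1 = rng.standard_normal(3)
Omega = rng.standard_normal(3)
qstar = rng.standard_normal(3)
B = rng.standard_normal((3, 3))
Q = B @ B.T + 0.1 * np.eye(3)   # any symmetric positive definite Q

# the skew term of the storage-function derivative vanishes identically
assert abs(delta1 @ sk(Omega) @ delta1) < 1e-12

# with V = delta1 the supply term -delta1^T Q V is strictly negative,
# while the perturbation -delta1^T sk(Omega) q* is sign-indefinite
supply = -delta1 @ Q @ delta1
perturbation = -delta1 @ sk(Omega) @ qstar
print(supply < 0.0)  # True
```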
The image Jacobian [7] (denoted J), also known as the interaction matrix [2], is obtained by rewriting Eq. 11 in the classical form

δ̇1 = [ −Q   sk(q∗) + sk(δ1) ] [V; Ω] = J [V; Ω],

where [V; Ω] denotes the stacked vector of linear and angular velocity. In common with classical IBVS algorithms, the image Jacobian depends on the unknown depth of the image features. That is, although the points {pi} are known, the depth parameters {|Pi|} cannot be measured directly from visual data. However, for the proposed image error the unknown parameters enter only into the definition of the matrix Q (Eq. 9), and here they enter
in a structured manner such that the matrix Q > 0
is always positive definite. It is this property that is
exploited in Section 4 in the control design, avoiding
the necessity of estimating or approximating the image Jacobian, a fundamental difficulty in most IBVS
algorithms [2, 7, 10]. In the present development we
consider a region in space on which the matrix Q is
uniformly bounded,

λmin < λi(Q) < λmax,   i = 1, 2, 3,               (12)
by two arbitrary bounds, λmax > λmin > 0, and design
a robust control algorithm valid on the entire region
considered.
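In practice the bounds in Eq. 12 can be checked numerically over the region of interest. The sketch below (a hypothetical 1 m square target viewed from heights between 0.5 m and 4 m; the geometry is illustrative, not the paper's setup) sweeps the camera position and confirms that the eigenvalues of Q stay inside the bounds used later in the simulation section:

```python
import numpy as np

def Q_matrix(points):
    """Matrix Q (Eq. 9) for target points in the camera frame."""
    Q = np.zeros((3, 3))
    for P in points:
        r = np.linalg.norm(P)
        p = P / r
        Q += (np.eye(3) - np.outer(p, p)) / r
    return Q

# hypothetical square target; sweep camera height over a candidate region
square = [np.array([x, y, 0.0]) for x in (-0.5, 0.5) for y in (-0.5, 0.5)]
lams = []
for h in np.linspace(0.5, 4.0, 50):
    Q = Q_matrix([P + np.array([0.0, 0.0, h]) for P in square])
    lams.extend(np.linalg.eigvalsh(Q))

lam_min, lam_max = min(lams), max(lams)
# admissible for, e.g., the bounds 0.01 < lambda_i(Q) < 5 used in Sec. 5
print(0.01 < lam_min and lam_max < 5.0)  # True
```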
4 Visual servo control for a VTOL aircraft
In this section, a control design based on robust backstepping techniques [8] is proposed for visual servo control of the under-actuated system that we consider.
The full dynamics of the error δ1 may be written:

δ̇1 = −sk(Ω)δ1 − QV − sk(Ω)q∗                     (13)
mV̇ = −mΩ × V + F = −m sk(Ω)V + F                 (14)
Ṙ = R sk(Ω)                                      (15)
IΩ̇ = −Ω × IΩ + Γ = −sk(Ω)IΩ + Γ.                 (16)

Before introducing the main result of the paper, we introduce the following change of variables:

δ2 = (m/k1) V − δ1,                              (17)
δ3 = (m/(k1² k2)) (−T e3 + mgR^T e3) + δ2.       (18)

The error δ2 is introduced to regulate the linear velocity of the camera and ensure that it comes to rest. The additional error vector δ3 incorporates information on the attitude of the camera. This is natural for a system such as a VTOL aircraft capable of stationary flight, since the desired motion can only be obtained by exploiting the attitude dynamics to regulate δ1. If the position and linear velocity are regulated then the total external force must be zero, −T e3 + mgR^T e3 = 0. It follows that

Re3 = e3,   T = mg.                               (19)

This means that the pitch and roll rotations of the rigid body are directly stabilised via the stabilisation of the error δ3. However, the yaw rotation around the direction e3 is not affected by the stabilisation of the error δ3. To stabilize this remaining degree of freedom an additional error criterion independent of q must be used [6]. In this paper, in order to simplify the analysis in the sequel, a simple proportional control is used to stabilise the yaw angular velocity. More precisely, we apply a linearising control in the third component of angular velocity, set Ω3(0) = 0, and apply a stabilizing control to add robustness to the design:

Γ3 = e3^T (Ω × IΩ) − kΩ Ω3                        (20)

where kΩ is a suitable gain. The proposed control algorithm requires a formal time derivative of the input T [6]. To provide this input, T is dynamically extended:

Ṫ = U.                                            (21)

The motivation for adding an integrator is to ensure decoupling between translational and rotational dynamics, as shown in the sequel.

Recalling Eqn's 13-21, the full dynamics of the visual error δ1 in terms of the additional errors δi (i = 1 . . . 3) can be written:

δ̇1 = −sk(Ω)δ1 − (k1/m) Qδ1 − (k1/m) Qδ2 + sk(q∗)Ω                        (22)

δ̇2 = −sk(Ω)δ2 + (k1/m) Qδ1 − (k1/m)(k2 I − Q)δ2 + (k1 k2/m) δ3 − sk(q∗)Ω   (23)

δ̇3 = −sk(Ω)δ3 + (k1/m) Qδ1 − (k1/m)(k2 I − Q)δ2 + (k1 k2/m) δ3
     + (m/(k1² k2)) ( sk(T e3 − (k1² k2/m) q∗) Ω − Ṫ e3 )                 (24)

Define δ4 as a last error term. It is introduced to regulate the pitch and roll angular velocities as

δ4 = (m²/(k1² k2 (k1 k2 + k3))) πe3 sk(T e3 − (k1² k2/m) q∗) Ω + πe3 δ3,   (25)

where πe3 = (I − e3 e3^T).

Introducing Eqn's 21 and 25 in the expression of the derivative of the error δ3 (Eq. 24) yields

δ̇3 = −sk(Ω)δ3 + (k1/m) Qδ1 − (k1/m)(k2 I − Q)δ2 + ((k1 k2 + k3)/m) πe3 δ4
     − (k3/m) δ3 − (I − πe3) ( sk(q∗)Ω − ((k1 k2 + k3)/m) δ3 + (m/(k1² k2)) U e3 ).   (26)

Deriving the expression of δ4 (Eq. 25) and recalling Eqn's 22, 23, 21 and 26 yields

δ̇4 = −πe3 sk(Ω)δ3 + (k1/m) πe3 Qδ1 − (k1/m) πe3 (k2 I − Q)δ2 + ((k1 k2 + k3)/m) πe3 δ4
     − (k3/m) πe3 δ3 + (m²/(k1² k2 (k1 k2 + k3))) πe3 U sk(e3)Ω
     + (m²/(k1² k2 (k1 k2 + k3))) πe3 sk(T e3 − (k1² k2/m) q∗) Ω̇.         (27)

Using the previous assumption (Ω3 = 0, ∀t ∈ R+) and considering only the existence of roll and pitch velocities, the expressions of the derivatives of the errors δ3 and δ4 become:

δ̇3 = −sk(Ω)δ3 + (k1/m) Qδ1 − (k1/m)(k2 I − Q)δ2 + ((k1 k2 + k3)/m) πe3 δ4
     − (k3/m) δ3 − (I − πe3) ( sk(q∗)πe3 Ω − ((k1 k2 + k3)/m) δ3 + (m/(k1² k2)) U e3 )   (28)

δ̇4 = −πe3 sk(Ω)δ3 + (k1/m) πe3 Qδ1 − (k1/m) πe3 (k2 I − Q)δ2 + ((k1 k2 + k3)/m) πe3 δ4
     − (k3/m) πe3 δ3 + (m²/(k1² k2 (k1 k2 + k3))) πe3 U sk(e3) πe3 Ω
     + (m²/(k1² k2 (k1 k2 + k3))) πe3 sk(T e3 − (k1² k2/m) q∗) πe3 Ω̇.     (29)

It is now possible to present the main result of the paper:

Theorem 4.1 Consider the dynamics given by Eqn's 22-29. Let λmax be the maximal bound on the eigenvalues of Q. Let the vector controller be given by:

U = (k1² k2 (k1 k2 + k3)/m²) e3^T δ3 − (k1² k2/m) e3^T sk(q∗)Ω            (30)

Γ = sk(Ω)IΩ − kΩ Ω3 e3 − (I/(T − (k1² k2/m) q3∗)) ( U sk(e3)Ω
    − (k1² k2 (k1 k2 + k3)/m²) sk(e3) ( sk(Ω)δ3 + (k1 k2/m) δ2 − (k1 k2/m) δ3
    − ((k1 k2 + k3 + k4)/m) δ4 ) ).                                        (31)

Let L be the Lyapunov function candidate:

L = (1/2)|δ1|² + (1/2)|δ2|² + (1/2)|δ3|² + (1/2)|δ4|².                    (32)

Set

f(T) = k1 k2 (k1 k2 + k3) / ( m (T − (k1² k2/m) q3∗) )

and choose the positive control gains to satisfy

k2 > λmax,
k1 < √( 2mg q3∗ / k2 ),
k3 > (k2 + λmax)(λmax + f(Tmin)|q∗|)² / ( λmin (k2 − λmax) ),   Tmin = 2 (k1² k2/m) q3∗,
k4 > 0.

Then, for any initial condition such that the initial value of the Lyapunov function candidate satisfies

L(0) = (1/2)|δ1(0)|² + (1/2)|δ2(0)|² + (1/2)|δ3(0)|² + (1/2)|δ4(0)|² < (1/2) ( m²g/(k1² k2) − 2q3∗ )²,   (33)

the Lyapunov function is exponentially decreasing, ensuring exponential stability of the errors δi for i = 1..4. Note that, due to the presence of the vector Ω, the Lyapunov function may not be monotonically decreasing.

Proof: Recall that

Ω̇ = −I⁻¹ sk(Ω)IΩ + I⁻¹ Γ.

Using the fact that

πe3 sk(T e3 − (k1² k2/m) q∗) πe3 = (T − (k1² k2/m) q3∗) sk(e3)

and that πe3 δ4 = δ4, taking the derivative of L (substituting from Eqn's 22-23, 28-29 and using the feedback control given by Eqn's 30 and 31) yields:

L̇ = −(k1/m) δ1^T Qδ1 − (k1/m) δ2^T (k2 I3 − Q)δ2 − (k3/m)|δ3|² − (k4/m)|δ4|²
    + (k1/m) δ3^T Qδ1 + (k1/m) δ3^T Qδ2 + (k1/m) δ4^T Qδ1 + (k1/m) δ4^T Qδ2
    + (δ1 − δ2)^T sk(q∗)Ω.                                                 (34)

Recalling the expression of the error term δ4 (Eq. 25), it yields:

πe3 Ω = −(k1/m) f(T) sk(e3)(δ4 − δ3).                                      (35)

Introducing the above relationship in Eq. 34, the Lyapunov function derivative becomes:

L̇ = −(k1/m) δ1^T Qδ1 − (k1/m) δ2^T (k2 I3 − Q)δ2 − (k3/m)|δ3|² − (k4/m)|δ4|²
    + (k1/m) δ3^T (Q − f(T)A^T)δ1 + (k1/m) δ3^T (Q + f(T)A^T)δ2
    + (k1/m) δ4^T (Q + f(T)A^T)δ1 + (k1/m) δ4^T (Q − f(T)A^T)δ2,

where A = −sk(q∗) sk(e3). From the above development, one has:

L̇ ≤ −X^T Σ X

where X^T = [δ1 δ2 δ3 δ4] and

Σ = [  k1 Q          0               −(k1/2) M    −(k1/2) N
       0             k1 (k2 I − Q)   −(k1/2) N    −(k1/2) M
       −(k1/2) M     −(k1/2) N       k3           0
       −(k1/2) N     −(k1/2) M       0            k4       ]

where

M = (Q − f(T) A^T),   N = (Q + f(T) A^T).

The above quadratic expression is negative definite if and only if the symmetric matrix Σ is positive definite. This is true if and only if the principal minors of Σ are positive. This yields the following sufficient conditions:

k1 > 0,
k2 > λmax,
k3 > (k2 + λmax)(λmax + f(T) ‖A A^T‖₂)² / ( λmin (k2 − λmax) ),
k4 > 0.

In order to derive the conditions provided by the theorem statement, the following two-sided bound on the control T is obtained from Eq. 18:

mg − (k1² k2/m)(|δ2| + |δ3|) < T < mg + (k1² k2/m)(|δ2| + |δ3|).

Combining the above two-sided bound with the expression of the Lyapunov function, it follows that:

mg − (k1² k2/m) √(2L) < T < mg + (k1² k2/m) √(2L).

From Eq. 31, the uniqueness of the control input is guaranteed as long as T ≠ (k1² k2/m) q3∗.

Using Eq. 33 and the condition on the gain k1 of the theorem, it yields the following bound on the initial condition of the control T:

2 (k1² k2/m) |q∗| < mg − (k1² k2/m) √(2L0) < T0 < mg + (k1² k2/m) √(2L0).

It remains to show that this inequality is verified for all t ∈ R+. By continuity, the choice of gains k1, . . . , k4 ensures the positiveness of the principal minors of the matrix Σ, and the result is proved.
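The algebraic identity at the heart of the proof, πe3 sk(Te3 − (k1²k2/m)q∗) πe3 = (T − (k1²k2/m)q3∗) sk(e3), can be confirmed numerically. The sketch below uses arbitrary illustrative values (T, the constant c standing for k1²k2/m, and q∗ are not taken from the paper):

```python
import numpy as np

def sk(w):
    """Skew-symmetric matrix: sk(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

e3 = np.array([0.0, 0.0, 1.0])
pi_e3 = np.eye(3) - np.outer(e3, e3)   # projection onto the e1-e2 plane

# hypothetical values, for illustration
T = 94.0
c = 0.8                                # stands for k1^2 k2 / m
qstar = np.array([0.3, -0.2, 3.9])

lhs = pi_e3 @ sk(T * e3 - c * qstar) @ pi_e3
rhs = (T - c * qstar[2]) * sk(e3)
print(np.allclose(lhs, rhs))  # True
```

The identity holds because πe3 sk(e3) πe3 = sk(e3), while the planar part of the vector contributes only along e3 and is annihilated by the outer projections.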
5 Example System and Simulation

In this section, the procedure presented in Section 4 is applied to an idealized model of the dynamics of an X4-flyer.

An X4-flyer consists of four individual fans fixed to a rigid cross frame (cf. Fig. 2). It operates as an omnidirectional vehicle capable of quasi-stationary flight. An idealized dynamic model of the X4-flyer [5, 1] is given by the rigid body equations (Eqn's 1-4) along with the external force and torque inputs (cf. Fig. 2):

T = Trr + Trl + Tfr + Tfl,                        (36)
Γ1 = d(Tfr + Tfl − Trr − Trl),                    (37)
Γ2 = d(Trl + Tfl − Trr − Tfr),                    (38)
Γ3 = κ(Tfr + Trl − Tfl − Trr).                    (39)

Figure 2: A prototype (un-instrumented) X4-flyer with thrust inputs.

The individual thrust of each motor is denoted T(.), while κ is the proportional constant giving the induced couple due to air resistance for each rotor, and d denotes the distance of each rotor from the centre of mass of the X4-flyer. The parameters used for the dynamic model are m = 9.6, I = diag(0.4, 0.56, 0.22), d = 0.25 m, κ = 0.01 and g = 9.8.

The simulation undertaken considers the case of stabilisation of the X4-flyer, already in hover flight, to a new set point several metres distant from the initial condition. Consider the case where one wishes to position the camera parallel to a planar target characterised by a square. In the case of a pin-hole camera the visual measurements available are the projective coordinates of the four points defining the square. These coordinates are then transformed into spherical coordinates. The desired target vector q∗ is chosen such that the camera set point is located 3 metres above the square. For the simulation undertaken, the values

λmin = 0.01 m⁻¹ and λmax = 5 m⁻¹

were chosen. The region in task space for which these bounds remain valid is a compact set that excludes the target points (cf. Figure 3).

Figure 3: A cross section in the (x (or y), z) plane of the region for which the bounds λmax < 5, λmin > 0.01 are valid.

The initial condition was chosen such that the X4-flyer was in stable stationary flight in the specified region (Fig. 3) at the beginning of the simulation:

ξ0 = (5, 2.5, −4)^T,   R0 = I3,   ξ̇0 = Ω0 = 0.

According to standard aeronautical conventions height is measured down relative to the aircraft and hence the height of the X4-flyer is negative with respect to the world frame.

According to the theorem statement, the control gains used were k1 = 0.2 kg.m.s⁻¹, k2 = 10 kg⁻¹, k3 = 30.5 kg.s⁻¹, k4 = 3 kg.s⁻¹. These gains satisfy the conditions of the theorem.
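Eqn's 36-39 define an invertible linear "mixer" from the four rotor thrusts to (T, Γ1, Γ2, Γ3); inverting it recovers individual thrust demands from the controller outputs. A minimal sketch using the stated parameters d = 0.25 and κ = 0.01 (the hover check uses the model values m = 9.6 and g = 9.8):

```python
import numpy as np

d, kappa = 0.25, 0.01

# Eqn's 36-39 as a linear map from (Trr, Trl, Tfr, Tfl) to (T, G1, G2, G3)
M = np.array([
    [1.0,    1.0,    1.0,    1.0],     # T  = Trr + Trl + Tfr + Tfl
    [-d,     -d,     d,      d],       # G1 = d(Tfr + Tfl - Trr - Trl)
    [-d,     d,      -d,     d],       # G2 = d(Trl + Tfl - Trr - Tfr)
    [-kappa, kappa,  kappa,  -kappa],  # G3 = kappa(Tfr + Trl - Tfl - Trr)
])

# the mixer is invertible, so any (T, Gamma) demand maps to rotor thrusts
thrusts = np.linalg.solve(M, np.array([9.6 * 9.8, 0.0, 0.0, 0.0]))
print(np.allclose(thrusts, 9.6 * 9.8 / 4.0))  # hover: four equal thrusts
```

The rows of M are mutually orthogonal, which is why the mixer is always invertible for d, κ > 0.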
In Figures 4-5, the performance of the algorithm is shown.

Figure 4: Positioning of the X4-flyer with respect to the target.

Figure 5: Evolution of the attitude and position of the X4-flyer in "roll (theta), pitch (psi) and yaw (phi)" Euler angles and Cartesian space coordinates (x, y, and z).

6 Conclusion

This paper has proposed a pure image-based strategy for visual servo control of a class of under-actuated dynamic systems. It extends our previous work [6] by avoiding the use of inertial information in the definition of the visual error. After a tedious but standard calculation, we have proposed a decoupled (position from orientation dynamics) control design followed by a fully coupled stability and robustness analysis. The simulation of the control of an X4-flyer shows the good performance of the proposed control algorithm.

References

[1] E. Altug, J. Ostrowski, and R. Mahony. Control of a quadrotor helicopter using visual feedback. In Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2002, Washington DC, USA, 2002.
[2] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313-326, 1992.
[3] E. Frazzoli, M. Dahleh, and E. Feron. Trajectory tracking control design for autonomous helicopters using a backstepping algorithm. In Proceedings of the American Control Conference ACC, pages 4102-4107, Chicago, Illinois, USA, 2000.
[4] E. Frazzoli, M. A. Dahleh, and E. Feron. Real-time motion planning for agile autonomous vehicles. AIAA Journal of Guidance, Control, and Dynamics, 25(1):116-129, 2002.
[5] T. Hamel, R. Mahony, R. Lozano, and J. Ostrowski. Dynamic modelling and configuration stabilization for an X4-flyer. In International Federation of Automatic Control Symposium, IFAC 2002, Barcelona, Spain, 2002.
[6] T. Hamel and R. Mahony. Visual servoing of an under-actuated dynamic rigid-body system: An image based approach. IEEE Transactions on Robotics and Automation, 18(2):187-198, April 2002.
[7] S. Hutchinson, G. Hager, and P. Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651-670, 1996.
[8] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic. Nonlinear and Adaptive Control Design. Wiley, New York, USA, 1995.
[9] R. Mahony and T. Hamel. Robust trajectory tracking for a scale model autonomous helicopter. To appear in the International Journal of Robust and Nonlinear Control, 2003.
[10] E. Malis and F. Chaumette. Theoretical improvements in the stability analysis of a new class of model-free visual servoing methods. IEEE Transactions on Robotics and Automation, 18(2):176-186, 2002.
[11] Y. Ma, J. Kosecka, and S. Sastry. Vision guided navigation for a nonholonomic mobile robot. IEEE Transactions on Robotics and Automation, 15(3):521-536, 1999.
[12] C. Samson, M. Le Borgne, and B. Espiau. Robot Control: The Task Function Approach. The Oxford Engineering Science Series. Oxford University Press, Oxford, U.K., 1991.
[13] O. Shakernia, Y. Ma, T. J. Koo, and S. Sastry. Landing an unmanned air vehicle: vision based motion estimation and nonlinear control. Asian Journal of Control, 1(3):128-146, 1999.