The Lagrange multipliers $p$ are denoted as adjoint variables and are arbitrary at this point. Integration by parts of the term $\int p^T \dot{x}\,\mathrm{d}t$ leads to
$$ J = \int_{t_0}^{t_f} \big( H + \dot{p}^T x \big)\, \mathrm{d}t + S(t_f, x(t_f)) - p^T x \,\Big|_{t_0}^{t_f}, \qquad (6) $$
where the Hamiltonian $H(x,u,p,t) = h(x,u,t) + p^T f(x,u,t)$ is introduced. In order to find a minimum of the cost functional $J$ with respect to $u$, we consider the variation of $J$ according to a small change $\delta u$, which is given by
$$ \delta J = \int_{t_0}^{t_f} \Big[ \big( H_x + \dot{p}^T \big)\, \delta x + H_u\, \delta u \Big]\, \mathrm{d}t + \Big[ S_x(t_f, x(t_f)) - p^T(t_f) \Big]\, \delta x(t_f) + p^T(t_0)\, \delta x(t_0). \qquad (7) $$
Since no variation of the states at $t = t_0$ is allowed, the term $p^T(t_0)\,\delta x(t_0)$ is zero. If the adjoint variables are defined such that
$$ \dot{p}^T = -H_x \quad \text{and} \quad p^T(t_f) = S_x(t_f, x(t_f)), \qquad (8) $$
the complex relations between $\delta x$ and $\delta u$ need not be computed, and the variation of $J$ according to Equation (7) is reduced to
$$ \delta J = \int_{t_0}^{t_f} H_u\, \delta u\, \mathrm{d}t. \qquad (9) $$
Equation (8) is a linear, time-variant system of differential equations which has to be solved backwards in time, starting at $t = t_f$. Hence, the largest possible increase of $\delta J$ is obtained if $\delta u(t)$ is chosen in the direction of $H_u^T$. For that reason, $H_u^T$ may be considered as the gradient of the cost functional $J(u)$.
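As an illustration only, and not the authors' implementation, the adjoint-based gradient evaluation described above can be sketched in a few lines of Python. All callables ($f$, $h_x$, $h_u$, $f_x$, $f_u$, $S_x$) are hypothetical user-supplied functions for the dynamics, the running cost and their partial derivatives, and a simple explicit Euler scheme is assumed for both the forward and the backward sweep.

```python
import numpy as np

def adjoint_gradient(u, x0, t, f, h_x, h_u, f_x, f_u, S_x):
    """Evaluate the gradient H_u^T of the cost functional for a discretized
    control u via a forward state sweep and a backward adjoint sweep."""
    N, m = u.shape                 # number of time nodes, control dimension
    n = x0.size
    dt = np.diff(t)

    # Forward sweep: explicit Euler integration of x_dot = f(x, u, t).
    x = np.zeros((N, n))
    x[0] = x0
    for k in range(N - 1):
        x[k + 1] = x[k] + dt[k] * f(x[k], u[k], t[k])

    # Backward sweep: p_dot^T = -H_x with p^T(t_f) = S_x(t_f, x(t_f)), cf. Eq. (8).
    p = np.zeros((N, n))
    p[-1] = S_x(t[-1], x[-1])
    for k in range(N - 1, 0, -1):
        H_x = h_x(x[k], u[k], t[k]) + p[k] @ f_x(x[k], u[k], t[k])
        p[k - 1] = p[k] + dt[k - 1] * H_x    # Euler step backwards in time

    # Gradient at every time node: g_k = H_u^T = h_u^T + f_u^T p, cf. Eq. (9).
    g = np.zeros((N, m))
    for k in range(N):
        g[k] = h_u(x[k], u[k], t[k]) + p[k] @ f_u(x[k], u[k], t[k])
    return g
```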
4. Numerical determination of the optimal control
Based on the adjoint gradient computation outlined in the previous section, we may now search for a control $u$ which minimizes the objective functional $J$. First, the method of steepest descent is described, where we always walk a certain distance along the negative gradient until we end up in a local minimum of $J$. Due to the costly line search step in every iteration and the slow convergence, the gradient method is then extended to a quasi-Newton method. To this end, we have to solve the problem of finding $u$ such that the gradient becomes zero.
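Purely as an illustration of the quasi-Newton extension mentioned above, the discretized problem could also be handed to an off-the-shelf BFGS routine that reuses the adjoint gradient; the wrappers cost_fun and grad_fun as well as the initial guess u0 are hypothetical names, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def quasi_newton_control(u0, cost_fun, grad_fun, gtol=1e-6):
    """Minimize the discretized cost J_hat with BFGS, reusing the
    adjoint-based gradient H_u^T (flattened to a plain vector)."""
    result = minimize(
        lambda v: cost_fun(v.reshape(u0.shape)),
        u0.ravel(),
        jac=lambda v: np.asarray(grad_fun(v.reshape(u0.shape))).ravel(),
        method="BFGS",
        options={"gtol": gtol},
    )
    return result.x.reshape(u0.shape)
```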
4.1. The Method of Steepest Descent
The method of steepest descent tries to find a minimum of a function or, subsequently, of a functional by always walking along the direction of its negative gradient. This concept was first applied to optimal control problems by H. J. Kelley [8] and A. E. Bryson [9].
The gradient has already been derived from the adjoint system in Section 3. Now we use $H_u^T$ and simply walk a short distance along the negative gradient of $J$. For numerical reasons, the continuous functions are discretized. Hence, the cost functional reads
$$ J(u) \approx \hat{J}(u_1, u_2, \ldots, u_N) \qquad (10) $$
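A minimal sketch of the resulting descent iteration on the discretized control, assuming hypothetical evaluate_cost and evaluate_gradient callables (for example built on an adjoint sweep as in Section 3) and a simple backtracking line search, could read:

```python
import numpy as np

def steepest_descent(u, evaluate_cost, evaluate_gradient,
                     alpha0=1.0, rho=0.5, max_iter=100, tol=1e-6):
    """Gradient method: step along the negative gradient of J_hat and
    shrink the step length until the discretized cost decreases."""
    J = evaluate_cost(u)
    for _ in range(max_iter):
        g = evaluate_gradient(u)           # g = H_u^T at the time nodes
        if np.linalg.norm(g) < tol:        # gradient (almost) zero: stop
            break
        alpha = alpha0
        while evaluate_cost(u - alpha * g) >= J and alpha > 1e-12:
            alpha *= rho                   # backtracking line search
        u = u - alpha * g
        J = evaluate_cost(u)
    return u
```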