Proceedings - OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Page 219
The Lagrange multipliers p are denoted as adjoint variables and are arbitrary at this point. Integration by parts of the term \int p^T \dot{x} \, \mathrm{d}t leads to

J = \int_{t_0}^{t_f} \left( H + \dot{p}^T x \right) \mathrm{d}t + S(t_f, x(t_f)) - \left. p^T x \right|_{t_0}^{t_f}, \quad (6)

where the Hamiltonian H(x, u, p, t) = h(x, u, t) + p^T f(x, u, t) is introduced. In order to find a minimum of the cost functional J with respect to u, we consider the variation of J due to a small change \delta u, which is given by

\delta J = \int_{t_0}^{t_f} \left[ \left( H_x + \dot{p}^T \right) \delta x + H_u \, \delta u \right] \mathrm{d}t + \left[ S_x(t_f, x(t_f)) - p^T(t_f) \right] \delta x(t_f) + p^T(t_0) \, \delta x(t_0). \quad (7)

Since no variation of the states at t = t_0 is allowed, the term p^T(t_0) \, \delta x(t_0) is zero. If the adjoint variables are defined such that

\dot{p}^T = -H_x \quad \text{and} \quad p^T(t_f) = S_x(t_f, x(t_f)), \quad (8)

the complicated relations between \delta x and \delta u need not be computed, and the variation of J according to Equation (7) reduces to

\delta J = \int_{t_0}^{t_f} H_u \, \delta u \, \mathrm{d}t. \quad (9)

Equation (8) is a linear, time-variant system of differential equations which has to be solved backwards in time, starting at t = t_f. Hence, the largest possible increase of \delta J is obtained if \delta u(t) is chosen in the direction of H_u^T. For that reason, H_u^T may be considered as the gradient of the cost functional J(u).

4. Numerical Determination of the Optimal Control

Based on the adjoint gradient computation outlined in the previous section, we may now search for a control u which minimizes the objective functional J. First, the method of steepest descent is described, in which we repeatedly walk a certain distance along the negative gradient until we end up in a local minimum of J. Due to the costly line-search step in every iteration and the slow convergence, the gradient method is then extended to a quasi-Newton method; there, we have to solve the problem of finding u such that the gradient becomes zero.

4.1. The Method of Steepest Descent

The method of steepest descent tries to find a minimum of a function or, subsequently, of a functional by always walking along the direction of its negative gradient. This concept was first applied to optimal control problems by H. J. Kelley [8] and A. E. Bryson [9].
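The forward/backward structure of the adjoint gradient computation can be sketched in code. The following is a minimal illustration only, assuming a simple scalar example problem not taken from the paper: dynamics f(x, u) = -x + u, running cost h(x, u) = x^2 + u^2, no terminal cost S, and explicit Euler integration; the function names are hypothetical. For this example, H = x^2 + u^2 + p(-x + u), so H_x = 2x - p and H_u = 2u + p.

```python
import numpy as np

def adjoint_gradient(u, x0=1.0, t0=0.0, tf=1.0):
    """Return (H_u evaluated on the grid, state trajectory) for the
    illustrative problem x' = -x + u, h = x^2 + u^2, S = 0."""
    N = len(u)
    dt = (tf - t0) / N
    # Forward pass: integrate the state equation x' = f(x, u).
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    # Backward pass: integrate p' = -H_x = -(2x - p), starting from the
    # terminal condition p(tf) = S_x = 0, backwards in time (Equation (8)).
    p = np.empty(N + 1)
    p[-1] = 0.0
    for k in range(N, 0, -1):
        p[k - 1] = p[k] + dt * (2.0 * x[k] - p[k])
    # Gradient direction H_u^T = 2u + p on the control grid (Equation (9)).
    return 2.0 * u + p[:-1], x

def cost(u, x0=1.0, t0=0.0, tf=1.0):
    """Discretized cost functional for the same illustrative problem."""
    N = len(u)
    dt = (tf - t0) / N
    x, J = x0, 0.0
    for k in range(N):
        J += dt * (x**2 + u[k]**2)
        x += dt * (-x + u[k])
    return J

# Steepest descent with a fixed (illustrative) step size instead of a
# full line search: repeatedly step along the negative gradient.
u = np.zeros(50)
for _ in range(100):
    g, _ = adjoint_gradient(u)
    u = u - 0.1 * g
```

In practice the fixed step 0.1 would be replaced by the line search discussed in the text; the sketch only shows how one forward and one backward integration yield the full gradient at the cost of two ODE solves per iteration.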
The gradient is thus already available from the adjoint system derived in Section 3. We now use H_u^T and simply walk a short distance along the negative gradient of J. For numerical reasons, the continuous functions are discretized; hence, the cost functional reads

J(u) \approx \hat{J}(u_1, u_2, \ldots, u_N). \quad (10)
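Once the cost functional is discretized as in Equation (10), the gradient of \hat{J} with respect to the control samples can be sanity-checked component-wise by central finite differences. The sketch below uses the same illustrative problem as above (x' = -x + u, running cost x^2 + u^2, x(0) = 1, explicit Euler on [0, 1]); the problem data and function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def J_hat(u):
    """Discretized cost J_hat(u_1, ..., u_N) of Equation (10) for the
    illustrative problem, via explicit Euler on [0, 1]."""
    N = len(u)
    dt = 1.0 / N
    x, J = 1.0, 0.0
    for k in range(N):
        J += dt * (x**2 + u[k]**2)
        x += dt * (-x + u[k])
    return J

def fd_gradient(u, eps=1e-6):
    """Central finite-difference approximation of grad J_hat: one pair of
    cost evaluations per control sample (expensive, but a useful check
    against the adjoint gradient)."""
    g = np.zeros_like(u)
    for k in range(len(u)):
        up, um = u.copy(), u.copy()
        up[k] += eps
        um[k] -= eps
        g[k] = (J_hat(up) - J_hat(um)) / (2.0 * eps)
    return g
```

The finite-difference check needs 2N cost evaluations (and hence 2N forward simulations) for a single gradient, whereas the adjoint method of Section 3 delivers the whole gradient from one forward and one backward integration; this is the main motivation for the adjoint approach.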
Title
Proceedings
Subtitle
OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Authors
Peter M. Roth
Kurt Niel
Publisher
Verlag der Technischen Universität Graz
Place
Wels
Date
2017
Language
English
License
CC BY 4.0
ISBN
978-3-85125-527-0
Dimensions
21.0 x 29.7 cm
Pages
248
Keywords
Conference proceedings
Categories
International
Conference proceedings

Table of Contents

  1. Learning / Recognition 24
  2. Signal & Image Processing / Filters 43
  3. Geometry / Sensor Fusion 45
  4. Tracking / Detection 85
  5. Vision for Robotics I 95
  6. Vision for Robotics II 127
  7. Poster OAGM & ARW 167
  8. Task Planning 191
  9. Robotic Arm 207