Page 47 of Joint Austrian Computer Vision and Robotics Workshop 2020
[4] A. Boularias, J. A. Bagnell, and A. Stentz. Learning to manipulate unknown objects in clutter by reinforcement. In Proc. of AAAI Conf. on Artificial Intelligence, pages 1336–1342, 2015.
[5] S. Calinon, P. Evrard, E. Gribovskaya, A. Billard, and A. Kheddar. Learning collaborative manipulation tasks by demonstration using a haptic interface. In Proc. of Int. Conf. on Advanced Robotics, pages 1–6, 2009.
[6] Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. One-shot imitation learning. In Advances in Neural Information Processing Systems 30, pages 1087–1098, 2017.
[7] F. Gouidis, P. Panteleris, I. Oikonomidis, and A. Argyros. Accurate hand keypoint localization on mobile devices. In Proc. of IEEE Int. Conf. on Machine Vision Applications, 2019.
[8] M. Hirschmanner, C. Tsiourti, T. Patten, and M. Vincze. Virtual reality teleoperation of a humanoid robot using markerless human upper body pose imitation. In Proc. of IEEE-RAS Int. Conf. on Humanoid Robots, 2019.
[9] D. Huang, S. Nair, D. Xu, Y. Zhu, A. Garg, L. Fei-Fei, S. Savarese, and J. C. Niebles. Neural task graphs: Generalizing to unseen tasks from a single video demonstration. In Proc. of IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pages 8557–8566, 2019.
[10] S. James, M. Bloesch, and A. J. Davison. Task-embedded control networks for few-shot imitation learning. In Proc. of Conf. on Robot Learning, pages 783–795, 2018.
[11] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine. Scalable deep reinforcement learning for vision-based robotic manipulation. In Proc. of Conf. on Robot Learning, pages 651–673, 2018.
[12] G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto. Robot learning from demonstration by constructing skill trees. The Int. Journal of Robotics Research, 31(3):360–375, 2012.
[13] V. Krüger, D. L. Herzog, S. Baby, A. Ude, and D. Kragic. Learning actions from observations. IEEE Robotics & Automation Magazine, 17(2):30–43, 2010.
[14] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. J. Mach. Learn. Res., 17(1):1334–1373, Jan. 2016.
[15] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The Int. Journal of Robotics Research, 37(4-5):421–436, 2018.
[16] Y. Liu, A. Gupta, P. Abbeel, and S. Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 1118–1125, 2018.
[17] P. Panteleris, I. Oikonomidis, and A. Argyros. Using a single RGB frame for real time 3D hand pose estimation in the wild. In Proc. of IEEE Winter Conf. on Applications of Computer Vision, pages 436–445, 2018.
[18] A. Pashevich, R. Strudel, I. Kalevatykh, I. Laptev, and C. Schmid. Learning to augment synthetic images for Sim2Real policy transfer. In Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 2651–2657, 2019.
[19] P. Pastor, L. Righetti, M. Kalakrishnan, and S. Schaal. Online movement adaptation based on previous sensor experiences. In Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 365–371, 2011.
[20] R. Rahmatizadeh, P. Abolghasemi, L. Bölöni, and S. Levine. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 3758–3765, 2018.
[21] S. Schaal. Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3(6):233–242, 1999.
[22] M. Schneider and W. Ertel. Robot learning by demonstration with local Gaussian process regression. In Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 255–260, 2010.
[23] G. Schreiber, A. Stemmer, and R. Bischoff. The fast research interface for the KUKA lightweight robot. In IEEE ICRA 2010 Workshop on Innovative Robot Control Architectures for Demanding (Research) Applications – How to Modify and Enhance Commercial Controllers, 2010.
[24] P. Sermanet, C. Lynch, Y. Chebotar, J. Hsu, E. Jang, S. Schaal, and S. Levine. Time-contrastive networks: Self-supervised learning from video. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 1134–1141, 2018.
[25] D. Victor. Handtrack: A library for prototyping real-time hand tracking interfaces using convolutional neural networks. GitHub repository, 2017.
[26] T. Yu, C. Finn, S. Dasari, A. Xie, T. Zhang, P. Abbeel, and S. Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. In Proc. of Robotics: Science and Systems, 2018.
[27] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 5628–5635, 2018.
Joint Austrian Computer Vision and Robotics Workshop 2020
- Title: Joint Austrian Computer Vision and Robotics Workshop 2020
- Publisher: Graz University of Technology
- Place: Graz
- Date: 2020
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-85125-752-6
- Dimensions: 21.0 x 29.7 cm
- Pages: 188
- Categories: Computer Science; Engineering