[Figure 3 column labels: input, SL, target, N2N (repeated)]
Figure 3. The first row depicts crops from the corrupted frame $x^i_j$ along with the corresponding manually edited target $\bar{y}^i_j$. The second and third row show the results obtained using the static model $N_{\theta_S}$, whereas the results of the dynamic model are depicted in the last three rows. The columns alternate between supervised learning (SL) and N2N results, and on the right we show which loss function was used during training.
in which the reader was presented three versions of the same scene side by side: (i) the original frames, the output of the models trained using (ii) SL and (iii) N2N (‖·‖, = 0.1). Table 2 presents the results obtained from 24 people who were each shown 8 video sequences. It shows that the model trained with N2N is best at removing the defects, at the cost of over-smoothing the images. Still, it was the overall preferred method, with 53.65% of all samples being deemed "Overall Best" by the participants.

5. Conclusion
In this work we explored the possibilities of using N2N learning for video restoration. We trained static and dynamic models by considering adjacent frames, using supervised learning and N2N and relying on robust motion estimation. Using this paradigm we demonstrated that video restoration can be learned by looking only at corrupted frames, at performance levels exceeding those of supervised learning. This opens
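To make the training paradigm concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a single N2N update for video restoration: the denoiser receives a corrupted frame and is supervised by an adjacent corrupted frame warped onto it with a precomputed flow field. The network architecture, the warp helper, the L1 loss and the names TinyDenoiser, warp and n2n_step are illustrative assumptions; the paper additionally trains dynamic multi-frame models and relies on robust motion estimation for the flow.

```python
# Minimal sketch of the N2N idea for video restoration (illustrative, not the
# paper's code): the model sees a corrupted frame and is trained against a
# neighbouring corrupted frame that has been motion-compensated onto it.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDenoiser(nn.Module):
    """Stand-in for the static restoration model (architecture assumed)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def warp(frame, flow):
    """Backward-warp `frame` with a dense flow field of shape (B, 2, H, W)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device),
        torch.arange(w, device=frame.device),
        indexing="ij",
    )
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid_x = (xs.unsqueeze(0) + flow[:, 0]) / (w - 1) * 2 - 1
    grid_y = (ys.unsqueeze(0) + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)


def n2n_step(model, optimizer, x_t, x_next, flow_t_to_next):
    """One N2N update: the adjacent corrupted frame acts as the target."""
    target = warp(x_next, flow_t_to_next)   # motion-compensated neighbour
    loss = F.l1_loss(model(x_t), target)    # robust loss (L1 assumed here)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    x_t, x_next = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    flow = torch.zeros(1, 2, 64, 64)        # in practice: from a motion estimator
    print(n2n_step(model, opt, x_t, x_next, flow))
```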