Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics
Page 128

Fusion of Point Clouds Derived from Aerial Images

Andreas Schönfelder¹,², Roland Perko¹, Karlheinz Gutjahr¹, and Mathias Schardt²

¹ Joanneum Research Forschungsgesellschaft mbH, Steyrergasse 17, 8010 Graz, Austria, {firstname.lastname}@joanneum.at
² Graz University of Technology, Steyrergasse 30, 8010 Graz, Austria, {firstname.lastname}@tugraz.at

Abstract—State-of-the-art dense image matching in combination with advances in camera technology enables the reconstruction of scenes at a novel high spatial resolution and offers new mapping potential. This work presents a strategy for fusing highly redundant disparity maps by applying a local filtering method to a set of classified and oriented 3D point clouds. The information obtained from stereo matching is enhanced by computing a set of normal maps and by classifying the disparity maps into quality classes based on total variation. With this information given, a filtering method is applied that fuses the oriented point clouds along the surface normals of the 3D geometry. The proposed fusion strategy aims at reducing point cloud artifacts while generating a non-redundant surface representation that prioritizes high-quality disparities. The potential of the fusion method is evaluated on airborne imagery (oblique and nadir) using reference data from terrestrial laser scanners.

I. INTRODUCTION

While the processing of aerial and satellite imagery for the generation of 2.5D Digital Elevation Models (DEMs) with Multi-View Stereo (MVS) systems is a standard procedure in the fields of photogrammetry and remote sensing, the reconstruction of complex 3D scenes poses several new challenges. This work therefore focuses on a 3D fusion of point clouds, in contrast to classical mapping approaches that only produce and fuse 2.5D DEMs or elevation maps (cf. [14]). In order to process large-frame airborne and satellite imagery, the MVS system must be capable of handling data of arbitrary size in adequate runtime at the highest possible geometric accuracy.

The main contribution of this work is an easy-to-implement, scalable 3D point cloud fusion strategy that builds on classic multi-view stereo pipelines. By restricting or weighting disparities based on their quality, it is possible to generate surface representations of large-scale datasets in adequate runtime while simultaneously reducing the redundancy of the point cloud and increasing the geometric accuracy.

II. STATE OF THE ART

Typically, the processing of multiple stereo images yields one depth map or disparity map per stereo pair. To generate one consistent, non-redundant representation of the mapped scene, the depth maps have to be fused. Some MVS systems tackle this problem by linking surface points directly during image matching. In contrast, MVS systems like PMVS [4] use multi-photo consistency measures to optimize the position and normals of surface patches and iteratively grow the surface starting from a set of feature points. In many MVS systems, depth maps are generated via Semi-Global Matching (SGM) [6] and spatial point intersection, yielding one depth map per stereo pair. SGM is one of the most common stereo matching algorithms used in mapping applications, offering robust and dense reconstruction while preserving disparity discontinuities. Depth map fusion, or integration, is one of the main challenges in MVS, and different approaches have been developed over the last decades.
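To make this step concrete, the following minimal sketch computes one disparity map for a single rectified stereo pair with OpenCV's StereoSGBM, a semi-global matching variant in the spirit of SGM [6]. The file names and all parameter values are illustrative assumptions, not settings from the paper.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; P1/P2 are the SGM smoothness penalties
# for small and large disparity changes, respectively.
sgm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range, must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# OpenCV returns fixed-point disparities scaled by a factor of 16.
disparity = sgm.compute(left, right).astype(np.float32) / 16.0
```

Each such disparity map, together with the camera orientations, yields one redundant point cloud per stereo pair; fusing these clouds is the subject of the rest of this section.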
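Before fusion, the stereo output is complemented with per-pixel normals (the normal maps mentioned in the abstract). A common generic recipe, assumed here and not necessarily the authors' exact procedure, estimates normals on an organized point cloud as the normalized cross product of local tangent vectors:

```python
import numpy as np

def normal_map(points):
    """Per-pixel surface normals for an organized point cloud.

    points: (H, W, 3) array of 3D coordinates, e.g. obtained by
    reprojecting a disparity map. A generic sketch: normals are the
    normalized cross product of the local x- and y-tangent vectors.
    """
    tx = np.gradient(points, axis=1)  # tangent along image rows
    ty = np.gradient(points, axis=0)  # tangent along image columns
    n = np.cross(tx, ty)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.clip(norm, 1e-12, None)
```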
The authors of [17] propose an excellent benchmark dataset for the evaluation of MVS surface reconstruction methods. As mentioned in [12], the Middlebury MVS benchmark demonstrates that global methods tend to produce the best results regarding completeness and accuracy, while local methods like [3] offer good scalability at smaller computational cost. Moreover, MVS methods can be categorized by their scene representation, which ranges from voxels, level sets, and polygon meshes to depth maps [17]. Authors like [5] and [15] focus on the fusion of depth maps to generate oriented 3D point clouds. Surface reconstruction, in the sense of fitting a surface to the reconstructed and fused points, is defined as a post-processing step that can be solved with algorithms like the generic Poisson surface reconstruction method proposed by Kazhdan et al. [8].

Regarding the processing of aerial imagery, scalability is an important factor. As mentioned in [12], a number of scalable fusion methods have been presented in recent years, e.g. [3], [11], [18], yet they are still not able to process billions of 3D points in a single day or less [18]. Kuhn et al. [9] propose a fast fusion method via occupancy grids for semantic classification; it complements state-of-the-art depth map fusion as it is much faster, but it is only suitable for applications that have no need for dense point clouds. All of the mentioned scalable fusion methods have in common that octrees are used as the underlying data structure. Kuhn et al. [10] introduce an algorithm for the division of very large point clouds and discuss different data structures and their suitability for decomposing the reconstruction space. In addition, Kuhn et al. [12] show that the 3D reconstruction from fused disparity maps can be improved by modeling the uncertainties of the disparity maps. These uncertainties are modeled by a feature based on Total Variation (TV) that allows a pixel-wise classification of disparities into different error classes. Total variation in the context of MVS was first introduced by Zach et al. [19], who propose a novel range integration method using a global energy functional that contains a TV regularization force and an L1 data fidelity term for increased robustness to outliers.
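For context, that functional combines exactly these two terms. In simplified form (reproduced from the literature, with u the surface to be estimated, f_i the i-th input distance field derived from a range image, and lambda the data fidelity weight):

$$E(u) = \int_{\Omega} |\nabla u| \,\mathrm{d}\mathbf{x} + \lambda \sum_{i} \int_{\Omega} |u - f_i| \,\mathrm{d}\mathbf{x}$$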
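A pixel-wise TV feature in the spirit of [12] can be sketched in a few lines: accumulate the absolute disparity differences over a local window and bin the result into discrete quality classes. Window size and class thresholds below are illustrative placeholders, not the values used in [12].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tv_quality_classes(disparity, window=7, thresholds=(0.5, 2.0, 8.0)):
    # Discrete total variation: absolute forward differences in x and y
    # (edges padded by repeating the last row/column).
    dx = np.abs(np.diff(disparity, axis=1, append=disparity[:, -1:]))
    dy = np.abs(np.diff(disparity, axis=0, append=disparity[-1:, :]))

    # Mean TV over a local window around each pixel.
    local_tv = uniform_filter(dx + dy, size=window)

    # Class 0 = smooth/reliable ... class 3 = strongly varying/error-prone.
    return np.digitize(local_tv, thresholds)
```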
Title: Proceedings of the OAGM&ARW Joint Workshop
Subtitle: Vision, Automation and Robotics
Authors: Peter M. Roth, Markus Vincze, Wilfried Kubinger, Andreas Müller, Bernhard Blaschitz, Svorad Stolc
Publisher: Verlag der Technischen Universität Graz
Place: Vienna
Date: 2017
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-524-9
Dimensions: 21.0 x 29.7 cm
Pages: 188
Keywords: Conference proceedings
Categories: International, Conference proceedings

Table of Contents

  1. Preface v
  2. Workshop Organization vi
  3. Program Committee OAGM vii
  4. Program Committee ARW viii
  5. Awards 2016 ix
  6. Index of Authors x
  7. Keynote Talks
  8. Austrian Robotics Workshop 4
  9. OAGM Workshop 86