Fig. 5. Processing pipeline of the point cloud fusion: (1) raw data from dense image matching (50.64 M points), (2) fused point cloud (1.73 M points), (3) discarding weights smaller than α = 30 (0.47 M points), (4) mesh generation, and (right side) merged surface tiles.
Regarding the oblique dataset, the best results can be achieved by neglecting points with TV class 1. By doing so, the execution time is sped up by a factor of 2.2. Compared to the raw point cloud, the fusion procedure reduces noise while improving the accuracy of the point cloud (see Fig. 6). A visual assessment shows that the fused point cloud including all TV classes and applying weights produces the best results regarding completeness and outliers (see Fig. 7). As expected, roof structures and other nadir-oriented faces are reconstructed with the highest precision. Table II shows that in all cases the precision of the point cloud can be improved while decreasing redundant information.

Fig. 6. Comparison of the main school facade before and after the fusion procedure (cf. Fig. 5): mean deviation between the DSM derived from terrestrial laser scanner data and the point cloud (top), and standard deviation of the point cloud DSM representing the level of noise (bottom).

Fig. 7. Taking all TV classes into account produces point clouds containing fewer outliers (left), in contrast to point clouds restricted to TV classes > 1 (right).
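The class-based weighting described above can be illustrated with a short sketch. The snippet below is a minimal, simplified rendering of the idea, not the authors' implementation: the mapping from TV quality classes to weights, the grouping of redundant candidates, and the function names are illustrative assumptions; it collapses a group of points along the anchor point's surface normal and discards fused points whose accumulated weight stays below the α = 30 threshold mentioned in Fig. 5, with an optional flag mirroring the choice of neglecting TV class 1.

```python
import numpy as np

# Hypothetical mapping from TV quality class to weight and the threshold
# from Fig. 5; both are illustrative, not values taken from the paper's code.
TV_WEIGHTS = {1: 1.0, 2: 4.0, 3: 16.0, 4: 64.0}
ALPHA = 30.0

def fuse_group(points, normals, tv_classes, keep_tv1=True):
    """Collapse one group of redundant points into a single fused point.

    points     : (N, 3) candidate points belonging to the same surface spot
    normals    : (N, 3) surface normals of those points
    tv_classes : (N,)   disparity-map quality classes
    Returns (fused_point, total_weight), or None if the accumulated weight
    stays below the ALPHA threshold.
    """
    if not keep_tv1:                       # optionally neglect TV class 1
        mask = tv_classes > 1
        points, normals, tv_classes = points[mask], normals[mask], tv_classes[mask]
    if len(points) == 0:
        return None

    w = np.array([TV_WEIGHTS[int(c)] for c in tv_classes])
    anchor = points[0]
    n = normals[0] / np.linalg.norm(normals[0])

    # project the candidates onto the anchor's surface normal and take the
    # weighted mean offset along that normal
    offsets = (points - anchor) @ n
    fused = anchor + np.average(offsets, weights=w) * n

    total_weight = w.sum()
    if total_weight < ALPHA:               # discard weakly supported points
        return None
    return fused, total_weight
```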
TABLE II
COMPARISON OF TEST AREAS BEFORE AND AFTER THE POINT CLOUD FUSION.

                       Density      RMSE          Fused Mean    Fused Std. Dev.
                       [pnts/m2]    PC-TLS [m]    PC-TLS [m]    of DSM [m]
Tower South (raw)        2345.9     0.378          0.051        0.538
Tower South (fused)        49.4     0.204          0.003        0.087
Tower North (raw)        1781.4     0.427         -0.222        0.447
Tower North (fused)        45.3     0.195         -0.052        0.071
Tower West (raw)         3570.8     0.350          0.237        0.499
Tower West (fused)         62.7     0.256          0.152        0.155
Roof (raw)              13864.2     0.150         -0.023        0.218
Roof (fused)              178.7     0.122          0.028        0.105
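The quantities in Table II can in principle be reproduced by rasterizing each point cloud into a DSM and comparing it against the TLS reference. The following is a minimal sketch under assumed interfaces (grid origin, cell size, and a simple per-cell mean/standard deviation); the function name and rasterization are hypothetical and not the evaluation code used for the paper.

```python
import numpy as np

def evaluate_against_tls(pc_xyz, tls_dsm, cell_size, origin):
    """Rasterize a point cloud into a DSM grid and compare it to a TLS DSM.

    pc_xyz    : (N, 3) raw or fused point cloud
    tls_dsm   : (H, W) reference DSM derived from terrestrial laser scanning
    cell_size : raster resolution in metres
    origin    : (x0, y0) of the lower-left grid corner
    Returns RMSE and mean deviation w.r.t. the TLS DSM, and the mean per-cell
    standard deviation of the point heights (the "level of noise").
    """
    h, w = tls_dsm.shape
    cols = ((pc_xyz[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((pc_xyz[:, 1] - origin[1]) / cell_size).astype(int)
    inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    pts, rr, cc = pc_xyz[inside], rows[inside], cols[inside]

    dsm = np.full((h, w), np.nan)
    noise = np.full((h, w), np.nan)
    for r, c in set(zip(rr.tolist(), cc.tolist())):
        z = pts[(rr == r) & (cc == c), 2]      # heights falling into cell (r, c)
        dsm[r, c], noise[r, c] = z.mean(), z.std()

    diff = dsm - tls_dsm
    valid = ~np.isnan(diff)
    rmse = float(np.sqrt(np.mean(diff[valid] ** 2)))
    mean_dev = float(np.mean(diff[valid]))
    noise_level = float(np.nanmean(noise))
    return rmse, mean_dev, noise_level
```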
B. Nadir Aerial Imagery
The nadir image dataset covers an area of approximately 1.5 × 1.7 km² in the city of Munich. The dataset was acquired with a DMC II 230 aerial camera (230 megapixels) at a spatial resolution of 10 cm and consists of 15 panchromatic
images. As depicted in Fig. 8, facade information can be
reconstructed by utilizing the proposed fusion routine. Due
to the wide angle of the aerial camera, enough information
is captured to produce 3D city models from nadir aerial
imagery.
V. CONCLUSION
A novel method for fusing 3D point clouds was presented.
The underlying point clouds originate from stereo matching
of aerial images and were enriched by the calculation of
surface normals and a classification of the disparity maps
into quality classes. The proposed filtering method then fused the point cloud along the direction of the surface normals and applied a weighting based on the classification. Evaluation against ground truth data showed that the quality of the fused point cloud increases while redundancy is reduced. Overall,