facade patches where reference data is available. Analysing the DSM derived from the classified point cloud and the reference data enables us to compute the weights in the form of a weighting function. The weighting function is derived by calculating the standard deviation of the flatness error and fitting an exponential function in a least-squares manner. The flatness error is defined as the deviation of the point cloud from a best-fitting plane and is also an indicator for the noise of the 3D geometry [1].
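For illustration, a minimal C++ sketch of this step is given below, assuming Eigen for linear algebra. The flatness error is computed as the standard deviation of point-to-plane distances after a PCA plane fit, and an exponential model sigma(c) = a * exp(b * c) over the TV classes c is fitted by linearization; the model form and all names are our assumptions, as the paper only states that an exponential is fitted in a least-squares manner.

#include <Eigen/Dense>
#include <cmath>
#include <utility>
#include <vector>

// Standard deviation of point deviations from a best-fitting plane (PCA fit):
// the plane normal is the eigenvector of the covariance matrix associated
// with the smallest eigenvalue.
double flatnessError(const std::vector<Eigen::Vector3d>& pts)
{
  Eigen::Vector3d mean = Eigen::Vector3d::Zero();
  for (const auto& p : pts) mean += p;
  mean /= static_cast<double>(pts.size());

  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
  for (const auto& p : pts) cov += (p - mean) * (p - mean).transpose();
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> eig(cov);
  const Eigen::Vector3d normal = eig.eigenvectors().col(0);  // smallest eigenvalue first

  double ss = 0.0;
  for (const auto& p : pts) {
    const double d = normal.dot(p - mean);  // signed point-to-plane distance
    ss += d * d;
  }
  return std::sqrt(ss / static_cast<double>(pts.size()));
}

// Fit sigma(c) = a * exp(b * c) to the per-class flatness errors by
// linearizing (log sigma = log a + b * c) and solving ordinary least squares.
std::pair<double, double> fitExponential(const std::vector<double>& tvClass,
                                         const std::vector<double>& sigma)
{
  const int n = static_cast<int>(tvClass.size());
  Eigen::MatrixXd A(n, 2);
  Eigen::VectorXd y(n);
  for (int i = 0; i < n; ++i) {
    A(i, 0) = 1.0;
    A(i, 1) = tvClass[i];
    y(i) = std::log(sigma[i]);
  }
  const Eigen::Vector2d x = A.colPivHouseholderQr().solve(y);
  return {std::exp(x(0)), x(1)};  // (a, b)
}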
Later on, we evaluate the fused point cloud in a similar way to gain insight into the potential and quality of the entire fusion method. Specific information regarding the evaluation routine, selected test areas and datasets is given in Section IV.
C. Weighted-Median Based Fusion
The concept of median-based fusion originates from
fusion algorithms for the generation of 2.5D DSMs.
Rothermel et al. [15] adapted the idea by fusing point
clouds in 3D space along a defined filtering direction.
While for close range datasets the line of sight is suitable
as filtering direction, point-wise normals are used for the
fusion of aerial datasets. We adapt this fusion strategy using
a weighted-median based approach.
In a first step, an initial pointset P is created by storing the input point cloud in an octree data structure. The pointset P is derived by subsampling the point cloud with the centroid of the points located in a leaf node. In our work the entire fusion process was realized with the aid of the Point Cloud Library (PCL ver. 1.8.0) [16], which also provides a custom-tailored octree implementation.
As a result of the disparity quality assessment every point possesses a weight representing the quality of the point. We add up the weights of all points located in the same leaf node. Thus, the weight of the initial point p ∈ P is an indicator for the density and quality of the reconstructed scene.
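A minimal sketch of this subsampling and weight accumulation with PCL's octree follows; using PointXYZI with the intensity channel holding the per-point weight, as well as the leaf resolution parameter, are our assumptions, as the paper does not specify these details.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/octree/octree_pointcloud_pointvector.h>
#include <vector>

// Subsample the input cloud by replacing the points of every octree leaf
// node with their centroid; the intensity channel carries the per-point
// weight from the disparity quality assessment, and the weights of all
// points in a leaf are accumulated into the new point.
pcl::PointCloud<pcl::PointXYZI>::Ptr
buildInitialPointset(const pcl::PointCloud<pcl::PointXYZI>::ConstPtr& input,
                     float leafResolution)
{
  pcl::octree::OctreePointCloudPointVector<pcl::PointXYZI> octree(leafResolution);
  octree.setInputCloud(input);
  octree.addPointsFromInputCloud();

  pcl::PointCloud<pcl::PointXYZI>::Ptr initial(new pcl::PointCloud<pcl::PointXYZI>);
  for (auto it = octree.leaf_begin(); it != octree.leaf_end(); ++it) {
    const std::vector<int>& indices = it.getLeafContainer().getPointIndicesVector();
    pcl::PointXYZI p;
    p.x = p.y = p.z = p.intensity = 0.0f;
    for (int i : indices) {
      p.x += (*input)[i].x;
      p.y += (*input)[i].y;
      p.z += (*input)[i].z;
      p.intensity += (*input)[i].intensity;  // accumulate the weights
    }
    const float n = static_cast<float>(indices.size());
    p.x /= n; p.y /= n; p.z /= n;            // centroid of the leaf node
    initial->push_back(p);
  }
  return initial;
}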
Subsequently, the point cloud is fused using nearest-neighbor queries optimized for cylindrical neighborhoods. For every point in the initial pointset P a set of candidate points Q, located in a cylinder whose central axis is given by the initial point and its normal, is derived. Points with surface normals diverging by more than 60° are discarded from further processing. After the candidate pointset Q is determined, the point p is filtered by projecting all candidate points onto the surface normal of the initial point p. Taking the weighted median of all deviations from the point p yields the new point coordinates. Especially for noisy data, further iterations can be necessary to generate a consistent surface representation. Between iterations, duplicate points are merged to avoid redundant computations. A detailed description of the original fusion routine, including the parameters and employed neighborhood queries, is given in [15].
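The core filtering step can be sketched as follows, with Eigen used for vector algebra; the Candidate struct and the cos 60° = 0.5 normal test are our reading of the routine described above, not the authors' implementation.

#include <Eigen/Dense>
#include <algorithm>
#include <utility>
#include <vector>

struct Candidate {
  Eigen::Vector3f position;
  Eigen::Vector3f normal;   // assumed unit length
  float weight;
};

// Weighted median: sort the signed deviations and return the value at which
// the cumulative weight first reaches half of the total weight.
float weightedMedian(std::vector<std::pair<float, float>> devWeight)
{
  std::sort(devWeight.begin(), devWeight.end());
  float total = 0.0f;
  for (const auto& dw : devWeight) total += dw.second;
  float acc = 0.0f;
  for (const auto& dw : devWeight) {
    acc += dw.second;
    if (acc >= 0.5f * total) return dw.first;
  }
  return devWeight.back().first;
}

// Filter point p: project the candidates of its cylindrical neighborhood Q
// onto the normal n of p and move p along n by the weighted median of the
// signed deviations. Candidates whose normals diverge by more than
// 60 degrees (dot product below cos 60 = 0.5) are discarded.
Eigen::Vector3f filterPoint(const Eigen::Vector3f& p, const Eigen::Vector3f& n,
                            const std::vector<Candidate>& Q)
{
  std::vector<std::pair<float, float>> devWeight;
  for (const auto& q : Q)
    if (n.dot(q.normal) >= 0.5f)
      devWeight.emplace_back(n.dot(q.position - p), q.weight);
  if (devWeight.empty()) return p;
  return p + weightedMedian(devWeight) * n;
}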
In the first iteration, Rothermel et al. [15] include all points of the input point clouds in the identification of the candidate pointset Q. To speed up further iterations, the filtering is then restricted solely to points p ∈ P of the initial pointset. In our case, we restrict the filtering of the point cloud to the initial pointset P from the beginning. We compensate for the loss of detail of the input point cloud by approximating the density of the captured 3D scene with the accumulated weight. The final surface representation is derived by discarding points with weights smaller than a defined threshold α. The influence of the threshold is analyzed in Section IV-A. In this way large and highly redundant 3D point clouds can be fused in moderate time (e.g. processing 2.5 billion points on a computer with 16 cores within a single day, resulting in a fused point cloud whose density fits the spatial resolution of the input imagery).
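The final thresholding reduces to a single pass over the fused cloud; as in the earlier sketch, storing the accumulated weight in the intensity channel is our assumption.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Discard fused points whose accumulated weight (stored in the intensity
// channel, as assumed above) stays below the threshold alpha.
pcl::PointCloud<pcl::PointXYZI>::Ptr
thresholdByWeight(const pcl::PointCloud<pcl::PointXYZI>::ConstPtr& fused, float alpha)
{
  pcl::PointCloud<pcl::PointXYZI>::Ptr surface(new pcl::PointCloud<pcl::PointXYZI>);
  for (const auto& pt : *fused)
    if (pt.intensity >= alpha)
      surface->push_back(pt);
  return surface;
}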
IV. RESULTS
In this section we discuss results obtained with the proposed fusion pipeline. The datasets used for the evaluation are provided by the ISPRS/EuroSDR project on “Benchmark on High Density Aerial Image Matching”² and consist of one nadir and one oblique dataset.
A. Oblique Aerial Imagery
The oblique imagery dataset was acquired over the city of Zürich with a Leica RCD30 Oblique Penta camera consisting of one nadir and four oblique 80-megapixel camera heads. While the nadir camera head points straight down towards the earth, the four oblique camera heads are tilted at an angle of 35 degrees, each pointing in a different cardinal direction. The entire dataset comprises 135 images, captured from 15 unique camera positions. While the nadir imagery leads to a Ground Sample Distance (GSD), i.e. the spatial resolution, of 6 cm, the GSD of the oblique views varies between 6 and 13 cm. Reference data captured with terrestrial laser scans provides accurate and reliable information for the evaluation of the datasets. The evaluation was carried out by computing DSMs of different facade patches distributed over the test area. More information on the image acquisition, benchmark and reference data can be found in [2].
Photogrammetric Processing and Pre-processing. In a first step, the image registration was carried out using the interior and exterior orientation parameters provided along with the image data. Subsequently, the images are matched in flight direction with an overlap of 70%, resulting in a total of 314 stereo pairs containing approximately 10.6 billion points. After the generation of the disparity maps, TV classes and normal maps are computed. As mentioned earlier, the weighting function assigns a weight to every TV class, which is then used in the fusion process.
The derived weighting function is depicted in Fig. 3 and confirms a correlation between the TV classes and the geometric precision (i.e. the level of noise).
²http://www.ifp.uni-stuttgart.de/ISPRS-EuroSDR/ImageMatching/