Superresolution Alignment with Innocence Assumption: Towards a Fair
Quality Measurement for Blind Deconvolution
Martin Welk1

*This work was not supported by any organization.
1Martin Welk is with the Department of Biomedical Informatics and Mechatronics, Private University for Health Sciences, Medical Informatics and Technology (UMIT), 6060 Hall/Tyrol, Austria, martin.welk@umit.at
Abstract—Quantitative measurements of restoration quality
in blind deconvolution are complicated by the necessity to
compensate for opposite shifts of reconstructed image and
point-spread function. Alignment procedures mentioned for this
purpose in the literature are sometimes not specified precisely
enough; alignment-free approaches sometimes do not take into
account the full variability of possible shifts. By experiments
on a simple test case, we investigate the errors induced by
interpolation-based alignment procedures. We propose a new
method for MSE/PSNR measurement of image pairs involving
non-integer displacements that is based on a superresolution
approach. We introduce an innocence assumption in order
to keep deviations that can be explained by shifted sampling
grids out of the error measurement. In our test case, the new
measurement procedure reduces the variations in MSE/PSNR
measurements substantially, creating the hope that it can be
used for valid comparisons of blind deconvolution methods.
I. INTRODUCTION
The removal of blur in images by blind image deconvo-
lution has been studied for many years [2], [3], [4], [5], [6],
[10], [16], [21], and has received increasing interest in recent
years [1], [7], [8], [9], [11], [12], [14]. A frequently used
simplifying assumption is that the blur is spatially invariant,
i.e. the redistribution of intensity is described by the same
point-spread function (PSF) h at each image location. Blur
is then described by a convolution between the unobserved
sharp image g and the PSF h; incorporating additive noise
n, the observed image f is given by the blur model
$$ f = g \ast h + n \,. \qquad (1) $$
Whereas for non-blind deconvolution one assumes that f and
h are known, and aims at an estimate u for the sharp image
g, the knowledge of h is often not available in practice, thus
necessitating blind deconvolution where the estimate u of the
sharp image is to be obtained along with the PSF h, using
only f as input image.
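To make the blur model (1) concrete, the following minimal sketch simulates the observation process. It assumes NumPy/SciPy, a grayscale image array g and a normalized PSF array h; the function name and parameters are illustrative only and not taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_observation(g, h, noise_std=0.0, rng=None):
    """Simulate f = g * h + n for a spatially invariant PSF h, cf. model (1)."""
    rng = np.random.default_rng() if rng is None else rng
    f = fftconvolve(g, h, mode="same")       # convolution with the PSF, same size as g
    if noise_std > 0.0:
        f = f + rng.normal(0.0, noise_std, size=g.shape)   # additive Gaussian noise n
    return f
```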
A variety of approaches to solve this task have been de-
veloped, creating the need for quality comparisons. Besides
visual assessment, one is interested in quantitative measure-
ments of reconstruction quality versus a known ground truth.
Frequently used standard measures for image reconstruc-
tion methods include the mean-square error (MSE) as well
as the signal-to-noise ratio (SNR) and peak signal-to-noise
ratio (PSNR), both of which are closely related to the MSE;
furthermore, the average absolute error (AAE) is sometimes
advocated. Another measure that puts some more emphasis
on important structural details of images, such as contrast
edges, is the structural similarity index (SSIM), see [17]. Let
us briefly recall the first three measures.
For a reference (ground-truth) image g and a degraded (or
reconstructed) image u, both of size n × m pixels, their MSE
is given by

$$ \mathrm{MSE}(u,g) = \frac{1}{nm} \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} (u_{i,j} - g_{i,j})^2 \,. \qquad (2) $$
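As a minimal illustration, Eq. (2) translates directly into a few lines of NumPy; the function name and the assumption that u and g are equally sized arrays are ours, not from the paper.

```python
import numpy as np

def mse(u, g):
    """Mean-square error (2) of a reconstruction u against the ground truth g."""
    u = np.asarray(u, dtype=float)
    g = np.asarray(g, dtype=float)
    return np.mean((u - g) ** 2)   # average of squared differences over all n*m pixels
```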
Provided that u and g have equal mean intensity µ (which we
will assume in the following), this is the variance var(u − g)
of u − g. Using the variance of g given by

$$ \mathrm{var}(g) = \frac{1}{nm} \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} (g_{i,j} - \mu)^2 \,, \qquad (3) $$

and the range $R(g) := \max_{i,j} g_{i,j} - \min_{i,j} g_{i,j}$ (255 for
saturated 8-bit images), one can compute the SNR

$$ \mathrm{SNR}(u,g) = 10 \log_{10} \frac{\mathrm{var}(g)}{\mathrm{var}(u-g)} \;\mathrm{dB} \qquad (4) $$

and the PSNR

$$ \mathrm{PSNR}(u,g) = 10 \log_{10} \frac{R(g)^2}{\mathrm{var}(u-g)} \;\mathrm{dB} \,. \qquad (5) $$
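Under the equal-mean assumption stated above, var(u − g) equals the MSE, so (4) and (5) can be sketched as follows; this is an illustrative transcription only, with the logarithm taken to base 10 as usual for dB values.

```python
import numpy as np

def snr(u, g):
    """SNR (4) in dB; assumes u and g have equal mean intensity."""
    return 10.0 * np.log10(np.var(g) / np.var(u - g))

def psnr(u, g):
    """PSNR (5) in dB, with R(g) the intensity range of the ground truth."""
    R = float(g.max() - g.min())   # 255 for saturated 8-bit images
    return 10.0 * np.log10(R**2 / np.var(u - g))
```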
For non-blind deconvolution, both MSE/(P)SNR and
SSIM are frequently used to assess reconstruction quality.
Although these quantitative measures are not always in good
agreement with visual assessments by humans, they are
generally accepted as simple and objective measures. For a
recent study on measures that better approximate the human
perception of restoration quality, see [13].
In blind deconvolution, however, their application faces
a difficulty: If the reconstructed image u is translated by an
arbitrary, often non-integer, displacement d, and the point-
spread function h is translated by −d, these translations
cancel in the convolution u∗h. Blind deconvolution results
that differ just by such opposite translations of u and h must
therefore be considered equally valid reconstructions. An ex-
ample of such shifts that indeed occur in blind deconvolution
results is shown in Fig. 1. This precludes a straightforward
(P)SNR or SSIM comparison of blind deconvolution results
with ground truth. Obviously, some kind of alignment – rigid
registration restricted to translations – has to be applied.
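To make the ambiguity tangible, the following sketch performs the simplest conceivable alignment: it evaluates the PSNR (5) for all integer translations of u within a small search window and keeps the best value. This is an assumed, illustrative procedure, not the method proposed in this paper; non-integer displacements, and the cyclic wrap-around of np.roll at the image border, are exactly the issues that interpolation-based alignment and the proposed superresolution approach address.

```python
import numpy as np

def aligned_psnr_integer(u, g, max_shift=5):
    """Best PSNR (5) over all integer shifts of u within +/- max_shift pixels.

    Illustrative only: non-integer displacements are not covered, and
    np.roll wraps pixels around the image border instead of cropping."""
    R = float(g.max() - g.min())                     # intensity range of the ground truth
    best = -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(u, shift=(dy, dx), axis=(0, 1))   # cyclic integer shift
            best = max(best, 10.0 * np.log10(R**2 / np.var(shifted - g)))
    return best
```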
Nevertheless, blind deconvolution results are compared
by PSNR and other quantitative measures in a number of
works, e.g. [6], [7], [8], [9], [10], [14]. In many of these,