knowledge it has learnt in the past to apply it to a new situation it is confronted with.14 As a
consequence, reconstructing the behaviour of machine-learning algorithms can be extremely
challenging and is in most cases impossible.15 The term "black box" has therefore been coined in
this context, since the person affected by the machine-learning algorithm knows the input data
and the final result, but cannot understand how the algorithm has reached it.16 Hence, the
question arises as to who is responsible for the algorithm's behaviour: (1) the developer, (2) the
person who activated the algorithm, or (3) the person delivering the (training) data set.
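To make the black box problem concrete for readers with a technical background, the following minimal Python sketch (not taken from the article or the literature cited here; all data, parameters and labels are hypothetical) trains a scikit-learn random forest on fabricated past cases and applies it to a new case. The input and the output are fully observable, but the result emerges from hundreds of trees with thousands of decision nodes, so tracing the reasons behind a single decision is impractical without specialised tools.

# Minimal, hypothetical sketch of the "black box" problem:
# input and output are observable, the internal decision path is not.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: 1,000 past cases with 20 features each,
# labelled with a binary outcome (e.g. "grant" / "deny").
X_train = rng.normal(size=(1000, 20))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(int)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# A new case the model has never seen: it generalises from past data.
new_case = rng.normal(size=(1, 20))
print("input:", new_case)                   # visible to the affected person
print("output:", model.predict(new_case))   # visible to the affected person

# The "how" is opaque in practice: the output aggregates the votes of
# 500 trees, and the sheer number of decision nodes makes the reasons
# behind this single result hard to reconstruct.
print("trees:", len(model.estimators_))
print("total decision nodes:", sum(t.tree_.node_count for t in model.estimators_))

The sketch is only an analogy: a deterministic rule-based system can be read like a statute, whereas the decision logic of a trained model is distributed over its learnt parameters.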
From a legal perspective, the use of machine-learning algorithms causes specific difficulties where
such algorithms are used by the state to make decisions which interfere with the rights of individuals.17
Whenever the state exercises sovereign action, the rule of law obliges it to give reasons for its
decisions. An algorithm with an incomprehensible process of decision-making cannot adequately
satisfy this obligation. Furthermore, the principle of legality (which is interpreted rather strictly in
Austrian law) requires that executive authorities act only on the basis of a statutory authorisation.
Since there are only very few explicit provisions on the use of algorithms at both EU and national
level,18 the Austrian state largely operates in a grey area in this respect.
Despite the lack of a comprehensive legal basis, many algorithms, both deterministic and
machine-learning ones, are already in use today without the public being aware of it.19 This is true
for both the state and the private sector, though the use of algorithms is certainly more widespread in
14 Cf. Gruber and I. Eisenberger, Wenn Fahrzeuge selbst lernen: Verkehrstechnische und rechtliche
Herausforderungen durch Deep Learning?, in AUTONOMES FAHREN UND RECHT 51, 57 et seq. (I. Eisenberger, Lachmayer
and G. Eisenberger eds., 2017); Russell and Norvig, supra note 9; LENZEN, KÜNSTLICHE INTELLIGENZ: WAS SIE KANN & WAS
UNS ERWARTET 20 (2018).
15 Cf. Wahlster, Künstliche Intelligenz als Grundlage autonomer Systeme, 40 INFORMATIK-SPEKTRUM 409 (2017). For a
detailed analysis of the accountability of algorithms, cf. Kroll, Huey, Barocas, Felten, Reidenberg, Robinson and Yu,
Accountable Algorithms, 165 UNIVERSITY OF PENNSYLVANIA LAW REVIEW 633 (2017). See also Ernst, Die Gefährdung der
individuellen Selbstentfaltung durch den privaten Einsatz von Algorithmen, in DIGITALISIERUNG UND RECHT 65 (Klafki,
Würkert and Winter eds., 2017): Due to the increasing complexity of algorithms and the amount of data they are
trained with, their functioning is becoming less understandable for third parties, particularly for users without
technical know-how.
16 The very limited knowledge of European citizens about algorithms has been the subject of a recent survey by the
German Bertelsmann Stiftung. Cf. Grzymek and Puntschuh (eds.), Was Europa über Algorithmen weiß und denkt.
Ergebnisse einer repräsentativen Bevölkerungsumfrage (2019), Bertelsmann Stiftung, available at
https://www.bertelsmann-stiftung.de/fileadmin/files/BSt/Publikationen/GrauePublikationen/WasEuropaUEberAlgorithmenWeissUndDenkt.pdf.
For analyses of the black box metaphor in the context of AI, cf. Bathaee, The Artificial Intelligence Black Box and
the Failure of Intent and Causation, 31 HARVARD JOURNAL OF LAW & TECHNOLOGY 890 (2018); Guidotti, Monreale,
Ruggieri, Turini, Pedreschi and Giannotti, A Survey of Methods for Explaining Black Box Models, 51 ACM COMPUTING
SURVEYS (CSUR) 93 (2019); Kwong, The Algorithm Says You Did It: The Use of Black Box Algorithms to Analyze Complex
DNA Evidence, 31 HARVARD JOURNAL OF LAW & TECHNOLOGY 275 (2017); Mühlbacher, Piringer, Gratzl, Sedlmair and
Streit, Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations,
20 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 1643 (2014).
17 Concerning the effects of the use of AI on human beings and their human rights in general, cf. European Parliament
resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection,
non-discrimination, security and law-enforcement [2018] OJ C263/82; Mortier, Haddadi, Henderson, McAuley and
Crowcroft, Human-Data Interaction: The Human Face of the Data-Driven Society (2014), available at
https://haddadi.github.io/papers/HDIssrn.pdf; Raso, Hilligoss, Krishnamurthy, Bavitz and Kim, Artificial Intelligence
& Human Rights: Opportunities & Risks, THE BERKMAN KLEIN CENTER FOR INTERNET & SOCIETY RESEARCH PUBLICATION SERIES
(2018), available at https://cyber.harvard.edu/publication/2018/artificial-intelligence-human-rights.
18 Cf. supra note 11.
19 According to the IDC Data Age Study of 2017, humans currently have about 500 interactions with algorithms per
day; this number will increase to 4,700 per day by 2025. Cf. also AlgorithmWatch GmbH (ed.), Automating Society.
Taking Stock of Automated Decision-Making in the EU (2019), Bertelsmann Stiftung, available at
https://www.bertelsmann-stiftung.de/fileadmin/files/BSt/Publikationen/GrauePublikationen/001-148_AW_EU-ADMreport_2801_2.pdf,
which shows that automated decisions have become part of everyday life in Europe.