Responsible Software Engineering
risk of exclusion, and to situations which are characterized by asymmetries of power
or information, such as between employers and workers, or between businesses and
consumers.
3. Acknowledge that, while bringing substantial benefits to individuals and society, AI
systems also pose certain risks and may have a negative impact, including impacts which
may be difficult to anticipate, identify or measure (e.g. on democracy, the rule of law and
distributive justice, or on the human mind itself). Adopt adequate measures to mitigate
these risks when appropriate, and proportionately to the magnitude of the risk.”
Another initiative is AI4People, a multi-stakeholder forum that “brings
together all stakeholders interested in shaping the societal impact of AI, including
the European Commission, the European Parliament, civil society organizations,
industry and the media” [33]. The result is a living document with the following
preamble: “We believe that, in order to create a Good AI Society, the ethical
. . . should be embedded in the default practices of AI. In particular, AI should
be designed and developed in ways that decrease inequality and further social
empowerment, with respect for human autonomy, and increase benefits that are
shared by all, equitably. It is especially important that AI be explicable, as
explicability is a critical tool to build public trust in, and understanding of, the
technology.”
The so-called Algo.Rules [34] define a new approach to promoting software
trust systematically. They were developed by the think tank iRights.Lab together
with several experts in the field. The rules define how an algorithmic system must
be designed so that it can be evaluated with moral authority: above all, it must be
transparent, comprehensible in its effects, and controllable:
1. “Strengthen competency: The function and potential effects of an algorithmic system
must be understood.
2. Define responsibilities: A natural or legal person must always be held responsible for the
effects involved with the use of an algorithmic system.
3. Document goals and anticipated impact: The objectives and expected impact of the use
of an algorithmic system must be documented and assessed prior to implementation.
4. Guarantee security: The security of an algorithmic system must be tested before and
during its implementation.
5. Provide labelling: The use of an algorithmic system must be identified as such.
6. Ensure intelligibility: The decision-making processes within an algorithmic system must
always be comprehensible.
7. Safeguard manageability: An algorithmic system must be manageable throughout the
lifetime of its use.
8. Monitor impact: The effects of an algorithmic system must be reviewed on a regular
basis.
9. Establish complaint mechanisms: If an algorithmic system results in a questionable
decision or a decision that affects an individual’s rights, it must be possible to request an
explanation and file a complaint.”
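The documentation duties in the list above can be made concrete in code. The following sketch — a hypothetical register entry, not part of the Algo.Rules themselves — shows how rules 2 (responsibilities), 3 (documented goals and impact) and 8 (regular impact reviews) might be captured in a machine-readable record; all class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of a register entry for an algorithmic system,
# loosely modelled on Algo.Rules 2, 3 and 8. All names are illustrative.

@dataclass
class AlgorithmicSystemRecord:
    system_name: str
    responsible_person: str       # Rule 2: a natural or legal person is accountable
    objectives: list               # Rule 3: documented goals
    anticipated_impact: str       # Rule 3: assessed prior to implementation
    review_interval_days: int     # Rule 8: impact reviewed on a regular basis
    last_review: date = field(default_factory=date.today)

    def review_due(self, today: date) -> bool:
        """True if the next scheduled impact review is overdue."""
        return (today - self.last_review).days >= self.review_interval_days


record = AlgorithmicSystemRecord(
    system_name="loan-scoring-v2",
    responsible_person="Jane Doe, Risk Dept.",
    objectives=["rank loan applications by default risk"],
    anticipated_impact="may affect applicants' access to credit",
    review_interval_days=90,
    last_review=date(2020, 1, 1),
)
print(record.review_due(date(2020, 6, 1)))  # 152 days elapsed -> True
```

Keeping such records in a central register would also support rule 9, since a complaint can only be routed to someone if a responsible person is on file.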
The Future of Software Quality Assurance
- Author: Stephan Goericke
- Publisher: Springer Nature Switzerland AG
- Place: Cham
- Date: 2020
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-030-29509-7
- Dimensions: 15.5 x 24.1 cm
- Pages: 276
- Category: Computer Science