84 R. Marselis
• law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• law 2: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
• law 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Other sources added some additional laws:
• law 4: A robot must establish its identity as a robot in all cases.
• law 5: A robot must know it is a robot.
• law 6: A robot must reproduce, as long as such reproduction does not interfere with the First, Second, or Third Law.
Unfortunately, we observe that, unlike in Asimov’s stories, these robot laws are not built into most intelligent machines. It’s up to the team members with a digital test engineering role to assess to what level an intelligent machine adheres to these laws.
Ethics
Ethics is about acting according to various principles. Important principles are laws, rules, and regulations, but for ethics the unwritten moral values are the most important. Some challenges of machine ethics are much like many other challenges involved in designing machines. Designing a robot arm to avoid crushing stray humans is no more morally fraught than designing a flame-retardant sofa.
With respect to intelligent machines, important questions related to ethics are:
• Does it observe common ethical rules?
• Does it cheat?
• Does it distinguish between what is allowed and what is not allowed?
To be ethically responsible, the intelligent machine should inform its users about the data that is in the system and what this data is used for.
Ethics will cause various challenges. For example, it isn’t too difficult to have an AI learn (using machine learning) to distinguish people based on facial or other body-part characteristics, for example, race and sexual preference. In most countries this would not be ethical. So testers need to define acceptance criteria for this and act on them.
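One way testers could make such acceptance criteria concrete is with a simple statistical fairness check. The sketch below is illustrative only: the function names, the demographic-parity measure, and the 0.1 tolerance are assumptions for the example, not something prescribed by this chapter.

```python
# Minimal sketch of a fairness acceptance check (hypothetical example):
# given model predictions and a protected attribute per person, verify
# that the positive-prediction rate does not differ between groups by
# more than a tolerance agreed on in the acceptance criteria
# (a "demographic parity" style check).

def demographic_parity_gap(predictions, protected):
    """Return the absolute gap in positive-prediction rates between groups."""
    groups = {}
    for pred, group in zip(predictions, protected):
        groups.setdefault(group, []).append(pred)
    rates = [sum(p) / len(p) for p in groups.values()]
    return max(rates) - min(rates)

def passes_fairness_criterion(predictions, protected, tolerance=0.1):
    """Acceptance criterion: the gap must stay within the agreed tolerance."""
    return demographic_parity_gap(predictions, protected) <= tolerance

# Usage: predictions from some classifier, one protected-group label each.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))     # 0.5: rates are 0.75 vs 0.25
print(passes_fairness_criterion(preds, groups))  # False: gap exceeds 0.1
```

A real acceptance test would use a metric and threshold negotiated with stakeholders and legal experts; the point is that an ethical requirement can be turned into a testable, automated criterion.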
Another ethical dilemma is who is responsible when an intelligent machine causes an accident. There is no driver in the car, just passengers. Should the programmer of the intelligent software be responsible? Or the salesman who sold the car? Or the manufacturer? All ethical (and some legal) dilemmas. And who should be protected in case of an autonomous car crash? Some manufacturers of autonomous cars have already announced that their cars will always protect the people inside the car. That may be smart from a business point of view (otherwise no one would buy the car), but from an ethical perspective, is it right to let a few passengers in the car prevail over a large group of pedestrians outside the car?
The Future of Software Quality Assurance
- Title: The Future of Software Quality Assurance
- Author: Stephan Goericke
- Publisher: Springer Nature Switzerland AG
- Place: Cham
- Date: 2020
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-030-29509-7
- Dimensions: 15.5 x 24.1 cm
- Pages: 276
- Category: Computer Science