Testing Artificial Intelligence
3.3 Under-confidence in AI
The other side of this is under-confidence. A rational debate on whether to use AI can be blurred by uncertainty, irrational fear, or bias in the media (or sci-fi movies). Accidents with self-driving cars get more headlines than ordinary accidents. People are afraid of becoming obsolete, or that a malicious ghost in the machine might arise.
3.4 Traceability
With non-AI systems, the algorithm is the code. This is not the case with AI systems, so we don't know the exact criteria by which the AI system makes its decisions. In addition, it is hard to oversee the total population of training data and therefore to get a good understanding of how the AI system will behave. So when the outcome is evidently incorrect, it is hard to pinpoint the cause and correct it. Is it the training data, the parameters, the neural network, or the labelling? Lack of traceability fuels over-confidence and under-confidence (as was shown above), causes uncertainty about liability (was it the software, the data, the labelling, or the context?), and undermines maintainability (what should be corrected?).
4 Testing AI
The key to mitigating the AI risks is transparency. For bias, we need insight into the representativeness of the training data and labelling, but most of all we need insight into how the important expectations and consequences for all parties involved are reflected in the results.
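One way to make the representativeness of training data inspectable is to compare the label distribution of the training set against the distribution expected in the real population. The following is a minimal sketch of such a check; the class names and the expected distribution are illustrative assumptions, not from the chapter:

```python
from collections import Counter

def label_distribution(labels):
    """Return the relative frequency of each label in a dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items()}

def representativeness_gap(train_labels, population_dist):
    """Total variation distance between the training-label distribution
    and the expected population distribution (0 = identical, 1 = disjoint)."""
    train_dist = label_distribution(train_labels)
    labels = set(train_dist) | set(population_dist)
    return 0.5 * sum(abs(train_dist.get(l, 0.0) - population_dist.get(l, 0.0))
                     for l in labels)

# Hypothetical example: the training set over-represents the "cat" class.
train = ["cat"] * 80 + ["dog"] * 20
expected = {"cat": 0.5, "dog": 0.5}
print(f"distribution gap: {representativeness_gap(train, expected):.2f}")  # 0.30
```

A gap above a project-defined threshold would signal that the training data does not reflect the population the system will face, which is exactly the kind of insight the text calls for.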
Building the right amount of confidence and traceability needs transparency too. Transparency will not be achieved by illuminating the code. Even if this were possible, a heat map of the code indicating which part of the neural network is active when a particular part of an object is analysed, or when a calculation in a layer is produced, would mean close to nothing. Looking inside a brain will never show a thought or a decision. It could show which part is activated, but all mental processes involve multiple brain parts and, most of all, experience from the past.
AI systems are black boxes, so we should test them the way we do in black-box testing: from the outside, developing test cases that are modelled on real-life input. From there, expectations on the output are determined. Sounds traditional and well known, doesn't it?
While the basic logic of testing AI might be familiar, the specific tasks and elements are very different.
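The black-box approach above can be sketched as a small test harness: real-life inputs paired with expected outcomes, scored in aggregate. Everything here is an illustrative assumption; `classify` is a trivial stand-in for the AI system under test, and the acceptance threshold is invented:

```python
# Hypothetical system under test: any function mapping input to output
# can be tested this way, without access to its internals.
def classify(text):
    return "spam" if "win money" in text.lower() else "ham"

# Test cases modelled on real-life input, each with an expected outcome.
test_cases = [
    ("Win money now!!!", "spam"),
    ("Meeting moved to 3pm", "ham"),
    ("You can WIN MONEY today", "spam"),
    ("Lunch tomorrow?", "ham"),
]

def accuracy(system, cases):
    """Run the system as a black box and score it against expectations."""
    hits = sum(1 for text, expected in cases if system(text) == expected)
    return hits / len(cases)

# Because an AI system's output is statistical, a harness like this would
# assert an aggregate accuracy threshold rather than exact per-case results.
score = accuracy(classify, test_cases)
assert score >= 0.75, f"accuracy {score:.2f} below acceptance threshold"
```

The difference from traditional testing lies less in this outer loop than in how the cases and thresholds are chosen, which is where the AI-specific tasks and elements come in.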
The Future of Software Quality Assurance
- Title: The Future of Software Quality Assurance
- Author: Stephan Goericke
- Publisher: Springer Nature Switzerland AG
- Place: Cham
- Date: 2020
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-030-29509-7
- Dimensions: 15.5 x 24.1 cm
- Pages: 276
- Category: Computer Science