The Future of Software Quality Assurance
Page 129
Testing Artificial Intelligence

3.3 Under-confidence in AI

The other side of this is under-confidence. A rational debate on whether to use AI can be blurred by uncertainty, irrational fear or bias in the media (or sci-fi movies). Accidents with self-driving cars get more headlines than ordinary accidents. People are afraid of becoming obsolete, or that a malicious ghost in the machine might arise.

3.4 Traceability

With non-AI systems the algorithm is the code. This is not the case with AI systems, so we do not know the exact criteria by which an AI system takes decisions. In addition, it is hard to oversee the total population of training data and therefore hard to get a good understanding of how the AI system will behave. So when the outcome is evidently incorrect, it is hard to pinpoint the cause and correct it. Is it the training data, the parameters, the neural network or the labelling? Lack of traceability fuels both over-confidence and under-confidence (as shown above), and it causes uncertainty about liability (was it the software, the data, the labelling or the context?) and a lack of maintainability (what should be corrected?).

4 Testing AI

The key to mitigating the AI risks is transparency. For bias we need insight into the representativeness of the training data and the labelling, but most of all we need insight into how the important expectations and consequences for all parties involved are reflected in the results. Building the right amount of confidence and traceability needs transparency too.

Transparency will not be achieved by illuminating the code. Even if this were possible, showing a heat-map of the code indicating which part of the neural network is active when a particular part of an object is analysed or a calculation in a layer is produced would mean close to nothing. Looking inside a brain will never show a thought or a decision. It could show which part is activated, but all mental processes involve multiple brain parts and, most of all, experience from the past.
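The call for insight into the representativeness of training data and labelling can be made concrete with a simple distribution check. The sketch below is illustrative only (the function names, the toy "cat"/"dog" labels and the reference distribution are assumptions, not from the book): it compares the label distribution of a training set against the distribution expected in the field and reports the largest deviation per label.

```python
from collections import Counter

def label_distribution(labels):
    """Return the relative frequency of each label in a dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def representativeness_gap(training_labels, reference_dist):
    """Compare the training-set label distribution against a reference
    distribution (e.g. the one expected in production) and return the
    absolute deviation for every label seen in either."""
    train_dist = label_distribution(training_labels)
    return {
        label: abs(train_dist.get(label, 0.0) - reference_dist.get(label, 0.0))
        for label in set(train_dist) | set(reference_dist)
    }

# Hypothetical example: a training set that over-represents one class.
training = ["cat"] * 80 + ["dog"] * 20
expected = {"cat": 0.5, "dog": 0.5}   # distribution expected in the field
gaps = representativeness_gap(training, expected)
print(round(max(gaps.values()), 2))   # 0.3 -> the training data is skewed
```

A large gap does not prove the system is biased, but it flags exactly the kind of non-representative training data the text warns about, before any behaviour is tested.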
AI systems are black boxes, so we should test them as we do in black-box testing: from the outside, developing test cases that are modelled on real-life input. From there, expectations on the output are determined. Sounds traditional and well known, doesn't it? The basic logic of testing AI might be familiar, but the specific tasks and elements are very different.
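The black-box approach described above can be sketched in a few lines. Everything in this example is an illustrative assumption rather than the book's own tooling: the system under test is a trivial rule-based stub standing in for a real trained model, and each test case pairs a real-life-style input with the output a domain expert would expect. Because statistical systems are rarely 100% correct, the runner asserts an accuracy threshold over the whole suite instead of failing on the first mismatch.

```python
# Stand-in for the AI system under test: any callable mapping an input
# to a predicted label. In practice this would wrap a trained model's
# predict() call; a trivial rule keeps the sketch self-contained.
def system_under_test(text):
    return "spam" if "winner" in text.lower() else "ham"

# Black-box test cases modelled on real-life input, each paired with
# the outcome a domain expert would expect.
test_cases = [
    ("Congratulations, you are a WINNER!", "spam"),
    ("Agenda for Monday's team meeting", "ham"),
    ("Lunch at noon?", "ham"),
]

def run_black_box_tests(predict, cases, required_accuracy=0.95):
    """Exercise the system purely through its input/output interface
    and check an accuracy threshold rather than exact per-case passes."""
    passed = sum(1 for x, expected in cases if predict(x) == expected)
    accuracy = passed / len(cases)
    return accuracy, accuracy >= required_accuracy

accuracy, ok = run_black_box_tests(system_under_test, test_cases)
print(f"accuracy={accuracy:.2f} pass={ok}")  # accuracy=1.00 pass=True
```

The structure is deliberately traditional; what differs for AI, as the text notes, is where the cases come from (representative real-life input) and the statistical, threshold-based verdict.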
Title: The Future of Software Quality Assurance
Author: Stephan Goericke
Publisher: Springer Nature Switzerland AG
Location: Cham
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-030-29509-7
Size: 15.5 x 24.1 cm
Pages: 276
Category: Informatik