The Future of Testing 203
role. After all, back in 1998 who would have thought we would be having business cards today with terms like "Scrum master" or "Product owner" ...
2038
It may be near impossible to predict the future 20 years ahead. But let's give it a try. In 2038 we're probably driving autonomous cars, and robots may very well be part of everyday life. Artificial Intelligence will be a lot more intelligent than it is today and may even be replacing knowledge workers. Will there still be software development as we know it? And will there still be software testing?

I personally believe we will still need people to develop software, even though it's probable we'll be able to produce a lot more software with far fewer people. It is possible software will have become intelligent enough to test itself. Software may continuously run self-diagnostics that indicate when something's going wrong, and the software in question might even be able to fix itself, at least to a certain degree.
But I don't think computers and robots will have replaced everyone involved with the development of software, simply because of one thing: Artificial Intelligence is only intelligent about certain things, but really unintelligent when it comes to some other things ... This is clearly visible today, and I don't expect it will be all that different in 20 years from now. Take a look at current-day tools like Google Assistant or Siri. These AIs know a lot more than any human being (because they have an endless supply of information constantly at their disposal). Some robots today are amazing at interpreting their surroundings and figuring out what's expected of them. Current prototypes of autonomous cars may very well already be safer than human drivers. However, even with all this computing power and all this data and intelligence, there's still one thing at which every AI sucks: understanding human behaviour.
A great example is Honda's humanoid robot Asimo, which was developed a few years ago. In every single way this was a great feat of engineering. However, during a demonstration it failed horribly because of one simple misunderstanding of human behaviour. The robot didn't understand why people would want to take pictures of it, and thus concluded that when people were raising their camera or mobile phone to take a picture, they were raising their hands to ask a question. The robot froze, repeating over and over "Who wants to ask Asimo a question?" [3]. And this interpretation of human behaviour is something current AI still doesn't know how to do, even though it's something we humans find very easy! We immediately understand what's happening when someone is hanging out of the window of a train that's about to leave to hug someone outside: they're saying goodbye. Pretty straightforward, right? However, an AI might mistake it for someone trying to pull another person out of the train. There might be something wrong in the train; an evacuation might even be necessary ... I don't believe AI will be much better at interpreting human behaviour in just 20 years from now. And therefore it's very likely we'll still need humans to develop software that will actually do what an end-user is expecting from it. And while we're still building software, we'll also still need someone to act as the earlier-mentioned quality conscience. Someone who's
The Future of Software Quality Assurance
- Author: Stephan Goericke
- Publisher: Springer Nature Switzerland AG
- Location: Cham
- Date: 2020
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-030-29509-7
- Size: 15.5 x 24.1 cm
- Pages: 276
- Category: Computer Science