A Critical Discussion of Some Current and Future Developments of IT#Keynote Presentation at EuroSPI 2019 in Edinburgh
Hermann Maurer (Keynoter) and Namik Delilovic (Co-author). Both: Graz University of Technology, Austria
Abstract#In this paper we briefly discuss a few aspects of some of the exciting new developments in information technology. We argue that most of the ongoing activities and visions (if they become reality) are quite ambivalent: they have both positive and negative effects, and nobody is really able to make credible forecasts.
In Section 1, we first mention five issues. Three involve ongoing activities suffering from drawbacks that could be improved but are largely overlooked by the community. The other two are major topics of IT today: artificial intelligence, and big data and its analysis. These five issues are discussed more thoroughly in Section 2. In the remainder of Section 1 we look at a plethora of other IT issues that we do not follow up in depth. We conclude the paper in a very short Section 3 by warning IT specialists not to overestimate the importance of IT compared with other subject areas: those areas may influence mankind as much as or even more than IT.
Keywords: Artificial Intelligence, Web Improvements, Digital Libraries, Big Data
1 The Ambivalence of Current and Future IT Developments#Many big issues driven by a combination of economic promise and enthusiasts are quite ambivalent: they offer many positive results, but also have dramatic downsides if one looks at them from other points of view. We mention examples of the pros and cons of some of those developments.
1.1 Moving Activities into the Web#Current trends to move many tasks from persons to Web applications are in full swing: just consider telebanking, getting a passport via the web, doing taxes that way, shopping online, booking trips with all that comes with it via the Web, etc. The usability of some of those applications leaves much to be desired. In Section 2 we will discuss and explain a feature that is missing in today's Web and would reduce much of this irritation.
1.2 Information Digitization#The whole world is currently digitizing information, including newspapers, journals and books, and including elaborate e-Learning systems. The danger is not only that digitized information is less stable (due to code, SW and HW developments) than information on paper, but also that there are other serious problems. Section 2 will discuss problems that abound today and could be avoided with moderate effort.
1.3 Finding Reliable Information in the Web and Social Networks#Finding information using the WWW and one of the search engines is, in general, easier than it ever was. And much information is showered on us in unbearable quantity via various social networks. Yet not only are searches still restrictive, but even more disturbing is that the reliability of information found is often impossible to check. First ideas to combat such weaknesses are presented in Section 2.
1.4 Artificial Intelligence (AI)#AI has made dramatic progress and has tremendous positive and negative potential. We are unable to fathom this in depth in a short paper but will discuss some points we feel strongly about in Section 2; otherwise we refer to excellent books like (Harari, Y. N., 2018), (Tegmark, M., 2017) or (Precht, R.D., 2018).
1.5 Big Data and Big Data Analysis#One of the biggest issues in IT today is the collection and analysis of large amounts of data. "Who owns the data owns the world", it is often said. We will explain a bit about this but can of course only scratch the surface; we will emphasize those issues that are often not mentioned enough yet that we consider fairly crucial.
1.6 Other Issues#We mention here some other important issues that we will only briefly return to in Section 2. Cryptography has developed tremendously, allowing information to be kept secret from all who are not supposed to have access to it. At the same time, cryptography has been used in some of the worst cyberattacks: data is encrypted, and decryption is made possible only after a large sum of money has been extorted. Note that even backups stored without net connection are not safe: the latest attacks break into the administration area of a server and keep silent for a while. As backups are made to safeguard against attacks, even those backups are encrypted. Finally, when the extortion starts and the organisation shrugs it off, planning to use one of the backups, it is found out that even the backups are corrupted. The current way to safeguard against this is to have a backup that is known not to be encrypted: when a new backup is made, the previous one is kept until it is clear that the new one is usable. Unfortunately, even this is not enough, since even when a backup looks OK, an embedded Trojan that has been overlooked might become active and encrypt it at a much later date! This report could be continued, but that would really be a story of its own.
Of course there are other IT developments that are both marvelous and horrible: think about drones. If you are not worried, read one of the SF books that are more and more becoming reality: one of the first was (Maurer, H., 2006), where such drones are no bigger than a small insect, i.e. dozens sit in your home without you ever noticing. Not surprisingly, a book written 12 years later is even closer to today's reality: (Elsberg, M., 2018).
Or consider autonomous navigation modules: they sound good for cars, once they finally work properly. But how about military or criminal applications? Unfortunately, you can be sure that there will be a terrorist car bomb using an autonomous car (or plane?) at some stage. Or what do you think of a friendly robot going berserk because its SW has been hacked?
How reliable is crypto-money like Bitcoin? How valuable is block-chain technology for other purposes? Note that computers world-wide are consuming large amounts of energy. How can this be changed? (Block-chain technology makes, as we all know, the situation even much worse.)
IT has led to much vulnerability: a complete breakdown of the internet, once the world is completely networked, could well be the end of humanity, as described in an SF scenario in (Maurer, H., 2004). Even a breakdown of electricity (likely to happen because of faulty SW) would have catastrophic consequences, as is well described in (Elsberg, M., 2017).
Social networks are not just fun that destroys spare time and personal contacts; they also mean that not much is left of our privacy and that our thoughts are shaped by forces beyond us. "Profit maximizing SW is influencing our behaviour", as both (Lanier, J., 2018) and (Spiekermann, S., 2019) prove beyond doubt.
The above are just a few instances showing the (not surprising) ambivalence of new IT technology. (Spiekermann, S., 2019) puts it very nicely in the statement (translated): "New is not automatically good, digital does not imply better." We cannot discuss all problems and attempts at solutions, like introducing a new type of social network without the danger of fake news, mobbing, trivialities and SPAM. But we can summarize this section, and indeed the main message of this paper: it is time that research is directed not only at progress in IT but also at progress in fighting possible abuses or negative consequences of such new developments.
In the next section we have a closer look at the issues mentioned in subsections 1.1–1.5.
2 Some Novel Ideas to Avoid Some Dangers of New IT Technology#In this section we take a closer look at the areas addressed in Section 1 under 1.1–1.5 and show how some of the weaknesses could be avoided.
2.1 Moving Activities into the Web#Many activities that traditionally required persons to physically contact others, fill out forms, or such, are being replaced on a large scale by apps or other programs on the WWW. Just consider telebanking, shopping online, booking trips with all that comes with it via the Web, electronic voting, or contact with public organisations, like changing your place of permanent residency, getting a passport via the web, doing taxes that way, etc. The usability of some of those applications leaves much to be desired.
It would be nice if a fixed set of guidelines for what interfaces should look like were available, some items potentially even enforced by law, as was the case in Interactive Videotex days (see (Maurer, H., Sebestyen, I., 1984)) in parts of Europe in the eighties: in the Austrian/German version, advertisements had to be marked as such, and the homepage was only allowed to have a logo and a fixed number of entries, like a contact person (!) and a table of contents, i.e. it had a fairly rigid structure. It is too late now to enforce the layout of homepages. Yet at least within an organisation, e.g. a federal government, the same structure should be used.
To all who think this is overregulation, let us try to convince them of at least one point: EVERY page should have a button "Message to the administrator" or such. Clicking on that button would open a window that allows only textual input (like a question for help, notification of an error, a suggestion for further information, etc., and an optional request for an answer). When the receiver gets such a message with a request for an answer, the receiver should be obliged to return a personal (not computer-generated) answer within a few days. Note that messages are usually sent anonymously. If an answer is requested, the sender has to specify an e-mail address or such, and the receiver is obliged to answer but must then discard the contact parameters.
- Note that such a button has been implemented for Austria-Forum.org: you find it on every page.
We have discussed this idea with a number of organisations. At first we encountered strong resistance. But slowly we could convince them that with such an application there would be no flood of messages, but important feedback allowing them to modify the interface. That is, the feature would help make an application easier to use, and the resulting improvements would quickly reduce messages to a trickle. This feature is not meant to replace FAQ lists or chat-bots, but both are often a source of irritation: one reads through a lengthy list of FAQs but does not find the right answer. In desperation one calls the organisation, often ending up in a long waiting loop. In the end, if unlucky, one does not reach a person competent in the special matter at issue.
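The protocol just described (anonymous textual messages, an optional reply address, a personal answer, contact data discarded afterwards) can be sketched in a few lines. This is our own illustration, not the Austria-Forum implementation; all names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of the proposed feedback-button protocol: messages are
# purely textual and normally anonymous; a reply address is stored only
# until the personal answer has been sent, then discarded.

@dataclass
class FeedbackMessage:
    page: str                       # page on which the button was clicked
    text: str                       # question / error report / suggestion
    reply_to: Optional[str] = None  # e-mail, only if an answer is requested
    answered: bool = False

def answer(msg: FeedbackMessage, reply_text: str) -> Optional[str]:
    """Return the outgoing answer (or None if none was requested) and
    discard the sender's contact parameters afterwards."""
    if msg.reply_to is None:
        return None                 # anonymous message, nothing to send
    outgoing = f"To {msg.reply_to}: {reply_text}"
    msg.reply_to = None             # contact data must not be retained
    msg.answered = True
    return outgoing
```

The key design point is that discarding the contact parameters is built into the answering step itself, so anonymity is the default state of every stored message.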
2.2 Information Digitization#The whole world is currently digitizing information, including newspapers, journals and books, and including elaborate e-Learning systems.
We see a number of dangers.
First, even non-interactive information like brochures or books is stored using some format, on some hardware, under some operating system. Only ten years later, one or all three of those components may be obsolete, and the information stored is lost unless it was recoded and restored whenever the underlying storage structure changed. Clearly, important information will survive this way over generations of changes, but what was considered unimportant in year x may suddenly be important in year x+t, and by then may have been lost forever. The first author had an experience of this kind fairly recently; a record thereof can be found under (Maurer, H., 2017).
Second, the situation is still more critical with complex e-learning systems, 3D imagery or animated/interactive virtual reality, since obsolescence of HW, SW and interfaces comes very fast. We cannot present a solution here, except possibly for e-learning systems, by replacing them with libraries of interactive books as described next. Third, formats for digitized books and pictures (based on variants of PDF) seem to offer a certain stability. It is a pity that for e-books (digitized books to be read offline on a Kindle, e-book reader, tablet or such) there are two incompatible standards: the open and non-proprietary EPUB format, and Amazon's closed and proprietary AZW format. For details see (Garrish, M., 2011). Digitized books as available in large libraries use either something close to PDF, often with a similar interface, or a format based on the International Image Interoperability Framework (IIIF). An experimental first step has been taken with some 2,300 books using our own software Web-Books, which will eventually be moved into an IIIF-compatible environment (Delilovic, N., Maurer, H., Zaka, B., 2019).
Typically, digitized books have features like bookmarking, full-text search, comments for oneself or the community, etc. Most current collections of digitized books suffer from an isolation syndrome: books are seen as independent entities, with little connection to other books, let alone other material. IIIF is the first major attempt to at least allow the display of pages from different books side by side.
However, what we believe is a major step beyond this is the following: we want to permit the insertion of an "interaction icon" at any place on a book page. Such an icon can activate a link to another book page or Web page, but can also trigger some other external action, like a discussion forum, a set of multiple-choice questions whose answers determine what readers will experience next, an experimental platform in 3D, part of an educational game, etc. Thus we have the book as a kind of shell leading to all kinds of multimedia material and actions. This, we believe, is less prone to obsolescence: if one of the interaction icons no longer leads to an operating application, it is simply removed, or made to lead to a new kind of action, without any need to modify the basic SW, as would be the case with a complex integrated e-learning environment. We present many more details in (Delilovic, N., Maurer, H., Zaka, B., 2019), a paper appearing in the Proceedings of ED-Media 2019, Amsterdam, June 24-28.
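The obsolescence argument can be made concrete with a small sketch. The data structure and names below are our own hypothetical illustration, not the actual NID/Web-Books software: icons are thin pointers from a fixed book page to external actions, so dead targets can be pruned without touching the book itself.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical model of an "interaction icon": the book page stays
# static; only the pointer to an external action can become obsolete.

@dataclass
class InteractionIcon:
    page: int                 # book page the icon sits on
    position: Tuple[int, int] # (x, y) coordinates on that page
    action: str               # e.g. "link", "forum", "quiz", "3d-platform"
    target: str               # URL or application identifier

def prune_icons(icons: List[InteractionIcon],
                is_alive: Callable[[str], bool]) -> List[InteractionIcon]:
    """Keep only icons whose external target still responds.
    The book pages themselves never need to change."""
    return [icon for icon in icons if is_alive(icon.target)]
```

Maintenance then reduces to running a liveness check over the icon list, which is exactly the "just remove the icon" repair described above.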
2.3 Finding Reliable Information in the Web and Social Networks#Finding information using the WWW and one of the search engines is, in general, easier than it ever was. And much information is showered on us in unbearable quantity via various social networks.
We feel there are a number of major issues with finding reliable information in the current WWW.
One issue is that unspecific questions posed to search engines give us results that do not make sense without further specification, yet we tend to accept them as "true". This is most easily explained by means of a few examples. Searching for Austrian Nobel laureates with any of the more popular search engines results in lists of 16 to 26 names. Those lists are based on where the persons were born. But then the question is: what if that place was in Austria when they were born, but is not anymore? Do we count Bertha von Suttner (born 1843 in Prague) or Fritz Pregl (born 1869 in Ljubljana), both cities then part of Austria? Or how about the ten laureates born elsewhere in the k.u.k. monarchy (e.g. in Hungary)? Even worse, why count by place of birth at all? Why not by the place where they worked or where they got the recognition (even those last two places may differ)? The following were not born in Austria yet received the prize for research done in Austria: Barany, Wagner-Jauregg, Otto Loewi, Hess, Lorenz.
Putting it differently, questions have to be specific enough, and search engines should either force us to make them so, or else give a result and explain what definition it is based on. Thus, asking for the area of France as such does not make sense (do we include the overseas departments like Reunion? If so, do we also include the more independent overseas territories, like French Polynesia and New Caledonia?). If we ask for the area of Fiji, do we mean at low tide or high tide? If we ask for the size of a cave, do we mean its volume, or its longest corridor, or the largest distance between cave floor and ceiling, or the sum of the lengths of its corridors (as is done in Austria)? Does a corridor too small for a human but big enough for a rat still count?
This issue has been investigated to some extent in (Mehmood, R., Maurer, H., 2015) and (Glatz, M., Maurer, H., Tanvir, A., 2018), but many issues, like how to judge the "truth" of a piece of information on the WWW or on social networks, remain unresolved hot topics. One idea pursued by us is to introduce networks that allow only certified contributors whose names are known, so that their reputation is at stake if they produce or forward fake news.
Overall, the authors hope that improvements as partially suggested in 2.1–2.3 will take place before the frustration that things "are not getting easier but more complicated" by moving everything onto the internet leads to a revolt against the technology, just because the technology is not used well. It is alarming that a top scientist and writer like (Jaeger, L., 2017) writes (in translation): "How science and technology are seen has changed: no longer are they the saviour or big helper; they are also seen as something threatening." And even if this is not really the topic of this paper, here is another quote from him: "There is no reason to believe that future capitalism and the gains from production due to new technologies will be distributed more justly than is now the case"; see also (Alvaredo, F., 2018).
A steady stream of new technologies that have to be understood and mastered, together with a growing percentage of very rich and very poor making the middle class smaller and smaller, and social networks misinforming us on purpose (see (Spiekermann, S., 2019) or (Lanier, J., 2018)), are good ingredients of unrest, as we see it almost world-wide, from France to Venezuela, from large-scale migrations to increasing radicalism.
2.4 Artificial Intelligence (AI)#AI has made dramatic progress. It is important to distinguish AI applied to certain problem areas, "specific AI" (AI for short), from "general AI" (g-AI for short), which can act like an intelligent human, i.e. can adapt to arbitrary circumstances, define goals or sub-goals, etc.
It turns out that both types of AI have tremendous positive and negative potential (even though g-AI, although forecast already in (Kurzweil, R., 2005), is still a dream). We are unable to fathom this (and g-AI in particular) in depth in a short paper but want to mention some points we feel strongly about. Otherwise we encourage the reader to look at excellent books like (Harari, Y. N., 2018), (Tegmark, M., 2017) or (Precht, R.D., 2018).
It is clear that AI is shifting, and will continue to shift, many tasks from humans to computers/robots. This will allow rich countries to produce much of what is outsourced today to cheap-labour countries in automated, AI-controlled factories, thus avoiding such outsourcing (and threatening the livelihood of people in those poorer countries).
Specific AI will be powerful enough to also create more unemployment in developed countries. Politicians usually assure us that there will be no increase in unemployment, since new types of jobs will replace the old ones. However, this is just used as a tranquilizer. As (Tegmark, M., 2017) explains the situation, he states convincingly: "With advance planning, a low-employment society should be able to flourish not only financially, with people getting their sense of purpose from activities other than jobs".
This is also our conviction, based on simple observations. According to the diaries of our grandfathers, they worked some 3,200 h/year. Today the average employee works only about 1,600 h/year (38.5 hours a week, up to 6 weeks of holidays, 52 weekends and 13 other public holidays, at least in Austria). It is foreseeable that this will be reduced further. Even today some companies are offering 30 h/week jobs at the same pay as for 40 h/week before. Overall, AI will increase unemployment unless a redistribution of work takes place.
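A rough back-of-the-envelope check of that annual figure (our own sketch: it assumes a 5-day week and ignores sick leave, overtime and part-time work):

```python
# Approximate annual working hours of an Austrian full-time employee
# (5-day week assumed; sick leave and overtime ignored).
weekend_days = 52 * 2           # 52 weekends
vacation_days = 6 * 5           # up to 6 weeks of holidays
public_holidays = 13
working_days = 365 - weekend_days - vacation_days - public_holidays
hours_per_day = 38.5 / 5        # 38.5-hour week spread over 5 days
annual_hours = working_days * hours_per_day
print(working_days, round(annual_hours))   # 218 working days, ~1679 h
```

The result, roughly 1,700 hours, is in the ballpark of the 1,600 h/year cited above; the exact average also reflects part-time employment and actual leave taken, and either way it is about half the grandfathers' 3,200 h/year.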
Concerning g-AI, it is the belief of many experts, contrary to (Kurzweil, R., 2005) but in accordance with our own belief, that g-AI is much further away than some think: in particular, it would require that computers develop (self-)consciousness. Since we do not even know what consciousness means and why we as humans have it, to expect that computers will develop this feature "if they are just complex enough" seems far-fetched. Hence the idea that computers will at some stage take control of the world and of humanity (and erase humanity if necessary) is a good SF topic, but not close to reality. In our opinion it is more likely, and scarier, that a group of people or a small country develops very good AI SW and uses it to control part or all of the world!
It is probably necessary to explain why we believe that AI, as much as it has developed, is not going to turn into g-AI soon.
Originally, AI succeeded using smart algorithms, the speed of computers and large numbers of inputs. In 1997 IBM's Deep Blue program beat Garry Kasparov at chess, becoming the first computer to defeat a human world champion. In the next few years, sometimes computers won, sometimes the chess champion. Since 2006, computers have won every such tournament. The currently best chess software is called Komodo: it reaches an Elo rating of 3304, about 450 points higher than any top-rated human player in the world.
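To put such a rating gap in perspective, the Elo system itself predicts the stronger player's expected score. The formula below is the standard Elo expectation; applying it to the roughly 450-point gap quoted above is our own illustration:

```python
# Expected score of the stronger player under the standard Elo model:
# E = 1 / (1 + 10^(-diff/400)), where diff is the rating difference.
def expected_score(rating_diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

# A 450-point gap, as between Komodo and the best human players:
print(round(expected_score(450), 2))   # ~0.93, i.e. about 93 of 100 points
```

In other words, against the strongest humans the program would be expected to score over nine points out of ten: the human world champion is simply no longer in the same league.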
However, AI kept developing by using learning software. By now, "deep learning" software (simulating a number of layers of neural nets) in AlphaZero was able to achieve chess mastery without human instruction (except for the rules of chess) by playing thousands of games against itself and learning from its mistakes, within a four-hour period!
The game of Go is believed to be the most difficult board game in existence, despite the fact that its rules are very simple. A few years ago, when computers using "old" AI were already chess world champions, the community considered Go out of reach for another 10 to 20 years. However, using deep learning and playing millions of games against itself, DeepMind's AlphaGo Zero is now better than any human.
Even more interesting: another recent AI program developed by DeepMind is AlphaStar, which in 2019 defeated top professional players of the popular SF multiplayer game StarCraft II (5-0). In only 14 days of training time, AlphaStar gained the equivalent of 200 years of human gameplay experience, enabling it to use a mixture of highly effective strategies against its opponents (details can be found in (The AlphaStar team, 2019)). This is more than a milestone: if a computer-driven avatar can succeed in a full VR replica of the real world, we may be approaching g-AI. The catch is: not even the best virtual worlds come close to the huge idiosyncrasies of the real world.
Such dramatic advances in game playing, IBM's Watson winning difficult quiz shows like Jeopardy, and better and better image and pattern recognition in big data (particularly important for medical applications, but also for recognizing human faces, and for good language translation and text-to-speech algorithms) are the reason why some specialists believe the world is close to g-AI (and many think that computers will then take over: "We have to be lucky if computers keep us as pets", as Marvin Minsky said already some 20 years ago).
Yet the failure of AI to master language translation perfectly is one of the reasons why we do not believe that g-AI is around the corner. Allow us to report an anecdote. Ian Witten (once a graduate student of the first author in Calgary, Canada, then for almost 30 years the most famous professor of informatics in New Zealand, and during this time also visiting professor at TU Graz) used to tell his course in Graz: "Who understands the language understands the world." Well, no language translation system understands the world, nor will it understand it soon. It is in particular back-references that are impossible to resolve without knowing everything about the world at some point in time.
Consider the sentence: "The repair of the upper part of the small house that cost € 400.000 …". We immediately understand that the 400.000 refers to the value of the house, not the repair, but to translate this into German the software also has to understand this. It thus has to deliver either: "Die Reparatur des Oberstocks des kleinen Hauses, die € 400.000 gekostet hat …" (here the amount refers to the price of the repair), or, as would be correct: "Die Reparatur des Oberstocks des kleinen Hauses, das € 400.000 gekostet hat …" (here the amount refers to the value of the house).
2.5 Big Data and Big Data Analysis#One of the biggest issues in IT today is the collection and analysis of large amounts of data, in combination with AI. An often heard statement today is: "Who owns the data owns the world".
Much of the data analysed in the medical area can improve diagnosis, the understanding of causes of illnesses, medication and treatment.
Data obtained by examining user behaviour when shopping online, or just when browsing the net, is certainly beneficial for companies to identify the right products for customers, yet it makes people transparent to an extent that most do not wish. The number of data protection laws reflects this concern, but it has been said over and over that the new currency is not money but knowledge about people and organisations.
There is also an issue that is sometimes overlooked: big data analysis is often used to find correlations between facts. Such correlations should not be taken at face value but have to be carefully analysed. A typical example: big data analysis has revealed that "pirate attacks on ships happen mostly in areas of the oceans where the water is not polluted." To conclude from this that clean water encourages pirate attacks is, however, clearly nonsense: areas far away from big cities or major shipping lanes, i.e. areas with little supervision by police or military, are best suited for pirate attacks, and those areas happen to have water with little pollution. This case is obvious to anyone looking at it seriously, but there are examples that are much more subtle. In a seminal paper, Calude shows that any (!) correlation can be found if the data set is large enough; he also shows that some decisions of the EU on economic matters were based on the analysis of large data sets in which the detected correlation was spurious!
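Calude's point, that ever larger data sets are guaranteed to contain ever stronger chance correlations, can be demonstrated in a few lines. The construction below is our own illustration (not from Calude's paper): we screen completely unrelated random variables against a random target and watch the best "correlation" grow with the number of variables screened.

```python
import numpy as np

# Screening many unrelated random variables against a random target:
# the more variables we try, the stronger the best chance correlation.
rng = np.random.default_rng(0)
n_samples, n_features = 100, 1000

target = rng.normal(size=n_samples)                  # e.g. "pirate attacks"
features = rng.normal(size=(n_features, n_samples))  # unrelated measurements

# Absolute Pearson correlation of each random feature with the target.
corrs = np.abs([np.corrcoef(f, target)[0, 1] for f in features])

best_10 = corrs[:10].max()     # best match among 10 candidate variables
best_1000 = corrs.max()        # best match among all 1000 candidates
print(f"max |r| over 10 features:   {best_10:.2f}")
print(f"max |r| over 1000 features: {best_1000:.2f}")
```

By construction every "correlation" found here is spurious; the lesson is that the strength of a correlation mined from a large data set says little by itself, which is exactly the trap in the pirate-attack example above.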
Nevertheless, the analysis of big data offers both promises and challenges. In combination with AI or g-AI we can expect a world close to what is described in (Eggers, D., 2013) or (Elsberg, M., 2018).
2.6 Other Issues#Cryptography and ransomware, insect-sized drones, the abuse of autonomous navigation modules and the reliability of crypto-money like Bitcoin were already discussed in Section 1.6; we add only a few points here. Autonomous technology immediately raises the question of autonomous warfare, or simply autonomous weapons. And the energy problem of block-chain technology mentioned there can now be quantified: "The bitcoin network is run by miners, computers that maintain the shared transaction ledger called the block-chain. A new study estimates that this process consumes at least 2.6 GW of power—almost as much electric power as Ireland consumes."
Do we want direct brain-computer interfaces? What do YOU think about the future of quantum computing?
We have not discussed that classical computers are on the way out: over 50% of Web access now comes from smartphones! We have not discussed the use of chips that (when dialled up) just beep (to reveal their location) or activate a door or whatever. After all, many of us use some such gadget to open the car or garage door, to activate security settings in our house, etc. What if those things are hacked?
3 Conclusion and a Warning of Hubris#IT is developing at terrifying speed. IT experts are much in demand. Many feel IT is the centre of the world. We believe some modesty is useful: some argue, e.g., that we will never have g-AI simply because humanity will wipe itself out well before, through nuclear war, bio-warfare or climate change.
This should not be taken lightly. Suppose the chance that the world is wiped out by nuclear war is 1/10 of 1% per year. Even then, the cumulative probability over 1,000 years is 1 - 0.999^1000, i.e. about 63%, and over longer horizons it approaches certainty. Other alternatives are just as bad.
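The compounding can be sketched as follows (the 0.1% annual figure is the one assumed above; the arithmetic, which treats the years as independent, is ours):

```python
# Cumulative probability of at least one catastrophe, assuming an
# independent annual risk p over n years: 1 - (1 - p)^n.
def cumulative_risk(p_per_year: float, years: int) -> float:
    return 1.0 - (1.0 - p_per_year) ** years

print(round(cumulative_risk(0.001, 1000), 3))   # ~0.632 over 1,000 years
print(round(cumulative_risk(0.001, 5000), 3))   # ~0.993 over 5,000 years
```

Even a risk that looks negligible on the scale of a single year becomes near-certain doom on the scale of civilisations, which is the point of the warning.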
Consider world population: 1 billion in 1900, 8 billion right now, 12 billion by 2100. Nigeria will grow from 200 million right now to some 440 million by 2050. Where do you think many Nigerians will want to go, at whatever cost? And of course we use Nigeria just as an example. The population of Africa is exploding (1 billion more people in 50 years), and there are efforts to help a bit, but the real effort, trying to reduce population growth by ethical means, is rarely addressed, despite the fact that the first author of this paper has proposed four models that are ethical and would not be so hard to implement.
Is it really increasing car traffic that is ruining our world, or the fact that we will face 12 to 20 times more cars (compared with the numbers 30 years ago) in the not so distant future? Is the correlation "more cars, more pollution" the correct one? Or rather: more people, more wealth, more cars, hence more pollution?
How about bio-engineering with CRISPR, the "DNA scissors"? Changing humans, creating new forms of life? Was Stephen Hawking correct that humanity will only survive if it colonizes many planets? And such colonization will be easier with some bio-engineering, like allowing humans of the future to also breathe methane, enabling them e.g. to colonize the moon Titan. And we have not discussed genetic engineering to grow protein (meat) as we now grow carbohydrates (vegetables). We have not even mentioned materials science with "shark skin" or "lotus effect" surfaces, or graphene oxide to clean dirty water, or new ways of generating energy. So: let us be optimistic, but this will require serious discussions of alternatives, and not just opportunistic statements from media and politicians.
- 1. Alvaredo, F. (2018). The World Inequality Report. C.H. Beck Publishing Co.
- 2. Delilovic, N., Maurer, H., Zaka, B. (2019). NID- Networked Interactive Digital. Paper in preparation at the time of writing (April 2019).
- 3. Eggers, D. (2013). The Circle. Random House.
- 4. Elsberg, M. (2017). Blackout – Tomorrow will be too late.
- 5. Elsberg, M. (2018). Zero - They know everything you do.
- 6. Garrish, M. (2011). What is EPUB 3? O'Reilly Media.
- 7. Glatz, M., Maurer, H., Tanvir, A. (2018). Finding Reliable Information on the Web Should and Can Still Be Improved. Journal of Computing and Information Technology, vol.2, no.5, 1-6.
- 8. Harari, Y. N. (2018). 21 lessons for the 21st Century. London: Penguin Random House.
- 9. Jaeger, L. (2017). Supermacht Wissenschaft. München: Güters Loher Verlagshaus.
- 10. Kurzweil, R. (2005). The Singularity is Near. New York: Viking Press.
- 11. Lanier, J. (2018). Ten Arguments for deleting Your Social Media Accounts Right Now. Henry Holt and Company.
- 12. Maurer, H. (2004). The Paranet – Breakdown of the Internet. Linz, Austria: freya publishing.
- 13. Maurer, H. (2006). Kampf dem großen Bruder. Linz, Austria: freya publishing.
- 14. Maurer, H. (2017). Retrieved from https://austria-forum.org/af/Geography/Cross-country_information/Stability_of_digitized_information
- 15. Maurer, H., Sebestyen, I. (1984). Report on Videotex development in Austria. Electronic Publishing Review, 45-57.
- 16. Mehmood, R., Maurer, H. (2015). Fact Collection and Verification Effort. 4th International Conference on Data Management Technologies and Applications. Colmar, France.
- 17. Precht, R.D. (2018). Hirten, Jäger, Kritiker. München (Translation to appear): Goldmann/Random House.
- 18. Spiekermann, S. (2019). Digitale Ethik. Droemer/Knaur.
- 19. Tegmark, M. (2017). Life 3.0. London: Penguin Random House.
- 20. The AlphaStar team. (2019). AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. Retrieved from https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/