Intelligent Environments 2019 - Workshop Proceedings of the 15th International Conference on Intelligent Environments
Page 211

knowledge about the people they collect information about, and it can use this knowledge to determine what that audience would likely understand (calculated intelligibility). For example, a controller collecting the personal data of working professionals can assume its audience has a higher level of understanding than a controller that obtains the personal data of children. If controllers are uncertain about the level of intelligibility and transparency of the information, or about the effectiveness of user interfaces, notices, and policies, they can test these through mechanisms such as user panels, readability testing, and formal and informal dialogue with industry groups, consumer advocacy groups, and regulatory bodies, where appropriate.

If "intelligible form" means ensuring the data subject's understanding of the technology, we then face the difficulty of fully understanding a technology that is already complex in and of itself [20]. The famous black-box algorithms may prevent even data controllers from understanding what an algorithm is exactly doing with the personal data and how it evaluates it. An AI system may receive so much personal data that it causes fundamental changes in the algorithm's decision-making that are not predictable by its creators; in this case, data controllers are somehow bound to explain something that they do not even know technically. Not surprisingly, this is the very nature of AI, and it is "not a bug" [21]. Which personal data, from what source, and in what way it was considered by an algorithm is still an open question for many researchers. Research on creating explainable (transparent) AI and accountable algorithms² is ongoing, but until a universal solution is found, data controllers may make up stories [22] and feed them to data subjects, who cannot verify any of the information they provide.
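The readability testing mentioned above can be approximated with standard metrics. A minimal sketch, using the well-known Flesch Reading Ease formula with a crude vowel-group syllable heuristic (the sample notice text here is invented for illustration, not taken from any real policy):

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels (y included)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text.
    Scores below roughly 50 are generally considered difficult reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical privacy-notice snippet, for illustration only.
notice = ("We process your personal data to provide the service. "
          "You may withdraw consent at any time.")
score = flesch_reading_ease(notice)
```

Such a score is only a proxy for intelligibility; a controller would still need user panels or comparable empirical testing to gauge what a specific audience actually understands.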
A study measuring Android apps' behavior and their potential non-compliance with their companies' own privacy statements revealed that almost half of the studied apps were potentially inconsistent with the policy they presented, and only a small portion were completely consistent with it [23].

The updated guidelines of the Article 29 Working Party on transparency [25] actually give a clue about preparing information tailored to different audiences, so that the information can be understood by each. According to the guidelines, data controllers should first identify the audience, including the factor of age, especially minority, and then present the information. In this context, intelligible information means information that can be understood by the average member of the target groups as assessed by the data controller, not by each or all of them. This statement remains vague if the service to be offered is a personalized one, developed on the basis of an algorithm learning from personal data. If the condition is to first evaluate the groups based on criteria such as age, there could be quite large differences in the level of understanding even within the same group. (However, recent experience might show that younger people understand specific terminology much better than older people do.) The document also suggests that the level of intelligibility, not the level of users' understanding, could be tested with several methods, which still may not account for every single data subject's personal characteristics. This explanation, in our view, should be further revised in line with the characteristics of specific AI services.

² Interestingly, accountability has never before been an issue in technological terms, only in legal terms, in light of institutions and decision-makers. It does, however, strongly apply to algorithms as decision-makers, without the AI actually being qualified as a person in a legal sense.
However, the EU introduced the idea of giving them an electronic personality, and the scientific community has already started assigning to AI systems principles that have so far only been applied to persons.

G. Gultekin Varkonyi / Operability of the GDPR's Consent Rule in Intelligent Systems · 211
Title
Intelligent Environments 2019
Subtitle
Workshop Proceedings of the 15th International Conference on Intelligent Environments
Authors
Andrés Muñoz
Sofia Ouhbi
Wolfgang Minker
Loubna Echabbi
Miguel Navarro-Cía
Publisher
IOS Press BV
Date
2019
Language
German
License
CC BY-NC 4.0
ISBN
978-1-61499-983-6
Dimensions
16.0 × 24.0 cm
Pages
416
Category
Conference proceedings