4 The Safe Use of Artificial Intelligence
By the end of this chapter, learners will be able to analyse risks associated with the development and use of AI, distinguishing between cybersecurity risks, risks related to failures in functionality, risks related to the effects produced by an AI system, and risks from opacity.
Data protection law aims to protect the fundamental rights and freedoms of natural persons from the risks that might come from data processing. Some of these risks, as we have seen in the previous unit, emerge from deliberate attempts to interfere with computer systems or extract information from them. Others, however, emerge from the operation of the systems themselves. Computer systems can fail in their operation, or they might produce undesirable side effects even if running correctly. For example, scholars and activists have recently pointed out various environmental hazards coming from the growing use of AI. In this chapter, we will discuss some aspects of AI technologies that can affect their safe development and use from a data protection perspective.
The concept of safety is a complement to the concept of security we discussed in Chapter 3. They both relate to the prevention of harms coming from a computer system. However, they cover distinct kinds of harm. Whereas security is concerned with preventing malicious interferences with a computer system, safety refers to the prevention of harms that do not involve an attacker (Herrmann and Pridöhl 2020). Those harms might be the result of natural events, such as a storm that disrupts the operation of a system that maintains a critical piece of infrastructure. Or they might be the result of the system’s operation: for example, an AI chatbot designed to defraud users will harm those users if it functions as expected. This means that an AI system must be both safe and secure to comply with legal requirements.
Safety is a complex phenomenon. It relates to social, psychological, and institutional factors, among others (Leveson 2012). As a legal obligation, safety can stem from a wide variety of sources. One of those is data protection law. Under Article 25 GDPR, data controllers are obliged to take technical and organizational measures that consider the risks that processing can create for the rights and freedoms of natural persons.1 This means that lawful processing requires attention to ensuring that those rights and freedoms are not put at risk.
In this chapter, we will examine three sources of risk to safety that are particularly relevant in the context of AI. Section 4.1 discusses numerous factors that can make an AI system’s actual operation diverge from the promises used to sell it. Section 4.2 then illustrates how some risks can emerge even if an AI system operates as expected. Finally, Section 4.3 discusses how factors such as the technical complexity of AI systems, their scale of operation, and intellectual property rights can be obstacles to the evaluation of AI systems.
4.1 The promise of functionality and its limits
By the end of this section, learners will be able to illustrate reasons that might lead an AI system to not operate as expected, such as defects in software design, biased algorithms, or inadequate organizational processes.
Whenever somebody uses or creates an AI system or model, they usually intend it to have one or more functionalities. That is, there is an expectation that the AI technology can be used to do something. For example, a chatbot is expected to be able to interact with humans in conversations, while a facial recognition system is expected to recognize faces. Yet sometimes these functionalities are not actually present in the finished system or model. Or, if they are present, the AI technology performs them worse than expected. In this section, we will discuss how that can happen and why it matters for data protection.
Over the past decade, the rapid development of AI technologies has created expectations that AI can solve almost any problem. Even if no technology available today can solve a given problem, this might not be the case in a few years. For example, the object recognition capabilities that are available in a moderately priced smartphone nowadays were beyond the reach of computer science only a decade ago. As a result, the adoption of AI technologies is driven not just by what we know AI can do for sure today, but by the promise that certain technologies show of solving future problems (Hirsch-Kreinsen 2024).
These promises do not always materialize in practice.2 Back in the 1950s, computer scientists expected to solve most of the major technical problems behind AI in an intense summer of research. Technologists and entrepreneurs keep promising that we will have superintelligent AI systems, self-driving cars, and other technologies, and they keep revising their estimates of when those technologies will actually be available. Even more modest promises often fail to materialize: IT projects are notorious for taking much more time and effort than originally forecast (see, e.g., McConnell 2006). As such, data protection professionals would do well not to take the promises of software development at face value.
4.1.1 Analysing how things can go wrong
One challenge that we face when analysing safety issues is that many things can go wrong. To approach this problem, one can follow a similar approach to the one adopted in cybersecurity: relying on shared knowledge bases that catalogue potential sources of unsafety in the development and operation of an AI system. Some initial steps towards this have been taken, as organizations such as the OECD have created databases that monitor safety incidents related to artificial intelligence. By sharing reports of those incidents, individuals and organizations can understand and draw lessons from what has gone wrong.
To systematize the lessons learned from AI safety incidents, one can draw on theoretical work that builds on them. A potentially fruitful approach has been proposed by Deborah Raji and her co-authors. In a 2022 conference paper, these authors identify what they call the fallacy of functionality: the mere fact that an AI system exists is not enough for us to believe that it does what it promises to do. This is because AI systems can fail in many ways.
Beyond this concise formulation of the fallacy, Raji and her co-authors offer a taxonomy of failure modes of AI. That is, they classify several ways in which an AI system might fail to deliver the promised functionalities. In the following paragraphs, we will look more closely at the categories proposed by those authors.
The first type of failure mode they cover is that of impossible tasks (Raji et al. 2022, p. 962). Sometimes, an AI system cannot do what it is expected to do because that goal cannot be achieved at all. A task might be impossible at a conceptual level, for example if it tries to make predictions with no scientific basis, as is the case with various AI systems attempting to infer traits of personality, behaviour, or social status from physical traits (Stark and Hutson 2022). Other tasks are possible in theory but cannot be achieved in practice. Raji et al. (2022) give numerous examples of attempts to build AI systems from data that is biased or does not capture key features of the problem at hand.
Other failure modes stem from engineering failures (Raji et al. 2022, pp. 963–964). AI systems and models are developed by humans, either acting alone or as part of larger groups and organizations. The individuals and groups working on an AI system are fallible, and this can affect the functionality of an AI system. They might fail to implement certain features correctly, to detect errors in the system, to include safeguards for individual rights, and so on. For example, a programmer might use an outdated version of a software library when developing an AI system, one that gives wrong results in one out of every hundred analyses done by the system. Such programming errors might harm individuals, for example if they lead to an individual being assigned the wrong treatment by a medical AI system.
The third category in the failure taxonomy is that of deployment failures (Raji et al. 2022, p. 964). These failures refer to various things that can go wrong when one puts an AI system to use:
- A system might lack robustness; that is, its outputs might be disturbed if the conditions in which it is used change just a little. For example, a robust AI system for evaluating student performance should not change its predictions radically if a student’s grade in a specific exam is revised a few decimal points up or down (a minimal sketch of such a robustness check follows this list).
- A system might struggle with adversarial attacks, as discussed in the previous unit, which are meant to interfere with its operation.
- A system might fail to account for unexpected interactions. For example, a medical AI system used to diagnose heart diseases by looking at chest images might struggle if it is exposed to a patient with situs inversus, a rare condition in which the organs are mirrored from their usual positions.
Those problems might affect even an AI system that has been well-designed to achieve a feasible task.
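To make the robustness point more concrete, a deployer can probe this kind of fragility from the outside by nudging a single input and measuring how far the output moves. The following is a minimal sketch in Python of such a check; the function `predict_final_grade`, the feature names, and the weights are hypothetical placeholders standing in for whatever interface a real grading system would expose.

```python
# A minimal, hypothetical robustness probe. `predict_final_grade` stands in
# for the prediction function a deployed grading system would expose; the
# feature names and weights are invented for illustration only.

def predict_final_grade(features: dict) -> float:
    # Placeholder model: a weighted average of two exam scores.
    return 0.6 * features["exam_1"] + 0.4 * features["exam_2"]

def prediction_shift(features: dict, feature: str, delta: float) -> float:
    """Measure how much the prediction moves when one input changes by delta."""
    baseline = predict_final_grade(features)
    perturbed = dict(features, **{feature: features[feature] + delta})
    return abs(predict_final_grade(perturbed) - baseline)

student = {"exam_1": 7.5, "exam_2": 8.0}
shift = prediction_shift(student, "exam_1", delta=0.1)
print(f"Shift caused by a 0.1-point revision: {shift:.3f}")
# A robust system should show a proportionally small shift here; a large jump
# would be a sign of the fragility described in the bullet point above.
```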
Finally, Raji et al. (2022, pp. 964–965) discuss failures of communication. One example they give is that of situations in which a vendor overstates or even falsifies the capabilities of a technology they are selling. For example, a provider of a chatbot might claim that their system can reason, even though it merely exploits statistical correlations to produce its results. The other example of communication failure they mention is that of misrepresented capabilities, which can happen if a provider sells a product even though they know it cannot be reliably used for a certain application. In those cases, the problem lies not so much in the technical object as in the communication between those offering the AI-based tools and those buying their promises.
4.1.2 Dealing with the fallacy of functionality
From an organizational perspective, those who use AI technologies will benefit from looking closely at the failure modes mentioned above. Otherwise, they might find themselves buying (or even developing) tools that do not do what they promise, and that therefore become expensive failures. However, addressing the fallacy of functionality is also a legal obligation when the AI systems are covered by data protection law.
For organizations deploying AI systems developed by others, the obligation follows from the fact that they will effectively be the data controllers for those systems. They must therefore discharge various obligations towards the persons whose personal data is processed. If a system fails to operate as expected, the deploying organization will need to answer for any harms that failure might have caused. This means it will need to have a clear view of what its AI systems can (and cannot) do; otherwise, the use of AI might expose it to undesirable liability.
For organizations developing or commercializing AI models or systems, the data protection obligations do not refer directly to the harms stemming from the use of those technologies, as discussed in Section 6.3.
Still, obligations can stem from other sources. An organization that develops an AI system is likely to be the data controller for any processing that takes place during the training process, and as such it will be responsible for safety failures. It might also have obligations of fair representation of its products. For example, the AI Act mandates various kinds of disclosure across the supply chain for providers of high-risk AI systems and of general-purpose AI models. A failure to critically engage with the fallacy of functionality might therefore lead to legal problems down the road.
4.2 Adverse effects of AI applications
By the end of this section, learners will be able to examine how AI systems can harm the rights and interests of individuals and groups even if a system works as advertised.
In this section, we will discuss how AI systems and models might be unsafe even if they deliver all the promised functionalities. This is because many AI-based technologies are used in contexts in which they affect the physical and virtual environments where social life takes place. For example, online platforms often rely on content moderation algorithms, while governments might use AI systems to allocate benefits or detect fraud. The effect of AI systems in those use cases is not solely a function of their technical properties. Instead, it depends on the role those systems are expected to play and how they are operated within a given context.
Because the kind of harm we discuss here is sensitive to the contexts in which AI systems are used, it is not possible to cover all relevant cases. Instead, we will use the hypothetical cases from Section 1.3 to illustrate how those harms might emerge in practice.
These examples highlight that the impact of AI systems extends beyond their functional performance. The broader context in which they are embedded often determines whether they contribute positively or negatively to society, particularly when it comes to protecting the rights and interests of individuals and groups. By looking at potential harms in each of the three case studies, and discussing their legal implications, our goal is to provide examples of the factors that readers can analyse in their own organizations.
4.2.1 AI-based harms at the University of Nowhere
The UNw university is considering integrating AI technologies to alleviate the workload on its overburdened staff, particularly in administrative processes and student services. However, even well-functioning AI systems can lead to unintended harms that affect the rights and interests of students and staff. Let us now consider a few of the potentially harmful applications.
The use of AI-based systems for grading assignments and exams could introduce biases that disproportionately affect certain student groups. Automated assessment tools might systematically disadvantage students who come from non-traditional educational backgrounds, use unconventional writing styles, or whose first language is not the one used for instruction. For example, there have been various reports that AI-powered plagiarism detectors used in English-language institutions are more likely to wrongfully flag a piece of student work as plagiarism if English is not the student’s first language.
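One way for an institution to take such reports seriously is to compare how often the detector flags submissions from different groups of students. The sketch below shows what such a first-pass audit could look like; the records, group labels, and flag rates are entirely invented for illustration and are not drawn from any real detector.

```python
from collections import defaultdict

# Hypothetical audit log: (student group, whether the detector flagged the work).
# In a real audit these records would come from the institution's own systems.
records = [
    ("first_language_english", False), ("first_language_english", False),
    ("first_language_english", True), ("first_language_english", False),
    ("english_as_additional_language", True), ("english_as_additional_language", True),
    ("english_as_additional_language", False), ("english_as_additional_language", True),
]

flags, totals = defaultdict(int), defaultdict(int)
for group, flagged in records:
    totals[group] += 1
    flags[group] += int(flagged)

for group in totals:
    print(f"{group}: {flags[group] / totals[group]:.0%} of submissions flagged")
# A persistent gap between groups' flag rates does not prove bias on its own,
# but it is exactly the kind of signal that warrants further investigation.
```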
Another area of concern is the use of predictive analytics to identify students who may be at risk of dropping out. AI models could analyse student data, such as attendance records, grades, and engagement metrics, to flag individuals for intervention. While this may appear beneficial, it can also lead to privacy invasions and undue stress for students who are unfairly labelled as “at-risk” due to factors that the model misinterprets or oversimplifies. For example, students who work part-time jobs or have caregiving responsibilities might show lower engagement metrics but do not necessarily require or benefit from additional classes or extra tasks. This type of profiling can harm the students’ sense of autonomy and increase stigmatization.
Processes such as those can harm the students affected by them. A dedicated student might be unfairly accused of plagiarism and have to spend time they would dedicate to studies in defending themselves against the charges. A student who is balancing their studies with full-time work to sustain their family might be required to follow remedial classes they have neither the need nor the time for. Such outcomes are not only unfair to the students but can lead to legal liabilities for the university.
From a data protection perspective, the following chapter of this book will help you identify various potential sources of non-compliance. Some of those are quite technical, but others can be identified from the incompatibility of these errors and biases with some GDPR principles. For example, the biases above run afoul of the principle of accuracy, as they lead the university to store inaccurate assessments about individuals. Any applications which make decisions about students without human involvement are also likely to trigger the rules on automated decision-making. Furthermore, many of the applications outlined above are covered by the list of high-risk AI systems in Annex III AI Act, triggering additional obligations. As such, the potential impact of AI on students is something that must be considered beyond technical rigour in design.
4.2.2 The risks of smart toys at DigiToys
DigiToys aims to use AI to create interactive, educative experiences for young children. Even if the AI embedded in the toys functions exactly as intended (engaging children with personalized learning prompts or responding accurately to their voice commands), there are still significant risks related to privacy and child development. Those risks are particularly relevant from a data protection perspective, as Recital 38 GDPR clarifies that the vulnerabilities of children warrant special protection when their data is processed.
As a recital, this stipulation is not legally binding. However, it indicates how one should interpret the applicable legal provisions of the GDPR—not just the specific requirements for children’s consent in Article 8, but any provision that applies when a child’s data is processed. In addition, other provisions of EU law also require special attention to, and protection of, the rights and interests of children, and the GDPR has to be interpreted in a way that is compatible with those.3 Special attention to children is not just a desirable feature of the law, but a legal requirement, even if data controllers retain considerable flexibility in how they deal with that requirement.
Some of the challenges to children’s rights might be directly connected with their right to data protection. For instance, if the toys track children’s interactions to personalize the learning experience, they may inadvertently gather information about the child’s behaviour, preferences, or even their emotional state. Part of that data might fall into the special categories of personal data defined in the GDPR, triggering additional requirements for processing.
Even if the gathered data is not deemed sensitive in a narrow legal sense, it can still pose considerable risks. Data collected from children might be subsequently processed for reasons that are not in their best interest, such as the creation of profiles from an early age. Those profiles might fail to consider how the interests, preferences, and even central aspects of a child’s personality can change radically over time. For example, they might take into account mistakes that people make when they are young, even after those individuals have matured. This can adversely affect those individuals in adult life, and it might create obstacles to the exercise of their right to be forgotten.
The long-term implications of processing are not limited to DigiToys. The information gathered by that company might be shared with partner companies and organizations, requested by government authorities where national or EU law allows it, or even commercialized to data brokers. The spread of children’s data might happen even if the company is deliberately averse to it: for example, if DigiToys goes bankrupt, its assets might be bought by companies that are less invested in children’s rights.
Moreover, the use of AI in toys can alter the way children engage with the digital world, potentially affecting their cognitive and social development. Even if the toy is designed to be educational, there is a risk that children may become overly reliant on interactive digital stimuli, reducing opportunities for free play and human interaction. This can have long-term consequences on their ability to develop essential social skills, even if the toys are technically operating as intended.
4.2.3 Some challenges to automated medicine at InnovaHospital
InnovaHospital is known for its commitment to patient confidentiality and its embrace of innovative technologies. The integration of AI tools into clinical decision-making, such as diagnostic support systems or patient monitoring, may seem like a natural progression. However, even when these systems operate correctly, they can still produce harmful effects.
For example, an AI-based diagnostic tool might prioritize efficiency and speed, recommending standardized treatment protocols based on data-driven insights. While this may streamline care, it can also lead to a “one-size-fits-all” approach, overlooking individual patient needs or ignoring subtle symptoms that do not fit typical patterns. This could harm patients with rare conditions or those from underrepresented demographic groups whose medical data is not adequately represented in the training datasets.
Additionally, the use of AI in triaging patients could inadvertently exacerbate healthcare inequalities. An AI system designed to allocate resources or prioritize patients based on risk assessments might rely on historical data that reflect existing biases in healthcare access. For instance, patients from lower-income neighbourhoods or marginalized communities might receive lower priority because the system correlates socioeconomic factors with worse health outcomes, rather than considering the structural reasons behind these disparities.
As seen from the examples above, the deployment of AI systems in healthcare can contribute to systemic inequalities, even if the models powering those systems are technically well-designed. Such an outcome runs counter to the GDPR’s overall goal of ensuring the protection of the rights and freedoms of individuals, such as their right to health or their right to privacy (which is affected by the large-scale accumulation of data about their healthcare). It can pose problems from an accuracy perspective, and it might also create issues from the perspective of non-discrimination law. Finally, some AI applications might be subject to the AI Act’s high-risk rules, especially if they are covered by the strictest tiers of rules in the Medical Devices Regulation. Hence, compliance with the requirements of data protection law will help InnovaHospital ensure that its embrace of innovative technologies does not come at the expense of the hospital’s commitment to equitable patient care.
4.3 Opacity as a risk
By the end of this section, learners will be able to describe technical and non-technical sources of opacity surrounding AI systems. They will also be able to estimate how that opacity can create problems for compliance with data protection requirements.
A well-known problem with AI systems is their opacity. The expression “black box” has entered public discourse as a way to describe how the inner workings of AI systems and models remain hidden from the sight of the general public. However, the technical complexities we have discussed in Chapter 2 often mean that even the organizations deploying AI systems might lack access to the information they need to make sense of how those systems work. In this section, we will examine potential sources of opacity and discuss their implications for organizations under data protection law.
In short, the opacity of AI systems can stem from technical or legal factors. Both tend to appear in practice, combining to hide information from regulators, the general public, and data controllers themselves. This can be a legal problem in itself, to the extent that it prevents controllers from complying with their transparency and accountability duties. But it can also be a complicating factor in the various issues we discussed above, amplifying harm by preventing organizations and data protection authorities from discovering in time what is going on. It is often not possible or desirable to eliminate those sources of opacity. Still, their potential impact on the rights, liberties, and interests of those affected by AI means that the legitimate grounds for opacity must be balanced against those other interests at stake. Therefore, organizations will need to adopt measures to deal with opacity in the AI technologies they create or use.
4.3.1 Two kinds of AI opacity
AI systems are often characterized by a high degree of opacity, which can arise from many factors. On the technical side, many AI models, particularly those based on deep learning, are complex and difficult to interpret. However, even when it is technically possible to make sense of an AI-based technology, other factors might be an obstacle to that. In particular, opacity can also be produced by the law. For example, some provisions in the German tax code prevent the disclosure of information about the algorithms used by tax authorities for estimating fraud risk (Hadwick and Lan 2021). The interplay between technical and non-technical factors can contribute to our lack of understanding about what happens within an AI system or model.
When we think about the black box of AI, we often think about its technical complexity. AI models rely on intricate mathematical operations and advanced computational techniques. To understand these models—let alone to tinker with their inner workings—one must have specialized training. Even though recent developments in AI technologies reduce the specialized knowledge needed for using them, their components, such as the neural networks powering many AI models, remain inaccessible to non-expert users (Kolkman 2022). Experts might also struggle to make sense of those models, given the vast number of parameters and the complex architectures in which their components are arranged (Burrell 2016). As a result, making sense of what an AI system is doing is a task that can require considerable technical work.
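To get a sense of the scale that makes such inspection difficult, consider how quickly the number of adjustable parameters grows even in a small fully connected network. The layer sizes in the sketch below are arbitrary choices for illustration; production models are orders of magnitude larger.

```python
# Parameter count of a small, hypothetical fully connected network.
# A layer with n inputs and m outputs contributes n * m weights plus m biases.
layer_sizes = [784, 512, 256, 10]  # illustrative layer widths only

parameters = sum(
    n_in * n_out + n_out
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)
print(f"Adjustable parameters: {parameters:,}")  # roughly half a million
# Modern deep models have millions to billions of such parameters, which is one
# reason why tracing an individual output back through them is so difficult.
```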
However, as discussed above, making AI systems easier to scrutinize is not always in the best interest of some actors. For example, government authorities might be unwilling to release some information about how their AI systems work, fearing that citizens might “game” the system to avoid detection. Or the providers of AI technologies might not want to release information about how they configure their AI models, in order to prevent competitors from using that information to create better models.
The law recognizes various legitimate reasons why one might want to pursue secrecy, such as:
- State secrecy, that is, the protection of information related to vital public functions.
- Trade secrecy, that is, the protection of information related to how a business operates.
- Intellectual property law, which might be used by organizations to deny access to the technicalities of an AI system or model.
- Data protection law itself, which can be an obstacle to disclosure, for example when an organization argues that it cannot disclose the training data for an AI model because it contains information about identified or identifiable natural persons.
Many of those legal grounds have been invoked to deny access to information about AI systems.
Often, the denial of information is mostly directed at the general public, as seen in the examples above. But some of the concerns driving organizations towards confidentiality can also apply to downstream providers and deployers. For example, a company that sells a general-purpose AI model might fear that its customers will clone the model and become competitors. The result is that legal opacity is sometimes used against the very organizations that create and use AI technologies. Those providers and deployers find themselves in the unenviable position of being potentially responsible for the outcomes produced by technologies they have little margin to understand or control. As we shall see now, this situation has legal implications.
4.3.2 What AI opacity means for data protection law
As mentioned in the introduction to this section, AI opacity can lead to two distinct but related issues. If organizations lack visibility of the inner workings of an AI system or model that they use, they might be unable to comply with any legal obligations requiring them to release information about those workings. Additionally, AI opacity might hinder the detection of other sources of harm within an AI system, delaying responses to them. Both implications of opacity are relevant for data protection law.
Regarding the legal obligations that are directly affected by opacity, one can focus primarily on issues of transparency and accountability.
- Articles 13–15 GDPR establish that the data controller must be able to disclose some types of information to the data subject, such as information about whether and how the automated processing of their data is used for making decisions without human involvement.
- Article 24 GDPR further establishes that data controllers are responsible for demonstrating compliance with the requirements of data protection law. Given that some of those requirements concern the technical means used to process personal data, complying with this duty requires information about how the system is set up.
These duties apply to any use of AI that involves the processing of personal data, regardless of which AI technology is used. This means that organizations deploying or developing AI systems cannot invoke technical complexity as an excuse for failing to discharge their duties. Instead, they are expected to adopt technical and organizational measures that mitigate said opacity.
Such measures are also relevant for the detection of risks associated with the use of AI-based technologies. Two examples can illustrate how opacity might amplify such risks:
- Consider a scenario in which UNw decides to use an AI system for grading and assessing student performance. If certain groups of students consistently receive lower scores due to biases in the model, the university may not realize this issue if the system’s decision-making process is too opaque to audit. This lack of visibility can allow discriminatory patterns to persist, even if the university has no intention of unfair treatment (a sketch of a simple output-level check follows this list).
- For DigiToys, opacity in its AI-enhanced toys might prevent the company from identifying privacy issues. If parents express concerns about how the toys process and respond to children’s voices, the company may struggle to offer clear explanations or assurances due to the proprietary nature of the algorithms involved. This lack of transparency can erode trust and lead to reputational harm, even if the AI system functions as designed and complies with other legal requirements.
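Even when a system’s internals are off limits, deployers can often still monitor its outputs. As a purely illustrative sketch, the snippet below compares the average grades an opaque system awards to different student groups; the numbers and group labels are invented, and a real check would of course need far more careful statistical treatment.

```python
from statistics import mean

# Hypothetical output log: (student group, grade awarded by the opaque system).
# An output-level check like this needs no access to the model's parameters.
graded = [
    ("group_a", 8.1), ("group_a", 7.8), ("group_a", 8.4),
    ("group_b", 6.2), ("group_b", 6.5), ("group_b", 6.0),
]

groups = sorted({group for group, _ in graded})
for group in groups:
    avg = mean(score for g, score in graded if g == group)
    print(f"{group}: average grade {avg:.2f}")
# A persistent gap between groups is not proof of discrimination, but it gives
# the university a reason to investigate further, even with a black-box tool.
```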
To the extent that organizations are legally obliged to adopt measures to address those risks, as we discussed in this unit, opacity can be an obstacle to compliance. It can increase the time necessary for identifying that a risk exists and for understanding its likelihood and severity. For example, UNw might struggle to detect the biased algorithm because the terms of service of the algorithmic tool it uses do not allow access to inner system parameters. In that case, risks might only be noticed once they have manifested and harmed students.
Even after the risk is detected, opacity might mean that an organization does not know exactly how it can address a problem. For example, it might be the case that UNw’s tool actually has settings that would allow for safe processing, but the university does not know about those settings. Opacity is not a problem just for data subjects, but for the controllers processing their data as well.
4.4 Conclusion
In this chapter, we have seen the importance of distinguishing between security and safety in the context of AI systems. A secure AI system might still be unsafe for use, either because of its technical properties or because of problems with the context in which an organization wants to use it. Conversely, an AI system that is safe in light of those factors might still cause harms to individuals and groups if its security is not adequate for its task. Therefore, organizations need to pay attention both to cybersecurity and to the safety of their AI technologies in order to comply with the GDPR’s requirements.
Toward that goal, we can highlight the following takeaways from this chapter:
- Safety risks from AI can appear from a variety of sources, including but not limited to:
- Technical and organizational shortcomings that prevent an AI system from delivering the promised results.
- Unlawful or otherwise unethical uses of AI technologies, which can sometimes be more harmful if the system operates correctly.
- The opacity of AI systems, which can prevent compliance with disclosure requirements and prevent the detection of other safety hazards.
- Safety risks must be addressed by proactive measures, both to prevent their occurrence and to mitigate the harms from any incidents during operation.
- Risks can be addressed by technical measures (that is, by changes to the design of an AI system or model) and organizational measures (that is, changes to its operation context).
- Some risks cannot be fully eliminated, only mitigated.
- Some technical risks might follow from essential properties of the technology.
- Others might be solvable in theory, but an adequate solution might be beyond the state of the art.
- Last but not least, fully eliminating some risks might be too expensive in practice.
- Likewise, some organizational risks are inherent to a technology’s intended purpose, the context in which it is meant to operate, or general societal arrangements that cannot be changed just for the sake of safe AI use.
- Whenever that is the case, organizations must decide whether they can mitigate the risks enough to make the use/development of AI worthwhile. If not, they might want to abandon it.
By keeping in mind the points above, one can have a clearer picture of why safety matters and why it might be threatened by the use and development of AI technologies. The rest of this book will show various measures and safeguards that can be adopted to detect and respond to potential safety risks.
Exercises
Exercise 1. How might deployment failures contribute to the fallacy of functionality?
- a. By overstating the system’s capabilities to potential users.
- b. By introducing errors in the AI training dataset.
- c. By causing an AI system to malfunction under specific conditions.
- d. By using proprietary software to conceal errors.
- e. By withholding critical information about system limitations.
Exercise 2. UNw decides to use an AI system for predicting student dropout rates. The system consistently flags students with part-time jobs as high-risk, even though these students perform well academically. What does this scenario best illustrate?
- a. An engineering failure during system development.
- b. A deployment failure due to environmental factors.
- c. A lack of transparency in AI functionality.
- d. An inappropriate use of AI for predictive modelling.
- e. An unrealistic assumption about the system’s ability to identify true dropout risks.
Exercise 3. Which potential harm might arise even if DigiToys’s AI toys follow all the relevant security measures, comply with applicable legal requirements, and operate as described in their specifications?
- a. Violation of GDPR transparency requirements.
- b. Excessive reliance on digital stimuli by children.
- c. Exposure of children’s data due to weak encryption.
- d. Miscommunication about toy functionalities.
- e. Poor-quality responses to voice commands.
Exercise 4. What is a likely source of opacity in InnovaHospital’s AI systems?
- a. Complex mathematical models.
- b. Excessive reliance on user feedback.
- c. Over-simplified system design.
- d. Redundancy in system layers.
- e. Use of open-source algorithms.
Exercise 5. Which of the following data protection principles is most directly affected if UNw deploys an opaque AI system to grade student tests?
- a. Principle of accuracy.
- b. Principle of fairness.
- c. Non-discrimination under EU law.
- d. Principle of lawfulness.
- e. Security-by-design principles.
4.4.1 Prompt for reflection
The three kinds of safety failures discussed in this unit are complex. They can emerge from many sources, and it can be hard to find out whose actions caused the ensuing harms. Discuss who should be held accountable in these situations: the developers, the deploying organizations, or both? How can data protection officers (DPOs) play a proactive role in identifying and mitigating such risks before they materialize?
4.4.2 Answer sheet
Exercise 1. Alternative C is correct. Alternatives A and E reflect failures of communication, alternative B is an engineering issue (data failure), and alternative D is a deliberate decision to cultivate opacity.
Exercise 2. Alternative E is correct. While alternatives A and B might also be the case, engineering failures are likely to be downstream from the faulty assumptions: if the university believes in falsehoods about part-time students, it is likely to design the system based on those assumptions. Alternatives C and D are less likely to be applicable in general, though they might be relevant under specific circumstances.
Exercise 3. Alternative B is correct. Alternative A would reflect a breach of legal requirements, while alternatives C to E refer to different failures of functionality.
Exercise 4. Alternative A is correct. Of the others, alternative D might have some impact on opacity, but that depends on the specifics of the system. The others have no immediate connection to the sources of opacity discussed above.
Exercise 5. Alternative D is correct, as the university will likely be unable to comply with the GDPR’s requirements for transparency and the data subject’s right to access to data. Alternatives A and B might also be indirectly affected by opacity if it prevents the university from detecting other problems with the system.
References
Adrien Bibal and others, ‘Legal Requirements on Explainability in Machine Learning’ (2021) 29 Artificial Intelligence and Law 149.
Maja Brkan and Grégory Bonnet, ‘Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: Of Black Boxes, White Boxes and Fata Morganas’ (2020) 11 European Journal of Risk Regulation 18.
Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1.
Madalina Busuioc, Deirdre Curtin, and Marco Almada, ‘Reclaiming Transparency: Contesting the Logics of Secrecy within the AI Act’ (2023) 2 European Law Open 79.
Roel IJ Dobbe, ‘System Safety and Artificial Intelligence’ in Justin Bullock and others (eds), Oxford Handbook on AI Governance (Oxford University Press 2022).
Stefan Gaillard, Cyrus Mody and Willem Halffman, ‘Overpromising in Science and Technology: An Evaluative Conceptualization’ (2023) 32 TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 60.
David Hadwick and Shimeng Lan, ‘Lessons to Be Learned from the Dutch Childcare Allowance Scandal: A Comparative Review of Algorithmic Governance by Tax Administrations in the Netherlands, France and Germany’ (2021) 13 World Tax Journal.
Dominik Herrmann and Henning Pridöhl, ‘Basic Concepts and Models of Cybersecurity’ in Markus Christen and others (eds), The Ethics of Cybersecurity (Springer 2020).
Hartmut Hirsch-Kreinsen, ‘Artificial Intelligence: A “Promising Technology”’ (2024) 39 AI & SOCIETY 1641.
Daan Kolkman, ‘The (in)Credibility of Algorithmic Models to Non-Experts’ (2022) 25 Information, Communication & Society 93.
Nancy G Leveson, Engineering a Safer World: Systems Thinking Applied to Safety (The MIT Press 2012).
Steve McConnell, Software Estimation: Demystifying the Black Art (Microsoft Press 2006).
Inioluwa Deborah Raji and others, ‘The Fallacy of AI Functionality’, 2022 ACM Conference on Fairness, Accountability, and Transparency (ACM 2022).
Luke Stark and Jevan Hutson, ‘Physiognomic Artificial Intelligence’ (2022) 32 Fordham Intellectual Property, Media and Entertainment Law Journal 922.
Charlotte A Tschider, ‘Legal Opacity: Artificial Intelligence’s Sticky Wicket’ (2021) 106 Iowa Law Review Online 126.
1. Note that the provision does not speak of “fundamental rights” only, but it covers the broader range of legally recognized interests that an individual might have.
2. For a study of overpromising in technology, see Gaillard et al. (2023).
3. See, in particular, Article 24 of the EU Charter of Fundamental Rights.