9 Operation and Monitoring of an AI System
By the end of this chapter, learners will be able to:
- identify data protection issues that can emerge once an AI system is put into service within an organization;
- organize a monitoring system to detect those issues; and
- propose interventions to ensure an organization’s continued compliance with data protection obligations.
The operation stage of the life cycle is the goal of the entire development process: most AI systems are designed so that they can be used at some point. Given the considerable effort involved in the previous stages of the life cycle, an AI system tends to be used for a purpose, which might or might not be the purpose for which it was originally designed. Either way, its use will affect the functioning of physical and virtual environments. This chapter discusses what data protection obligations apply at this life cycle stage.
Those obligations are largely connected to the idea of risk. Article 25(1) GDPR obliges data controllers to consider “the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing” when choosing which technical and organizational measures to adopt. The AI Act makes risk an even more salient factor: as discussed in Section 1.2, the rules it applies to AI systems and models are determined based on their perceived risk. Compliance with those requirements therefore demands a solid understanding of what we mean when we talk about risk.
Both the GDPR and the AI Act rely on an actuarial definition of risk. Under such a definition, a risk is a quantity associated with an event that might (or might not) happen. That event has a likelihood of happening, and if it does take place, its consequences are taken to have a measurable severity. The risk associated with that event is, therefore, the combination of the event’s likelihood and its severity.1 Until a risk materializes, those values are speculative, and as such their determination suffers from the same issues of anticipation discussed in Section 9.2. Even so, they offer an initial basis for determining how much effort should be dedicated to preventing a potentially harmful outcome.
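To make this actuarial framing concrete, the sketch below scores a few hypothetical risk scenarios by combining likelihood and severity ratings. The ordinal scales, the scenarios, and the triage thresholds are assumptions chosen for illustration; neither the GDPR nor the AI Act prescribes any particular scale.

```python
# Illustrative only: the 1-5 scales, scenarios, and thresholds are assumptions,
# not values prescribed by the GDPR or the AI Act.

def risk_score(likelihood: int, severity: int) -> int:
    """Combine likelihood and severity ratings (each on a 1-5 scale) into one score."""
    return likelihood * severity

def priority(score: int) -> str:
    """Map a score onto a rough triage category for mitigation effort."""
    if score >= 15:
        return "high: mitigate before (further) processing"
    if score >= 8:
        return "medium: mitigate and monitor"
    return "low: monitor"

# Hypothetical scenarios a controller might assess for a deployed AI system.
scenarios = {
    "re-identification of training data": (2, 5),
    "accuracy degradation after deployment": (4, 3),
    "unauthorised access to system outputs": (3, 4),
}

for name, (likelihood, severity) in scenarios.items():
    score = risk_score(likelihood, severity)
    print(f"{name}: score {score} -> {priority(score)}")
```

Note that very different combinations (a rare but severe event, a frequent but mild one) can produce similar scores, which is one reason why the contextual evaluation discussed below remains necessary.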
Both pieces of legislation address the risks to data subject rights created by the processing of data by AI systems. Yet, there are crucial differences between how the GDPR and the AI Act perceive those risks:
- The GDPR adopts a contextual view of risk, whereas the AI Act follows a top-down approach (de Gregorio and Dunn 2022).
  - Under the GDPR, data controllers are obliged to evaluate the risks their processing creates to data subject rights and freedoms, and then choose the technical and organizational measures that are best suited to eliminating or at least mitigating them.
  - Under the AI Act, the legal text determines the risk categories that must be applied, leaving to the regulated actor the task of applying different sets of rules according to the risk level determined by the EU lawmaker.
- The GDPR requires data controllers to balance the rights at stake, including the fundamental right to data protection, whereas the AI Act relies on a satisficing approach (Almada and Petit 2025).
  - Under the GDPR, this means, among other things, that processing must avoid interfering with data subject rights if the same goals can be achieved with less interference.
  - Under the AI Act, any system or model that meets the specified requirements is considered lawful, regardless of whether it barely clears the standard or surpasses it by a wide margin.
- The GDPR requires data controllers to adopt technical and organizational measures to address risks, whereas the AI Act largely focuses on technical measures.
From these differences, one can conclude that fulfilling the AI Act’s requirements is often necessary for data protection, but rarely sufficient. When using AI technologies, data controllers must not lose track of their obligations towards data subjects and their rights, even after initial deployment.
This chapter examines the risk management obligations that apply to any organization as an AI system operates. Some of those obligations fall on the actor who operates the system, but its original developer remains subject to duties under both the AI Act and data protection law. Section 9.1 outlines how the GDPR and the AI Act oblige data controllers to manage the risks associated with the development and use of AI systems. Section 9.2 shows how organizations are obliged to monitor issues with their systems after deployment and presents some techniques they can use to that effect. Section 9.3 then follows up with a discussion of the legal obligations that bind organizations to address any post-deployment issues for the systems they are responsible for.
9.1 Managing data protection risks
By the end of this section, learners will be able to identify risks that an organization must monitor and address once an AI system is deployed.
Under data protection law, data controllers are required to address the risks created by the AI systems they develop or use, both at the moment of initial development and in any subsequent processing of personal data:
- Article 25 GDPR creates an obligation to address risks to data protection principles.
  - For example, if a system’s accuracy degrades after its deployment, the controller must take technical and organizational measures to ensure this does not harm data subjects; a minimal monitoring sketch follows this list.
  - Such measures might include changes to the system (such as improving its model) or to its organizational context (such as removing the system from some critical applications where it would create the most risk).
- Article 32 GDPR creates an obligation to deal with security risks.
  - For example, malicious actors might figure out a way to override the safeguards adopted in a model and extract the data used for its training.
  - If that happens, the controller must adopt measures to prevent and respond to breaches.
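As a minimal sketch of how the accuracy example above could be monitored in practice, consider the following code. The baseline accuracy, tolerance, and window size are assumptions made for the example; actual values would have to follow from the risk assessment for the specific system.

```python
# Illustrative sketch: the baseline accuracy, window size, and tolerance are
# assumptions for the example, not legally prescribed values.
from collections import deque

class AccuracyMonitor:
    """Track verified prediction outcomes and flag degradation against a baseline."""

    def __init__(self, baseline: float = 0.92, tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline              # accuracy level accepted at deployment
        self.tolerance = tolerance            # how much degradation is tolerated
        self.outcomes = deque(maxlen=window)  # rolling record of correct/incorrect outcomes

    def record(self, correct: bool) -> None:
        self.outcomes.append(correct)

    def degraded(self) -> bool:
        """Return True when rolling accuracy falls below baseline minus tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-deployment evidence yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.tolerance

# In operation, each prediction whose outcome becomes known would be recorded:
monitor = AccuracyMonitor()
monitor.record(True)   # e.g. the prediction matched the verified outcome
if monitor.degraded():
    print("Accuracy degradation detected: review technical and organizational measures.")
```

A detection like this is only the technical half of the measure; the organizational half is deciding, in advance, who reviews the alert and what they are empowered to do about it.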
Those obligations apply during software development, giving rise to the obligations discussed in Chapter 7. But they also apply once the system is in service, as the risks to data protection and cybersecurity must be addressed whenever personal data is processed. Chapter 3 and Chapter 4 gave an overview of several risks that must be considered. It is now time to discuss how an organization should weigh those risks in practice.
Both GDPR articles stipulate four factors that must be taken into account in the assessment of data protection risks:
- the state of the art;
- the cost of implementation;
- the nature, scope, context, and purposes of processing; and
- the likelihood and severity of risks.
All these factors must be considered for each instance of processing, but the relative importance of each one will depend on context.
9.1.1 Relevant factors for risk management
Regarding the state of the art, the GDPR obligations mean that data controllers must consider the best practices available in the market and the current capabilities of technology. On the one hand, this means that controllers are obliged to update their standards as technology evolves: what is adequate today might not be tomorrow. On the other hand, this means Articles 25 and 32 GDPR do not oblige controllers to advance the state of the art. They cannot be expected to invent novel technical and organizational measures that go beyond existing practice. However, it might be the case that a controller must refrain from processing data if no tried-and-true measures and safeguards can reduce risk to an acceptable level.
The cost of implementing measures is also a relevant factor when it comes to AI. Developing AI technologies is a resource-intensive task, especially when it comes to AI models that meet or advance the state of the art. Procuring AI-based tools from external sources can also be expensive, and costs are likely to increase if an organization must include extensive safeguards for data processing. As a result, the obligations of data protection and security by design do not oblige controllers to adopt measures whose cost is excessive relative to the risk reduction they deliver. But, as the extensive enforcement of Article 25 GDPR throughout the EU shows, this does not mean that organizations can avoid adopting measures just because they are expensive. Instead, they are still obliged to adopt technical and organizational measures that reduce risks at a cost proportional to that reduction.
9.1.2 Interpreting risk management duties
When it comes to the properties of processing and the risks it creates, evaluation will depend on the specifics of the system. For example, DigiToys must adopt different safeguards for the AI systems it uses in its toys and the ones it uses for data analytics, even if those systems are based on the same technologies. This is because those applications give rise to different risks. An issue with the toy itself might harm children, for example by allowing a hacker to interact directly with a child playing with the toy. Issues with data analytics, in contrast, are likely to harm the company’s direct customers—for example, by exposing financial data of the parents and other people who buy those toys. Some measures that are useful for addressing one type of risk, such as anonymizing financial data, might have little to offer against the other.
Given the novelty of many AI applications, it is not always easy to identify what kinds of risks must be considered for each type of processing. Still, data protection professionals can rely on a few tools to support them in that identification:
- They can extrapolate from existing sources of knowledge about risks of AI, such as the ones discussed in Chapter 3 and Chapter 4.
- They can use forecasting tools such as those discussed in the next section.
- Once potential risks are evaluated, they can apply the general guidance offered by the EDPB in the Guidelines 4/2019, as well as materials provided by the national and regional data protection authorities.
Despite the differences between risk framings discussed above, the AI Act can also provide some guidance for addressing the risks that AI systems can create to the rights and freedoms of data subjects. Its Article 9 stipulates that the providers of high-risk AI systems must adopt practices for:
- Identifying and analysing known and reasonably foreseeable risks that the system can pose when used in accordance with its intended purpose.
- Estimating and evaluating risks that may emerge both when the system is used in accordance with purpose and under conditions of reasonably foreseeable misuse.
- Evaluating other risks possibly arising, based on the data gathered from the post-market monitoring system.
Because those obligations are directed at high-risk AI systems, they are not binding in most of the data processing involving AI. Furthermore, one must be careful with the different types of risk that each piece of legislation deals with, as some of the risks that are of interest for the AI Act are not risks to the fundamental rights and freedoms of a data subject.2 What is useful here for compliance with data protection law is, instead, the sequence of steps that an organization can follow when evaluating the risks it faces while developing or deploying an AI system.
The AI Act can also be a source of guidance regarding which measures to apply. Here, however, it is considerably vaguer than it is about risk assessment. Articles 10 to 15 AI Act stipulate technical requirements that must be observed by all high-risk AI systems, but they only define the “essential elements” of those requirements. Providers of AI systems are expected to interpret these essential elements and devise their own measures for compliance, for example with the support of the means discussed in Chapter 13. Even so, the AI Act’s list of essential requirements offers a starting point that organizations not obliged to follow it can still adjust to their needs.
Finally, the risk assessment obligations in the GDPR and the AI Act are continuing obligations. They do not end with a system’s development, or even with its initial deployment. This suggests that data controllers must consider the timing of their interventions to address risk. Sometimes, it might be easier to develop a workaround for a known issue in an AI system than to solve it through technical means. For example, if the UNw university’s AI system for forecasting student outcomes does not work well with students from non-traditional backgrounds, the university might simply create manual forecasts for those students, especially if there are few of them. However, an organization relying on such a workaround must make sure that it is actually addressing the issue at that later stage. Otherwise, the lack of organizational measures (or their inadequacy) might itself be a breach of the obligations of data protection and security by design.
9.2 Detecting issues with AI systems
By the end of this section, learners will be able to explain the various legal obligations that bind organizations to monitor data protection risks during deployment.
In the previous section, we learned that both the GDPR and the AI Act require organizations to keep track of AI-related risks throughout the entire life cycle of an AI system or model. A provider’s obligations do not end when their product is commercialized but continue until it is no longer processing personal data. Likewise, the obligations of an organization deploying AI go beyond individual processing operations, encompassing all the ways it feeds personal data to an AI system and draws personal data from it. Now, it is time to examine how organizations can detect risks that can materialize after AI is deployed.
9.2.1 Ex ante risk detection
To some extent, detecting post-deployment risks is a matter of anticipation. Because the risks associated with AI are not always well-known, data controllers need to be proactive in their identification of potential risks. Otherwise, they might fail to adopt the necessary measures to address such risks and end up exposed to liability.
This means organizations must keep track of technical developments that might render their approach obsolete or overcome existing safeguards. For example, if somebody develops a new technique to extract personal data from medical images, some data that InnovaHospital previously treated as anonymous might be subject to re-identification. If that is the case, such data is now considered personal data and must be subject to appropriate safeguards.
They must also consider that changes in the context in which an AI system operates can affect its usefulness. Consider a scenario in which the university UNw starts to teach many courses in a new language, such as Chinese. If the systems it uses to predict student performance do not consider linguistic competence, they might provide an inadequate assessment of student performance. A student who has all the technical competences to succeed in a mathematics course might still struggle if they cannot understand what is being said in the classroom.
If an organization can forecast some of those changes, it might account for them in the system’s design and avoid the need for future changes. To do so, an organization might benefit from various tools:
- Structured techniques for prediction, such as the Delphi method, allow organizations to combine the predictions of various experts and compensate for individual biases.
- Reports about market tendencies and trends in technological innovation might be a useful source of information about what is coming next in terms of technical and social developments.
- The data collected during a system’s test processes might suggest that some aspects of the system are acceptable for now but might become a problem later. For example, one might look at a system’s accuracy metrics and decide that they are acceptable for a system that makes a thousand inferences per day, but that the error levels would be unacceptable if that system were to make a million daily inferences in the future.
- Once the system is deployed, an organization can use the information it collects about its operation to extrapolate future tendencies. Coming back to the previous example, the growth in the user base might be a good indicator of whether system usage will reach a point where a previously acceptable level of accuracy becomes a problem; a simple projection of this kind is sketched after this list.
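The last two points can be illustrated with a simple projection, sketched below. The error rate and usage figures are invented for the example; the point is that a constant error rate translates into very different numbers of affected people as usage grows.

```python
# Illustrative projection: the error rate and usage figures are assumptions,
# not measurements from any real system.

error_rate = 0.002            # 0.2% of inferences are wrong, as measured during testing
current_volume = 1_000        # inferences per day today
projected_volume = 1_000_000  # inferences per day if adoption grows as forecast

for label, volume in [("current", current_volume), ("projected", projected_volume)]:
    expected_errors = error_rate * volume
    print(f"{label}: ~{expected_errors:.0f} erroneous inferences per day")

# The same error *rate* affects ~2 people per day today but ~2,000 per day at the
# projected volume, which may change the severity side of the risk assessment
# even though the likelihood of an individual error is unchanged.
```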
By combining those sources of information with the organization’s knowledge of its context of operation, a data protection professional will be able to identify risks that they should analyse further.
9.2.2 Ex post risk detection
While anticipation is a valuable tool for identifying risks, a data controller cannot rely just on it, for several reasons. Sometimes even the best forms of anticipation go wrong: we might underestimate the likelihood of a harmful event taking place, or the extent of harm that comes out of it. For example, the development of feasible quantum computing techniques seems unlikely in the short term, but it would create all sorts of problems for current information security practices. In other cases, even robust forecast techniques might be blindsided by unexpected new developments, such as problems in datasets used to train widely used AI models. Therefore, it is likely that organizations will only learn about some of the risks of AI once they have materialized, that is, once somebody has been harmed by the use of an AI system.
It follows from this that organizations must keep track of harms that escaped their initial anticipation efforts. This is true for at least two reasons. Even if the harm was genuinely unforeseeable at first, it might happen again, and in that case, it is no longer unprecedented. A single wrongful diagnosis from a medical AI system might come from a bizarre set of coincidences, but understanding those circumstances would allow a hospital to prevent that error from happening repeatedly. And doing so is in its interest, as organizations remain responsible for the effects of data processing that they control.
For high-risk AI systems under the AI Act, evaluating risks that have materialized into actual harm is a legal obligation. Under Article 26(5), organizations deploying high-risk AI systems must monitor the system’s operation, following the instructions for use given by the system’s provider. Deployers must report any serious incidents to the provider. They are not themselves obliged to adopt corrective measures under the AI Act. But, as data controllers of the individual uses of the AI system, those deployers will still be responsible for preventing the harms under data protection law, as seen in Section 6.2.
Providers, in turn, are required to communicate with the market surveillance authorities and adopt measures to fix the system. That is, a provider is required to eliminate or mitigate the possibility that the harmful event will happen again. If they fail to do so, the surveillance authorities can adopt various sanctions, including fines, the removal of the system from the EU market or a mandated recall. If the harm stems directly from the training process of the AI system or model, they might be responsible for it under data protection law. If the harm comes from system operation, one must consider whether the provider can be classified as a data controller or processor for that processing.
How can organizations extract meaningful information from the data they collect after the system has been placed on the market? Doing so will require a mix of technical and contextual analyses:
- On the technical side, finding issues will likely require a closer look at system operations. In particular, the automated recording of system events (logging) can ensure that system behaviour is stored for subsequent analysis.
- To actually see the harms caused to individuals and groups, one will need to interact with domain experts (such as those operating the system) and with the people potentially affected by the system.
In both cases, analysis will benefit from a combination of automated tools and in-depth studies of the cases identified through automation. Once those analyses are conducted, one can start discussing how best to fix the problems they reveal.
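As a minimal sketch of the logging mentioned above, the code below records each inference as a structured event that can later be queried in a case study. The field names and log destination are assumptions chosen for the example, not a prescribed format, and the sketch does not claim to satisfy any specific legal standard for record-keeping.

```python
# Illustrative sketch: field names and the log destination are assumptions
# for the example, not a prescribed logging format.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_events")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("inference_events.log"))

def log_inference(case_id: str, model_version: str, inputs_summary: dict,
                  output: str, confidence: float, human_override: bool) -> None:
    """Record one inference as a structured event for later review."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,              # pseudonymous identifier, not raw personal data
        "model_version": model_version,  # needed to link behaviour to a specific update
        "inputs_summary": inputs_summary,
        "output": output,
        "confidence": confidence,
        "human_override": human_override,
    }
    logger.info(json.dumps(event))

# Example usage (hypothetical values):
log_inference(
    case_id="student-7f3a", model_version="v2.1",
    inputs_summary={"missing_fields": 0, "language": "en"},
    output="low dropout risk", confidence=0.87, human_override=False,
)
```

Because such logs can themselves contain personal data, the level of detail recorded should be kept in line with data minimisation, and access to the logs should be restricted accordingly.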
9.3 Addressing issues after deployment
By the end of this section, learners will be able to explain how organizations are obliged to address the detected issues. They will also be able to propose organizational strategies and practices to tackle such issues.
When it comes to risks, knowing is only half the battle. Data controllers, regardless of whether they develop AI systems or models or put those technologies to use, have various other obligations. It is not enough, for example, to detect that an attacker has found a jailbreak that allows them to change the behaviour of the AI model powering your application. You must comply with a series of legal requirements, such as notifying the data protection authority of any data breaches and communicating with data subjects when a breach results in a high risk to their rights and freedoms. You must also adopt technical and organizational measures to prevent future exploitation of the jailbreak, such as updating the system to close the technical exploits that enable it or even withdrawing it from service if no other measures can mitigate the risk. These obligations mean that an AI system and/or its mode of use are unlikely to remain unchanged after deployment.
In this section, we will consider some of the measures that data controllers are obliged to adopt after they have deployed their AI systems. In line with the requirements of data protection by design and security by design covered in Chapter 12, those measures can be technical or organizational, depending on what is best suited to address a specific risk in a particular context. For systems classified as high-risk under the AI Act, additional requirements apply, which must be understood in light of data protection requirements. To show how that can take place, we will consider measures specific to the use of AI in automated decision-making.
9.3.1 Technical and organizational measures
The text of the GDPR distinguishes between two kinds of measures. Technical measures are technological interventions that change a system to eliminate a source of risk. Chapter 12 provides examples of measures that can help with that, such as techniques for detecting biases in algorithmic decisions. The idea behind this requirement is that it makes the desired behaviour a part of the software’s affordances. That is, the computer system will not allow a user to act (either unwittingly or maliciously) in a way that is contrary to data protection law, or at least make it exceedingly difficult for them to do so.
Organizational measures, instead, change the context in which the system operates. For example, an organization might decide to restrict access to the outputs of an AI system, to reduce the number of people that can see the personal data contained in those outputs. Those measures keep the AI system or model as it is and focus on the behaviour of the humans surrounding the technology and the context in which it operates. Both kinds of intervention can be useful for dealing with risks related to an AI system, even after that system has been deployed.
For the most part, the technical properties of an AI system are laid down during its development process. Yet, this does not mean a computer system cannot be changed afterwards. Think about the constant updates we are invited to install on the operating systems of our computers and smartphones. Those updates often add features to the systems we use or fix flaws in their security or functioning. If a developer organization detects an issue with an AI system that it has already placed on the market, it can release updates with measures that mitigate the ensuing risk.
A widespread problem with software updates is that they are not always carried out correctly. Organizations (and individual users) often postpone updates because of factors such as inconvenient timing or lack of expertise. This is why it is common to see major cybersecurity incidents that exploit vulnerabilities for which there is a known fix, such as a software update. Avoiding this kind of problem is a shared responsibility between providers and deployers of AI systems:
- Providers should make clear in the instructions for use the procedure for patching AI systems, educate deployers about the need for updates, and ideally provide support for updating.
- Deployers, on the other hand, must follow the instructions for use and keep their systems up to date.
A failure to do so is not automatically a breach of data protection law. However, an organization that fails to update its systems to address known risks is arguably failing to adopt technical measures that can address those risks. A failure to update systems may therefore lead to sanctions as a breach of the requirement of data protection (and security) by design.3
Organizational measures, instead, focus on the human side of the equation. Some of them relate to individual processing operations, such as establishing standard operating procedures for the use of AI systems. Others focus on preparing the individuals who will operate AI systems, as is the case for the AI literacy actions discussed in Section 8.1. Finally, an organization also needs to consider certain institutional channels that can support the efforts of a data protection officer:
- An organization’s customer service can be its first line of response against risks that were not eliminated in the design process.
  - Support personnel can collect information from users about harms created by the use of an AI system, for example by processing customer complaints.
  - In some cases, it might even be feasible to grant them the power to fix those issues, for example by allowing them to undo some algorithmic decisions.
  - Even if it is not feasible to grant this kind of intervention power to customer service, communication with the affected persons is a way to ensure that they can exercise their rights and be protected from harm.
- Internal controls, such as an ombudsman function, can be used to provide a critical look at current procedures and suggest how the organization can improve its use of AI.
- A robust set of whistleblower protections can act as a measure of last resort, allowing people inside an organization to make sure that information about AI-related risks reaches the leadership before it leads to harms in the real world.
There is no single set of organizational arrangements that will meet the requirements of the GDPR. Within a smaller organization, a clearly defined set of access procedures might harm innovation without necessarily leading to better protection of data subject rights. Requiring constant training sessions might create fatigue, making people indifferent to essential information about the risks created by AI. Guidance by a data protection professional is therefore essential for identifying the best set of arrangements for an organization developing or deploying an AI system.
9.3.2 Human oversight and intervention
The GDPR seldom prescribes the adoption of specific technical and organizational measures. One exception can be seen in Article 22(3) GDPR, which deals with automated decision-making. Under that provision, a data controller must adopt “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests” of data subjects affected by decisions based solely on automated processing. Those measures include, at least, the right to obtain human intervention in the decision-making, to express the data subject’s point of view, and to contest the decision. This requirement is frequently described as a way to keep a human “in the loop” of decision-making.
As discussed in Section 8.2, AI is not equal to automated decision-making. On the one hand, AI systems can be used in decisions that are ultimately taken by humans. For example, an AI system might suggest a few courses of action to a decision-maker, who must then choose which of those they will adopt. On the other hand, automation can take place without the use of AI, as is the case in systems that use spreadsheets for risk scoring. Still, some of the applications of AI in decision-making processes can make a deployer organization responsible for following the rules in Article 22 GDPR.
For systems classified as high-risk under the AI Act, the requirements are more detailed. Any such system—even if it is just aiding rather than making the entire decision—must be subject to human oversight.4 In particular, the provider of any such system must design it in a way that allows the persons exercising that oversight to do so effectively, as we discussed in Section 8.3.
The implementation of the requirements for human oversight will depend on the specifics of the system. For example, a person who oversees the functioning of a medical diagnosis system will likely need access to different variables and training than a person overseeing a system used for automated content moderation. Still, some of those requirements, such as the possibility of stopping an AI system, are clearly defined and can be implemented directly into a system.
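As an illustration of how such a clearly defined requirement might be built into a deployment pipeline, consider the hedged sketch below. The class, the confidence threshold, and the review workflow are assumptions made for the example, not functionality prescribed by Article 14 AI Act.

```python
# Illustrative sketch: names, thresholds, and workflow are assumptions for the
# example, not functionality prescribed by Article 14 AI Act.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    subject_id: str
    suggestion: str
    confidence: float

class OverseenDecisionPipeline:
    """Wrap an AI system's suggestions so a human reviewer can override or halt them."""

    def __init__(self, confidence_threshold: float = 0.8):
        self.halted = False
        self.confidence_threshold = confidence_threshold

    def stop(self, reason: str) -> None:
        """'Stop button': the overseer can interrupt operation at any time."""
        self.halted = True
        print(f"System halted by overseer: {reason}")

    def decide(self, rec: Recommendation, reviewer_decision: Optional[str] = None) -> str:
        if self.halted:
            raise RuntimeError("System halted; decisions must be made manually.")
        # Low-confidence suggestions are routed to a human instead of being applied.
        if rec.confidence < self.confidence_threshold and reviewer_decision is None:
            return "pending human review"
        # A reviewer's explicit choice always prevails over the system's suggestion.
        return reviewer_decision or rec.suggestion

pipeline = OverseenDecisionPipeline()
print(pipeline.decide(Recommendation("applicant-42", "approve", 0.65)))  # pending human review
print(pipeline.decide(Recommendation("applicant-42", "approve", 0.65), reviewer_decision="reject"))
pipeline.stop("suspected systematic bias under investigation")
```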
The functionalities required by Article 14 AI Act are not mandatory for AI systems that are not classified as high-risk. Even so, the list from that article can once again be used as a starting point for measures that an organization might consider for its own systems. However, even for high-risk AI they are not sufficient. It might be the case that meaningful oversight is possible from a technical perspective but does not happen in practice. For example, a person overseeing an algorithmic system might be afraid to override its decisions if doing so will cause them to fall behind with their work or create the risk of reprisal from superiors. An organization deploying an AI system needs to take measures to ensure that the individuals exercising oversight powers can do what the law requires of them.
This obligation applies even if the organization cannot change the system itself. Even if the organization lacks the capability to make technical changes to the AI systems and models it uses, it still has control over its own internal arrangements and practices. Therefore, the legal obligation to adopt technical and organizational measures to address risks means that an organization must adapt itself for the safe use of AI.
9.4 Conclusion
The previous sections have covered the issue of how one can detect and mitigate risks after an AI system has been deployed. We have discussed the legal obligations that create the need for ongoing risk management throughout the life cycle, what is meant by risk in the first place, and strategies for risk management during and after deployment.
A few key points emerge out of the discussion in this unit:
- The AI Act and the GDPR both tackle the risks to fundamental rights, liberties, and legitimate interests that might arise from data processing in AI. However, they do so in markedly different ways.
- Despite those differences, they both require ongoing attention to risks.
- Tools like system logs, user feedback, and automated monitoring systems are critical to detect and evaluate risks during system operation.
- Relevant risks might emerge at any point of the life cycle, and their characteristics can change as technologies evolve and society changes.
- These same sources of change mean that organizations will likely need to update not just their systems but the very measures they use to detect risk.
- Risk monitoring can take place ex ante or ex post.
  - Ex ante forecasting has its limitations, especially when it comes to the flexible uses to which AI technologies can be put. It remains a valuable tool to address some risks before they happen.
  - Ex post monitoring focuses on learning from harms that happened to prevent them from happening again.
- Measures for risk management:
  - Both technical and organizational measures are relevant for addressing risks detected through ex ante and ex post approaches.
  - Safeguards like manual interventions or adjustments to operational workflows can act as a second line of defence.
  - The GDPR is broad when it comes to technical and organizational measures. The AI Act provides some concrete measures, which are mandatory for high-risk systems and might be useful for other technologies.
Later parts of this book will engage more deeply with some measures that emerge out of current best practices in AI development and deployment.
Exercises
Exercise 1. Under the GDPR, what should DigiToys prioritize when identifying risks in its smart toys?
- a. The latest trends in children’s entertainment.
- b. The state of the art in data anonymization techniques.
- c. The cost-effectiveness of implementing safeguards.
- d. Feedback from customers regarding toy usability.
- e. Risks of re-identification and misuse of children’s personal data.
Exercise 2. How can UNw proactively and continuously monitor risks related to the AI system it uses to predict the likelihood of students dropping out?
- a. By using structured forecasting methods like the Delphi method to anticipate future risks.
- b. By waiting for complaints from students about incorrect predictions.
- c. By relying on annual system reviews conducted by an external vendor.
- d. By restricting the system’s use to specific departments.
- e. By assuming risks will remain constant over time.
Exercise 3. What is the most appropriate organizational strategy for InnovaHospital to address risks in its AI-powered diagnosis system?
- a. Restricting system use to senior doctors only.
- b. Training staff to properly interpret system outputs and provide oversight.
- c. Requiring patients to sign additional consent forms.
- d. Relying on system logs without human oversight.
- e. Outsourcing all risk management to the AI system provider.
Exercise 4. Which example reflects a failure to manage post-deployment risks under both GDPR and the AI Act?
- a. DigiToys delays software updates despite known vulnerabilities.
- b. UNw uses a facial recognition system that struggles to read the faces of students from some ethnic backgrounds.
- c. InnovaHospital disregards complaints about the accuracy of its AI diagnosis system.
- d. All of the above.
- e. None of the above.
Exercise 5. Which alternative integrates GDPR and AI Act requirements for monitoring and addressing risks?
- a. Pre-deployment risk evaluations only.
- b. Vendor reliance for software maintenance.
- c. Ongoing assessment of system operations and user interactions.
- d. Eliminating AI use in high-risk scenarios.
- e. Limiting oversight to legal departments.
Prompt for reflection
InnovaHospital’s deployment of AI diagnostic tools raises concerns about technical and organizational measures. Consider how different contexts (e.g., medical settings vs. educational institutions) require tailored approaches to risk management. How can organizations like InnovaHospital ensure that measures address the unique risks posed by their AI systems, and how might these approaches differ from those needed by UNw?
9.4.1 Answer sheet
Exercise 1. Alternative E is correct. Alternatives A and D refer to measures that are desirable from a business perspective but have limited use for data protection. Alternatives B and C identify factors that are relevant for some aspects of data protection, but not for others. For example, data subjects have rights even when protecting those rights is not particularly cost-effective.
Exercise 2. Alternative A is correct. Alternatives B and E adopt a reactive approach to risk. Alternative C can be part of a broader risk management strategy, but it does not offer an ongoing evaluation of risk. Finally, Alternative D might be desirable for other reasons but does not offer extra insight about the risk landscape.
Exercise 3. Alternative B is correct. Alternative A would restrict utility without addressing risks, while Alternative C might struggle with the difficulties of consent in AI contexts. Alternative D is hampered by ignoring the potential value of human oversight. Finally, Alternative E ignores that the provider is likely to lack context-specific knowledge that is needed for compliance.
Exercise 4. Alternative D is correct. Alternative B describes a scenario that might have been detected during development but needs to be addressed even if detected afterwards.
Exercise 5. Alternative C is correct. Risk management is a continuous obligation, which involves various actors in the supply chain and requires many kinds of competences. It does not prohibit the use of risky AI tools but manages the ensuing risks to ensure that harms are as small as possible and do not outweigh potential benefits.
References
Marco Almada and others, ‘Art. 25. Data Protection by Design and by Default’ in Indra Spiecker gen. Döhmann and others (eds), General Data Protection Regulation: Article-by-article commentary (Beck; Nomos; Hart Publishing 2023).
Frank Bannister and Regina Connolly, ‘The Future Ain’t What It Used to Be: Forecasting the Impact of ICT on the Public Sphere’ (2020) 37 Government Information Quarterly.
Andrea Bonaccorsi and others, ‘Expert Biases in Technology Foresight. Why They Are a Problem and How to Mitigate Them’ (2020) 151 Technological Forecasting and Social Change 119855.
Katerina Demetzou, ‘GDPR and the Concept of Risk’ in Eleni Kosta and others (eds), Privacy and Identity Management 2018 (Springer 2019).
Pierre Dewitte, ‘The Many Shades of Impact Assessments: An Analysis of Data Protection by Design in the Case Law of National Supervisory Authorities’ (2024) 2024 Technology and Regulation 209.
EDPB, ‘Guidelines 4/2019 on Article 25 on Data Protection by Design and by Default’ (European Data Protection Board, 2020).
Diana Korayim and others, ‘How Big Data Analytics Can Create Competitive Advantage in High-Stake Decision Forecasting? The Mediating Role of Organizational Innovation’ (2024) 199 Technological Forecasting and Social Change 123040.
Tobias Mahler, ‘Between Risk Management and Proportionality: The Risk-Based Approach in the EU’s Artificial Intelligence Act Proposal’ in Liane Colonna and Stanley Greenstein (eds), Nordic Yearbook of Law and Informatics 2020-2021 (2022).
Jhon Masso and others, ‘Risk Management in the Software Life Cycle: A Systematic Literature Review’ (2020) 71 Computer Standards & Interfaces 103431.
Jessica Newman, ‘A Taxonomy of Trustworthiness for Artificial Intelligence. Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle.’ (CLTC White Paper Series, January 2023).
NIST, ‘AI Risk Management Framework: AI RMF (1.0)’ (2023).
Jonas Schuett, ‘Risk Management in the Artificial Intelligence Act’ (2024) 15 European Journal of Risk Regulation 367.
Dimitrios Tsoukalas and others, ‘Technical Debt Forecasting: An Empirical Study on Open-Source Repositories’ (2020) 170 Journal of Systems and Software 110777.
Usually, calculated by multiplying one quantity by the other.↩︎
Which Article 25(1) GDPR obliges the data controller to protect: Guidelines 4/2019, para. 11.↩︎
See, for examples of sanctions, Dewitte (2024).↩︎
Article 26 AI Act creates this obligation for deployers, and Article 14 obliges providers to design high-risk systems in a way that enables oversight.↩︎