8 The Deployment of an AI System
By the end of this chapter, learners will be able to devise organizational procedures to mitigate risks to the protection of personal data that were not eliminated during software design.
The deployment stage is when an AI system is prepared for use. As such, it makes no sense to speak of the deployment of an AI model, since, as defined in Section 2.1.2, a model is a component rather than a stand-alone product or service. At this point, the system’s technical configurations have been mostly defined, except for some tasks that must be done at the moment a system is prepared for use, such as setting its parameters. What remains is the work of preparing an organization for using the system: defining who will operate the AI system, how its outputs will be used, and so on. Those organizational measures set up the context in which the AI system is expected to affect a physical or virtual environment.
Data protection law is well aware of the impact that such organizational measures can have on the rights of data subjects. It grants those subjects rights that they can invoke against specific instances of data processing (as we will discuss in this unit), and in doing so it creates obligations for data controllers. Those controllers are also obliged to adopt organizational measures, and not just technical ones, to address the risks to data protection principles that are the object of Chapter 12.
For most AI systems, the governance of organizational measures is a matter of data protection law, as well as of sector-specific laws, when applicable. The AI Act, in line with its product safety pedigree, focuses on technical standards for AI systems and models. Still, its Article 26 establishes some obligations for the deployers of high-risk AI systems, which must be implemented in a way that aligns with the general organizational duties imposed by the GDPR.
In this chapter, we will discuss three kinds of organizational duties related to data protection. Section 8.1 discusses the AI literacy duty imposed by Article 4 AI Act, clarifying that its implementation can be a valuable tool for data protection compliance. Section 8.2 examines the challenges that the use of AI technologies creates for certain data subject rights, as well as the possibility of compensating technical obstacles with organizational measures. Finally, Section 8.3 turns to the GDPR provisions on automated decision-making and their implications for organizations deploying AI systems.
8.1 The AI literacy obligation as an organizational measure
By the end of this section, learners will be able to map the various obligations following from the AI Act’s AI literacy obligation and exemplify measures to foster it.
Of all the obligations created by the AI Act, only one applies to all AI systems: AI literacy. As expressed in Article 4 AI Act, this obligation means that:
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
From the provision above, we can extract two elements to inform legal compliance. The first one is that the AI Act establishes an obligation of best effort. Providers and deployers of AI systems must show that they are adopting measures to foster literacy. They are not bound to any particular level of literacy, unless such a level follows from sector-specific legislation.
Instead, the target level of literacy is contextual. Literacy must be “sufficient” for the “operation and use” of an AI system. However, sufficiency is a broad concept: would it be enough just to teach operators what kinds of data they need to input into the system and what to do with the outputs? Or is it necessary to provide a deeper understanding of how the technologies work? The AI Act does not offer a clear-cut answer to this question, but the provision above indicates a series of factors that a provider or deployer must assess when designing their literacy measures.
Despite this open-ended formulation, one must keep in mind that Article 4 AI Act establishes an actual obligation. While the AI Act features some provisions that stimulate voluntary compliance, this is not one of them, given the presence of the “shall” clause. It is true that Article 99 does not establish fines for breaches of the literacy duty. But it does not exclude the possibility of non-monetary sanctions, which are left to Member States under Article 99(1). And, because literacy promotion is a measure that mitigates risk, an adequate literacy programme can support a reduced fine when a regulatee is fined for some other violation of the Act. Therefore, the AI Act’s literacy obligation is not entirely toothless.
Whenever an AI system operates on personal data, or is trained on it, that obligation must be read in line with existing data protection obligations. Article 25 and Article 32 GDPR both require data controllers to adopt, among other things, organizational measures that are meant to address risks created by processing. Organizational measures refer to institutional processes surrounding data processing, such as the definition of who can access data and how it can be processed. Hence, the proper implementation of those measures will require that persons within the organization have access to information about the system (and potentially, about its components). The AI literacy duty can therefore support compliance with data protection law.
8.1.1 The contents of AI literacy
The notion of “AI literacy” is formally defined in Article 3(56) AI Act. It encompasses the “skills, knowledge, and understanding” that allow providers, deployers, and affected persons to make informed decisions about AI and to be aware of the opportunities, risks, and potential harms arising from its use. This definition, interpreted in the light of data protection law, must guide compliance with the duty of literacy.
Compliance with the AI literacy requirement entails, in the first place, determining its contents. Given the formulation of “skills, knowledge, and understanding” above, it is not enough to provide information about the existence of an AI system and how a user can operate it. This kind of information is necessary (otherwise, people would not be able to discharge their duties or exercise their rights) but it must be accompanied by more general, transferrable knowledge about what AI is and what it can (and cannot) do.
The specific skills, knowledge, and understanding will depend on how a person is affected by the use of an AI system:
- A software developer involved in the process of deploying an AI system within an organization will need to understand some details about how that system operates, in order to diagnose errors and maintain the system.
- The persons operating that AI system, in turn, do not need to have a command of the technicalities of the system. But they need clarity about other aspects of the AI system, such as what role it plays within a context, what are its safety margins and known risks, and how to operate it correctly.
- Complaints channels within an organization need to be able to respond to requests from affected persons, such as the exercise of the data subject rights examined in the rest of this chapter.
- The data protection officer is likely to be the first point of contact for this kind of request, especially when requests are directly grounded in GDPR rights.
- Other contact points in an organization, such as customer service or an ombuds, might also be contacted with complaints. In that case, they will need to liaise with DPOs to sort out data protection issues.
- For that purpose, they will need access to information about AI: whether AI systems are used in each context, the decision logic of the systems in use, and the data those systems process.
A data protection professional will need to have a clear view of AI systems and those operating them in order to design an adequate literacy programme. For that purpose, the AI inventory discussed in Section 5.2 might be particularly useful.
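To make this mapping concrete, the sketch below shows one possible way of representing such an inventory in code and of deriving, from it, which roles need training about which systems. It is a minimal illustration under assumptions of my own: the entry fields, the fictional systems, and the role names are all hypothetical, and an organization would adapt them to the inventory format it already maintains under Section 5.2.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AISystemEntry:
    """One row of a hypothetical AI inventory (all fields are illustrative)."""
    name: str
    purpose: str
    operator_roles: list    # who uses or oversees the system
    personal_data: list     # categories of personal data processed
    provider: str           # internal team or third-party supplier

# Illustrative inventory entries for a fictional organization.
inventory = [
    AISystemEntry("triage-assistant", "patient triage support",
                  ["nurse", "physician", "dpo"],
                  ["health data"], "third-party provider"),
    AISystemEntry("billing-anomaly", "flag unusual invoices",
                  ["finance officer", "dpo"],
                  ["contact data", "financial data"], "internal IT"),
]

def literacy_targets(entries):
    """Map each role to the systems (and data categories) it needs training about."""
    targets = defaultdict(list)
    for entry in entries:
        for role in entry.operator_roles:
            targets[role].append((entry.name, entry.personal_data))
    return dict(targets)

if __name__ == "__main__":
    for role, systems in literacy_targets(inventory).items():
        print(role, "->", systems)
```

A grouping of this kind can serve as the starting point for the tailored programmes discussed next, since it makes explicit which audiences exist and which systems and data categories each of them encounters.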
Once the targets of a literacy programme have been identified, they will likely need tailored information and skills. Given the several types of informational needs mapped above, different actors will need distinct levels of technical detail and organizational context. For example, software developers can deal with more technical detail, but they are less likely to be familiar with the operational context in which the AI system will be used. A programme that focuses on their needs is likely to require too much specialized knowledge to be useful for a non-technologist who simply uses the AI system. At the same time, it might lack contextual information that this person will need to do their job. Hence, there is no one-size-fits-all solution to AI literacy.
Acknowledging this diversity of needs, Article 4 AI Act lists several factors that must be considered when determining the contents of an AI literacy programme. The first set of factors refers to the system itself. The contents of a literacy programme must allow the target individuals to have a better understanding of the technologies used to power the AI systems being provided or deployed by the organization. For this purpose, an organization will likely need to provide some baseline knowledge about AI in general, but it can and should focus on the technologies it currently uses (or plans to use).
The second set of relevant factors in designing the literacy programme relates to the persons it targets. Under the AI Act, providers and deployers must foster literacy among the people they employ in AI-related roles and among external persons carrying out AI-related tasks on their behalf. For technical roles, this will likely mean a deep dive into the specific models and interfaces used for the system. For less technical roles, this means an explanation that is tailored to laypersons, who do not need the technical details but still require an overview of what the system is doing. In both cases, the focus is on providing a clear view of the opportunities and risks created by AI, in a way that allows people to make decisions that are compatible with their roles in the organization.
Finally, the AI Act requires that AI literacy efforts consider the opportunities and risks that AI creates for the persons affected by the use of the AI system. This does not mean that an organization must foster literacy among those persons. Instead, it obliges the organization to teach its people how to take the rights and interests of those persons into account. And, when one speaks of “affected persons,” it is likely that the AI system is processing personal data, which means the general requirements for the design of organizational measures laid down in the GDPR remain in force.
One cannot foster AI literacy without creating literacy about how personal data is processed by and through AI systems within an organization. Therefore, data protection professionals can seize the AI literacy obligation as an opportunity to create awareness about the obligations of the various actors involved in the AI life cycle.
8.1.2 Promoting AI literacy within an organization
Once an organization has a clear view of who is involved in the deployment and development of AI systems, and of the information that is relevant for the tasks of those actors, it can start designing literacy programmes that meet their needs. There is no established formula for AI literacy yet, but one can already identify useful steps for programme design.
First, one needs to know the starting point for the literacy programme. Given the novelty of AI technologies, people tend to know about their existence in general, but they do not always have information about how these technologies work and what specifically they do in particular applications. This is true even among software developers who do not work specifically in AI: even though they tend to have the technical background to understand it, the state of the art in AI technologies is fairly specialized knowledge. These actors will have dissimilar needs in terms of skills and knowledge, which should be assessed before designing a learning curriculum. Otherwise, a literacy initiative might be disregarded as too basic, or it might be too heavy in content to be accessible to the learners.
Once those needs are identified, a data protection professional can pick materials tailored for each audience.
- For non-technical actors, a good starting point is provided by general courses, such as the Elements of AI course designed to explain basic concepts without diving into their mathematical and computational implementations.
- Technical actors will likely benefit from materials that dive into AI technicalities, but they will also need foundational materials on topics such as the ethics of AI and data protection obligations.
- In both cases, these general-purpose materials need to be supplemented with training materials that are specific to an organization’s context. For example, part of a literacy programme could involve teaching learners how to read specific documents, such as data cards or system cards (see Section 10.1).
In short, a literacy programme must spread information about how AI works, about the risks and opportunities it creates, and about the legal obligations that follow from the use of AI. This book is designed to support the latter.
After the literacy programme is designed, it must be kept up to date. Given the fast pace of technical, social, and legal developments surrounding AI, many things can change quickly. Among other developments, tasks previously thought to be impossible might be solved by new AI models, or public opinion might turn against some applications of AI. Conversely, changes in the interpretation of data protection requirements might require changes to how an AI system is used. This means not only that the curriculum for AI literacy must be updated with some frequency, but also that people might need to undergo regular training sessions. AI literacy is not, at least for the time being, a fire-and-forget practice.
8.2 Data subject rights in the context of AI
By the end of this section, learners will be able to describe some of the obstacles created by AI to the exercise of data subject rights and discuss whether those can be addressed by organizational measures.
One of the distinctive approaches of EU data protection law is that it grants individual rights. The data subjects whose data is processed gain certain rights they can invoke against the controllers of that processing, laid down in Articles 12–22 GDPR. Those rights are applicable to the training and deployment of AI systems whenever such practices use personal data, as discussed in the previous units. However, the peculiar technical characteristics of AI have some implications for how those rights can be exercised in practice.
The transparency rights from Articles 13–15 GDPR will be examined more closely in Chapter 11. In the following paragraphs, we examine other data subject rights granted by the GDPR.
8.2.1 Restricting and objecting to processing in AI systems
The GDPR grants to data subjects two rights that allow them to affect how data controllers process their data. Under Article 18, data subjects have the right to restrict the processing of their personal data if one of the listed conditions apply. Article 21 allows data subjects to object to processing altogether. These rights mean different things, and each has their own exceptions and conditions for application. However, their application to AI systems and models faces similar obstacles.
Those obstacles are likely to appear when the data subject attempts to exercise their right to restrict (or object to) the use of their data in the training of AI systems and models. First, a data subject might not even be aware that their data is being used for training. The transparency measures studied in Unit 11 of this training module are meant, among other things, to reduce this risk.
Additionally, a data subject might not have direct access to the organization training the model or the system. For example, a patient of InnovaHospital might know that the hospital uses an AI system for diagnosis that is based on a model developed by a third-party provider. If the patient wants to object to the use of their data for training the model, the hospital must not use that data for training (or fine-tuning) the model, and it must make sure that the developer organization will not use it for training.
Things become more complicated for that data subject when the AI model is not trained on data that is specific to an organization. In that case, as discussed in Unit 6 of the training module, the organization using the model is unlikely to have control over the training process. Data subjects will need to exercise their right to restrict (or to object) against the organization training the model.
8.2.2 Rights to rectification and erasure
Another set of data subject rights refers to the contents of data. Data subjects can request that controllers address inaccuracies or even delete personal data concerning them. The conditions, exceptions, and complications related to the exercise of those rights have been extensively discussed elsewhere. But, once again, their implementation becomes more complicated when it comes to the training of an AI system.
The challenge, here, is that many AI systems do not represent information in the same way as traditional computer systems. It is rarely the case that a particular piece of information is stored in a single place within the system. Instead, data about an individual is often dispersed across billions (or more) of parameters within a neural network.1 Changing or removing that information, therefore, is not a matter as simple as making a change to an entry on a database.
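The contrast can be made tangible with a deliberately simplified sketch. Removing a record from a dataset is a single, database-style operation, but a model already fitted on that dataset keeps the parameters it learned until it is retrained or an unlearning technique is applied. The example below uses synthetic numbers and an ordinary least-squares fit purely as an illustration; it is not meant to represent any particular production system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training data": each row stands in for one data subject.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Fit a simple linear model (ordinary least squares).
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Erasing" subject 0 from the dataset is a one-line, database-style operation...
X_after, y_after = np.delete(X, 0, axis=0), np.delete(y, 0)

# ...but the already-fitted weights still reflect that record. No single entry
# in `weights` corresponds to subject 0 alone, so there is nothing to "delete":
# the record's influence is spread across all parameters, and only retraining
# (or a dedicated unlearning technique) removes it.
retrained, *_ = np.linalg.lstsq(X_after, y_after, rcond=None)
print("original weights: ", weights)
print("retrained weights:", retrained)
```

Even in this tiny example, the original and retrained weights differ only slightly and in every coordinate at once, which is precisely why erasure cannot be reduced to editing a single stored value.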
Yet, data controllers remain obliged to rectify and erase personal data whenever those rights are applicable. If they fail to do so, data protection authorities can wield a variety of sanctions, including “algorithmic disgorgement,” that is, the mandatory deletion of models that are not compliant with the law (Li 2022; Hutson & Winters 2024). Such a measure has not yet been deployed by data protection enforcers in the EU, and it is likely a measure of last resort against reiterated non-compliance.
Several measures have been proposed as technical and organizational alternatives to full-blown model deletion. Some of those are meant to delete data from the weights of the entire model, thus allowing its removal after the model has been trained (see, e.g., Bourtoule et al. 2021). Others try to make deletion feasible by changing how the model is trained. For example, the CPR technique (Golatkar et al. 2024) allows a model to rely not just on its core training data, but also on a private data store that can be instantly forgotten.
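To give an intuition of the second family of approaches, the sketch below shows a toy predictor that combines a frozen "core" component with a removable per-subject store, so that honouring an erasure request only requires deleting the corresponding entry. This is an assumption-laden simplification of the general idea, not an implementation of the CPR technique or of any published method; all class and variable names are hypothetical.

```python
import numpy as np

class CoreModelWithPrivateStore:
    """Toy illustration: a frozen 'core' predictor plus a removable per-subject store.

    The point is only that erasure becomes trivial when subject-specific
    information lives outside the trained weights.
    """

    def __init__(self, core_weights):
        self.core_weights = np.asarray(core_weights)  # trained elsewhere, never touched
        self.private_store = {}                        # subject_id -> personal correction

    def add_record(self, subject_id, correction):
        self.private_store[subject_id] = correction

    def forget(self, subject_id):
        # Honouring an erasure request: remove the entry, no retraining needed.
        self.private_store.pop(subject_id, None)

    def predict(self, x, subject_id=None):
        base = float(np.dot(self.core_weights, x))
        return base + self.private_store.get(subject_id, 0.0)

model = CoreModelWithPrivateStore(core_weights=[0.2, 0.8])
model.add_record("patient-42", correction=1.5)
print(model.predict([1.0, 1.0], "patient-42"))  # uses the stored personal term
model.forget("patient-42")
print(model.predict([1.0, 1.0], "patient-42"))  # falls back to the core model only
```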
Those techniques are still at an early stage of development, and as such they might not be mature enough to meet all the legal requirements established in the GDPR (Cooper et al. 2024). Still, a data protection professional will need to engage with the software developers to understand whether such a technical approach is feasible in the case at hand.
8.2.3 The right to data portability
Article 20 GDPR equips data subjects with a right to data portability. If the outputs of an AI system qualify as personal data, a data subject has the right to request their portability. Likewise, the data subject has the right to request portability of personal data used as an input for an AI system. In both cases, the connection of the data with the AI system does not introduce any additional complications if compared with other kinds of portability.
The same cannot be said about the portability of the weights of an AI system based on machine learning. Given that, as discussed above, information is often spread across weights, it can be difficult to associate specific weights with a natural person. Even if such an identification is possible, the weights within a neural network are specific to that network’s architecture. As such, they cannot be simply “plugged into” another network.
However, such a transplantation might be feasible in other kinds of AI systems. For example, rules codified into an expert system might be implemented in another system if the same variables are available, as the sketch below illustrates. Therefore, a data protection professional will need to consult with the technical team to determine whether the inner workings of the model embed personal data in a format that can be ported. Future guidance by data protection authorities will bring more clarity in this regard.
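The sketch below illustrates the contrast drawn above: a declarative rule expressed in a structured, machine-readable format refers to named variables that another system could map onto its own inputs, whereas neural-network weights carry no such labels. The rule, its variables, and the export format are hypothetical examples, not a standard portability format.

```python
import json

# A hypothetical expert-system rule in a structured, machine-readable format.
# Unlike neural-network weights, it refers to named variables that another
# system using the same variables could re-implement.
rule = {
    "id": "discharge-follow-up",
    "if": {"age": {">=": 65}, "readmission_risk": {">": 0.7}},
    "then": {"schedule_follow_up_within_days": 7},
}

def applies(rule, record):
    """Check whether a patient record satisfies the rule's conditions."""
    ops = {">=": lambda a, b: a >= b, ">": lambda a, b: a > b}
    return all(
        ops[op](record[variable], threshold)
        for variable, condition in rule["if"].items()
        for op, threshold in condition.items()
    )

print(json.dumps(rule, indent=2))   # what a machine-readable export could look like
print(applies(rule, {"age": 70, "readmission_risk": 0.8}))  # True
```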
8.3 Automated decision-making and AI
In this section, we will continue the previous discussion on data subject rights by focusing on a specific right: the right not to be subject to an automated decision. Under Article 22(1) GDPR, a data subject has the right not to be subject to a decision that is based solely on the automated processing of personal data, if that decision has a legal or otherwise significant impact on them. Because this kind of decision is intricately connected to AI technologies, we will now spend some time on it.
This is not to say that this particular provision is applicable if and only if AI is used. Not all AI systems make significant decisions about individuals. For example, generative AI tools tend to be used to create content, while recommender systems leave the final decision to a human. Also, not all decision-making systems are powered by AI. Consider how many businesses and government organizations rely on spreadsheets to automate important processes. Even so, many large-scale applications of AI are meant to automate decisions, and this is why the risks we examined in Part I of this book often focused on decision-making tools. Hence, a training on AI cannot avoid some engagement with the provisions on automated decision-making.
Such attention is warranted because recent case law of the European Court of Justice has consolidated the understanding of important aspects of this right. For one, the definition of “automated decision-making” under the GDPR is not limited to scenarios in which humans are not involved at all. This is because the Court has adopted a broad interpretation of the concept of “decision”: it includes acts that can affect the data subject even if they do not amount to a formal decision,2 such as the definition of a credit score that guides decisions about various aspects of an individual’s life. When evaluating the applicability of Article 22 GDPR to its AI-generated outputs, an organization must therefore consider how those outputs are used within the organization itself and by third parties.
A second implication of the Schufa case is that the Court has confirmed that the “right not to be subject” is a prohibition in principle.3 It does not require the data subject to invoke the right. Instead, it prohibits decision-making from taking place unless one of the conditions from Article 22(2) GDPR is met:
Paragraph 1 shall not apply if the decision:
- is necessary for entering into, or performance of, a contract between the data subject and a data controller;
- is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
- is based on the data subject’s explicit consent.
This judicial understanding is not necessarily a big shift in the application of the law, as many data protection authorities were already following this approach (Barros Vale and Zanfir-Fortuna 2022). Still, it requires caution from organizations using AI to make decisions that involve personal data, especially if involving sensitive data.
If a certain application of an AI system meets the legal definition of automated decision-making, its controller must ensure that one of the three legal bases above is applicable. If that is the case, the ensuing processing remains bound by the general requirements of the GDPR. Additionally, where the processing is based on consent or on the existence of a contract, the controller must adopt measures that protect the rights, freedoms, and legitimate interests of the data subject. According to Article 22(3) GDPR, those measures must include “at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”
This requirement can be relevant to the training and deployment of AI systems, especially those meant to replace humans in decision processes. Any organization that controls a system (as seen in Section 6.2) designed for decision-making purposes, or that repurposes an existing system for that end, should check whether the system’s output counts as automated decision-making. If so, they will likely need to designate specific individuals to interact with data subjects and handle their requests for human intervention, expressing their point of view, and contesting the decision.
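One way of operationalizing such a designation is to give each incoming Article 22(3) request a structured record and a routing rule that assigns it to the designated reviewer for the relevant system. The sketch below is a minimal illustration of that organizational measure; the request fields, the systems, and the reviewer teams are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RequestType(Enum):
    HUMAN_INTERVENTION = "human intervention"   # the Art. 22(3) GDPR safeguards
    EXPRESS_VIEW = "express point of view"
    CONTEST_DECISION = "contest the decision"

@dataclass
class ADMRequest:
    subject_id: str
    system: str                # which AI system produced the decision
    request_type: RequestType
    received: date
    details: str = ""

# Hypothetical routing table: each decision-making system has a designated reviewer.
REVIEWERS = {
    "credit-scoring": "credit-review-team",
    "triage-assistant": "senior-physician",
}

def route(request: ADMRequest) -> str:
    """Assign the request to the designated human reviewer, defaulting to the DPO."""
    return REVIEWERS.get(request.system, "data-protection-officer")

req = ADMRequest("subject-17", "credit-scoring",
                 RequestType.CONTEST_DECISION, date.today(),
                 "Decision based on outdated income data")
print(route(req))  # -> credit-review-team
```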
By contrast, Article 22 GDPR is unlikely to be directly applicable to processing that takes place during the training of AI models and systems. This is because each processing operation during the training process does not produce a significant effect on the data subject. Nonetheless, developers of AI systems intended for decision-making should be aware that their buyers will likely need to comply with these requirements. As such, those buyers could benefit from systems that incorporate design measures facilitating the exercise of those rights, as analysed in Chapter 12.
Once again, the AI Act offers extra detail to these obligations when it comes to high-risk AI systems. Under its Article 14, providers of high-risk AI systems are required to adopt technical measures that facilitate oversight. They must build the system in a way that allows the persons overseeing it:
(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;
(b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
(c) to correctly interpret the high-risk AI system's output, taking into account, for example, the interpretation tools and methods available;
(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;
(e) to intervene in the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure that allows the system to come to a halt in a safe state.
While such measures are not mandatory for any other AI systems, or for automated decision-making not based on AI, they provide a starting point for understanding what compliance with Article 22(3) GDPR requires, especially once the technical standards discussed in Section 13.1 are published.
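As a purely illustrative complement to points (d) and (e) of the list above, the sketch below wraps a prediction function so that a human overseer can override any individual output or halt the system in a safe state, while keeping a log of what happened. It is a toy example under assumptions of my own, not a template for compliance with Article 14 AI Act or Article 22(3) GDPR.

```python
class OversightWrapper:
    """Toy wrapper around a prediction function, sketching the idea behind
    Article 14(4)(d)-(e): the overseer can override any output or stop the system."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn
        self.stopped = False
        self.log = []                      # record of outputs and overrides

    def stop(self):
        """'Stop button': bring the system to a halt in a safe state."""
        self.stopped = True

    def decide(self, inputs, override=None):
        if self.stopped:
            raise RuntimeError("System halted by human overseer")
        output = self.predict_fn(inputs)
        final = override if override is not None else output
        self.log.append({"inputs": inputs, "model_output": output,
                         "final": final, "overridden": override is not None})
        return final

# Usage with a placeholder model: approve only if a score exceeds a threshold.
wrapper = OversightWrapper(lambda x: "approve" if x["score"] > 0.5 else "refer")
print(wrapper.decide({"score": 0.9}))                    # model output used as-is
print(wrapper.decide({"score": 0.9}, override="refer"))  # overseer overrides the output
wrapper.stop()                                           # no further automated decisions
```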
8.4 Conclusion
The deployment stage presents unique challenges from a data protection perspective. Though compliance with GDPR requirements demands that developers take several measures to address risks during design and development, some risks are likely to escape even their most diligent efforts. Furthermore, some risks emerge from the specific context in which an AI system is put into use. This means that not all issues can be solved beforehand, and data protection professionals must be active during deployment, too.
To support them in this task, this unit has identified certain aspects that are unique to the deployment of AI systems—or at least particularly relevant in this context:
- Most people (even with a technical background) are unlikely to have a thorough understanding of how AI operates and what it does in each context. Therefore, literacy programmes can be a valuable tool for compliance.
- Literacy training must be tailored for its audience. Some stakeholders are unlikely to need full exposure to technical detail, but they still need to grasp what AI can and cannot do. Others would benefit from looking at the system’s inner workings.
- Literacy training must be kept up to date with technical developments and changes in the social and organizational context in which the system is used.
- The technical properties of AI might create complications for the exercise of data subject rights, which organizations need to address.
- Some of these rights will require an organization to engage with upstream providers.
- Others require delicate trade-offs, given the limitations of current techniques.
- The broad definition of automated decision-making under the GDPR case law means that organizations might be subject to its Article 22 even if there is some measure of human involvement in the loop.
These factors mean that organizations need to adopt measures and safeguards that address residual (and potentially large) risks that connect to deployment. In the next unit, we will examine some of the tools they can use for this purpose.
Exercises
Exercise 1. Which of the following measures is required by the AI Act’s literacy mandate?
- a. Providing detailed technical training for all staff.
- b. Ensuring literacy tailored to the context and roles of individuals.
- c. Prohibiting staff from using AI unless they have a training certification.
- d. Restricting AI literacy efforts to technical staff only.
- e. Offering voluntary AI literacy programs for select employees.
Exercise 2. How does an AI literacy programme support compliance with GDPR obligations?
- a. By ensuring staff understand how AI systems process personal data and address associated risks.
- b. By replacing the need for technical measures with organizational training.
- c. By mandating universal AI literacy certification for all employees.
- d. By exempting organizations from GDPR requirements if literacy programmes are in place.
- e. The literacy mandate is independent from data protection duties.
Exercise 3. What makes rectifying data in AI systems challenging?
- a. Data is encoded in multiple storage locations throughout the cloud.
- b. AI systems resist manual updates.
- c. Neural networks store data in distributed parameters.
- d. GDPR does not require rectification for AI systems.
- e. Data protection authorities lack enforcement mechanisms.
Exercise 4. Which of the following rights is more likely to be invoked against the data controller responsible for training an AI system with personal data?
- a. Right to data portability
- b. Right to rectification
- c. Right to an explanation
- d. Right to contest automated decisions
- e. Right to restrict processing
Exercise 5. How does the Schufa case influence automated decision-making under GDPR?
- a. It restricts Article 22 GDPR to AI outputs only.
- b. It exempts human-overseen decisions from Article 22 GDPR.
- c. It limits the definition of automated decisions to formal approvals.
- d. It clarifies that some outputs produced by the automated processing of personal data are covered by Article 22 GDPR even if they do not take the form of a decision.
- e. It introduces new exceptions to Article 22 GDPR.
8.4.1 Prompt for reflection
The chapter highlights that exercising data subject rights, such as restriction, rectification, and erasure, can be technically complex in AI systems. Discuss potential organizational measures that could help bridge the gap between technical limitations and GDPR compliance, such as ombudsman offices or specialized teams for handling data subject requests. What role can data protection professionals play in facilitating these measures?
8.4.2 Answer sheet
Exercise 1. Alternative B is correct. Training must be universal but tailored to the needs of each role.
Exercise 2. Alternative A is correct. A well-designed literacy programme is an organizational measure in the sense of Article 25 GDPR, as it allows different actors within an organization to understand how data is processed by the AI system. The programme does not require formal certification.
Exercise 3. Alternative C is correct. Rectification is required and can be enforced by authorities, but it faces certain technical obstacles.
Exercise 4. Alternative E is correct. The other alternatives tend to have the most impact when invoked against the controller using the AI system with the data subject’s data.
Exercise 5. Alternative D is correct.
References
Marco Almada, ‘Automated Uncertainty: A Research Agenda for Artificial Intelligence in Administrative Decisions’ (2023) 16 Review of European Administrative Law 137.
Article 29 WP, ‘Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679’ (2018).
Article 29 WP, ‘Guidelines on the Right to “Data Portability”’ (2017).
Paul De Hert and others, ‘The Right to Data Portability in the GDPR: Towards User-Centric Interoperability of Digital Services’ (2018) 34 Computer Law & Security Review 193.
Reuben Binns, ‘Analogies and Disanalogies Between Machine-Driven and Human-Driven Legal Judgement’ (2022) 1 CRCL 1.
Lucas Bourtoule and others, ‘Machine Unlearning’ in 2021 IEEE Symposium on Security and Privacy (SP) 141.
A Feder Cooper and others, ‘Machine Unlearning Doesn’t Do What You Think: Lessons for Generative AI Policy, Research, and Practice’, 2nd Workshop on Generative AI and Law at ICML 2024.
Philipp Hacker, ‘Sustainable AI Regulation’ (2024) 61 Common Market Law Review 345.
Emmie Hine and others, ‘Supporting Trustworthy AI Through Machine Unlearning’ (2024) 30 Sci Eng Ethics 43.
Jevan Hutson and Ben Winters, ‘America’s Next “Stop Model!”: Model Deletion’ (2024) 8 Georgetown Law Technology Review 124.
Margot E Kaminski and Gianclaudio Malgieri, ‘Algorithmic Impact Assessments under the GDPR: Producing Multi-Layered Explanations’ (2021) 11 International Data Privacy Law 125.
Tiffany C Li, ‘Algorithmic Destruction’ (2022) 75 SMU Law Review 479.
Jakob Mökander and Maria Axente, ‘Ethics-Based Auditing of Automated Decision-Making Systems: Intervention Points and Policy Implications’ [2021] AI & SOCIETY.
Francesca Palmiotto, ‘When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis’ (2024) 25 German Law Journal 210.
Ayush K Tarun and others, ‘Fast Yet Effective Machine Unlearning’ (2024) 35 IEEE Transactions on Neural Networks & Learning Systems 13046.
Sebastião Barros Vale and Gabriela Zanfir-Fortuna, ‘Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities’ (Future of Privacy Forum, May 2022).