12  Regulating AI by Design

Learning outcomes

By the end of this chapter, learners will be able to:

  • compare different approaches to data protection by design;
  • identify the problems they are able to address; and
  • combine them at various junctures of the life cycle of an AI system.

The EU’s approach to the regulation of digital technologies gives considerable attention to how these technologies are designed. In the field of data protection, this attention manifests itself in two core legal requirements. Article 25(1) GDPR establishes a requirement of data protection by design, as it obliges data controllers to adopt technical and organizational measures that address risks to data protection. A similar logic can be seen in Article 32(1), which creates a security by design obligation to adopt measures (both technical and organizational) to avoid cybersecurity issues. Both sets of obligations are directed towards data controllers, who must identify the risks created (or amplified) by processing and choose the best measures to address them.

Data controllers do not make those choices in a vacuum. Both data protection by design and security by design specify factors that the controller must consider, such as the likelihood and severity of risks or the technological state of the art. Nonetheless, these provisions place data controllers in the position of specifying how the legal requirements are to be interpreted in specific technical contexts.

Another flavour of regulation by design is present in the AI Act. Its rules on high-risk AI systems and general-purpose AI models with systemic risk both establish certain technical requirements that must be met before commercialization. The same is true of the supplementary rules that Article 50 creates for systems regardless of their risk classification. But, unlike the GDPR, the AI Act focuses on the adoption of technical measures in the AI system. The three approaches to regulation by design (security by design, data protection by design, and the AI Act’s technical requirements) coexist, as they are all obligatory at the same time. This raises questions about whether and why these design mandates might clash with one another.

All forms of regulation by design used in EU law create ongoing obligations. Article 25 GDPR requires controllers to adopt measures both when the means for processing are determined and at the time of the processing itself, while the AI Act imposes obligations on providers and deployers throughout the entire life cycle of an AI system. Both approaches to regulation by design also cover a broad range of values. The GDPR is designed to protect data subjects from the impact that processing might have on their fundamental rights, while the AI Act has the explicit aim of safeguarding health, safety, and public values such as the protection of fundamental rights, democracy, and the rule of law.

To cover all those values, regulated actors will need to use several types of technical and organizational measures for each context. For example, some systems might be able to benefit from anonymised or synthetic data, but a system that generates profiles will necessarily involve personal data. Even when personal data is intrinsic to the application, organizations can still adopt measures and safeguards to protect it to the greatest extent possible. For example, an organization developing an AI system would need to adopt cybersecurity measures to prevent leaks of personal data, while a deployer organization could adopt controls over who has access to AI outputs. Each application will be better served by a different mix of measures, but some best practices can be useful for a broad range of applications.

This unit introduces some examples of technological measures that promote compliance with data protection requirements. Some of these measures are oriented towards data subjects, allowing them to play a more active role in the defence of their rights; others are directed, instead, at the needs of data controllers.

This unit examines three sets of techniques that fall under the broad umbrella of data protection by design. Section 12.1 discusses privacy-enhancing technologies (PETs), geared towards the minimization of personal data processing. Section 12.2 deals with other technical and organizational approaches that foster aspects of data protection that go beyond privacy, such as the exercise of data subject rights. Section 12.3 then wraps up the unit with an overview of technical and organizational approaches that aim to ensure fairness in AI systems.

12.1 Privacy-Enhancing Technologies (PETs)

Learning outcomes

By the end of this section, learners will be able to identify different privacy-enhancing technologies (PETs), illustrate their contribution to data protection, and recognize their conceptual and technical limits.

Privacy-enhancing technologies (PETs) are technical methods developed to reduce the impact of data processing on individual privacy. In general terms, these methods foster privacy by minimizing the amount of data processed in each operation and by ensuring the confidentiality and security of any data that is processed. This section will explore how the use of such technologies can contribute to data protection compliance when AI systems are designed and used.

Many PETs are used for broader purposes than the development of AI systems. For example, there are various techniques for data anonymization, which remove identifying factors in a way that prevents the data from being associated with a natural person. Differential privacy, on the other hand, adds “noise” to data queries, masking individual entries while preserving the overall utility of the data. These methods underscore the value of controlling data access as a means of reducing privacy risks.
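To make the idea concrete, the sketch below applies the Laplace mechanism, the classic building block of differential privacy, to a simple counting query. It is a minimal illustration only: the dataset, the epsilon value, and the helper name `laplace_count` are hypothetical choices made for this example.

```python
import numpy as np

def laplace_count(values, predicate, epsilon=1.0):
    """Answer a counting query with the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise is drawn from
    Laplace(0, 1/epsilon). Smaller epsilon means more noise and
    stronger privacy protection.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many patients in a (hypothetical) dataset are over 65?
ages = [34, 71, 68, 45, 80, 59, 66]
print(laplace_count(ages, lambda a: a > 65, epsilon=0.5))
```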

Additionally, some AI-specific techniques have been designed with privacy in mind. For example, federated learning enables machine learning models to train across decentralized data sources without transferring data directly to a central system. This approach reduces data exposure while still allowing AI models to benefit from diverse datasets.
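The sketch below illustrates the core of this idea with a minimal federated averaging loop for a linear model: each client trains on its own data, and only the updated weights are shared with the server, which averages them. The client data, learning rate, and function names are illustrative assumptions, not a production implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its own data.
    Only the updated weights leave the client, never the raw data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_average(weights, clients, rounds=10):
    """Minimal FedAvg loop: average client weights, weighted by data size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_update(weights, X, y))
            sizes.append(len(y))
        weights = np.average(np.stack(updates), axis=0,
                             weights=np.array(sizes, dtype=float))
    return weights

# Two hypothetical hospitals training a shared model on local data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

print(federated_average(np.zeros(2), clients))
```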

12.1.1 Organizational measures as part of a privacy arrangement

While PETs are powerful tools, they are only one aspect of effective data protection. They must be paired with organizational measures that foster privacy and data security. Internal practices such as training personnel on the responsible handling of data, tracking data access, and setting restrictions on who can interact with AI models that process personal data are essential.

By training staff to handle data responsibly and implementing logging systems that track data access, organizations can create a culture of accountability that complements their technical measures. Additionally, controlling and monitoring access to AI systems helps prevent unauthorized data use and supports compliance with data protection regulations.
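As a small illustration of how such organizational controls can be supported technically, the following sketch combines a simple role-based access check with an access log. The roles, record identifiers, and function names are hypothetical; a real deployment would rely on the organization's own identity management and audit infrastructure.

```python
import functools
import logging

logging.basicConfig(filename="data_access.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def logged_access(func):
    """Record who accessed which record through this function."""
    @functools.wraps(func)
    def wrapper(user, record_id, *args, **kwargs):
        logging.info("user=%s accessed record=%s via %s",
                     user, record_id, func.__name__)
        return func(user, record_id, *args, **kwargs)
    return wrapper

ALLOWED_ROLES = {"clinician", "dpo"}   # hypothetical access policy

@logged_access
def read_patient_record(user, record_id, role):
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"{user} ({role}) may not read {record_id}")
    return {"id": record_id}           # placeholder for the real lookup

print(read_patient_record("dr.silva", "P-1042", role="clinician"))
```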

However, certain privacy risks cannot be mitigated by organizational measures alone. For instance, data protection professionals should be aware of the European Data Protection Board’s (EDPB) Recommendations 01/2020, which caution that measures like access controls are vulnerable to tampering by state actors or external adversaries. This vulnerability highlights the need to evaluate when privacy risks might require changing or limiting the use of certain AI technologies altogether.

Depending on the limitations of technical and organizational measures, an organization might need to consider whether it needs to abandon its (planned) use of AI. For example, a system that automatically allocates scholarships to students based on their academic performance might be unworkable if it cannot be designed without discriminating between applicants in a way prohibited by law. In that case, the necessary design measure is not designing (or using) the AI system in the first place. Sometimes, the only winning move is not to play.

12.1.2 The limits of privacy-enhancing technologies in data protection

Despite their advantages, PETs have limitations that data protection professionals must carefully consider. Some privacy-preserving techniques are still at a pilot stage of development and are not ready for deployment in practice. For instance, although homomorphic encryption, which allows computations on encrypted data without exposing it, shows promise, it remains too complex and resource-intensive for widespread use. Until these emerging PETs become more practical, organizations may need to be cautious with them or be transparent about their limitations to ensure a realistic understanding of compliance. Other PETs, instead, are more mature and can be used more extensively.

An important conceptual limitation of PETs is their focus on data minimization, a key principle in privacy. Minimizing data collection aligns well with privacy goals but does not capture the entire spectrum of GDPR obligations. For example, some of the informational rights of data subjects discussed in Section 11.3 require providing information about how the system considers the circumstances of data subjects. To keep that information accessible, one needs to reduce the overall degree of confidentiality promoted by the system, creating a trade-off between privacy-as-confidentiality and data protection’s goal of promoting control over the use of personal data (Veale et al. 2018). Suppressing data purely for the sake of minimization could inadvertently restrict individuals’ rights and weaken the protection of fundamental rights overall. Thus, data protection officers need to weigh the benefits of minimization against the need to maintain a balanced approach to all GDPR principles.

Ultimately, while PETs do not fulfil every compliance need, they are valuable tools that can significantly reduce privacy risks. Informed use of PETs, combined with robust organizational measures and a clear understanding of their limits, allows data protection officers to align AI systems with legal obligations. PETs should be seen not as standalone solutions but as part of a multi-faceted approach to comprehensive data protection in the AI era.

12.2 Technical measures for AI transparency

Learning outcomes

By the end of this section, learners will be able to exemplify techniques that promote technical transparency in AI systems and assess whether those techniques are adequate considering the relevant data protection risks.

In Chapter 11, we saw that data protection law and the AI Act feature a broad range of information disclosure requirements. There is no one-size-fits-all solution, as data subjects, authorities, and society as a whole need several types of information, which they will use for different purposes. This section examines whether and how design-based interventions can contribute to compliance with those information duties.

Technical interventions might be necessary to the extent that some of the information data controllers must provide refers to the inner workings of an AI system or model. As we discussed in Section 8.2, there is some controversy about the extent to which data controllers must provide detailed information about how the model operates, or if it is sufficient to provide highly abstract information. For some purposes, such as closer audits by data protection authorities, abstract information is not enough. Whenever that is the case, data controllers will need to deal with the technical opacity of AI.

One can distinguish between two sets of technical approaches that can be useful for this purpose. On the one hand, explainable AI (XAI) approaches try to distil the complexity of an AI system into key factors that determine its action. On the other hand, interpretable AI changes the system itself, building it with a simpler model that can be made legible to humans instead of a complex system based on more arcane machine learning techniques. Each of these approaches to the technical complexity of AI technologies has its pros and cons, which we will now consider.

12.2.1 Explainable artificial intelligence and the right to an explanation

XAI models offer a scientific approach to the black box problem. They start from the fact that we often do not know how complex AI systems work. Even if we set up their general architecture and training parameters, the sheer scale of those models, and the fact that they undergo a long training process, means that nobody—not even a trained expert—will have immediate access to everything that happens within an AI system. To solve these problems, XAI techniques aim to reconstruct the decision procedure and offer an understandable account of what is going on (Holzinger et al. 2022). If and when they succeed, the ensuing model contributes to our understanding of the complex system that is being explained.

To achieve this goal, researchers have proposed a dizzying array of technologies. A literature review from a few years ago (Holzinger et al. 2022) identified at least seventeen methods that were in current use as of 2020. Some of these approaches are model-agnostic, that is, they try to reconstruct what a model does based on its outputs. For example, the LIME technique (Holzinger et al. 2022, p. 15) tries to represent the predictions of a complex model, such as a deep neural network, in terms of a surrogate model that is much simpler to understand than the original one. Anchor models try to identify “if-then” decision rules that capture the behaviour of a complex model. Those and other techniques end up creating surrogates that can be used for understanding the original AI model.
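As an illustration, the sketch below uses the open-source `lime` package to build a local surrogate explanation for a single prediction of a random forest, which here stands in for a more complex black-box model. The dataset, feature names, and parameter choices are arbitrary and for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# A "black-box" model standing in for a complex AI system.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME fits a simple, local surrogate model around one prediction.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["negative", "positive"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba,
                                         num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```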

Other explanations are contingent on certain features of the models they explain. Layer-wise relevance propagation (LRP) approaches, for example, offer a procedure through which one can simplify the underlying logic of a larger model. To do so, they require information about that model’s internal arrangements (Holzinger et al. 2022, p. 18). The ensuing explanation is potentially more complex than what a model-agnostic explanation would offer, but the access to model-specific information allows the explanation to reflect more of the original model’s actual functioning.
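To give a flavour of how such model-specific propagation works, the following sketch implements the epsilon rule of LRP for a toy two-layer network in plain NumPy, redistributing the output score back onto the input features. The network, its weights, and the input are hypothetical; real LRP implementations support many layer types and propagation rules.

```python
import numpy as np

def lrp_dense(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer.

    a: input activations (in,), W: weights (in, out), b: biases (out,),
    R_out: relevance arriving from the layer above (out,).
    Returns relevance redistributed onto the layer's inputs (in,).
    """
    z = a @ W + b                    # forward pre-activations
    z = z + eps * np.sign(z)         # stabiliser avoids division by zero
    s = R_out / z
    return a * (W @ s)

# Toy two-layer ReLU network with fixed (hypothetical) weights.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

x = np.array([0.5, 1.0, -0.3, 2.0])
h = np.maximum(0, x @ W1 + b1)       # hidden activations
out = h @ W2 + b2                     # network output score

# Propagate the output score back to the input features.
R_hidden = lrp_dense(h, W2, b2, out)
R_input = lrp_dense(x, W1, b1, R_hidden)
print("relevance per input feature:", R_input)
```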

Model-agnostic and model-sensitive XAI techniques both advance scientific understanding of what is going on within AI models. This kind of understanding, however, is not necessarily equal to what the law demands when it establishes a “right to an explanation.” Most XAI techniques aim at a scientific explanation of the models they explain, that is, they supply potential mechanisms that would explain what the model does (Creel 2022). The legal conception of a right to an explanation is, instead, related to the justification of a decision: whether it is compatible with legal requirements, including but not limited to the rights discussed in Section 8.2. There are some reasons to believe one kind of insight does not always lead to the other.

Some recent works (in particular, Bordt et al. 2022) have suggested that XAI methods cannot be trusted in adversarial contexts. In such contexts, the data subject’s interest in discovering how an AI system works is contrary to the data controller’s interest in preserving that information. For example, InnovaHospital might want to prevent a patient from understanding an AI diagnosis tool for several reasons, such as avoiding a lawsuit from a misdiagnosis or protecting intellectual property. Whenever that is the case, the data controller has various possibilities for manipulating the outputs of the explanation model. The use of XAI would not be enough to ensure trust and would need to be accompanied by technical and organizational measures to reduce the controller’s possibilities of manipulation.

Another problem is that XAI techniques are not necessarily more understandable than opaque models. A study on the legibility of AI models (Bell et al. 2022) found that many people consider “simpler” models still too complex to understand. As such, they are not necessarily more accessible or insightful than the bigger models they aim to replace. Organizations using XAI technologies must therefore make sure that the outputs are understandable for the audiences they are meant to reach. If that is not possible, then the use of XAI might not be an answer to the legal demands for explanation.

12.2.2 Inherently interpretable models

If XAI techniques are not enough to provide transparency, what can be done? One approach to that problem is the use of inherently interpretable models. Even though many advanced AI applications are powered by complex, opaque AI models, there are many important problems that do not require all that complexity. In fact, computer scientists such as Cynthia Rudin (2019) have shown that, for some tasks, simpler models can perform at least as well as black box models. Whenever that is the case, data controllers have fewer reasons to rely on the opaque alternatives, especially for sensitive tasks.
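The sketch below illustrates this point: a shallow decision tree, whose rules can be printed and read, is compared against a gradient-boosting model on a standard benchmark dataset. The dataset and model choices are illustrative only; whether the simpler model is "good enough" has to be judged case by case.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A complex "black-box" model and a simple, inherently interpretable one.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
interpretable = DecisionTreeClassifier(max_depth=3,
                                       random_state=0).fit(X_train, y_train)

print("black-box accuracy:     ", round(black_box.score(X_test, y_test), 3))
print("interpretable accuracy: ", round(interpretable.score(X_test, y_test), 3))

# The shallow tree can be printed as human-readable decision rules.
print(export_text(interpretable, feature_names=list(X.columns)))
```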

The move towards simpler models can be desirable for several reasons, such as reduced costs in development and execution. However, its usefulness for transparency will depend on the audience for which the information is meant. The same concerns with legibility discussed above were also identified when users were exposed to interpretable AI models (Bell et al. 2022, Kolkman 2022). Still, these models might be more legible than black-box alternatives for technical experts, who have the background needed to make sense of them. They might also be useful for investigative journalists, who can experiment with the parameters of AI models and find out how they operate. Therefore, reliance on inherently interpretable models can be beneficial even if those models are not necessarily more accessible for laypeople (Busuioc et al. 2023).

12.3 Designing for algorithmic fairness

Learning outcomes

By the end of this section, learners will be able to exemplify technical approaches that can be used for the design of fairer algorithms.

This section examines some design measures for addressing fairness issues in AI models. As we have examined in Section 10.3, algorithmic fairness is a complicated problem, both for its technical challenges and for the difficulty in representing legal understandings of fairness in a way that can be measured and implemented in an AI system. Nonetheless, some technical approaches can promote fairness, or at least mitigate known risks such as algorithmic discrimination and biases.

Best practices in addressing risks to fairness (such as Snoek and Barberá 2024) emphasize the need to address issues throughout the entire life cycle of an AI system. That is, responses to fairness issues are not restricted to the development process or to the initial deployment. Hence, one must look at all the life cycle stages examined in Part II of this training module.

12.3.1 Fairness interventions at the inception stage

At the inception stage, fairness can be pursued in many ways. Organizations can evaluate whether the purposes they pursue with an AI system or model are not, in themselves, discriminatory. For example, an AI system that is designed to carry out an unlawful form of discrimination cannot be salvaged by any technical measures.

Organizations might also want to examine how they frame the problem(s) that they want AI to solve, in order to avoid abstraction traps (Snoek and Barberá 2024, p. 20), that is, situations in which the design ignores important aspects of reality. For example, if InnovaHospital wants to create an AI system to assess heart attack risks, it needs to take into account the differences in symptoms between men and women.

12.3.1.1 Fairness in design and development

When it comes to the development of an AI system, fairness practices can be directed towards the data used in training, the development of the algorithmic system, and the documentation of system design decisions. All of those are useful not just for avoiding potential sources of unfairness in algorithmic predictions, but also for keeping track of design decisions that are relevant for accountability and for future updates to ensure the system remains fair.

The foundation of a fair AI system lies in the quality of its data:

  • Ensuring completeness is essential, as gaps in data can lead to skewed or biased outputs. For example, a university admissions model at the UNw university might underperform for certain demographic groups if its training data lacks sufficient examples of applicants from those groups.
  • Similarly, accuracy in labelling and data collection is crucial to avoid embedding errors into the system.
  • Representativeness is another key aspect: datasets should reflect the diversity of the real-world populations the AI system will serve. For instance, DigiToys must ensure its AI-driven toys are tested on a diverse range of children to avoid unintended exclusion or stereotyping. A minimal check of this kind is sketched after this list.
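The following sketch compares group shares in a training set against external reference shares (for example, from census or registry statistics) and flags underrepresented groups. The column names, reference figures, and tolerance threshold are hypothetical.

```python
import pandas as pd

def representation_report(df, group_col, reference_shares, tolerance=0.05):
    """Compare group shares in a training set against reference shares.

    reference_shares: dict mapping group -> expected population share.
    Groups whose observed share falls below the expected share by more
    than the tolerance are flagged as underrepresented.
    """
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({"group": group,
                     "observed": round(share, 3),
                     "expected": expected,
                     "underrepresented": share < expected - tolerance})
    return pd.DataFrame(rows)

# Hypothetical applicant data for the UNw admissions example.
applicants = pd.DataFrame({"gender": ["f", "m", "m", "m", "f", "m", "m", "m"]})
print(representation_report(applicants, "gender", {"f": 0.5, "m": 0.5}))
```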

The virtues of good documentation we discussed in Section 10.1 of the module are also relevant for promoting fairness. All relevant decisions and assumptions made during the AI lifecycle should be recorded systematically. Such comprehensiveness ensures transparency and enables future audits. The language used in documentation also matters: it should be accessible to diverse stakeholders, avoiding overly technical jargon while ensuring clarity. Furthermore, keeping documentation up to date is essential, as decisions about data, algorithms, and design choices must be revisited in response to evolving societal and regulatory contexts. For example, InnovaHospital might track updates in medical guidelines or regulatory changes to ensure its diagnostic models remain fair and compliant.

Finally, fairness during the design of an AI system requires attention to the model training process. Designing fair algorithms involves employing appropriate fairness metrics to measure and address potential biases. Here, quantitative and qualitative metrics (such as those proposed by Wachter et al. 2021, Weerts et al. 2023) can be helpful to diagnose certain issues with algorithmic decisions, as long as one takes care to mitigate the issues discussed in Section 10.3.
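As an illustration, the sketch below computes two widely used group fairness measures, the demographic parity gap and the equal opportunity (true positive rate) gap, from predictions and group labels. These are generic metrics rather than the specific proposals of the authors cited above, and the data are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical admission decisions for two applicant groups.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print("demographic parity gap:", demographic_parity_difference(y_pred, group))
print("equal opportunity gap: ", equal_opportunity_difference(y_true, y_pred, group))
```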

Understanding the sources of bias is equally important. Research on algorithmic biases has proposed various forms in which the training processes, and the decisions that guide them, can skew the operation of an AI model. For example, a learning bias happens when an AI model prioritizes some metric over other objectives that the system must pursue, such as prioritizing effectiveness over fairness (Snoek and Barberá 2024, p. 17). Those biases can be amplified later, as humans have their own cognitive biases. For example, people overseeing AI systems often override decisions that “look wrong” while deferring to algorithmic decisions that conform to their biases (Alon-Barkat and Busuioc 2023). Fair development of AI systems will therefore require attention both to technical biases and those affecting human-computer interaction.

12.3.2 Fairness during and after the initial deployment

Once an AI system has been deployed, its core design is generally fixed. At this stage, promoting fairness involves ensuring that the system’s outputs are applied in ways that align with equitable outcomes. However, one must still pay attention to potential fairness issues related to the AI outputs themselves. This is the case for two reasons:

  • Some sources of unfairness might have escaped detection during the development process. If they go unchecked in deployment, they might only be noticed after they have harmed data subjects.
  • Even if a system were perfectly fair at first, unfairness might appear after deployment. This might happen as part of a model’s self-learning processes, because the data that was originally relevant is no longer so, because society has changed, or for many other reasons.

As such, organizations need to maintain ongoing surveillance of whether their AI systems and models are processing data fairly.

During the deployment process, organizations can promote fairness by testing their new systems in real-world conditions. By doing so, they can verify whether those systems function as intended across diverse settings. For example, DigiToys might evaluate how its interactive toys perform in households with varying languages and cultural norms, ensuring consistent and appropriate interactions. Similarly, InnovaHospital could test its diagnostic models across diverse patient demographics to confirm equitable performance. If any issues are detected, further work might be needed on the system. Alternatively, an organization might adjust its procedures to avoid unfairness, for example by improving human oversight once the system is deployed.

After the system is deployed, an organization needs to evaluate what biases might emerge during operation. For instance, an admissions algorithm at UNw could inadvertently reinforce pre-existing inequalities in access to education if societal biases are reflected in the input data or institutional policies. Regular assessments help identify and address such contextual biases.

To ensure continued fairness, organizations should keep tracking fairness metrics after deployment. For example, as societal norms or data patterns evolve, an AI system might need recalibration to avoid perpetuating outdated or unfair assumptions. Bias detection should be an ongoing effort, incorporated into the risk management practices discussed in the chapter on deployment.
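A minimal sketch of such ongoing tracking follows: it recomputes the demographic parity gap over rolling time windows of a prediction log and flags windows that exceed a tolerance threshold. The log structure, window size, and threshold are all hypothetical choices that an organization would need to set for its own context.

```python
import numpy as np
import pandas as pd

def monitor_fairness(log, window="30D", threshold=0.1):
    """Track the demographic parity gap over time windows and flag
    windows where it exceeds a (hypothetical) tolerance threshold."""
    log = log.set_index("timestamp").sort_index()
    reports = []
    for period, chunk in log.groupby(pd.Grouper(freq=window)):
        if chunk.empty:
            continue
        rates = chunk.groupby("group")["prediction"].mean()
        gap = float(rates.max() - rates.min())
        reports.append({"window_start": period,
                        "parity_gap": round(gap, 3),
                        "alert": gap > threshold})
    return pd.DataFrame(reports)

# Hypothetical prediction log accumulated after deployment.
rng = np.random.default_rng(2)
log = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01", periods=200, freq="6h"),
    "group": rng.choice(["a", "b"], size=200),
    "prediction": rng.integers(0, 2, size=200),
})
print(monitor_fairness(log))
```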

Finally, ongoing interaction with regulators and affected communities is vital to maintaining fairness and accountability. Engaging directly with those impacted by the AI system helps organizations understand real-world fairness concerns and adapt to shifting regulatory and societal expectations. For example, DigiToys could collaborate with parents’ groups to address concerns about how its AI systems influence children’s behaviour, while InnovaHospital might consult healthcare regulators and patient advocacy groups to align its practices with ethical standards. Relying on those actors will help any organization in finding fairness issues that escaped its own monitoring tools.

12.4 Conclusion

“Regulation by design” is a concept that is in vogue nowadays, and not without reason. If problems can be solved by technical means, this contributes to a higher level of protection for data subjects, reducing possibilities of human error and subversion. Not all problems can be solved by technical measures, for a series of reasons, such as the limits of what one can represent in a computer language, the current state of the art, and potential conflicts between the values that the various by-design approaches are meant to protect. Nonetheless, technical design remains a powerful tool for compliance, as shown by the various interventions discussed in this unit.

To recapitulate the key points of our discussion:

  • Privacy-enhancing technologies (PETs) seek to maintain the utility of a system while reducing the amount of personal data it processes.
    • Some technologies, such as differential privacy approaches to the training set, homomorphic encryption, and federated learning, might be particularly relevant in the context of AI.
    • The development of PETs is grounded on a view of privacy as concealment, which is not entirely aligned with the idea of control in data protection. Hence, the use of PETs is not enough to discharge all data protection obligations.
    • However, they contribute, at the very least, to data minimization, and so an organization might want to consider the extensive use of PETs where it makes sense.
  • Technical interventions cannot fully remove opacity, but they can reduce the efforts needed to understand the inner workings of AI systems.
    • Explainable AI (XAI) models try to offer a scientific explanation of the main factors behind an AI approach. Various techniques have been proposed, but they struggle in adversarial contexts, which are common in precisely the kinds of situations likely to create legal issues.
    • Inherently interpretable models are obtained by building systems without the use of black-box techniques. This is not always possible, given the success of black-box models such as neural networks for some problems. Yet, there are many applications in which those opaque models do not necessarily perform better than the alternatives.
    • In addition, some empirical research suggests that neither approach is really intelligible to the general public. Technical transparency might nonetheless be beneficial for technical experts, as well as for actors such as courts, supervisory authorities, and investigative journalists.
  • Despite the various challenges to algorithmic fairness, some multidisciplinary teams have developed approaches that are feasible in practice and address some real unfairness concerns.
    • This research is often grounded on US law, and as such it is not always directly applicable for compliance with EU law.
    • There are extensive mappings of biases that can emerge during the design process, and addressing those biases can contribute to fairness.
    • Any approach to fairness in AI will require constant recalibration as technologies and social expectations change.

It might be the case that the different values promoted by each by-design approach clash with one another. The conflict might be conceptual: for example, privacy by design might require the elimination of some information that might be useful for the exercise of other rights (Veale et al. 2018). But it might also emerge because of limited time and resources that do not allow designers to meet all needs equally. Solving these conflicts will require careful consideration, which also requires engagement with branches of the law beyond data protection. Still, to find the ideal equilibrium, one must consider the entire life cycle of the AI system rather than look just at immediate needs. Otherwise, today’s solution will likely become tomorrow’s compliance problem.

Exercises

Exercise 1. Which of the following best describes the role of privacy-enhancing technologies (PETs) in AI systems?

  • a. They ensure complete data anonymity in AI systems.
  • b. They are only applicable to healthcare organizations like InnovaHospital.
  • c. They eliminate the need for organizational measures.
  • d. They reduce data processing impacts on individual privacy.
  • e. They replace the need for compliance with GDPR.

Exercise 2. Which of the following best explains the contribution of explainable AI (XAI) against AI opacity?

  • a. They replace GDPR obligations with technical solutions.
  • b. They simplify algorithms so non-technical users understand them.
  • c. They use a scientific approach to study the unknowns within AI systems and models.
  • d. They provide explanations that are not subject to manipulation by the data controller.
  • e. They prevent AI systems from learning biases.

Exercise 3. What role does comprehensive documentation play in fairness?

  • a. Reducing privacy risks
  • b. Eliminating technical complexity
  • c. Addressing opacity in training data
  • d. Ensuring immediate compliance
  • e. Supporting accountability and audits.

Exercise 4. Which of the following approaches leads to more comprehensive compliance with regulation by design requirements for AI systems?

  • a. Combining Explainable AI methods with PETs like federated learning.
  • b. Using data minimization strategies and ignoring transparency obligations.
  • c. Prioritizing anonymization and minimizing the production of documents during the development process.
  • d. Avoiding inherently interpretable models to enhance algorithmic complexity.
  • e. Using fairness and accuracy as the two relevant metrics for system acceptance.

Exercise 5. How might transparency and fairness conflict in InnovaHospital’s diagnostic tools?

  • a. Increasing algorithm interpretability might reduce privacy protections for patient data.
  • b. Increasing algorithm interpretability might reduce the system’s ability to incorporate fairness metrics effectively.
  • c. Transparency could lead to efficiency losses but improve patient outcomes.
  • d. Documenting fairness metrics might eliminate the need for Explainable AI methods.
  • e. Enhanced transparency might allow data controllers to bypass fairness considerations.

12.4.0.1 Prompt for reflection

Reflect on a real-world scenario (or one of the hypothetical cases of UNw, DigiToys, or InnovaHospital) where implementing regulation by design principles might lead to a conflict between privacy, transparency, and fairness. How should organizations prioritize these principles in their AI systems? What strategies can be employed to mitigate the risks associated with favouring one principle over another? Are there any contexts where one principle might justifiably take precedence?

12.4.1 Answer sheet

Exercise 1. Alternative D is correct. Full anonymity is rarely feasible, and as such compliance with the GDPR remains necessary. Likewise, it is rarely the case that PETs are enough to cover all data protection needs in and of themselves. But they can be used in any sector.

Exercise 2. Alternative C is correct. An explained system might still be biased, and the explanations are not always understandable by non-experts. Explanations are prone to manipulation by data controllers, and they do not replace GDPR obligations.

Exercise 3. Alternative E is correct. Documentation does not make a system fairer. However, the need to register decisions might lead providers to make decisions they can justify afterwards, and accountability might be used to seek redress and rectification of unfairness after it has already taken place.

Exercise 4. Alternative A is correct, as it adopts measures that address more than one by-design regulatory goal. Alternative E also tackles more than one objective, but its focus on two metrics risks excluding other important values, such as transparency.

Exercise 5. Alternative B is correct. Alternatives A and D represent trade-offs between other values, while alternatives C and E present false oppositions.

References

Marco Almada and others, ‘Art. 25. Data Protection by Design and by Default’ in Indra Spiecker gen. Döhmann and others (eds), General Data Protection Regulation: Article-by-Article Commentary (Beck; Nomos; Hart Publishing 2023).

Saar Alon-Barkat and Madalina Busuioc, ‘Human-AI Interactions in Public Sector Decision-Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice’ (2023) 33 Journal of Public Administration Research and Theory 153.

Andrew Bell and others, ‘It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy’, FAccT ’22 (ACM 2022).

Sebastian Bordt and others, ‘Post-Hoc Explanations Fail to Achieve Their Purpose in Adversarial Contexts’, FAccT ’22 (ACM 2022).

Lee A Bygrave, ‘Article 25. Data Protection by Design and by Default’ in Christopher Kuner and others (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020).

Alessandra Calvi and others, ‘The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-Offs between PETs and Fairness’, FAccT 2024 (ACM 2024).

Luca Deck and others, ‘A Critical Survey on Fairness Benefits of Explainable AI’, FAccT 2024 (ACM 2024).

Pierre Dewitte, ‘The Many Shades of Impact Assessments: An Analysis of Data Protection by Design in the Case Law of National Supervisory Authorities’ (2024) 2024 Technology and Regulation 209.

Ernestine Dickhaut and others, ‘Lawfulness by Design – Development and Evaluation of Lawful Design Patterns to Consider Legal Requirements’ [2023] European Journal of Information Systems Early Access.

EDPB, Guidelines 4/2019 on Article 25 on Data Protection by Design and by Default (2020).

ENISA, Best Practices for Cyber Crisis Management (2024).

Daan Kolkman, ‘The (in)Credibility of Algorithmic Models to Non-Experts’ (2022) 25 Information, Communication & Society 93.

Efstratios Koulierakis, ‘Certification as Guidance for Data Protection by Design’ (2024) 38 International Review of Law, Computers & Technology 245.

Andreas Holzinger and others, ‘Explainable AI Methods - A Brief Overview’ in Andreas Holzinger and others (eds), xxAI - Beyond Explainable AI (Springer 2022).

Christina Michelakaki and Sebastião Barros Vale, ‘Unlocking Data Protection By Design & By Default: Lessons from the Enforcement of Article 25 GDPR’ (Future of Privacy Forum May 2023).

Cecilia Panigutti and others, ‘The Role of Explainable AI in the Context of the AI Act’, 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM 2023).

Cynthia Rudin, ‘Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead’ (2019) 1 Nature Machine Intelligence.

Suzanne Snoek and Isabel Barberá, ‘From Inception to Retirement: Addressing Bias Throughout the Lifecycle of AI Systems. A Practical Guide’ (2024).

Michael Veale and others, ‘When Data Protection by Design and Data Subject Rights Clash’ (2018) 8 International Data Privacy Law 105.

Sandra Wachter and others, ‘Bias Preservation in Machine Learning: The Legality of Fairness Metrics under EU Non-Discrimination Law’ (2021) 123 West Virginia Law Review 735.