10  Fairness and Accountability for AI

Learning outcomes

By the end of this chapter, learners will be able to:

  • exemplify documents that can support accountability regarding AI systems and models;
  • sketch the elements those documents must contain; and
  • plan a documentation strategy for an organization.

Accountability is a core principle of EU data protection law. It is explicitly mentioned in Article 5(2) GDPR: a data controller is responsible for compliance with the other data protection principles, and they must be able to demonstrate that compliance. Furthermore, various provisions of the GDPR give effect to that principle by creating mechanisms to hold controllers to account for their processing. Article 24 establishes that controllers must adopt technical and organizational measures that allow them to demonstrate compliance with other data protection requirements. Such provisions, as is often the case, acquire new dimensions when AI is used.

Due to AI’s complexity and capacity to process vast amounts of personal data, data protection officers must ensure that these systems remain compliant with the GDPR by implementing practices that make compliance visible and traceable. This can be particularly relevant in AI applications in which automated decision-making processes impact individuals, such as profiling or personalized recommendations. Regular documentation of AI-related data processing activities is a necessary step, as it provides concrete proof of compliance efforts, allowing both internal stakeholders and external authorities to review and verify the organization’s commitment to GDPR principles.

Applying accountability to AI systems presents unique challenges. Unlike traditional data processing, AI often involves extraordinarily complex algorithms that can operate opaquely. As a result, it can be difficult to trace precisely how personal data is processed or to understand how certain outcomes are reached. These unique characteristics create obstacles for transparency, as the underlying logic and processes in AI can be difficult for even data protection experts to interpret.

This opacity is problematic from a GDPR perspective, as accountability requires a level of transparency and demonstrable control over data flows and decision-making processes. For data protection officers, this means conducting thorough assessments of how AI systems handle data and analysing the decision-making processes these systems employ. Such assessments enable organizations to meet GDPR’s accountability requirements by ensuring that they understand and can explain how AI systems operate, thereby facilitating both compliance and transparency.

One necessary step to carrying out this kind of assessment is the reduction of the various forms of opacity surrounding an AI system. In Section 11.3, we examine technical approaches towards the explanation of AI systems, that is, techniques that allow one to understand the technical factors that guide a system’s decision processes. But, as we discussed in Section 4.3, opacity is not solely a technical problem: there are also legal factors that prevent the release of information about an AI system. And sometimes technical complexity is even instrumentalized to prevent the release of information about an organization’s practices.1 As such, those technical measures for transparency need to be supported by accountability measures that ensure an organization’s decisions on how and what to disclose can be evaluated.

We now examine three issues that are relevant for accountability when AI systems are used to process personal data. Section 10.1 discusses how various kinds of software documentation can assist organizations in demonstrating compliance with data protection requirements. Section 10.2 deals with a specific kind of document that is sometimes required by the GDPR: the data protection impact assessment. Finally, Section 10.3 discusses how data controllers can responsibly pursue fairness in AI systems.

10.1 Documenting technical decisions

Learning outcomes

By the end of this section, learners will be able to distinguish between the various roles of technical documentation and map the elements that are needed for documentation to support accountability.

Large software projects are often accompanied by various kinds of technical documents. Those documents are drawn up in response to several needs, such as:

  • Registering and explaining strategic decisions for later implementation.
  • Supplying technical detail about what has been done within a system, to facilitate future updates and maintenance actions.
  • Guiding potential future users on how to operate an AI system.
  • Demonstrating how the system complies with software requirements.

Those needs often require vastly distinct types of documents. The level of detail that is adequate for a software developer learning about a system is likely to be too complex for an operator who just needs to understand what the system does and how to use it. Yet, each of those documents can be relevant for different data protection tasks.

In line with its technology-neutral approach, the GDPR mostly refrains from prescribing specific types of documents. In some cases, as we will soon see in Section 10.2, the proper deployment of an AI system might require a data protection impact assessment. However, the GDPR focuses on defining the contents that must be supplied rather than the form of their expression. Its Article 15, for instance, allows data subjects to request some information from data controllers, while Article 24(1) obliges controllers to be able to show that processing is in conformity with the GDPR.

It is true that documentation creates some friction with development processes. Drawing up documentation and keeping it up to date demands additional effort, and the sheer volume of documents relating to a large AI system can be intimidating. Because those efforts are often seen as having limited returns, one of the key tenets of agile software development2 is the idea that working software is more important than comprehensive documentation. This tenet does not mean that documentation should not exist, but it suggests the need to limit written records to what is essential for business reasons, including compliance with the law.

10.1.1 The compliance roles of software documents

One of the challenges organizations face in determining what documents are essential is that there is no closed list of such documents. This is because the value of documented information varies with context. A piece of information that is useless for understanding the impact of a system used in social media might make all the difference for assessing whether a medical diagnosis system works as intended. Some types of documents are required by law, such as those demanded by sector-specific legislation. Others emerge as industry standards, as technical experts deem some kinds of information to be essential for their work and for accountability. This section cannot offer an exhaustive list of such documents, but it will introduce some that are deemed to be useful for AI governance.

The first type of document that can come in handy for an organization relates to the decisions it makes during the software life cycle. Any organization that develops an AI system makes various choices throughout the development process: what algorithms should be used? What data is relevant for training this AI system? How should we test the completed AI system? Likewise, a deployer of an AI system must make choices such as determining which AI system to use and how to use it. In both cases, the choices will shape how personal data is processed. As a result, Article 24 GDPR entails that organizations must be able to demonstrate that such choices are made in compliance with data protection law.

By documenting the process behind those choices, the actual choices made, and how they are implemented, an organization can demonstrate its due diligence regarding the numerous factors highlighted in Part II of this training module. Organizations providing systems classified as high-risk under the AI Act are obliged to provide this kind of information, covering at least the criteria flagged in Annex IV. Other controllers are not bound by this requirement, but they should still consider documenting those decisions, especially those that create (or address) more risk.
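How those decisions are recorded is left to each organization. Below is a minimal sketch of what a decision-record entry could look like, assuming an organization keeps such a log in a structured register; the class and field names are illustrative, not a format required by the GDPR or the AI Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignDecisionRecord:
    """One entry in a hypothetical log of development or deployment choices."""
    decision_id: str                        # e.g. "DD-2025-014"
    title: str                              # the choice that was made
    alternatives_considered: list[str]      # options that were examined and rejected
    rationale: str                          # why this option was chosen
    data_protection_impact: str             # how the choice affects personal data processing
    mitigations: list[str] = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)
    owner: str = ""                         # person or team accountable for the decision

# Example: a deployer documenting the selection of a third-party screening model.
record = DesignDecisionRecord(
    decision_id="DD-2025-014",
    title="Adopt vendor X's resume-screening model instead of in-house training",
    alternatives_considered=["Train an in-house model", "Keep manual screening"],
    rationale="Vendor model performed best on our validation data and bias tests",
    data_protection_impact="Applicant data is shared with the vendor as a processor",
    mitigations=["Data processing agreement", "Pseudonymisation before transfer"],
    owner="HR analytics team",
)
```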

The second type of documentation that is relevant for AI systems refers to a system’s instructions for use. When an organization provides an AI system, it makes certain assumptions about the purposes for which the system might be used and how somebody might use it for those ends. Even if a provider does its part in anticipating risks, as discussed in Chapter 9, the system might still cause harm if deployers ignore the measures and safeguards put in place to address risk. For example, if the university UNw inputs personal data about students into a public chatbot that uses that data for training, some of that personal data might become accessible to other users of the chatbot. Following the instructions for use is an organizational measure to mitigate risk.

Finally, an organization might want to document the results of system operation:

  • For a provider, this might mean keeping a paper trail of the software testing it conducts (as seen in Section 7.2) and the results of any audits, as well as any bug reports received from its customers afterwards.
  • For a deployer, responsibility entails keeping track of what happens during system operation, to contact providers, affected parties, and the relevant authorities in case of harm.

As Section 9.2 shows, the AI Act creates specific requirements in this regard for high-risk AI systems. But the responsibility to monitor outcomes is already present in data protection law, so it applies regardless of risk level.
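As a minimal sketch, a deployer might keep an append-only log of system runs and incidents so that providers, affected parties, and authorities can later be informed; the function, field names, and log format below are illustrative assumptions, not a format prescribed by the AI Act or the GDPR.

```python
import logging
from datetime import datetime, timezone
from typing import Optional

# Append-only record of system use; storing references rather than raw inputs
# avoids duplicating personal data inside the log itself.
logging.basicConfig(filename="ai_operations.log", level=logging.INFO)

def log_system_run(system_id: str, input_ref: str, output_summary: str,
                   operator: str, incident: Optional[str] = None) -> None:
    """Record one use of the system, with a pointer to the data processed."""
    logging.info(
        "system=%s time=%s input_ref=%s output=%s operator=%s incident=%s",
        system_id, datetime.now(timezone.utc).isoformat(),
        input_ref, output_summary, operator, incident or "none",
    )

# Example: recording a flagged output so it can later be reported to the provider.
log_system_run("triage-bot-v2", "case-4821", "priority=high", "nurse-on-duty",
               incident="output contradicted the clinician's assessment")
```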

10.1.2 Best practices in AI system documentation

Documentation does not exist for the sole purpose of compliance. It also plays a variety of other roles in software. Some types of documents help software developers in upgrading and maintaining existing systems, while others help prospective buyers make sense of the tool. In some applications, there might even be an interest in making information about the system accessible to the public. For example, the use of AI in public-facing applications might be made more legitimate by making clear to the public the role of the AI system. Accordingly, we will now consider some best practices for documentation.

One best practice is to ensure that documentation is comprehensive and structured. This means clearly defining sections within documents to address several aspects of the AI system, such as data sources, processing methods, model performance, and ethical considerations. By adopting a standardized structure, organizations can facilitate navigation and understanding for various audiences. For instance, technical teams may require in-depth details about algorithms and data processing techniques, while executive management may need a high-level overview that focuses on compliance, risk management, and strategic implications.
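As an illustration, the standardized structure described above could be captured in a simple template that drafting teams check their documents against; the section names below mirror the aspects mentioned in this section but are otherwise illustrative assumptions rather than a mandated format.

```python
# Hypothetical skeleton for structured AI system documentation.
DOCUMENTATION_TEMPLATE = {
    "system_overview": ["intended purpose", "deployment context", "responsible team"],
    "data_sources": ["datasets used", "lawful basis", "retention periods"],
    "processing_methods": ["model architecture", "training procedure", "pre-processing"],
    "model_performance": ["evaluation metrics", "test results", "known limitations"],
    "ethical_considerations": ["bias assessments", "fairness measures", "human oversight"],
    "compliance": ["DPIA reference", "instructions for use", "audit history"],
}

def missing_sections(document: dict) -> list:
    """Return the template sections that a draft document has not yet filled in."""
    return [section for section in DOCUMENTATION_TEMPLATE if section not in document]

# Example: a draft that still lacks its ethics and compliance sections.
draft = {"system_overview": "...", "data_sources": "...",
         "processing_methods": "...", "model_performance": "..."}
print(missing_sections(draft))  # ['ethical_considerations', 'compliance']
```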

On a related note, it is crucial to tailor the language and content of the documentation to the specific audience. For technical audiences, documentation should include precise terminology and detailed descriptions of algorithms, data processing methods, and system architecture. In contrast, documentation aimed at non-technical stakeholders, such as compliance officers or executives, should focus on implications for data protection, compliance status, and risk assessments, avoiding overly technical jargon. This approach ensures that all stakeholders can access the information relevant to their roles and responsibilities, enhancing overall understanding and engagement with the AI system.

Another important aspect is to maintain up-to-date documentation. AI systems can evolve rapidly, with models being updated or new data sources introduced. Organizations should implement processes for regularly reviewing and revising documentation to reflect these changes accurately. This practice not only aids in compliance with the GDPR’s accountability requirements but also supports internal audits and assessments, as outdated information can lead to misunderstandings and increased compliance risks.

Finally, organizations should include a section on ethical considerations and potential biases in their documentation. This part should address how the AI system is designed to mitigate bias, the diversity of the training data, and any measures taken to ensure fairness and transparency in automated decisions. By documenting these aspects, organizations demonstrate their commitment to ethical AI practices and provide data protection officers with the necessary insights to address potential risks related to data subjects’ rights and freedoms.

10.2 Varieties of impact assessment for AI

Learning outcomes

By the end of this section, learners will be able to distinguish between various kinds of impact assessment report that are associated with AI systems and identify when each type of report is legally required.

A data protection impact assessment (DPIA), as its name suggests, is an evaluation carried out by a data controller before undertaking certain forms of high-risk data processing. Under Article 35(1) GDPR, a DPIA is required whenever the nature, scope, context, and purposes of processing suggest it is likely to result in a high risk to the rights and freedoms of natural persons. That same provision highlights that the use of “new technologies” is likely to trigger the need for an impact assessment. This section, accordingly, discusses when a DPIA is required for AI systems and what should be contained in that assessment.

In particular, a DPIA is required under Article 35(3) GDPR when there is:

  • A systematic and extensive evaluation of personal aspects relating to natural persons, which provides the basis for decisions that produce legal effects (or similarly significant effects) for the natural person concerned.
  • Large-scale processing of special categories of personal data or personal data relating to criminal convictions and offences.
  • A systematic monitoring of a publicly accessible area on a large scale.3

Some of those scenarios are also present in the AI Act’s list of high-risk AI systems under Annex III. This does not mean, however, that a DPIA is only needed for systems classified as high-risk under the AI Act. After all, the GDPR uses the risks of processing as the relevant criterion for determining the need for a DPIA, whereas the AI Act is concerned with the technical system as a whole. Often, systems that are not particularly risky from a technical standpoint might nonetheless create problems when (mis)used in sensitive contexts, as shown, for example, by the various spreadsheets used for assessing the risk of benefits fraud in Dutch municipalities. Technical complexity is a complicating factor when choosing the measures that need to be applied, but the lack of complexity is not necessarily a sign that an application does not create data protection risks.

10.2.1 DPIA before the deployment of an AI system

For deployers, the first step is to self-assess whether the AI system’s processing of personal data constitutes a high risk to the rights and freedoms of natural persons. This is likely to be the case for a system classified as high-risk under the AI Act, as the Act’s risk classification is based on the impact of AI systems on fundamental rights. For example, consider a scenario where InnovaHospital decides to use AI in a medical device that falls into the most strictly regulated classes of the Medical Devices Regulation. The use of an inadequate AI system can create risks to (among others) the right to health of the patients exposed to the device. As such, the system not only meets the AI Act’s definition of high risk but is also likely to create the kind of high risk that demands a DPIA under the GDPR.

However, the DPIA’s risk requirement might be met even if a system is not classified as high-risk under the AI Act. For instance, AI applications in tax administration could pose significant privacy risks due to the potential for affecting individuals’ legal rights. As such, they are likely to require a data protection impact assessment, even if the use of AI in tax is explicitly excluded from the AI Act’s definition of “law enforcement.” This example illustrates that the AI Act can offer a guideline to the application of the GDPR’s requirement for a DPIA, but it does not replace a data controller’s careful evaluation of the context.

When conducting DPIAs for high-risk AI systems, deployers must integrate information provided by the developer. At the very least, this means deployers should use the developer-provided instructions for use, which often outline the AI system’s operational parameters, limitations, and specific conditions for safe use. This information is critical for understanding how the system might impact data subjects and for identifying appropriate safeguards that align with GDPR’s accountability standards.

10.2.1.1 DPIA during the AI development process

The AI Act does not create a similar obligation for providers. That is, organizations developing high-risk AI systems are not obliged to incorporate into the DPIA any information they obtain from upstream providers. Nonetheless, those organizations are likely to need to carry out a DPIA themselves. This is because any processing of personal data in the training process is likely to meet the requirements identified above:

  • If such processing occurs, its goal is to create a system that, by definition, poses a high risk to the rights and freedoms of the natural persons affected by the system.4 As such, the risk criterion is likely to be met for the training process.
  • The training of an AI system is, at least for the time being, an operation involving novel technologies. In the future, when techniques for training AI mature enough, this might no longer be the case.
  • Many AI systems are trained with the use of personal data. Whenever that is the case, the training might fall within the scope of the GDPR, as seen in Section 6.3.

Considering these factors, a provider developing an AI system will likely need to conduct a DPIA before it can commercialize that system or put it into service. As it does so, it might benefit from the information made available in its own technical documentation. Additionally, it might want to use information obtained from its own providers. For example, InnovaHospital might want to refer to ChatGPT’s documentation as it assesses a chatbot that uses this model. Doing so will allow an organization to see the bigger picture of risks associated with a system.

Likewise, developers of general-purpose AI models trained on personal data would do well to carry out a DPIA before placing their products on the market. If a general-purpose model has systemic risk, it has the potential to impact fundamental rights at a large scale. Therefore, its training is a textbook example of the kind of risky processing with novel technologies covered by the DPIA requirement. Even for models that fall short of the technical threshold for systemic risk, the level of risk might still be high enough. This is the case especially if a model relies on special categories of personal data. Hence, a DPIA is not an obligation just for the organizations deploying AI systems and models, but also for the ones creating them.

10.2.2 Other impact assessment reports

In the broader context of corporate social responsibility, businesses are often encouraged (by industry associations, consumers, and other stakeholders) to carry out human rights impact assessments (HRIA) of their AI solutions. In a more binding fashion, the AI Act obliges some deployers of high-risk AI systems to carry out a fundamental rights impact assessment (FRIA). Because these assessments require an extensive evaluation of the AI system in question, completing them demands resources. In the rest of this section, we will examine those requirements.

A FRIA is required under the AI Act before the initial deployment of certain high-risk AI systems. As specified in the AI Act, a FRIA is required if the deployer of the high-risk AI system is governed by public law, or if it is a private entity carrying out public services. For example, the university UNw would likely be required to carry out a FRIA for its high-risk AI, as a public university. This kind of impact assessment is also required of two types of private actors carrying out private functions:

  • Those using AI for evaluating the creditworthiness of natural persons or establishing their credit score (except systems used for detecting financial fraud).
  • Those using AI for risk assessment and pricing in life and health insurance.

Because those two applications are themselves listed as high-risk in Annex III AI Act, any AI system used for those purposes requires a FRIA.

If a FRIA is needed, it must include certain kinds of information:

- a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose; 
- a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; 
- the categories of natural persons and groups likely to be affected by its use in the specific context; 
- the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to the previous point, taking into account the information given by the provider pursuant to Article 13; 
- a description of the implementation of human oversight measures, according to the instructions for use; 
- the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms. 
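For organizations that want to track these elements systematically, the list can be turned into a simple record structure, as sketched below; the class and field names are illustrative assumptions, and the sketch does not replace the assessment itself.

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Illustrative structure mirroring the elements listed above."""
    deployer_processes: str            # processes in which the high-risk system will be used
    period_and_frequency: str          # intended period and frequency of use
    affected_categories: list          # natural persons and groups likely to be affected
    specific_risks: list               # risks of harm to those categories, using provider information
    human_oversight_measures: str      # oversight measures, following the instructions for use
    mitigation_and_governance: str     # measures if risks materialise, incl. complaint mechanisms
```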

A careful read of the list above suggests a considerable overlap with the impacts on fundamental rights covered by a DPIA. To a lesser extent, the same can be said of HRIAs. While there is no single list of elements required by a HRIA, as methodologies are chosen based on business requirements, such assessments all cover the impact of AI systems on human rights, which include the fundamental rights outlined above.

It might be possible in some cases to offer a single report that covers all the points required by data protection law and those human rights-focused instruments. Even if that is not the case, much of the work done in the elaboration of the DPIA will be relevant for drafting those reports. Hence, the DPIA, the FRIA, and the myriad forms of HRIA should not be seen as competitors, but as allies in the shared goal of producing trustworthy AI.

10.3 Pursuing fairness in AI technologies

Learning outcomes

By the end of this section, learners will be able to distinguish between legal and technical conceptions of fairness and identify the limits of algorithmic fairness metrics when pursuing compliance with the GDPR’s fairness principle.

Fairness is a critical concept for AI. Many of the problematic uses of AI technologies we discussed in Chapter 4 can ultimately be traced to the unfair impact that the use of AI has on individuals in those circumstances. Furthermore, Article 5 GDPR establishes fairness as one of the guiding principles of personal data processing. For all the widespread agreement that fairness matters, it can be exceedingly difficult to pin down how exactly it matters and what we should do about it. In this section, we will examine how to find the substance of the legal duties of fairness under the GDPR.

We will not examine here the definitions of fairness metrics. Learners interested in those technical details would be well-advised to consult the companion training module. Instead, we will engage with some factors that data protection professionals must consider when helping technical actors in the selection of metrics that are relevant for particular cases.

10.3.1 Different conceptions of fairness

When it comes to fairness in AI systems, we must deal with the overlaps and conflicts between different conceptions of fairness. From the perspective of data protection law, Article 8(2) of the EU Charter of Fundamental Rights establishes that everyone’s personal data must be processed fairly. This principle, as discussed in Section 6.4, can ultimately be interpreted as a requirement of trust (Roßnagel and Richter 2023, p. 268): if one is processing an individual’s personal data, they must do so in a way that warrants the trust of the data subject.

That is, it is not enough that an individual trusts the data controller, as they might do so for the wrong reasons. The conditions of processing must be such that the data subject’s rights and interests are not disturbed excessively or without justification. What that means in practice is not determined by data protection law itself, but by broader considerations, such as those relating to EU discrimination law (Weerts et al. 2023).

This view of fairness bears some relationship to how fairness is perceived in computer science but is ultimately distinct from it. From a computer scientist’s perspective, the legal—and ultimately philosophical—challenges of fairness become the technical problem of algorithmic fairness. A vast body of research has been dedicated to this problem over the past few years, focusing on evaluating whether and how a decision made by an AI system treats different data subjects equally.

Technical research on algorithmic fairness involves two separate tasks. First, one needs to propose a metric that formalizes what it means to be unfair, defining the concept in a way that can be given a mathematical treatment. Based on that formulation, it then becomes possible to measure the extent to which unfairness (under that definition) takes place in a concrete context and to evaluate whether proposed technical interventions increase or reduce that unfairness (Weinberg 2022). By implementing such techniques, providers and deployers of AI systems can increase the fairness of their data processing operations, in line with the spirit of the law.
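To make the idea of a fairness metric concrete, the sketch below computes one widely used formalization, the demographic parity difference, over a toy set of decisions; the data and the choice of this particular metric are illustrative assumptions, not a claim that it satisfies the GDPR’s fairness principle.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in favourable-decision rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: decisions for ten individuals split across two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])  # 1 = favourable decision
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected-attribute groups
print(f"demographic parity gap: {demographic_parity_difference(y_pred, group):.2f}")
# 0.60 - 0.20 = 0.40
```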

However, the difference between legal and technical conceptions of the fairness problem has practical implications. As recent interdisciplinary studies point out (such as Wachter et al. 2021, Weerts et al. 2023), EU law understands fairness and non-discrimination in a way that is both highly contextual and somewhat different from how those concepts are treated in the United States, where a considerable part of technical research on AI fairness takes place. The contextual character of fairness makes it difficult to capture in formal terms that can be implemented in a computer, precluding full automation of fairness checks. The legal differences between the EU and the US, in turn, mean that many of the metrics proposed for algorithmic fairness do not tackle the problems that EU law requires controllers to address. As a result, one should be careful when using fairness metrics as a tool to evaluate a system.

This is not to say that algorithmic fairness studies are of no value from the perspective of data protection compliance. On the contrary: some of these metrics capture important aspects of the phenomenon, and so they suggest ways to make a system fairer. If one is aware of the limitations of the tools, it should be possible to use them in a fruitful way. Additionally, the fairness-promoting measures suggested by that body of research might be adapted to better suit EU law requirements. Indeed, the studies mentioned above are part of a growing literature that suggests how to use metrics that are designed with the European context in mind.

10.3.1.1 Technical limits of algorithmic fairness

Beyond legal problems, the pursuit of algorithmic fairness can also be criticized on technical grounds. The first, and sometimes most salient, critique is that algorithmic fairness might be a problem that is impossible to solve. In an early paper in the field, Jon Kleinberg and co-authors (2017) showed that, except in some very narrow cases, it is impossible to find a solution that simultaneously satisfies some of the most widely accepted definitions of algorithmic fairness. Given that such metrics are thought to describe some aspect of what fairness really means, this result suggests that algorithmic fairness cannot be fully achieved and that the best we can hope for is some kind of trade-off.
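To give a flavour of the result, the three conditions at stake can be restated informally for a risk score $s(x) \in [0,1]$, an outcome $y \in \{0,1\}$, and two groups $A$ and $B$ (a simplified paraphrase, not the authors’ exact formulation):

```latex
\begin{align*}
\text{Calibration within groups: } & \Pr\big(y = 1 \mid s(x) = v,\ \text{group}\big) = v
  \quad \text{for every group and score value } v,\\
\text{Balance for the negative class: } & \mathbb{E}\big[s(x) \mid y = 0,\ A\big]
  = \mathbb{E}\big[s(x) \mid y = 0,\ B\big],\\
\text{Balance for the positive class: } & \mathbb{E}\big[s(x) \mid y = 1,\ A\big]
  = \mathbb{E}\big[s(x) \mid y = 1,\ B\big].
\end{align*}
```

Kleinberg and co-authors show that all three conditions can hold at once only in degenerate cases, namely when prediction is perfect or when the two groups have identical base rates for the outcome.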

The existence of this impossibility result has not stopped research in algorithmic fairness. In fact, researchers have devised numerous ways to try and square the circle of algorithmic fairness. Some (e.g. Beigang 2023) have proposed modifications to the criteria, changing their formulation so that they are no longer incompatible but still capture the underlying intuitions about what fairness means. Others have proposed that one or more of the alternatives considered by Kleinberg and co-authors must be abandoned, potentially in favour of metrics that capture what fairness is truly about. Still, what these metrics pursue is the fair treatment of individuals vis-à-vis others, not the fair processing of an individual’s personal data, which is what data protection law is concerned with.

Future legal guidance might help data controllers in choosing how to approach this problem in practice. Until such guidance comes along—for example, in the form of the harmonized technical standards discussed in Section 13.1—providers and deployers of AI technologies should be aware that the choice of fairness metrics can be controversial.

Another problem that is usually raised about algorithmic fairness is its unitary approach. That is, algorithmic fairness research often tries to find a single set of conditions that will tell us whether something is fair (Beigang 2022). This is not necessarily a good reflection of the world, as people might have well-grounded but still diverging criteria of what fairness requires. In fact, one might say that many political disputes are precisely disputes about what is fair. So, the decision to follow one specific view of fairness might always be questioned by those for whom that view is unacceptable.

In addition, it has been suggested that viewing fairness as a single set of criteria blurs important distinctions. In a 2022 article, the philosopher Fabian Beigang argues that unfairness can emerge in two different moments when AI is used in decision-making processes. First, the prediction generated by the AI system itself might be unfair, as is the case if the system produces discriminatory outputs. Second, unfair treatment might happen when the algorithmic output is used to allocate resources. For example, an unbiased facial recognition model might still be used for supporting discriminatory decision-making, such as policies that segregate people from a specific ethnic background. By looking at those two issues separately, an organization might have more clarity about the fairness issues that its use of AI can create.
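One way to operationalize this distinction is to measure disparities at the two stages separately: first in the model’s predictions, then in the decisions actually taken on their basis. The toy sketch below illustrates the idea; the data, the simple rate-gap measure, and the function name are illustrative assumptions rather than Beigang’s formal criteria.

```python
import numpy as np

def positive_rate_gap(outcomes: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap between two groups in the rate of favourable outcomes."""
    return abs(outcomes[group == 0].mean() - outcomes[group == 1].mean())

# Toy data: what the model predicted vs what was actually allocated afterwards.
group       = np.array([0, 0, 0, 0, 1, 1, 1, 1])
predictions = np.array([1, 1, 0, 0, 1, 1, 0, 0])  # model flags half of each group
allocations = np.array([1, 1, 0, 0, 1, 0, 0, 0])  # but benefits mostly go to group 0

print("prediction-stage gap:", positive_rate_gap(predictions, group))  # 0.0
print("allocation-stage gap:", positive_rate_gap(allocations, group))  # 0.25
```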

10.4 Conclusion

We can summarize our previous discussion as follows:

  • Technical documentation provides organizational memory, as it registers what decisions were made in the development process, the alternatives that were considered, and the outcome of debates.
    • There are several types of technical documents, each aimed at a certain audience that requires a particular level of detail.
    • Some best practices in software documentation can be used to ensure that the documents are sufficiently informative.
    • When elaborating those documents, one should take care to record decisions about data protection requirements, as well as those that might be relevant for understanding issues later.
  • Many uses of AI technologies with personal data might require a data protection impact assessment, but not all of them.
    • DPIAs might be needed both during the development and for the deployment of an AI system. At each point, they might require distinct types of information, but the overall aim is the same: paying proper attention to how the use of AI can impact the rights, liberties, and interests of others.
    • DPIAs coexist with several types of impact assessments, such as those related to human and fundamental rights. When there is a substantive overlap between reporting requirements, organizations might avoid rework by integrating the contents of different reports.
  • Fairness is a complex concept, which is not exhausted by the technical formulation of algorithmic fairness.
    • Some technical definitions of fairness are incompatible with one another.
    • Some legal aspects of fairness are not well captured by formal representations.
    • Nonetheless, algorithmic fairness approaches can be useful for legal compliance if one pays attention to their limits.

In this unit, we have examined some of the mechanisms data protection law and the AI Act utilize to stimulate the fair and accountable use of AI technologies. Documents, such as the technical software documentation and the various kinds of impact assessments discussed above, can provide a paper trail that is fundamental for justifying and evaluating why a system functions in a certain way. Part of that assessment is likely to deal with whether the design and use of the system reflect the GDPR principle of fair processing, and a gap in accountability might itself be something that makes processing unfair. Therefore, fairness and accountability are intricately connected in AI systems.

Exercises

Exercise 1. What best practice ensures documentation serves both technical and nontechnical audiences?

  • a. Excluding technical details entirely.
  • b. Structuring it based on compliance requirements alone.
  • c. Having distinct kinds of documents tailored to specific audiences in language and content.
  • d. Restricting access to documentation to only technical staff.
  • e. Prioritizing technical details over other aspects.

Exercise 2. What must DigiToys include in a DPIA for their smart toy that uses personal data?

  • a. A description of processing operations and potential risks to children’s privacy.
  • b. A marketing strategy for promoting the toy.
  • c. An assurance that users cannot misuse the toy.
  • d. A detailed plan for making the source code public.
  • e. A section excluding GDPR applicability to AI systems.

Exercise 3. Why is it important to analyse both prediction fairness and allocation fairness in AI systems?

  • a. It ensures AI systems are free from biases during development.
  • b. It simplifies the evaluation process by focusing on a single metric.
  • c. It guarantees compliance with all GDPR principles.
  • d. It minimizes risks to fairness in public sector applications only.
  • e. It helps organizations address fairness issues at distinct stages of decision-making.

Exercise 4. How does thorough documentation support both DPIAs and fairness evaluations?

  • a. It replaces the need for impact assessments and fairness testing.
  • b. It provides a record of decisions and processes, aiding transparency.
  • c. It ensures that only technical teams handle AI systems.
  • d. It eliminates bias in AI training data.
  • e. It ensures high-risk systems are not deployed.

Exercise 5. How can UNw address both fairness and accountability when using AI to assess student performance?

  • a. By tailoring fairness metrics to reflect the diverse needs of students.
  • b. By replacing human decision-makers with AI systems, which are not biased and have their decision processes registered by automated logging.
  • c. By eliminating all personal data from the AI system.
  • d. By documenting the development process and selection of algorithms.
  • e. By focusing solely on legal compliance without considering ethical dimensions.

Prompt for reflection

Data Protection Impact Assessments (DPIAs) are a core tool for assessing risks under GDPR, but fairness is often a less tangible concept that is hard to measure. Reflect on how DPIAs can incorporate fairness considerations effectively.

10.4.1 Answer sheet

Exercise 1. Alternative C is correct. There must be a balance between accessibility and other relevant aspects, such as comprehensiveness and legal relevance. That balance is likely to be reached by having a variety of documents for different audiences, rather than a one-size-fits-all approach.

Exercise 2. Alternative A is correct. Alternative C might be proposed as a measure to address risks, but it is not strictly necessary for a DPIA. Likewise, open-source disclosure is not required.

Exercise 3. Alternative E is correct. Fairness in AI systems is not restricted to bias, or to the public sector. The separation of these two kinds of fairness complicates evaluation but allows for the diagnosis of issues that happen in different moments of the life cycle.

Exercise 4. Alternative B is correct.

Exercise 5. Alternative D is correct. Alternative A promotes fairness alone, while alternative E is likely to produce unfair decisions. Alternative C is not always feasible or necessary. Finally, alternative B overlooks the possibility of algorithmic bias and the potential role of human oversight.

References

Marco Almada and Nicolas Petit, ‘The EU AI Act: Between the Rock of Product Safety and the Hard Place of Fundamental Rights’ (2025) 62 Common Market Law Review.

Jens Ambrock and Moritz Karg, ‘Art. 35. Data Protection Impact Assessment’ in Indra Spiecker gen. Döhmann and others (eds), General Data Protection Regulation: Article-by-article commentary (Beck; Nomos; Hart Publishing 2023).

Christoph Bartneck and others, ‘Trust and Fairness in AI Systems’ in Christoph Bartneck and others (eds), An Introduction to Ethics in Robotics and AI (Springer 2021).

Fabian Beigang, ‘On the Advantages of Distinguishing Between Predictive and Allocative Fairness in Algorithmic Decision-Making’ (2022) 32 Minds and Machines 655.

Fabian Beigang, ‘Reconciling Algorithmic Fairness Criteria’ (2023) 51 Philosophy & Public Affairs 166.

Madalina Busuioc, Deirdre Curtin and Marco Almada, ‘Reclaiming Transparency: Contesting the Logics of Secrecy within the AI Act’ (2023) 2 European Law Open 79.

Giovanni de Gregorio and Pietro Dunn, ‘The European risk-based approaches: Connecting constitutional dots in the digital age’ (2022) 59 Common Market Law Review 473.

Margot E Kaminski, ‘Regulating the Risks of AI’ (2023) 103 Boston University Law Review 1347.

Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan, ‘Inherent Trade-Offs in the Fair Determination of Risk Scores’, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017) (Schloss Dagstuhl–Leibniz-Zentrum für Informatik 2017).

Eleni Kosta, ‘Article 35. Data Protection Impact Assessment’ in Christopher Kuner and others (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020).

Alessandro Mantelero, Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI (Springer Nature 2022).

Claudio Novelli and others, ‘AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act’ (2024) 3 Digital Society 13.

Alexander Roßnagel and Philipp Richter, ‘Art. 5. Principles relating to processing of personal data’ in Indra Spiecker gen. Döhmann and others (eds), General Data Protection Regulation: Article-by-article commentary (Beck; Nomos; Hart Publishing 2023).

Jonas Schuett, ‘Risk Management in the Artificial Intelligence Act’ [2023] European Journal of Risk Regulation FirstView.

Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-Discrimination Law and AI’ (2021) 41 Computer Law & Security Review 105567.

Alina Wernick, ‘Impact Assessment as a Legal Design Pattern—A “Timeless Way” of Managing Future Risks?’ (2024) 3 Digital Society 29.

Hilde Weerts and others, ‘Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or Why the Law Is Not a Decision Tree’, 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM 2023).

Lindsay Weinberg, ‘Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches’ (2022) 74 Journal of Artificial Intelligence Research 75.


  1. See, among others, Busuioc et al. (2023).

  2. See Section 6.1.

  3. Learners working in law enforcement should also consider Article 5(1)(h) AI Act.

  4. Which are not necessarily the same persons whose data is being processed.