11 Transparency towards Stakeholders
By the end of this chapter, learners will be able to:
- distinguish between the different stakeholders to whom organizations are obliged to provide information about their use of AI;
- break down the different informational needs of those stakeholders and the types of information that must be provided; and
- propose compliance approaches that ensure the information being provided is fit for purpose.
Both the GDPR and the AI Act require the providers and deployers of AI systems to disclose various kinds of information to stakeholders. The specific kinds of information that must be disclosed under each legal instrument will depend on the legal classification given to each actor. For example, a business that deploys an AI system will likely be classified as a data controller under the GDPR for the data processed by that system, and as such it will be subject to certain transparency requirements. At the same time, it will likely be classified as a deployer under the AI Act, and thus subject to the requirements that apply to this kind of actor, as discussed in Section 6.2. Still, an organization cannot disclose information if it does not have access to it in the first place.
Regulatory authorities have considerable powers to request information from regulated actors, both under the GDPR and the AI Act. The persons affected by the use of an AI system also have some rights to receive information about its operation, further examined in Section 8.2. And the general public has a limited right to obtain information about some kinds of AI systems, as Article 49 AI Act mandates the registration of high-risk AI systems in a publicly available database. Additionally, Article 50 AI Act requires providers and deployers of AI systems to disclose when they are using AI for certain applications, such as interaction with humans or the generation of artificial content, especially when that content cannot be distinguished from authentic content. As these examples show, the regulation of AI in the EU places a high value on diverse kinds of transparency.
This chapter discusses three forms of disclosure that are both necessary and complicated in contexts involving AI:
- Section 11.1 discusses legal duties to disclose information to regulatory authorities.
- Section 11.2 examines whether and how the developer of an AI system or model must disclose information about it to downstream developers who might want to use it in their own systems.
- Section 11.3 evaluates current approaches for technical AI transparency from the perspective of whether they can support compliance with data protection duties.
11.1 Disclosure duties towards public bodies
By the end of this section, learners will be able to adapt technical and organizational practices regarding information to ensure that an organization can provide meaningful information upon request by data protection authorities.
A key element of data protection enforcement is that regulators have substantial investigative and corrective powers. Under the GDPR, a supervisory authority can order controllers and processors to provide “any information it requires for the performance of its tasks”, and it can carry out investigations in the form of data protection audits. Those and other investigative powers remain in force when AI systems are used to process personal data. They also apply when personal data is used in the training of AI systems and models. As such, data controllers and processors need to store and keep up to date the kind of information that a DPA would require to investigate the AI system or model in question.
When it comes to high-risk AI systems and general-purpose AI models, the AI Act adds more detail both to the kind of information that needs to be stored and to the powers of supervisory authorities. It grants market surveillance authorities the power to obtain access to documentation, data sets, and even the source code of high-risk AI systems in certain cases. Also, as we shall see below, providers and deployers of high-risk systems and general-purpose AI models are required to keep some information for the purpose of compliance. Therefore, the AI Act reinforces the GDPR’s overall approach of obliging organizations to provide extensive support to regulators in their supervisory duties.
11.1.1 Confidentiality as a condition and a limit for disclosure
Because the information that must be made available to regulators is extensive, its disclosure is potentially disadvantageous for organizations. One concern is that, if that information were to become public, people would be able to subvert or otherwise manipulate the AI systems or models. For example, if UNw adopts an algorithm to detect cheating in university examinations, a student who knows how that algorithm works might devise a means to avoid detection. This risk of gaming is often invoked by public sector authorities as a reason why some aspects of algorithm design in domains such as fraud risk assessment cannot be made public.
The disclosure of information relating to an AI system or model can have consequences even if the information is not used against the system or model itself. For example, a pricing model developed for a business will likely be developed and trained from information that is available to that business, and it will reflect elements of its commercial strategy. A competitor that replicates the model might benefit from those insights and the business’s technical work at a fraction of the cost. It might also be able to extract business secrets from the model. To avoid such risks, public and private organizations alike use the various strategies for opacity discussed in Section 4.3.
Acknowledging those concerns, both the GDPR and the AI Act feature mechanisms to balance the regulators’ need for information against the data controllers’ need for secrecy. Article 58(4) GDPR stipulates that the exercise of regulatory powers by data protection authorities must be accompanied by “appropriate safeguards”, which include effective judicial remedy. More specifically, Article 54(2) GDPR binds the staff members of those authorities to a duty of professional secrecy with regard to confidential information they receive during their work, which continues to apply even after the end of the staff member’s term of office. The AI Act likewise requires that all regulatory authorities observe a duty of confidentiality, with special attention to the protection of intellectual property rights, trade secrets, and public and national security interests. The result is a system in which the information shared by public and private controllers and processors is protected against leaks from the DPA.
The other side of this elevated level of protection is that data controllers and processors are expected to be forthright when they release information to the supervisory authority. A failure to keep the information that is necessary to understand how a system processes data, or to supply it to authorities on request, can itself lead to sanctions, in addition to any sanctions that might arise from a potential GDPR breach. In the rest of this section, we will discuss what kinds of information must be provided in this context.
11.1.2 Information that must be made available to the authorities
In Part II, we covered a variety of data protection issues that can emerge from the development and use of AI technologies. Addressing those risks falls within the remit of data protection authorities. This means that the authorities will need access to information that allows them to identify how a particular data processing operation can harm data subject rights. They will also need the contextual detail to understand what kind of technical intervention is desirable: should the DPA order the data controller to pursue a technical fix? Mandate certain organizational measures? Or stipulate that the system cannot be salvaged at all? To arrive at those decisions, a supervisory authority needs to consider the issues that can emerge at each step of an AI technology’s life cycle.
The first thing that must be said about those requirements is that they do not mandate any specific type of document. If an organization provides the information needed by the supervisory authority, it can do so in any form. Meeting the GDPR’s requirements, or even the AI Act’s, does not mean that an organization needs to forsake agile software practices for a waterfall model. What it does require is that organizations take care regarding the substance and the validity of the information contained in the documents.
Regarding validity, an organization must make sure that the documents reflect the version of the system that it actually uses. Otherwise, even comprehensive documentation might be misleading.
One type of documentation issue an organization wants to avoid is a failure to describe safeguards that are in place. Consider a scenario in which DigiToys fails to mention that they adopted a tool for anonymizing some of the data they collect from children. This omission creates issues for the company, which will be expected to adopt safeguards for data that is not actually personal data. It also prevents adequate scrutiny of the system, as it does not provide information that is needed to evaluate whether the anonymization techniques are suitable for their purpose. The result is a scenario in which the documents offer an incomplete, and perhaps misleading, guide to the system.
Documentation might also be misleading if it is not accurate regarding the details of the system. For example, suppose the documents for one of InnovaHospital’s automated diagnosis tools fail to mention a change to the model used to power the system’s functionalities. If that happens, the data protection authority might end up requiring that the organization adopt safeguards that are not relevant for the current model. Keeping documentation up to date is not just an exercise of checkbox compliance, but something that can help organizations in understanding the technical and legal risks they face.
As for the contents of the documents, they will depend on the techniques being used to produce an AI system or model, and on the context in which that system or model is sold or used. An organization would do well to write down the analyses it conducts at the various stages covered in this training module: what issues it found, how it measured those issues, what it did to address them, and what residual risks remain. Recording those factors not only allows an organization to demonstrate its due diligence, but also allows later scrutiny of its decisions.
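As a purely illustrative sketch, such analyses could be recorded in a structured, machine-readable form, which makes them easier to keep up to date and to retrieve when a supervisory authority requests them. The structure, field names, and example values below are our own assumptions, not a format prescribed by the GDPR or the AI Act.

```python
# Illustrative sketch only: the fields and values are assumptions,
# not a documentation format required by the GDPR or the AI Act.
from dataclasses import dataclass, asdict, field
from datetime import date
import json


@dataclass
class RiskAnalysisRecord:
    """One documented analysis for one stage of the AI life cycle."""
    stage: str              # e.g. "data collection", "training", "deployment"
    issue: str              # the issue that was identified
    measurement: str        # how the issue was measured
    mitigation: str         # what was done to address it
    residual_risk: str      # what risk remains after mitigation
    system_version: str     # the version of the system the analysis refers to
    recorded_on: str = field(default_factory=lambda: date.today().isoformat())


# Hypothetical entry for UNw's plagiarism-detection system.
record = RiskAnalysisRecord(
    stage="training",
    issue="Non-native speakers flagged at a higher rate than other students",
    measurement="False-positive rates compared across language groups",
    mitigation="Rebalanced training data; added human review of all flags",
    residual_risk="Small remaining gap in false-positive rates",
    system_version="2.3.1",
)

print(json.dumps(asdict(record), indent=2))
```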
When writing down that information, organizations might benefit from following best practices in software documentation. As the Write the Docs hub of software documentation recommends, the contents of good software documents should:
- Avoid repeating information that is available in other sources, such as the software code, unless some degree of repetition is beneficial for understanding.
- Keep in mind that readers tend to skim the documentation for useful examples and quick answers before reading it in depth.
- Be consistent with other sources in language and format.
- Be correct and reflect the current state of the software; incorrect documentation is worse than nothing.
Finally, those documents should be drafted in a way that allows their readers to find the information contained in them. Burying information amid the documentation runs against the spirit of those disclosure requirements and can easily become a resource drain for the supervisory authority and for the supervised organization itself. As such, it should be avoided both for its practical wastefulness and for the risk of sanctions for non-compliance.
11.2 Disclosure duties towards downstream developers
By the end of this section, learners will be able to outline when and why the developers of AI models and systems are obliged to supply information to other actors who want to incorporate those products into their own systems.
Any AI system, no matter how small it is, is the product of a complex value chain. As we have seen in Part II, the creation and use of AI involves a variety of technical steps, and often relies on models and other components developed by third parties. This means that the actors at the end of this value chain do not always have visibility into the inner arrangements of the components they use. For example, if InnovaHospital decides to use a ready-made large language model to create a chatbot, the company supplying that model is unlikely to grant full access to the model’s configuration. Nonetheless, the hospital would still be responsible for the data processing it controls.
Data protection law and the AI Act both feature some mechanisms to address the potential information gaps ensuing from this situation. They do so by requiring that organizations supplying AI models and other components disclose some information about those components to the actors that incorporate them into their own systems. Data protection officers (DPOs) overseeing AI-driven initiatives should be aware of these legal requirements to safeguard user rights and meet regulatory standards.
11.2.1 Supply chain disclosure under the GDPR
Under the GDPR, developers of AI systems, when acting as data processors, must support downstream data controllers in fulfilling their obligations to respond to data subject rights requests. According to Article 28(3), a data controller that hires a processor to carry out a task must lay down by contract (or other legal act) the conditions under which that processing will take place. This includes the need to adopt safeguards.
Consider a situation in which the university UNw decides to hire a contractor to develop a plagiarism detection system for its exams. Not only will the university retain its responsibilities as a data controller, but it will also need to specify safeguards that must be followed by the contractor. Those safeguards might include technical measures, such as those discussed in the next unit of this training module. But any controller would do well to require the processor to supply some information that might be essential for the controller’s own compliance with legal requirements. They might also consider establishing protocols for communication between the organizations, to ensure smooth investigation of any future issues.
The GDPR likewise requires an explicit division of competences in cases of joint controllership. Under Article 26(1) GDPR, joint controllers must clearly define their respective responsibilities for compliance. For instance, a healthcare AI model provider working jointly with InnovaHospital to process patient data must determine who will be responsible for communicating the data collection and usage terms to patients, ensuring that both parties uphold the GDPR’s transparency requirements. In such an arrangement, a controller might be able to avoid sharing certain information with its joint controllers. It cannot do so, however, at the expense of the information that must be supplied to data subjects and regulators.
11.2.2 Additional requirements under the AI Act
In the context of high-risk AI systems under the AI Act, further disclosure requirements apply. Article 25 stipulates that if a downstream actor repurposes a high-risk AI system, thereby becoming its new provider, the original provider is partially relieved of compliance obligations. However, the original provider must still cooperate by providing essential information about the AI system to help the new provider meet regulatory standards.
For example, if a financial institution repurposes a high-risk AI system initially developed for fraud detection to assess credit risk, the original developer must share information on the model’s intended capabilities, limitations, and risks to support proper usage. Still, it is the financial institution that will be responsible for ensuring that the system complies with the applicable legal requirements when it is used for credit risk assessment.
For general-purpose AI models, Article 53 imposes a broader obligation to supply documentation and information. That documentation and information must be kept up to date and made available to providers who intend to use the model in their own systems. The minimum content of that disclosure is specified in Annex XII:
1. A general description of the general-purpose AI model including:
(a) the tasks that the model is intended to perform and the type and nature of AI systems into which it can be integrated;
(b) the acceptable use policies applicable;
(c) the date of release and methods of distribution;
(d) how the model interacts, or can be used to interact, with hardware or software that is not part of the model itself, where applicable;
(e) the versions of relevant software related to the use of the general-purpose AI model, where applicable;
(f) the architecture and number of parameters;
(g) the modality (e.g. text, image) and format of inputs and outputs;
(h) the licence for the model.
2. A description of the elements of the model and of the process for its development, including:
(a) the technical means (e.g. instructions for use, infrastructure, tools) required for the general-purpose AI model to be integrated into AI systems;
(b) the modality (e.g. text, image, etc.) and format of the inputs and outputs and their maximum size (e.g. context window length, etc.);
(c) information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies.
As we have seen in the previous chapters, a data controller will need this kind of information to carry out their various duties. Without the kind of information listed in Point 2 above, a controller will be unable to assess whether the use of that model poses specific risks in the intended context or supply meaningful information about the AI system. Therefore, this AI Act requirement supports compliance with data protection requirements, regardless of the risk level of the application in which the model is used.
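As a purely illustrative sketch, the Annex XII items listed above could be captured in a structured, machine-readable record, which makes it easier to keep the documentation up to date and to hand it over to downstream providers on request. The field names and example values below are our own assumptions; the AI Act prescribes the content of the disclosure, not any particular format.

```python
# Illustrative sketch: a machine-readable record mirroring the Annex XII
# minimum content listed above. Field names and example values are assumptions.
general_purpose_model_documentation = {
    "general_description": {
        "intended_tasks": "General-purpose text generation",
        "integration_targets": "Chatbots, summarization tools",
        "acceptable_use_policy": "https://example.org/aup",  # placeholder URL
        "release_date": "2025-01-15",
        "distribution_methods": ["API", "on-premise license"],
        "external_interaction": "Can call external tools via a plugin interface",
        "relevant_software_versions": ["inference-server 1.4"],
        "architecture": "Decoder-only transformer",
        "parameter_count": 7_000_000_000,
        "modalities": {"input": ["text"], "output": ["text"]},
        "license": "Proprietary, non-exclusive",
    },
    "development_process": {
        "integration_requirements": "Instructions for use; GPU inference infrastructure",
        "io_format_and_limits": {"context_window_tokens": 8192},
        "training_data": {
            "types": ["licensed text corpora", "publicly available web text"],
            "provenance": "Documented per data source",
            "curation": "Deduplication and toxicity filtering",
        },
    },
}

# A downstream provider could check that the training-data information it needs
# for its own compliance assessment is actually present before integration.
assert "training_data" in general_purpose_model_documentation["development_process"]
```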
11.3 Technical disclosure and the right to an explanation
By the end of this section, learners will be able to distinguish between the various interpretations given to the “right to an explanation” in legal scholarship and practice.
One of the distinctive features of EU data protection law is that it grants a variety of rights to data subjects. To implement these rights, the GDPR creates a series of obligations for the data controllers who process data pertaining to those subjects. Those obligations remain valid when processing is done by an AI system, and they are supplemented by some additional requirements laid down in the AI Act. In this section, we will discuss what kinds of information data controllers must disclose to data subjects about their use of AI.
In terms of timing, it is important to distinguish between two moments of disclosure. Data controllers must disclose information about processing done by an AI system, whether the data has been collected from the data subject or obtained from elsewhere. Additionally, data subjects have the right to request access to their personal data being processed and to information about that processing.
Because those rights are connected to actual processing operations, they must be exercised against the controller of that processing. That is, the data controller(s) for processing during the training stage will supply information about personal data used to train the AI system, while the data controller(s) of the deployed system will supply information about its use in each context.1 In that regard, AI systems are treated just like any other form of processing.
What is unique about disclosure duties, when AI systems are involved, is the so-called “right to an explanation.” Recital 71 GDPR mentions that data subjects should have, at least, the right to an explanation of automated decisions, but such a right does not appear in Article 22 GDPR. As a result, there was considerable controversy about whether such a right exists. That controversy has now largely been settled by the decision of the European Court of Justice in Case C-203/22 (Dun & Bradstreet Austria), in which one of the referred questions dealt precisely with the extent of the right to an explanation.
The dominant view among academics (see, e.g., Kaminski 2019) and data protection authorities (see Vale and Zanfir-Fortuna 2022) is that such a right can be grounded in the right of access to “meaningful information” about the logic of automated decision-making under Article 15(1)(h) GDPR. This right does not apply to all AI systems, but it applies “at least” in cases of automated decision-making under Article 22, which are often carried out with AI. Therefore, at least some AI systems are subject to this rule.
The clause “at least in those cases” in Article 15(1)(h) GDPR suggests that a data controller might have an obligation to disclose “meaningful information about the logic involved”, as well as “the significance and envisaged consequences of processing”, even when processing does not qualify as automated decision-making. In a more restricted reading, one could understand this clause to merely state that data controllers can disclose that kind of information in other contexts. While this is certainly true, this possibility becomes an obligation in some cases.
Recently, the European Court of Justice broadened the understanding of “automated decision-making” under Article 22 GDPR. In Case C-634/21 (Schufa), it ruled that a credit score calculated from personal data could be considered “automated individual decision-making” when a third party receives that score and draws strongly on it to establish, implement, or terminate a contractual relationship. That is, an AI system (or any other form of data processing) that strongly influences a decision can be covered by Article 22 even if a human theoretically has a say in the process.
Additionally, Article 86 AI Act establishes that any affected person subject to a decision taken on the basis of the output of a high-risk AI system, which produces legal effects or similarly significantly affects that person, has the right to obtain “clear and meaningful explanations” of the role the AI system plays in the decision and of the main elements of the decision taken. This right has been designed as a safeguard for cases that are not covered by the GDPR’s right to an explanation, and it requires a narrower form of disclosure: the deployer does not need to explain the logic guiding the decision, just the role of AI and the contents of the decision itself.
11.3.1 The concept of “meaningful information” about an AI system’s decision logic
The determination of what counts as “meaningful information” under the GDPR is necessarily contextual. That is because access to that information is a data subject right, and as such it must be approached from the subject’s perspective. The information provided about the decision logic must allow the subject to make sense of the processing and of how it affects their rights, liberties, and interests. It must give data subjects the grounding to decide whether to exercise their other rights, such as the right to contest an automated decision (Bayamlioglu 2022). If an explanation is to be successful in that aim, it must meet certain formal and substantive requirements.
On the formal side of things, an explanation must be presented in a way that a data subject can understand. But data subjects come from a variety of backgrounds. The average individual cannot be expected to have the time or the technical competence to understand technical explanations, so disclosing model parameters or a system’s source code will not contribute to their understanding of the system. On the other hand, a technically savvy individual, or a person working with a civil society organization, might have the resources for a more in-depth exploration of technical issues. Such readers will likely be unsatisfied with an explanation directed at laypersons.
Given this broad range of data subject capabilities, organizations would do well to follow a multi-layered approach to disclosure (Kaminski and Malgieri 2021). Doing so would entail preparing information that can be digested at various levels of complexity, and supplying that information according to data subject needs, on request. This ensures that data subjects who need basic information are not smothered in technical detail, while other data subjects can dig deeper, within the limits of their rights.
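A minimal sketch of such a multi-layered approach appears below. The layer names, audiences, and explanation texts are invented for illustration; they are not prescribed by the GDPR or by the literature cited above.

```python
# Minimal sketch of multi-layered disclosure. Layer names and contents
# are assumptions for illustration, not a legally prescribed structure.
LAYERED_EXPLANATION = {
    "layperson": (
        "Your application was assessed automatically using your income, "
        "credit history and recent transactions. A human reviews the result."
    ),
    "informed": (
        "A statistical model scores applications; the score is compared "
        "against a threshold set by the credit policy team, and borderline "
        "cases are routed to a human analyst."
    ),
    "expert": (
        "Model documentation, feature list, validation metrics and fairness "
        "audit are available on request, subject to trade-secret safeguards."
    ),
}


def explanation_for(audience: str) -> str:
    """Return the explanation layer suited to the requester, defaulting to the simplest."""
    return LAYERED_EXPLANATION.get(audience, LAYERED_EXPLANATION["layperson"])


print(explanation_for("layperson"))
```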
On the substantive side of things, the requirements are much less clear. The main question that is raised (see, e.g., Brkan and Bonnet 2020) is whether the disclosure of the “meaningful logic” behind an automated decision can happen without revealing the system’s inner workings. On a literal reading of the requirement, that seems to be the case. An abstract description of how the system produces its outputs from its inputs might be enough to give an actionable view of why things have been decided in one way and not in another. However, compelling arguments have been made, both by academics and data protection authorities, that more disclosure is needed.
11.3.2 Elements of meaningful information
In the Dun & Bradstreet Austria judgment, the Court of Justice of the European Union held that “meaningful information about the logic involved” refers to information about the procedure and the principles applied to obtain a particular result. The judgment offers some details on how to interpret that legal requirement, identifying measures that are insufficient to comply with it. However, both the technological and the legal aspects that are relevant for meaningfulness are moving fast. As such, the following discussion offers some considerations that will likely be relevant for any understanding of “meaningful information” under data protection law.
A primary component of disclosing “meaningful information” about an AI system is explaining the inputs that system considers as it produces its own outputs. For example, if an AI system assesses creditworthiness, it may consider inputs like income, credit history, and recent transactions. Communicating these inputs to data subjects provides them with a clearer understanding of the data influencing decisions about them, enabling them to verify the accuracy of their personal data and, if necessary, request corrections. This transparency also helps ensure that data processing complies with principles of fairness, as individuals can better understand how relevant information impacts the outcomes they receive.
In addition to disclosing inputs, data controllers should communicate how different inputs could lead to different outcomes. While it is not always feasible to explain complex AI model logic in detail, providing examples or scenarios can help illustrate how certain changes in input data might affect the AI system’s output. For instance, explaining that a credit assessment score could be different if income or employment status were updated gives individuals a practical sense of the decision-making logic, without delving into technical complexities. Such explanations are valuable, as they give individuals a tangible understanding of the system’s logic, particularly in contexts where AI may influence significant aspects of their lives. In Chapter 12, we discuss some technical measures that can support organizations in generating this kind of explanation.
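As a purely illustrative sketch, such a “what-if” comparison might look as follows. The scoring rule, feature names, and weights are invented for the example and do not reflect any real credit-scoring system; the techniques discussed in Chapter 12 would be used to derive comparable scenarios from an actual model.

```python
# Toy sketch of a "what-if" scenario for a credit assessment. The scoring
# rule, feature names and weights are invented for illustration only.
def toy_credit_score(income: float, months_employed: int, missed_payments: int) -> float:
    """Very simplified illustrative score: higher is better."""
    return 0.4 * (income / 1000) + 0.3 * months_employed - 5.0 * missed_payments


current = {"income": 2500.0, "months_employed": 6, "missed_payments": 1}
scenario = {**current, "months_employed": 18}  # what if employment were longer?

print("Current score: ", round(toy_credit_score(**current), 1))
print("Scenario score:", round(toy_credit_score(**scenario), 1))
# Showing the data subject that a longer employment record would have raised
# the score conveys the decision logic without exposing the full model.
```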
It is also crucial to clarify how the AI output impacts real-world decisions. Data controllers should indicate whether the AI system’s output is directly applied to make decisions or if it serves as a recommendation subject to human review. For example, an AI-driven hiring system may rank candidates based on qualifications, but a hiring manager makes the final selection. Distinguishing between direct and mediated applications of AI outputs helps individuals understand the role of human oversight in decision-making processes, fostering greater transparency about how their data is used.
While explainable AI methods and other technical means for transparency can assist in making these processes more transparent, they are not the sole solution. An explanation that covers these essential elements—inputs, potential outcome variability, and application context—is often sufficient to fulfil GDPR obligations without requiring deep technical detail. However, data controllers must balance this transparency with the need to protect trade secrets: their own secrets or those of the upstream providers from whom they acquire models and other components. Balancing the duty of disclosure with the need to respect those secrets can be a tricky challenge in practice. Still, it is a challenge organizations need to face in order to comply with data protection law when they use AI.
11.4 Conclusion
This chapter has covered many types of disclosure duties that are present in data protection law and the AI Act. The various forms of opacity discussed in Section 4.3 all come into play here, creating obstacles for deployers and providers of AI technologies. It is in the best interest of those organizations to adopt measures that secure the information they need to disclose. The bad news is that the disclosure obligations remain in force even though AI makes things much more complicated. The good news is that there are various measures that can contribute to disclosure.
A few of those are relevant to many, if not all, of the modes of disclosure we have considered above:
- Maintaining comprehensive and updated documentation of processes and decisions.
- Keeping in mind how the recipients of documents and other forms of disclosure will use the information and preparing it accordingly.
- Using documents for the clear definition of responsibilities throughout the supply chain.
- Relying on examples and context to make information more accessible.
- Relying on a multi-layered approach to disclosure, in which the same information can be presented in ways that are more accessible to each kind of stakeholder.
Each of these practices faces its own obstacles. For example, a multi-layered approach creates the challenge of ensuring that all forms of disclosure remain coherent with one another. Still, for the most part, the previous sections illustrate how disclosure remains possible even in a world where opaque AI is everywhere.
Exercises
Exercise 1. How can DigiToys prepare for supervisory authority audits of its high-risk AI systems?
- a. Encrypt all system data to avoid disclosure.
- b. Limit information sharing to product descriptions.
- c. Provide regulators access to physical facilities instead of documentation.
- d. Remove all data logs periodically to minimize risk.
- e. Ensure documentation includes details about datasets, models, and safeguards.
Exercise 2. How can UNw ensure transparency when using a pre-trained AI model for campus security?
- a. Require the vendor to provide details about the model’s training data and limitations.
- b. Avoid entering contractual agreements to preserve operational flexibility.
- c. Rely solely on the information that the AI vendor is obliged to disclose under the law.
- d. Invoke the public interest in campus security to avoid disclosing information.
- e. Keep compliance limited to internal use cases, avoiding broader disclosures.
Exercise 3. Which approach can help balance transparency between experts and laypersons?
- a. Providing every user with a highly detailed explanation of an AI system’s operation.
- b. Releasing information only to users who prove their technical competences.
- c. Multi-layered disclosure tailored to different audiences.
- d. Providing every user with explanations that are legible for a general audience.
- e. Including only examples of an AI system’s outputs without context.
Exercise 4. How can InnovaHospital comply with its transparency obligations towards patients when it uses an automated diagnosis tool?
- a. Disclose the source code of the AI-powered tool.
- b. Ensure that data subjects waive their right to receive an explanation.
- c. Keep the disclosure of how the AI system arrives at decisions limited to regulators.
- d. Clarify to patients the main elements of the logic behind the diagnosis procedure.
- e. Use complex jargon in disclosures to protect trade secrets.
Exercise 5. What is one similarity between GDPR and AI Act transparency obligations?
- a. Both require the publication of all AI models.
- b. Both emphasize the importance of up-to-date documentation.
- c. Both absolve organizations of trade secret concerns.
- d. Both limit transparency to internal stakeholders.
- e. Both exempt small businesses like DigiToys.
Prompt for reflection
UNw is considering incorporating a third-party AI model into its admissions process. However, it worries about its ability to ensure compliance with GDPR transparency requirements when it relies on an external provider. How should it manage its relationship with the third-party provider to ensure compliance with GDPR and AI Act requirements? What types of information should the third-party provider share with the university to enable transparency with students and regulators? Discuss the role of contracts, such as those mandated under Article 28(3) GDPR, in securing access to information in AI supply chains.
11.4.1 Answer sheet
Exercise 1. Alternative E is correct, as it is the only set of practices that will help DigiToys gather information that it will need to provide to the data protection authority.
Exercise 2. Alternative A is correct, as UNw needs information about the model’s training data and limitations to meet its own transparency obligations, and requiring the vendor to provide that information is how it can obtain it.
Exercise 3. Alternative C is correct, as it allows an organization to meet the needs of distinct kinds of data subjects. Each of the other alternatives would lead to some gain in information about the system, but they would all put the needs of a set of data subjects over the needs of others.
Exercise 4. Alternative D is correct. Source code disclosure is not mandatory, but data subjects have some right to access information about the operation of the AI system. Reliance on complex jargon is unlikely to meet the communicative requirement, and the explanation waiver is questionable from a legal and ethical perspective.
Exercise 5. Alternative B is correct. Even though small businesses are supported in their compliance, they are not excluded from regulatory obligations.
References
Article 29 WP, ‘Guidelines on Transparency under Regulation 2016/679’ (2018).
Emre Bayamlıoğlu, ‘The Right to Contest Automated Decisions under the General Data Protection Regulation: Beyond the so-Called “Right to Explanation”’ (2022) 16 Regulation & Governance 1058.
Andrew Bell and others, ‘It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy’ in (ACM 2022) FAccT ’22 248.
Adrien Bibal and others, ‘Legal Requirements on Explainability in Machine Learning’ (2021) 29 Artificial Intelligence and Law 149.
Sebastian Bordt and others, ‘Post-Hoc Explanations Fail to Achieve Their Purpose in Adversarial Contexts’ in (ACM 2022) FAccT ’22 891.
Maja Brkan and Grégory Bonnet, ‘Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: Of Black Boxes, White Boxes and Fata Morganas’ (2020) 11 European Journal of Risk Regulation 18.
Melanie Fink and Michèle Finck, ‘Reasoned A(I)Administration: Explanation Requirements in EU Law and the Automation of Public Administration’ (2022) 47 European Law Review 376.
Margot E Kaminski, ‘The Right to Explanation, Explained’ (2019) 34 Berkeley Technology Law Journal 189.
Margot E Kaminski and Gianclaudio Malgieri, ‘Algorithmic Impact Assessments under the GDPR: Producing Multi-Layered Explanations’ (2021) 11 International Data Privacy Law 125.
Blazej Kuzniacki and others, ‘Towards eXplainable Artificial Intelligence (XAI) in Tax Law: The Need for a Minimum Legal Standard’ (2022) 14 World Tax Journal 573.
Gabriela Zanfir-Fortuna, ‘Article 13. Information to Be Provided Where Personal Data Are Collected from the Data Subject’ in Christopher Kuner and others (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020).
Gabriela Zanfir-Fortuna, ‘Article 14. Information to Be Provided Where Personal Data Have Not Been Obtained from the Data Subject’ in Christopher Kuner and others (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020).
Gabriela Zanfir-Fortuna, ‘Article 15. Right of Access by the Data Subject’ in Christopher Kuner and others (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020).
On the identification of those actors, see Section 6.2.