13 Supporting the Lawful Use of AI
By the end of this chapter, learners will be able to explain why non-binding sources such as technical standards, third-party certification, and codes of practice are relevant for compliance with AI-related legal requirements.
Furthermore, they will be able to assess whether and how conformity with such a scheme suits their organization's needs.
This chapter wraps up the book by introducing learners to the various technical instruments that can support their management of AI-related issues. It does not focus on specific standards (especially as the harmonized European standards have not yet been published) but discusses their legal value and limitations.
Both the GDPR and the AI Act are designed as technology-neutral laws. That is, their obligations are meant to cover present and future technologies within the scope of those laws. Their legal requirements are formulated in general terms, and the application of those terms to specific technologies is left to a later stage. As discussed in Chapters 12 and 14, this often means that data controllers are actively involved in this process of translating legal requirements into technical ones. But they are not the only actors involved in this process.
To comply with their legal obligations, data controllers can rely on a variety of secondary sources: they can consult academic works on data protection law or hire external consultants, among other possibilities. None of this is mandatory, but each of these sources can fill gaps in an organization's interpretation of the law.
Under EU law, some of those sources are given a privileged status. As Section 13.1 examines, the AI Act stipulates that conformity with harmonized technical standards generates a presumption that the AI system or model complies with the legal obligations covered by the applied standards. An organization can still decide to depart from such a standard, but it will then need to show that its approach meets the essential elements laid down in the Act. So, there are advantages to following this source even if it is not mandatory.
This significant role of standards is unique to the AI Act, but the Act and the GDPR also grant a special status to other sources. Section 13.2 examines the differences in the legal value that each of those instruments grants to certification procedures and self-regulation mechanisms. Then, Section 13.3 wraps up the training module by introducing the learner to measures that the AI Act introduces to support innovation in AI technologies.
13.1 Technical standards
By the end of this section, learners will be able to explain the legal value of various kinds of technical standards under European Union law. Equipped with those distinctions, learners will be able to evaluate what kinds of standards, if any, are suitable for their compliance needs.
Technical standards are documents that specify a way of doing something. For example, the European standard EN 124:2015 governs the production of manholes: it distinguishes between different classes of weight loads to which manholes might be subject and defines some attributes that manholes for each class might have. While a manhole does not have much in common with AI technologies, the latter can also be standardized to some extent.
A technical standard for AI would define the properties that a certain AI-powered technology must meet. For example, a standard for facial recognition technologies might stipulate the need to adopt techniques that detect bias, as well as accuracy levels that must be met for a successful application. As such, those standards can help organizations identify which metrics are relevant for their technologies, and what values these metrics should have.
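To give a concrete flavour of how such a standard might be used in practice, the minimal sketch below checks a model's evaluation results against threshold values of the kind a standard could define. The metric names and threshold values are hypothetical illustrations, not taken from any published standard.

```python
# Minimal sketch: comparing evaluation results against hypothetical thresholds
# of the kind an AI standard might define. Metric names and values are
# illustrative assumptions, not drawn from any published standard.

# Each entry: (threshold, whether the measured value must be at least or at most that threshold)
REQUIREMENTS = {
    "accuracy": (0.98, "min"),                 # overall accuracy must be at least this
    "false_match_rate": (0.001, "max"),        # false match rate must be at most this
    "demographic_accuracy_gap": (0.02, "max"), # accuracy gap between groups must be at most this
}

def check_against_standard(results: dict[str, float]) -> list[str]:
    """Return descriptions of the requirements that the evaluation results fail to meet."""
    failures = []
    for metric, (threshold, kind) in REQUIREMENTS.items():
        value = results.get(metric)
        if value is None:
            failures.append(f"{metric}: not measured")
        elif kind == "min" and value < threshold:
            failures.append(f"{metric}: {value} is below the required minimum {threshold}")
        elif kind == "max" and value > threshold:
            failures.append(f"{metric}: {value} exceeds the allowed maximum {threshold}")
    return failures

# Example with made-up evaluation results
print(check_against_standard({
    "accuracy": 0.975,
    "false_match_rate": 0.0005,
    "demographic_accuracy_gap": 0.03,
}))
```

In practice, the relevant metrics and thresholds would come from the standard itself, and meeting them would be only one part of a broader conformity assessment.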
13.1.1 Types of technical standards
A technical standard is a written document that lays down norms. This document must be written by someone, and it will have a specific format. Given the technical nature of a standard, understanding its contents requires some specialized knowledge in the domain of application. For example, the target audience of AI technical standards is that of AI experts working in the development of AI systems. Beyond this core of meaning, however, technical standards can take a variety of forms, depending on their audience. We will now discuss some types of standards that can be relevant for AI.
The first distinction between standards concerns the object being standardized:
- Product standards, such as the EN 124 discussed above, establish technical norms for a physical or digital object.
- Process standards, instead, deal with how an organization does things. For example, the famous ISO 9000 series of technical standards establishes various norms that organizations can follow to increase the quality of the products and services they offer. As such, they are meant to change practices within an organization.
- Another role that standards can play is that of establishing shared concepts and vocabulary within a technical field. For example, the standard ISO/IEC 15408-1:2022 defines concepts and principles that should guide the evaluation of IT security.
These three examples illustrate that compliance with standards might require changes to the behaviour of different people within an organization.
Another relevant distinction concerns the goals laid down by a standard. Design standards are technical standards that specify how a particular goal must be achieved. For example, the TCP protocol stipulates how a computer must format data and the procedure it must follow to transmit data to another computer. If a computer tries to communicate through this protocol without following those specifications, it will not succeed.
This approach can be contrasted with that of a performance standard, which merely specifies goals but leaves the manufacturer free to choose how these goals will be met. For example, a pollution standard for cars might specify that a car cannot emit more than a certain amount of CO2 per kilometre. If the car emits less than that, it meets the standard regardless of the technologies used to move it.
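The contrast can be illustrated with a small, hypothetical sketch: a performance-style check looks only at the measured outcome, whereas a design-style check verifies that a prescribed format or procedure was followed. The limit value and the message header below are invented for illustration.

```python
# Illustrative sketch of the design vs performance distinction.
# All values and formats here are hypothetical.

CO2_LIMIT_G_PER_KM = 95.0  # hypothetical performance limit (grams of CO2 per km)

def meets_performance_standard(measured_g_per_km: float) -> bool:
    """Performance standard: only the measured outcome matters,
    not how the manufacturer achieved it."""
    return measured_g_per_km <= CO2_LIMIT_G_PER_KM

def meets_design_standard(message: bytes) -> bool:
    """Design standard (by analogy with a communication protocol):
    the prescribed format itself must be followed, e.g. a message
    must start with an agreed header."""
    return message.startswith(b"\x01\x02")  # hypothetical required header

print(meets_performance_standard(89.5))           # True: outcome within the limit
print(meets_design_standard(b"\x01\x02payload"))  # True: prescribed format followed
```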
A final distinction that is relevant for our purposes concerns the source of a given technical standard. Some standards are produced by private organizations or consortia of private organizations: for example, the Blu-ray standard was created by a group of companies led by Sony. Others are produced in a more collaborative way, by organizations formed by representatives of private (and sometimes, public) entities, such as the International Organization for Standardization (ISO) or the Institute of Electrical and Electronics Engineers (IEEE). Finally, some standards are produced by public bodies, such as the US National Institute of Standards and Technology (NIST) or the national standardization bodies in the European Union. The pedigree of a technical standard might have consequences for its legal value.
13.1.2 The legal value of standards
Under the GDPR, technical standards are not a particularly salient factor. Article 43 GDPR mentions that the Commission may adopt implementing acts laying down technical standards for certification mechanisms and data protection seals and marks. However, it does not directly establish the power to adopt technical standards of general value. Data protection authorities can compel data controllers and processors to adopt certain measures, and they can adopt and authorize contractual clauses that might have technical implications. Still, these powers include neither the elaboration of binding technical standards nor the power to oblige all data controllers to follow a certain standard. As such, the use of particular technical standards is rarely mandatory for compliance with the GDPR, even if organizations are still free to rely on those standards to help them interpret the technical implications of data protection.
The AI Act, instead, gives considerable value to a specific type of technical standards. Its Article 40 establishes that conformity with harmonized technical standards generates a presumption of conformity with the provisions of the AI Act covered by that technical standard. That is, a high-risk AI system or a general-purpose AI model that follows the applicable standards is assumed to comply with the AI Act unless it can be shown otherwise. Following such standards is therefore a way to reduce the effort involved in understanding what the AI Act requires of a data controller or processor.
This presumption only applies to harmonized technical standards (or parts of them) that have their references published in the Official Journal of the European Union. That is, an actor that follows a private standard such as ISO 42001 must still demonstrate that the measures they took meet the requirements of the AI Act. The standards that trigger the presumption are only those published by two European Standardization Organizations—CEN and CENELEC—in response to a request by the European Commission, and only once the references to them have been published in the Official Journal. Such standards are unlikely to be made public before the end of 2025.
Additionally, the European Commission has the power to issue common specifications. In terms of content, a common specification is just like a technical standard—and as such, an organization is free to decide whether to follow it or not. What distinguishes it is the form of its adoption. The Commission can only create a common specification if it finds a technical issue that is not adequately covered by the harmonized technical standards it requested, and it must follow a specific legal procedure. But, if and once such a specification is adopted, it also generates a presumption of conformity with the legal requirements covered by it.
Under the AI Act, organizations developing or deploying AI technologies have strong incentives to follow harmonized standards or common specifications when they exist. Why might they rely on other kinds of standards, then? A few reasons might explain that:
- Following an international standard might be desirable or needed to reach markets beyond the EU. It might even be required by the laws of some other country in which an organization operates or sells its products. For example, China has its own approach to regulation of AI technologies.
- The EU standards might not be detailed enough. In this case, a non-harmonized standard can help an organization make sense of the information it needs to implement the harmonized standard.
- A data protection obligation might not be covered by a harmonized standard. Given that harmonized standards for AI are shaped by the AI Act, they might not cover all the data protection risks examined above.
These conditions suggest organizations have good reasons to rely on sources beyond the forthcoming harmonized standards. When they do so, however, they must take care to show that the measures they adopt are enough to comply with the relevant legal requirements. Furthermore, regardless of the reliance on standards, they remain obliged to comply with data protection law. Following a standard (harmonized or not) does not eliminate this need but can be a useful tool in demonstrating compliance.
13.2 Other mechanisms to support compliance
By the end of this section, learners will be able to identify when third-party certification of an AI system is required under EU law. They will also be able to describe the key features of self-regulation mechanisms.
When seeking information about their legal obligations, organizations can rely on sources beyond technical standards. In this section, we will discuss two of those sources. First, we will consider how certification schemes can help actors demonstrate their compliance and obtain information about the content of their obligations. Then, we will discuss how documents such as codes of practice and codes of conduct can guide organizations as they interpret their duties. Reliance on certifications and such documents is not mandatory, but it is sometimes encouraged by legal advantages. As such, it is important to know how those arrangements work and whether they are suitable for a particular context.
13.2.1 Certification schemes as a tool for demonstrating compliance
Broadly speaking, certification is a process in which an organization relies on a trusted process to evaluate a product or a service that the organization offers. It is an established practice in modern life: the food we eat, the electronic devices we buy, and so many other things often carry certificates meant to reassure us of their quality. The situation is no different when it comes to the digital world, as sellers might want to use certification to build trust in an innovative technology.
In data protection law, the primary role of certification is as a form of demonstrating compliance. The GDPR clarifies that the certification process is voluntary. Controllers and processors can choose whatever certification they want (or none at all). However, certifications issued by bodies compliant with the requirements laid down in the GDPR are considered when an organization must show that it observed the data protection requirements for a given processing. This allows Member States to ensure a certain degree of quality for high-end certifications, while still leaving market actors free to pursue other arrangements.
The AI Act also falls short of making third-party certification mandatory. In fact, for most AI systems, conformity with the applicable legal requirements must be demonstrated by the providers themselves through an internal assessment procedure. Article 43 further clarifies that this procedure does not leave room for the involvement of a certification body. Instead, those bodies (called “notified bodies” under the AI Act) are only involved in specific cases. In particular, third-party assessment remains mandatory if the product in which AI is used would be subject to such an assessment under other provisions of EU law. Whenever that is the case, such certification must be pursued before the system can be placed on the market, put into service, or used in the EU.
In the absence of such a mandate, third-party certification offers little advantage from the legal perspective of the AI Act. It might nonetheless remain desirable for pragmatic reasons. Subjecting your product to the scrutiny of a trusted third party might be a way to create trust in it. For example, DigiToys might want to undergo external certification in order to show to prospective buyers that its smart toys are safe enough to be used with children. Additionally, an organization might use third-party certification to double-check or supplement its own internal controls. At the end of the day, external certification is no substitute for internal due diligence but can be a powerful supplement to it.
13.2.2 Codes of practice and other self-regulation instruments
As the previous units of this training module have shown, the EU approach to AI regulation allows considerable flexibility for developers and deployers of AI technologies. For the most part, those actors are the ones who identify the relevant risks and how they are best addressed by technical and organizational measures. Regulators have the power to address situations in which those measures are insufficient to protect rights, freedoms, and interests affected by the use of AI. Still, to a considerable extent, this regulatory power is expected to help the regulated actors in finding the best way for compliance.
To that effect, Article 40 GDPR establishes that the EU Member States and their data protection authorities, the EDPB, and the Commission must encourage the drawing up of codes of conduct. These codes of conduct are to be drawn up by associations and other bodies representing categories of controllers or processors and offer guidance for the problems faced by that category. For example, a code of conduct regarding the processing of medical data would be useful for InnovaHospital as it deals with the design of its AI systems that process personal data. It might offer guidance about how to pseudonymize data, how to configure parameters to ensure appropriate levels of accuracy, and so on. A code of conduct therefore offers a bridge between the general provisions of the law and the specifics of particular applications of data processing.
A code of conduct is a voluntary commitment. An organization, be it public or private, can choose whether it will follow a code drawn up by representatives of a category. For example, the GDPR does not oblige the university UNw to follow a code of conduct elaborated by the National Association of Universities. However, data protection authorities play a supervisory role in the process of drawing up these codes. This means data protection authorities can offer guidance regarding the sector-specific issues that associations can identify and ensure the quality of a code of conduct. By following a code of conduct approved by a data protection authority under the GDPR’s procedure, an organization can be sure that its processes reflect the best practices in data protection.
This voluntary approach to data protection means that authorities can tap into the technical and practical knowledge of domain experts, while still guiding them on data protection. As such, the overall level of data protection might benefit from the competition between different codes of conduct, as well as from the experiences of different sectors.
Such an approach has been extended by the AI Act, which features two types of codes. The first type of code is the code of practice for general-purpose AI models. According to Article 56, the EU AI Office must encourage and facilitate the creation of those codes of practice. They are meant to guide the proper application of the Act’s provisions on general-purpose AI models, detailing the obligations laid down in Articles 53 and 55 of the Act, as discussed in Chapter 14. Their elaboration involves the providers of general-purpose models, national authorities, civil society organizations, academics, and other interested parties.
Drawing on those perspectives, the Commission adopted the first code of practice, on the governance of general-purpose AI models. That code of practice offers detailed instructions on what providers of general-purpose AI models must do to comply with the AI Act. In doing so, it reflects the contents that such a code must have under Article 56 AI Act: providers are expected to define specific objectives and the measures through which they will pursue them, as well as specific metrics for tracking conformity with those objectives, and the actors who embrace the code of practice must provide regular reports on how they implemented their commitments. This means an organization that adheres to a code of practice obliges itself to implement its AI models in a way that reflects current best practices in software development.
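As a rough illustration of the record-keeping this implies, the sketch below models commitments in terms of objectives, measures, and indicators, and produces a simple periodic report. The structure and field names are assumptions made for illustration; the actual code of practice defines its own commitments and reporting formats.

```python
# Hypothetical sketch of tracking code-of-practice commitments.
# Field names and contents are illustrative, not taken from the actual code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Commitment:
    objective: str                         # what the provider commits to achieve
    measures: list[str]                    # how it intends to achieve it
    indicator: str                         # metric used to track conformity
    current_value: Optional[float] = None  # latest measurement, if any
    target_value: Optional[float] = None   # value the provider aims for

def periodic_report(commitments: list[Commitment]) -> str:
    """Produce a plain-text summary of how each commitment is being implemented."""
    lines = []
    for c in commitments:
        status = "no measurement yet"
        if c.current_value is not None and c.target_value is not None:
            status = f"{c.indicator} = {c.current_value} (target {c.target_value})"
        lines.append(f"- {c.objective}: {status}; measures: {', '.join(c.measures)}")
    return "\n".join(lines)

# Example with made-up content
print(periodic_report([
    Commitment(
        objective="Document model capabilities and limitations",
        measures=["maintain a model card", "update it after each major release"],
        indicator="share of released models with up-to-date documentation",
        current_value=0.9,
        target_value=1.0,
    )
]))
```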
Compliance with a code of practice remains voluntary. Yet, embracing such a code can bring advantages to providers of general-purpose AI models. Until harmonized standards on general-purpose AI models are published, a provider can use a code of practice to demonstrate compliance with obligations. That is, the fact that an organization joined a code of practice and is up to date with its commitments will be enough to establish that it has fulfilled the obligations covered by those practices.
Once a harmonized standard is released under the procedure covered in Section 13.1, the codes of practice lose this additional value. Even then, a provider will likely be compliant with relevant AI Act provisions if it follows an up-to-date code of practice. What changes is that the provider will need to show the concrete measures it has taken. Mere adherence to the code of practice will no longer be considered enough.
Finally, the AI Act also allows organizations to adopt codes of conduct. Through these codes of conduct, organizations are expected to voluntarily pledge to follow some (or all) of the requirements for high-risk AI systems, even if their system is not classified as such. Because the requirements are not legally binding on them, following a code of conduct does not generate a presumption of conformity. Still, it is seen as a way to push organizations towards best practices against AI risks. This is why the AI Office and the Member States are expected to encourage and facilitate the elaboration of such codes.
13.3 Measures supporting innovation in AI
By the end of this section, learners will be able to identify potential sources for innovation guidance as they deploy AI systems.
Part of the difficulty in regulating AI technologies comes from their novelty. Because AI allows for the automation of tasks that were previously outside the reach of computing, sometimes it can be difficult to figure out what can go wrong with a particular AI system or technology. Even when the risks are known, there is also uncertainty about whether the proposed fixes are sufficient to address them. After all, a solution that works well in a controlled test environment might not work so well in the real world. This creates a knowledge problem, which regulators try to address in a few ways.
Within the framework of the GDPR, data protection authorities have been active in providing guidance about factors relevant for the use of AI. The European Data Protection Board has issued various guidelines about legal requirements such as automated decision-making, data protection by design and by default, and the legitimate interest basis for automated decision-making. Likewise, national authorities have often published guides about specific technologies, such as the generative AI systems discussed in Chapter 14. A compliance plan for AI systems should refer to those documents whenever available.
13.3.1 Regulatory sandboxes in the AI Act
The AI Act includes some additional mechanisms for supporting organizations that intend to deploy AI systems. The first one is that of regulatory sandboxes, established in Article 57 AI Act. A sandbox is a controlled environment that can be used to assess emerging technologies before they are placed on the market or put into service. In this environment, an organization can experiment with the AI system in conditions resembling the real world. It will need to follow the testing protocols defined by the authority establishing that sandbox, which are meant to identify and fix risks before the system is put into widespread use. In this process, providers of AI systems are supported by the national regulatory authorities, which will offer guidance in identifying risks from AI and in complying with the applicable legal requirements.
Joining a sandbox can be advantageous for an organization that wants to develop an AI system or model. The first advantage is that a regulatory sandbox creates a space for dialogue: organizations can benefit from the expertise of regulators on the technical and legal issues raised by AI, and potentially learn from the experiences of other organizations within the sandbox. In particular, the authorities responsible for a sandbox are required under Article 57(6) AI Act to help organizations diagnose potential risks to fundamental rights, health, and safety stemming from the use of AI.
Within the sandbox, all legal requirements remain applicable. Organizations are expected to comply not only with the AI Act’s requirements, but with the GDPR and any sector-specific legislation that covers their AI system or model. Competent authorities still retain their supervisory and corrective powers. However, their exercise of these powers is also limited by Article 57 AI Act. When enforcing the law, under Article 57(11), these authorities are expected to use their discretionary powers in a way that supports innovation. Furthermore, Article 57(12) precludes the authorities involved in the sandbox from applying administrative fines to organizations that follow the sandbox’s testing protocols in good faith. Regulators are therefore expected to guide organizations towards compliance.
Article 57(1) AI Act requires each EU Member State to set up at least one sandbox for AI systems by 2 August 2026. That sandbox can be a general sandbox for all kinds of innovative AI systems, but Member States can also set up separate sandboxes for different domains. For example, a state might create a sandbox for stimulating AI innovation in the medical sector and another one, following different rules, for innovations in education. The conditions that an organization must meet to enter the sandbox and to exit it (that is, to adopt a product that is cleared for use) are defined by the competent authorities for AI regulation. As a result, such sandboxes might be extended to systems beyond the AI Act’s high-risk classification.
The possibility of having sandboxes beyond high-risk systems is particularly useful because the sandboxes are not limited to AI Act enforcement. Under the AI Act, the data protection authority must be involved in any sandboxes concerning personal data. Likewise, sector-specific regulators must be involved in the sandboxes relating to their sectors of competence. Therefore, joining a sandbox allows organizations to understand what is required of them before they commercialize or put into service an AI system.
13.3.2 Processing personal data within sandboxes
Another major advantage of joining a regulatory sandbox is that it allows for the further use of personal data. Within a sandbox, Article 59 AI Act allows the lawful reuse of personal data collected for other purposes, if the AI system is meant to safeguard a substantial public interest in one of the following areas:
(i) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems;
(ii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures;
(iii) energy sustainability;
(iv) safety and resilience of transport systems and mobility, critical infrastructure and networks;
(v) efficiency and quality of public administration and public services.
The same article imposes several limits on this further use of personal data. The organization that wants to develop an AI system for those purposes within a sandbox must show that the use of such data is needed to meet the requirements for high-risk AI systems, in particular by showing that alternative sources such as anonymized or synthetic data would be inadequate. It must also adopt a series of safeguards for data use and follow the testing protocols in the sandbox.
13.3.3 Real-world testing of AI systems
In addition to sandboxes, the AI Act also features a mechanism for testing AI systems in real-world conditions. Such tests are subject to strict conditions, laid down in Article 60(4) AI Act. Those conditions include the need for approval of a testing plan (and, in many cases, registration) before any test can start, restrictions on the transfer of data to outside the EU, a limited duration for the test (at most six months, which can be extended by up to another six months upon a justified request), and safeguards for the testing subjects. Under Article 61, those subjects must provide their informed consent to participation in any such test.
Market surveillance authorities are granted powers to supervise the tests and interrupt them if necessary. However, unlike the sandbox procedure stipulated above, real-world tests outside a sandbox are not necessarily integrated with data protection enforcement. As such, data protection requirements can be enforced normally, without the restrictions placed by sandboxes. Therefore, an organization might consider moving to this kind of testing only after it has established a solid basis for its processing of personal data.
13.4 Conclusion
The final chapter of this book has covered some tools and mechanisms that AI providers and deployers can use for making sense of the legal requirements in the GDPR and the AI Act. Because these legal instruments are designed to cover all sorts of circumstances, they cannot offer detailed guidance about every use case or technology. To fill this gap, the legal instruments create incentives for private and quasi-private actors, such as standardization bodies, to provide tailored guidance. Relying on these sources is, of course, no substitute for organizational diligence, but such sources can be incredibly helpful for organizations as they try to comply with legal demands.
When it comes to technical standards, a few distinctions become relevant. The first distinction is between the harmonized standards that will be issued by CEN and CENELEC—which create a presumption of compliance with the relevant AI Act provisions—and other technical standards, which do not create the presumption of compliance but can be used for demonstrating that an organization followed best practices. It is also important to distinguish between standards that govern processes and standards that govern products, as well as between design standards, which specify how goals must be achieved, and performance standards, which only lay down the goals to be met.
Certification and self-governance mechanisms, such as codes of practice, can also be a source of guidance. They are not always granted the privileged status given to harmonized standards under the AI Act, but they still contribute to compliance. These documents can distil the best practices available in industry and explain how they apply to specific contexts, helping regulated actors with interpretation. They can also be used as means to demonstrate the practices that an organization followed in design.
Finally, measures supporting innovation in AI—such as regulatory sandboxes, real-world testing, and facilitated compliance for SMEs—can reduce the legal barriers for the use of AI technologies. They allow organizations to benefit from guidance by regulators, while allowing regulators to learn more about risks before technologies are put into place. As such, joining them might be worthwhile, especially for organizations developing or deploying unproven AI technologies.
Ultimately, the decision on whether to rely on one or more of those tools falls to the organization itself. There might be good reasons not to pursue them, such as the cost of purchasing technical standards or pursuing extensive certification. Still, given the uncertainties surrounding AI technologies, they offer potentially valuable options for supporting any organization in its path to data protection compliance when using AI.
Exercises
Exercise 1. Why might an organization prefer to rely on international standards rather than harmonized ones for its AI deployment?
- a. International standards are always more detailed.
- b. Harmonized standards are only relevant for system developers.
- c. International standards may align better with global collaboration and research needs.
- d. Harmonized standards are mandatory, limiting flexibility.
- e. Harmonized standards are not legally recognized in the EU.
Exercise 2. How might an organization benefit from third-party certification?
- a. It would ensure automatic legal compliance with the GDPR and AI Act.
- b. It could demonstrate the safety and trustworthiness of their products to consumers.
- c. It would exempt them from adhering to codes of conduct.
- d. It would guarantee exemption from audits by authorities.
- e. It would allow them to avoid implementing internal compliance measures.
Exercise 3. What is a regulatory sandbox?
- a. A controlled environment for testing AI systems before deployment.
- b. A public database of AI systems.
- c. A mandatory step for all high-risk AI systems.
- d. A set of guidelines for AI system design.
- e. A certification program for AI models.
Exercise 4. What is a key advantage of processing personal data within regulatory sandboxes under the AI Act?
- a. Personal data can be reused for any purpose without restrictions.
- b. Sandboxes remove the need to demonstrate compliance with GDPR.
- c. Organizations are exempt from adopting safeguards for personal data.
- d. Personal data can only be processed if anonymized beforehand.
- e. Personal data can be reused for substantial public interest purposes under strict safeguards.
Exercise 5. How can a code of conduct and certification complement each other in demonstrating compliance with the GDPR and AI Act?
- a. Codes of conduct are mandatory, while certifications are voluntary.
- b. Certifications provide sector-specific guidance, while codes of conduct demonstrate compliance.
- c. Following a certification scheme ensures adherence to all data protection laws without the need for codes of conduct.
- d. Codes of conduct provide tailored guidance, while certifications offer external validation of compliance.
- e. Both mechanisms must be used together to comply with GDPR and AI Act.
13.4.0.1 Prompt for reflection
Reflect on the differences between harmonized standards, international standards, and codes of conduct in the context of AI compliance. How might an organization like DigiToys decide which to adopt, and what factors should influence this decision?
13.4.1 Answer sheet
Exercise 1. Alternative C is correct. The use of harmonized standards is not mandatory in the EU, but it brings some advantages in compliance. They are not necessarily less detailed than international standards, and they might cover technical aspects in the deployment stage.
Exercise 2. Alternative B is correct. Except (in part) for the cases in which third-party assessment is mandatory under the AI Act, certification cannot replace internal controls. Certifications offer a signal that an organization is following best practices but are not enough to establish compliance without a closer look at the system or model in context.
Exercise 3. Alternative A is correct.
Exercise 4. Alternative E is correct. Sandboxes do not eliminate the need to observe data protection requirements. In fact, the AI Act ensures that data protection authorities will be involved in the governance of those sandboxes.
Exercise 5. Alternative D is correct.