1 Introduction to Artificial Intelligence and Data Protection
By the end of this chapter, readers will be able to:
- illustrate risks and opportunities of using AI in various contexts;
- describe the core features of the three case studies;
- explain how the new EU instruments on AI relate to data protection; and
- indicate the core elements of the AI Act’s regulatory framework.
Why is AI relevant from a data protection standpoint? In part, this relevance comes from the fact that many AI applications have personal data in their inputs and/or outputs. For example, an AI system might use various pieces of information about an individual (input data) to make an inference (output) about whether they would be a suitable hire for a business. But, as Chapter 2 of this book will discuss in more depth, personal data also plays a more structural role in AI, when it is used in the training processes that take place when an AI system is developed. This is why, for instance, the Italian Data Protection Authority (the Garante per la Protezione dei Dati Personali) has opened proceedings against the companies supplying the large language models ChatGPT and DeepSeek, requiring them to adopt measures to address perceived violations of data protection principles. Considering how widespread the use of AI technologies is, their dependence on personal data suggests that data protection professionals need to look closely at whether that data is processed in accordance with EU law.
This is not to say that the use of personal data in AI is inherently undesirable. After all, it has the potential to bring a variety of economic and social benefits. Those benefits can range from personal convenience (a good recommender system, for example, might save you the trouble of looking for a product you need to buy but keep forgetting about) to societal advantages, as the adoption of AI in public sector applications is often proposed as a way to deliver better public services. Considering these benefits, the use of even substantial amounts of personal data might be justifiable if it complies with the requirements of data protection law.
Observing those requirements is crucial, as the use of personal data in AI is not without risks. Because modern AI applications require significant amounts of personal data for their development and use, the resulting accumulation of personal data gives rise to risks that are well known to data protection professionals, such as misuse or data breaches. In addition, AI technologies create or amplify other risks, as illustrated by the various scandals concerning discriminatory decision-making by algorithmic systems. The requirements and safeguards created by data protection law thus become particularly desirable when AI is involved.
The main purpose of this chapter is to show how the EU’s regulation of AI technologies interacts with data protection law. For that purpose, Section 1.1 offers a general discussion of artificial intelligence, introducing the risks and opportunities associated with those technologies. Section 1.2 discusses how the AI Act complements data protection law by addressing risks that are specific to AI technologies. Finally, Section 1.3 introduces three hypothetical cases that illustrate how the AI Act and the GDPR both apply to different uses of AI in the public and private sectors. These cases return throughout the book as a source of examples for the various concepts we cover.
1.1 The risks and opportunities of artificial intelligence
By the end of this section, learners will be able to describe why AI technologies have become more common in the last few years and identify some of the benefits and issues created by that diffusion.
AI technologies are becoming ubiquitous in modern society, shaping our routines and business environments in profound ways. For instance, facial recognition tools, used in border control and building access, streamline security checks but also carry significant privacy implications. Social networks leverage AI-powered recommender systems to predict and influence what content users see. Generative AI tools like ChatGPT can produce a wide range of content, from casual text to sophisticated audiovisual materials, demonstrating both the potential and the unpredictability of AI outputs. These examples suggest that AI is not a novel and futuristic concept, but rather something that is already deeply integrated into routine processes and high-stakes decisions in our lives.
Beyond these visible uses, AI has also become a part of our social infrastructures. Many businesses around the world now use AI-powered technologies to carry out various internal tasks. Human resources departments increasingly rely on AI tools to screen candidate applications, especially as candidates themselves sometimes use AI to tailor their profiles. Strategic decision-making in large companies is guided by various forms of data analytics, such as those concerning market performance. Chatbots are increasingly used as a first channel of contact with consumers, who only interact with human agents for more complex queries. Many of those uses of AI are also present in the public sector, as governmental organizations rely on AI-powered tools to carry out various facets of their work. This means that, in many countries, both the private and the public sectors depend heavily on AI technologies.
1.1.1 Why AI?
The widespread adoption of AI is driven by multiple factors:
- Advances in machine learning and neural networks have enabled AI systems to perform tasks that were previously thought to be impossible or impractical.
- The declining costs of data processing and storage, along with the increased availability of computational power, make AI solutions accessible to more organizations than ever before.
- In addition, the digitalization of everyday activities has generated an abundance of data, creating both the need and the opportunity to leverage AI for analysis and decision-making.
- Organizations, whether in the private or public sector, are often motivated by the competitive pressure to innovate and the fear of falling behind, which can lead to rapid and sometimes poorly thought-out adoption of AI technologies.
These and other developments lead public and private organizations to adopt AI technologies for a variety of purposes.
The usefulness of AI for organizations depends on the tasks that one intends to automate and the available technical capabilities. AI systems excel in certain tasks, providing clear advantages in efficiency and scale. Language translation tools, for instance, have made it easier for people to communicate across linguistic barriers, enhancing both personal and professional interactions.
Even when AI does not outperform human capabilities, it can still offer cost-effective solutions. A good example is the use of generative AI systems in marketing campaigns. While the content they produce may not always be of the highest quality, those systems can generate large volumes of personalized messaging at a fraction of the cost of traditional methods.
In some cases, AI enables activities that would be impossible without automation, such as comprehensive audits of tax filings, which can help governments uncover patterns of fraud more effectively than manual inspections could.
1.1.2 Data protection concerns from AI
Seen from a data protection angle, however, the rapid proliferation of AI technologies is not without significant risks. A major concern is the reliability of AI systems. Despite their impressive capabilities, AI tools can sometimes fail to perform as expected, leading to potentially grave consequences. For instance, emotion recognition technologies are often marketed as tools that can detect a person’s feelings based on facial expressions or voice tone. Yet, the scientific basis for these claims is weak, and the algorithms frequently produce misleading results (Stark and Hutson 2022). The complexity of AI models can make it difficult to identify errors or biases in their predictions, leaving users and regulators blind to potential flaws until they cause real-world harm.
Another concern arises when AI is used for inherently problematic or unlawful purposes, regardless of how well the technology performs. For instance, an AI system designed to make hiring decisions may inadvertently exclude certain demographic groups if it has been trained on biased data, reinforcing existing inequalities in the job market. In such cases, the effectiveness of the AI can amplify rather than mitigate harm, as it systematically executes a flawed process more efficiently than a human could. Similarly, AI-driven surveillance tools may enable extensive monitoring of individuals without their consent, raising serious ethical and legal questions about the right to privacy.
The reliance of AI technologies on large datasets can also create significant privacy risks. AI systems are often trained on vast amounts of personal information, sometimes collected without proper consent, or used in ways that individuals might not expect. This can lead to unintended consequences, such as exposing sensitive personal details or allowing for intrusive profiling. For example, an AI model used to predict consumer preferences might draw on data from social media, shopping history, or even biometric information, potentially leading to privacy violations if this data is mishandled or shared without adequate safeguards.
To address these risks and harness the benefits of AI responsibly, the European Union (EU) has embarked on regulatory initiatives aimed at balancing innovation with the protection of fundamental rights. As we have seen in the introduction to this chapter, data protection law itself plays a vital role in this protective scheme. Because AI systems are often built on personal data and rely on it for their operation, data protection obligations remain in force, and thus help address some of those risks. In the following section, we will discuss another piece of legislation that contributes to AI governance in the EU: the Artificial Intelligence Act, which establishes additional factors that data protection professionals must consider in their work.
1.2 The AI Act (Regulation (EU) 2024/1689)
By the end of this section, learners will be able to describe, at a high level of abstraction, the core features of the AI Act (Regulation (EU) 2024/1689) and compare them with the treatment of risks in the GDPR.
The AI Act is a recent piece of legislation. It was proposed in response to various concerns about AI technologies that were voiced in society. Some of these, as illustrated in Section 1.1, are hypothetical concerns. Others, instead, reflect real-world harms related to AI technologies that are already in use. See, for example, the SyRI case in the Netherlands, in which the courts ruled that a risk scoring algorithm proposed by the government did not respect the right to a private life. To address those concerns, the EU lawmakers proposed a regulation that is very different from the GDPR, as it is based on the laws governing product safety rather than on data protection law.1 Still, the reliance of AI technologies on data means that the AI Act affects how organizations must deal with their data protection obligations. In this section, we introduce the overall logic that guides the AI Act, before looking into its specific regulatory provisions in the rest of the book.
A significant difference between the GDPR and the AI Act comes from their object, that is, from what those laws regulate in the first place.
- The GDPR is directed at the processing of personal data, that is, what one does with the data.
- The AI Act focuses instead on the technologies used to do that processing. It regulates AI systems, which it defines (Article 3(1) AI Act) as a type of computer system that can do tasks such as generating content, recommendations, or even making decisions. The Act also features some rules directed at AI models, which are the components that allow AI systems to carry out those tasks.2
Because they regulate different things, for different purposes, the AI Act and the GDPR follow different approaches.
One should not, however, overestimate the differences between the GDPR and the AI Act. They both create obligations to minimize the risks created by their regulated objects:
- In the GDPR, these obligations are directed at data controllers:
- Article 25 obliges them to adopt measures and safeguards to deal with risks to data protection principles;
- Article 32 establishes an obligation to address risks to cybersecurity.
- In the AI Act:
- The providers of high-risk AI systems are required to adopt risk management measures (Article 9);
- The deployers of those systems must adopt their own approaches to deal with risks that appear in a specific application (Article 26), such as the impact assessments that are required in the cases covered by Articles 26 and 27.
However, risk assessment in the AI Act is considerably narrower than it is in the GDPR.
Two factors contribute to the narrower assessment. The first is that the obligations of providers of AI systems are mostly limited to technical risks.
- The actors regulated by the AI Act are expected to deal with risks that can be addressed through technical means or by providing technical information (see, e.g., Article 9(3) AI Act).
- In this regard, the GDPR goes further: it obliges regulated actors to adopt both technical measures, such as changes to the AI model powering an AI system, and organizational ones, such as limiting the number of persons that can operate an AI system.
It follows from this difference that compliance with the AI Act’s requirements for technical design might not be enough to meet what the GDPR demands.
The second limiting factor is that the AI Act establishes a top-down risk assessment. It does not apply a uniform set of rules to all AI systems and models. Instead, it separates those systems and models into different classes, each subject to its own legal framework. While the providers and deployers of AI systems are still obliged to identify and address the risks those systems create in practice, such an assessment takes place within the categories defined by the AI Act. Accordingly, it is necessary to examine the criteria the AI Act uses to assign systems and models to those categories.
1.2.1 Three different frameworks for AI systems
When it comes to AI systems, risk classification is based on the purpose for which a system was designed. The AI Act features a list of prohibited AI practices. That is, it is illegal to use an AI system for any of the applications listed in Article 5 AI Act. For example, one cannot use AI to materially distort the behaviour of a person (or group of persons) in a way that causes or is likely to cause harm to them or to others, such as manipulating them into a poor financial investment. This is because the EU lawmaker has concluded that no measures can make AI systems safe enough to use in those contexts.
Within the lawful uses of AI, Article 6 AI Act singles out some applications of AI (listed in Annexes I and III AI Act). Any system designed for use in such an application is a high-risk AI system, except where an application listed in Annex III is also covered by one of the derogations in Article 6(3). Whenever a system is classified as high risk, it becomes subject to a harmonized legal framework, which means that the rules that apply to it are the same throughout the European Union. Most of the AI Act is dedicated to setting up that legal framework, and some of these provisions will be analysed in this book.
Finally, the AI Act does not establish a general framework for AI systems that are neither high-risk nor prohibited. Instead, it creates some obligations that are specific to certain applications. For example, Article 50(1) establishes that the providers of AI systems that interact directly with natural persons must make sure that those persons can know they are interacting with an AI system. Article 4 also obliges the providers and deployers of AI systems, regardless of their risk level, to foster AI literacy among those dealing with the operation and use of AI systems on their behalf. Yet, for the most part, the Act considers that the risks of systems outside the two categories addressed above are covered by existing laws, such as the GDPR and sector-specific regulation at the EU and national levels.
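For readers who find it helpful to see this logic spelled out step by step, the short sketch below expresses the three tiers as simplified Python pseudocode. It is a didactic illustration, not a compliance tool: the function, its parameters, and the reduction of the Annexes and the Article 6(3) derogations to simple flags are all assumptions made for teaching purposes, and a real classification requires legal analysis of the concrete system and its intended purpose.

```python
# Illustrative sketch of the AI Act's three-tier logic for AI systems.
# The names and flags below are simplifying assumptions; they do not
# reproduce the full text of Articles 5 and 6 or Annexes I and III.

def classify_ai_system(is_prohibited_practice: bool,
                       covered_by_annex_i: bool,
                       listed_in_annex_iii: bool,
                       art_6_3_derogation_applies: bool) -> str:
    """Return the (simplified) regulatory tier of an AI system."""
    if is_prohibited_practice:
        # Article 5: the practice may not be carried out with AI at all.
        return "prohibited practice"
    if covered_by_annex_i:
        # Article 6(1): products (or safety components) covered by the
        # harmonisation legislation listed in Annex I.
        return "high-risk system"
    if listed_in_annex_iii and not art_6_3_derogation_applies:
        # Article 6(2): Annex III purposes, unless a derogation applies.
        return "high-risk system"
    # Everything else falls under the residual regime (e.g. Articles 4
    # and 50), plus existing laws such as the GDPR.
    return "other AI system"


# Example: a student-admission algorithm (an Annex III, point 3 purpose).
print(classify_ai_system(is_prohibited_practice=False,
                         covered_by_annex_i=False,
                         listed_in_annex_iii=True,
                         art_6_3_derogation_applies=False))
# -> "high-risk system"
```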
1.2.2 Cumulative requirements for general-purpose AI models
By definition, the idea of regulating based on a specific purpose does not work for AI models that can be used for various purposes. To deal with those general-purpose AI models, the AI Act follows a cumulative approach. It establishes in Article 53 that the providers of all general-purpose AI models must comply with EU law on copyright and make some information about the model available to different types of stakeholders. For example, that article stipulates that providers of general-purpose models must supply information and documentation about a model to those who want to incorporate this model into their own AI systems. The core idea behind those requirements is that they allow other actors to comply with their own legal requirements. Somebody using a general-purpose model to create their own AI system will need information on how to use the model, and the general public is given the right to know how the model was created.
Under Article 51 AI Act, some general-purpose AI models with high-impact capabilities are classified as general-purpose AI models with systemic risk and become subject to additional requirements. The notions of “high-impact capabilities” and “systemic risk” are both defined in Article 3 AI Act. However, the classification as a model with systemic risk is based not on the interpretation of these definitions but on the application of technical thresholds defined in Article 51. For example, that article introduces a presumption that any general-purpose AI model whose training required more than 10²⁵ floating-point operations has systemic risk. Alternatively, the Commission has the power to designate a model as having systemic risk if its capabilities are somehow equivalent to those of models meeting the relevant thresholds. For the most part, the AI Act treats systemic risk as something that can be quantitatively measured.
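To illustrate the quantitative character of this presumption, the minimal sketch below checks a hypothetical training-compute figure against the Article 51 threshold. The function name and the example figures are assumptions made purely for illustration; in practice, the classification also depends on the other criteria in Article 51 and on the Commission’s designation power.

```python
# Minimal illustration of the Article 51 presumption: a general-purpose AI
# model whose training used more than 1e25 floating-point operations is
# presumed to have high-impact capabilities, and thus systemic risk.
# This is a didactic sketch, not a legal test.

FLOP_THRESHOLD = 1e25  # training-compute threshold set out in Article 51


def presumed_systemic_risk(training_flops: float) -> bool:
    """Apply the (simplified) compute-based presumption."""
    return training_flops > FLOP_THRESHOLD


print(presumed_systemic_risk(3.0e24))  # False: below the threshold
print(presumed_systemic_risk(2.1e25))  # True: the presumption applies
```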
If a general-purpose AI model meets the criteria for systemic risk, its provider becomes subject to additional obligations under Article 55 AI Act. The provider must, among other things, mitigate the systemic risks created by the model’s high-impact capabilities. By following such requirements, a provider is—at least in theory—addressing risks that could not be addressed by the downstream providers, that is, by those who use a general-purpose AI model to build a system. So, the rules on systemic risk are designed to promote trustworthy AI throughout the value chain of AI technologies.
1.2.3 Applying the AI Act
As a product safety law, the AI Act frames its obligations in terms of AI systems and models. Yet these objects are not the ones that must actually fulfil the obligations. This task falls primarily to two actors mentioned above: the provider of an AI system or model and its deployer.3
In short, a provider is responsible for placing the AI system or model on the EU market, while a deployer uses an AI system for one or more purposes. The compliance of those two actors with the AI Act’s requirements is overseen by market surveillance authorities. It is now time to briefly examine those definitions.
1.2.3.1 Providing AI systems and models
Under Article 3(3) AI Act, a provider is anybody—a natural person, a legal person, or any other entity—that develops an AI system or a general-purpose AI model. One is also a provider if they place an AI system or model on the EU market or put it into service under their own name. This is the case even if they did not develop the AI system in question. For example, if the RandomCorp corporation hires some developers to produce an AI system that will be sold under the RandomCorp brand, it becomes the provider of that system.
Additionally, Article 25 AI Act establishes that one becomes the provider of a high-risk AI system if they modify the system or its intended purpose. For example, suppose the online marketplace SillyMarket has a successful customer service chatbot it hired from the provider RandomCorp. Based on that success, somebody at SillyMarket has the idea of modifying the chatbot into a tool that mediates disputes between buyers and sellers. This new use is a high-risk application under Point 8(a) of Annex III, which was not foreseen by RandomCorp as a potential use case for their chatbot. In this case, the AI Act stipulates that SillyMarket, not RandomCorp, is the one subject to the obligations for high-risk AI systems.
It is also useful to distinguish between the provider of an AI model and the downstream providers that incorporate the AI model into their own AI systems. The model provider might be subject to the obligations concerning general-purpose AI models, including those on systemic risk if applicable. But, if RandomCorp uses a model supplied by ModelCorp to create a high-risk AI system, ModelCorp is not in principle obliged to ensure that the system complies with the AI Act’s rules on high-risk AI. RandomCorp, on the other hand, cannot avoid compliance with its obligations by blaming issues on ModelCorp’s model, even if it has little power to change that model. This is why Article 53 obliges ModelCorp to make some kinds of information about its model available to RandomCorp.
1.2.3.2 Deploying AI systems
As established in the AI Act’s Article 3(4), a deployer of an AI system is anybody—again, regardless of legal form—that uses an AI system under their own authority. For example, a sole trader that uses an AI system to optimize their operations would be the deployer of that system. So would a public sector organization that decides to use AI to automate internal processes.
Any deployer is subject to the AI literacy duty imposed by Article 4 AI Act: they must make sure that the people operating AI on their behalf know about the capacities, impacts, and limitations of an AI system. Deployers of high-risk AI systems are subject to additional duties, laid down in Articles 26 and 27 AI Act and examined in Part II.
As an exception to the classification above, Article 3(4) AI Act also stipulates that using an AI system in a personal non-professional activity does not count as deployment. This means that somebody who uses an AI tool to research information, or to tinker with their own photos, is not subject to the AI Act’s obligations for deployers. They remain nonetheless covered by the requirements of other applicable laws, including the GDPR.
1.2.4 Enforcing legal requirements
The AI Act’s requirements apply throughout the life cycle of AI systems and models. Providers and deployers must ensure compliance when an AI system (or model) is first placed on the market, put into service, or used. But they must also ensure ongoing conformity to the Act’s requirements, which might require adjustments to a system or model. It might even be the case that a previously lawful AI system or model must be withdrawn from the EU market because it can no longer be sold or used in a safe way. Complying with the AI Act, just like with the GDPR, is an ongoing effort.
Before an AI system or model can enter the EU market, it must be in conformity with the AI Act’s requirements. In most cases, conformity is assessed by the providers themselves, who draw up documentation to attest that the requirements are observed. There are some cases in which the AI Act requires third-party certification, such as for the biometric applications listed in Point 1 of Annex III and for AI systems that are products (or components of products) that are themselves subject to third-party certification under Article 43. This means, for instance, that the provider of a credit scoring system does not need to rely on an external certification body. It might, however, pursue external certification to build legitimacy for their product.
Once an AI system is on the market, providers and deployers are obliged to carry out post-market monitoring of the AI system. If they perceive that a system that is already on the market or in service can harm fundamental rights or other values protected by the AI Act, they must take appropriate measures. To ensure that is done, the AI Act’s market surveillance mechanism empowers a series of market surveillance authorities.
As required by Article 70 AI Act, each Member State must designate at least one market surveillance authority. A market surveillance authority is granted extensive powers to investigate AI systems that create risks to the values protected by the Act (see, e.g., Article 74). Based on those powers, it can require providers and deployers to adopt corrective measures or even to recall an AI system from the market. A market surveillance authority can also issue fines and other sanctions in case of non-compliance with applicable requirements.
The AI Act stipulates that market surveillance authorities must have the resources and infrastructure to carry out these tasks. It leaves Member States mostly free to determine which authorities will carry out the role. However, Article 74(3) specifies that the market surveillance authorities designated by other instruments of EU law are responsible for the AI systems within their scope. For example, financial regulators are responsible for the surveillance of AI systems used in regulated financial activities. Therefore, it is likely that each country will have more than one AI supervisory authority. In that case, each Member State must designate one of those authorities as the single contact point for the purposes of the Act.
In contrast with the rules for AI systems, the rules for general-purpose AI models are enforced in a centralized fashion. Enforcement powers are concentrated in the AI Office, which is a part of the European Commission. It is this authority that is responsible for defining the technical thresholds for systemic risk and for ensuring that providers comply with the Act’s requirements.
Given the overlap between data protection and the use of AI, some have suggested that data protection authorities are well-positioned to be involved in market surveillance. In fact, the AI Act designates the European Data Protection Supervisor as the surveillance authority for AI systems used by EU institutions, bodies, and agencies. It remains to be seen whether Member States will follow that lead. But, even if they do not, data protection authorities retain the power to enforce data protection law in relation to those AI systems and models.
1.3 Hypothetical case studies
By the end of this section, learners will be able to describe the general features of the hypothetical cases used as sources of examples throughout the book.
As we have seen in the previous sections, AI technologies can be used in many contexts and for many reasons. This variety is a challenge for AI regulation: it makes it more difficult for regulators to pin down risk levels and to create obligations that are relevant for all systems with a certain level of risk. For those of us designing books on AI, it also means that examples must cover many cases. Because both the GDPR and the AI Act apply to a substantial number of AI systems and models, there are many specificities that one must consider. Without engaging with the specifics of various contexts, an analysis might be too vague to be useful. However, one cannot cover every case within a single text, given the variety of sectors that would need to be covered.
To address this problem, this book relies on three hypothetical case studies. Those cases are representative of many AI use contexts in the public and private sectors. In each case, AI systems and models are used for a variety of purposes, relying on different approaches to development, and based on distinct types of personal data. Therefore, the use of these cases as examples throughout the book will help illustrate the broad range of factors that need to be considered when assessing whether AI is being developed and used lawfully within an organization.
1.3.1 Case study 1: Artificial intelligence at the University of Nowhere
The University of Nowhere (UNw) is a large public university, which has thousands of undergraduate and postgraduate students in all areas of knowledge. Among its main research units are a well-known Law School and a small computer science department that is among the best European centres on AI and technical security. Over the past decade, the university has more than doubled its number of students. However, cuts in public funding to education have meant that the university was unable to hire a comparable number of new professors and administrative staff. In this context, UNw is currently evaluating whether and how AI technologies might assist in its functions.
It is not hard to find examples of proposed uses of AI in education. Under Annex III AI Act, some of those applications are listed as high-risk use cases. If UNw decides, for example, to use an algorithm to decide which students are most likely to thrive in its law school, the ensuing system would be classified as high risk. The outputs of this system might affect a potential student’s likelihood of pursuing a law degree at UNw, or of continuing their studies once admitted. Hence, the AI Act would oblige the university to conform to various requirements before it can put such a system into service.
The high-risk classification, in this case, is based on the impact the outputs of such a system might have on various fundamental rights of the students. Their right to good administration, enshrined in Article 41 of the Charter of Fundamental Rights of the EU, might be affected if an automated system takes decisions about their future without giving them a chance to be heard. Biased decisions by an AI system might fall foul of the right to non-discrimination (Article 21 of the Charter) if they are based on protected grounds such as ethnic or social origin, age, or political opinions. Those rights must be considered in the interpretation of the AI Act’s provisions, as well as of other risk-based requirements, such as the data protection by design requirement from Article 25 GDPR.
Other applications of AI that might support UNw’s activities would not be classified as high-risk AI under the AI Act. For example, the university might decide to create a chatbot that can answer common student requests, such as the generation of diplomas and academic transcripts. In this case, Article 50(1) AI Act stipulates that the system must be designed in a way that allows individuals to know that they are interacting with an AI system. Article 4 also requires UNw to educate its staff regarding the chatbot’s capabilities. But, for the most part, the main source of legal requirements here would be data protection law.
The specific contents of the requirements imposed on UNw’s use of AI will be examined in the various sections under Part II and Part III. Before any such analysis, however, it is important to clarify two aspects of this case study: where UNw gets data from and how it procures its AI systems.
Regarding personal data, UNw has access to considerable amounts of data about its students and academic and administrative staff. This data includes information presented at enrolment, student grades and sanctions, and the salaries of all its staff. It also has the technical means to acquire information from external sources, such as scraping the social network profiles of people who make their affiliation with UNw public. Lastly, the university might rely on external data providers (“data brokers”) to acquire information that it cannot secure directly, such as information about potential hires or students. A data protection professional will therefore need to determine whether the data obtained from those various sources has been procured lawfully.
As for technology procurement, UNw has a strong computer science department and a large ICT team. This means it can afford to develop its own AI systems and models, as well as to fine-tune existing AI models for its own purposes. If it needs (or decides) to hire AI systems and models, it must follow a public procurement procedure to do so. Therefore, there is a tendency to do things in-house, though, as discussed in Chapter 14, this does not mean UNw is entirely independent from external providers.
1.3.2 Case study 2: AI in a small business
A few years ago, a couple decided to open their own smart-toy business. After much work and diligence, their startup DigiToys seems to finally be taking off. It now commercializes a small but growing range of interactive toys with educational purposes. By incorporating AI tools into dolls, puzzles, and other children’s toys, they aim to help children above the age of three to cultivate a healthier relationship with the digital world. In pursuing this goal, the company is particularly interested in ensuring the good reputation and the legal conformity of its products.
DigiToys currently has approximately thirty workers. Its team includes a handful of AI developers, who work on fine-tuning large language models for use within the toys. It also includes two teams of data scientists, who use AI tools for analysing data. As a result, the company is unlikely to develop general-purpose AI models of its own, let alone models with systemic risk. But it has the capabilities to use such models as components of its own systems, including its products.
In particular, its use of AI systems within toys might give rise to obligations under the GDPR and the AI Act. If the toys process personal data, they become subject to EU data protection law. Furthermore, the company’s concern with safety means that it has opted to follow a third-party certification procedure for its toys. It follows from this decision that DigiToys’s products are covered by Article 6(1) AI Act, and therefore subject to the rules on high-risk AI.
DigiToys’s data scientists also make use of AI systems. The product team uses AI to analyse large volumes of data about the toys, which stem from sources such as consumer satisfaction reports as well as telemetric data and error reports from each individual toy. These analyses are used to diagnose errors in toys, identify whether they are having a healthy effect on the behaviour of children, and produce ideas for new products. None of those applications is covered by the list of high-risk AI applications in Annex III AI Act. Still, the data used for those analyses is likely to contain significant amounts of personal data from interactions with children.
Data scientists in DigiToys’s marketing team rely on data from other sources. In fact, the company goes to great lengths to make sure marketing never has access to data collected from products. Marketing operations rely instead on information sourced from the company’s customer databases and from online advertisement platforms. That information is used to segment potential and actual customers into profitability groups, as well as to offer personalized product recommendations to them. Once again, those applications fall outside the high-risk classification in the AI Act, but they involve substantial volumes of personal data about the adults who buy (or might buy) toys for their children.
1.3.3 Case study 3: Data-driven medical technologies
InnovaHospital is a private, non-profit medical organization with branches all over the country. Over the past few decades, it has acquired a reputation for rigorous observance of patient confidentiality and data protection requirements, particularly for its serious response on the few occasions when data leaks and other breaches took place. It is also known for its openness to innovation, as it hires healthcare professionals who are always working on the development of new techniques.
Within InnovaHospital, executives have identified two priority areas for the application of AI technologies. First, they want to use AI technologies to streamline their human resources department, spotting talent and helping its development from early on. This application would be classified as high-risk under Point 4 of Annex III AI Act, as it has the potential to affect the careers of everybody hired by the hospital and, in doing so, affect their rights as employees. To create such a system, the hospital has access to its internal records, such as evaluation reports, as well as data it collects during the hiring process. Some decision-makers have also considered acquiring data from additional sources, such as the social networks of new hires.
Second, they want to evaluate whether and how they can use patient data to develop technologies that support clinical practice. Examples of the ideas that have been raised include using data from patient exams to train AI systems that can serve as medical devices or to personalize the treatment given to each patient.
One obstacle that InnovaHospital faces in its use of AI is that, despite the large amount of data available to it, it does not have the ICT capabilities needed to develop cutting-edge AI technologies on its own. As such, it will need to hire new professionals, buy ready-made AI solutions, or rely on AI-as-a-service solutions purchased from a provider. Each of these options has its own drawbacks, which will come up at various points in this book.
1.4 Conclusion
AI technologies can take many forms, and they can play many roles within organizations. In many of these roles, the creation and use of AI systems and models is highly dependent on personal data. As such, data protection law is an important piece of AI governance, and the AI Act does not make the GDPR redundant. If anything, data protection law becomes more relevant, both because the AI Act refers to it directly and because AI regulation creates better conditions for applying data protection law to AI technologies. Still, it is undeniable that the result is a complex legal framework, even for seasoned data protection professionals.
This chapter has provided an overview of the AI Act’s regulatory framework. Such an overview is necessarily abstract, given that the Act covers a vast range of applications which cannot all be treated in the same way. Just as with the GDPR, the legal requirements remain the same from one case to another, but the concrete risks that need to be tackled in each context can be vastly different. By understanding the overall logic behind the Act, you will now be better positioned to understand how its requirements interact with the GDPR. This knowledge will provide a starting point for the rest of the book. Therefore, take your time to revisit this chapter and do the exercises before moving forward. Doing so will pay off in the longer run.
Exercises
Exercise 1. Which of the following scenarios best illustrates the dual nature of AI as both an opportunity and a risk?
- a. An AI system used for medical diagnoses.
- b. A facial recognition system, built on a large database of photos of individuals, which is used to control access to a building.
- c. An automated translation tool.
- d. A chatbot providing immediate responses to customer queries.
- e. A personalized music recommendation system used in streaming platforms.
Exercise 2. Which factor contributes most to the risks to fundamental rights associated with AI technologies?
- a. The inability of AI systems to learn from new data.
- b. The high computational cost of training AI models.
- c. The limited scalability of AI applications.
- d. The collection and processing of vast amounts of personal data.
- e. The replacement of traditional workflows with automated systems.
Exercise 3. What distinguishes the GDPR from the AI Act in terms of their scope?
- a. The GDPR regulates the safety of AI technologies, while the AI Act addresses personal data protection.
- b. The GDPR applies only to private organizations, while the AI Act applies only to public institutions.
- c. The GDPR focuses on the processing of personal data, while the AI Act regulates AI systems and models.
- d. The GDPR governs the use of general-purpose AI, while the AI Act regulates specialized AI applications.
- e. The GDPR and the AI Act have identical scopes and objectives.
Exercise 4. Which of the following is true about the AI Act’s relationship with the GDPR?
- a. The AI Act replaces the GDPR in AI governance.
- b. The AI Act primarily focuses on organizational measures for data protection.
- c. The AI Act creates obligations for AI systems regardless of their risk level.
- d. The AI Act and GDPR apply to completely separate categories of activities.
- e. The AI Act complements the GDPR by addressing risks specific to AI technologies.
Exercise 5. Which of the following is an obligation specifically for high-risk AI systems under the AI Act?
- a. Implementing technical features that enable human oversight.
- b. Disclosing interactions with AI to end-users.
- c. Ensuring AI literacy for all system users.
- d. Reporting AI development details to national authorities.
- e. Guaranteeing that AI systems perform better than human counterparts.
1.4.1 Prompt for reflection
Discuss how the AI Act’s classification of risks (prohibited, high-risk, and other AI systems) helps balance innovation and fundamental rights. Consider whether this approach is sufficient to address emerging AI challenges and whether it complements the GDPR effectively.
1.4.2 Answer sheet
Exercise 1. Of all the alternatives, alternative B is the one where risks and opportunities are most evident: mass data collection may affect the rights to data protection and privacy, while the application promotes security. The other alternatives might have undesirable side-effects, but they are less immediate, as they depend on the social responses to the use of those technologies.
Exercise 2. Alternative D is correct: the collection and processing of vast amounts of personal data directly threatens the right to data protection, while the resulting impact of AI might also affect other rights, such as freedom of expression or the right to non-discrimination. Alternatives A and C state the inverse of what has been seen so far: AI systems can learn from new data, and they have been able to grow in scale. Alternatives B and E both make true statements, but they refer to other types of risk.
Exercise 3. Alternative C is correct. Both regulations are horizontal, and as such they apply to a broad range of cases. They both cover the impact of some kinds of data processing on the rights and freedoms of individuals. But they do so in different ways.
Exercise 4. Alternative E is correct.
Exercise 5. Alternative A is correct. Of the other alternatives, only E refers to an obligation that is not created by the AI Act. However, alternative D refers to a duty directed towards the providers of general-purpose AI models with systemic risk, while alternatives B and C are not limited to high-risk systems.
References
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L.
European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (COM(2021) 206 final, 21 April 2021).
Marco Almada and Nicolas Petit, ‘The EU AI Act: Between the Rock of Product Safety and the Hard Place of Fundamental Rights’ (2025) 62 Common Market Law Review.
David Fernández-Llorca and others, ‘An Interdisciplinary Account of the Terminological Choices by EU Policymakers Ahead of the Final Agreement on the AI Act: AI System, General Purpose AI System, Foundation Model, and Generative AI’ [2024] Artificial Intelligence and Law.
Gabriele Mazzini and Salvatore Scalzo, ‘The Proposal for the Artificial Intelligence Act: Considerations around Some Key Concepts’ in Carmelita Camardi (ed), La via europea per l’Intelligenza artificiale (Cedam 2022).
Claudio Novelli and others, ‘AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act’ (2024) 3 Digital Society 13.
Kasia Söderlund and Stefan Larsson, ‘Enforcement Design Patterns in EU Law: An Analysis of the AI Act’ (2024) 3 Digital Society 41.
Luke Stark and Jevan Hutson, ‘Physiognomic Artificial Intelligence’ (2022) 32 Fordham Intellectual Property, Media and Entertainment Law Journal 922.
Sandra Wachter, ‘Limitations and Loopholes in the E.U. AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond’ (2024) 26 Yale Journal of Law & Technology 671.
On the structural differences between the AI Act and the GDPR, see Almada and Petit (2025).↩︎
On the distinction between AI systems and models, see Chapter 2 of this book.↩︎
Articles 22 to 25 AI Act also stipulate obligations for other actors, such as importers, but the bulk of the Act concentrates on providers and deployers.↩︎