Part II: The Life Cycle of an AI System

Tip: Learning outcomes

By the end of this part, learners will be able to:

  • differentiate the various stages of an AI system’s life cycle and the technical and organizational decisions that take place at each stage.
  • assess data protection risks that can emerge because of those technical and organizational decisions.
  • sketch an initial set of compliance measures for the legal requirements that apply at each stage of the life cycle.
  • illustrate how issues that are not addressed at earlier stages of the system’s life cycle can propagate to later stages.
  • propose technical and organizational practices that can mitigate the risks associated with each life cycle stage.

One of the main challenges of AI regulation is that it deals with a moving target. Technologies change all the time, sometimes radically. Deploying AI in your organization in 2025 requires different kinds of technical work than it required in 2019, which in turn was very different from what AI developers did in 2010. At the same time, the social contexts in which those technologies are used can change considerably, too. The widespread enthusiasm for large language models seen in 2022 and 2023, for instance, has since been somewhat tempered as society became increasingly aware of the risks associated with those technologies. Hence, the measures that govern AI technologies cannot remain static but must adjust to those new realities.

Both the GDPR and the AI Act feature adaptation mechanisms. Under Article 25 GDPR, data controllers are required to address the risks created by processing “both at the time of the determination of the means for processing and at the time of the processing itself”. In short, this obligation reinforces that data protection is not a “fire and forget” duty. While measures taken in the initial design of a system can be crucial for ensuring adequate protection, they are not enough: data protection must be ensured in each individual processing operation, too.

In the AI Act, this moving target is captured by the notion of the “life cycle” of AI systems and models. Article 9, for instance, requires the providers of high-risk AI systems to manage risks throughout the entire life cycle of an AI system, in particular by ensuring that the system maintains adequate levels of accuracy, cybersecurity, and robustness. Likewise, Article 40 stipulates that harmonized technical standards must deal with an AI system’s energy consumption throughout its life cycle. Yet, the Act does not contain a formal definition of the “life cycle” itself.

Such a definition is left, instead, to technical sources. The idea of a software life cycle is well-established among software engineers (see, for example, Kneuper (2018)), who use the term as a shorthand for the various technical processes involved in constructing and maintaining a computer system until the end of its operation. To better visualize those processes, software engineers often rely on life cycle models, which divide those processes into a succession of stages. This approach will guide the present book.

More specifically, the book takes as its starting point the life cycle model proposed by the international standard ISO/IEC 5338:2023. Future updates of this standard, or alternative standards such as those issued by European Standardization Organizations, might lead to different arrangements of the technical processes related to AI. But viewing those processes in an organized way will be useful for anticipating issues and incorporating data protection responses into what an organization already does.

Part II of the book begins with 5  The Inception of AI Technologies, which discusses the inception stage of the life cycle of an AI system, that is, the strategic decisions that shape whether and how an organization will use AI-based software. 6  Designing and Developing AI Technologies then examines AI-specific concerns that emerge with the use of personal data in the design and development of an AI system, followed by 7  Verification and Validation of AI Systems and Models, which addresses how to evaluate AI systems before and after deployment. Next, 8  The Deployment of an AI System considers what organizations must do to lawfully deploy AI systems for specific tasks. Finally, 9  Operation and Monitoring of an AI System considers organizations’ continuous obligation to monitor whether and how an AI system is functioning.