Impact of the EU AI Act on the creative industries

August 2, 2024

Following its initial proposal in April 2021 and approval by the Council of the EU in May 2024, the European Union’s Artificial Intelligence Act[1] was published in the Official Journal on 12 July 2024. It came into force on 1 August 2024.

The Act will set the tone for providers and users of AI systems in the EU, and probably globally. Its stated purpose is “to promote the uptake of human-centric and trustworthy artificial intelligence”[2] and to make sure that “AI systems in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly”.[3]

It is wide-ranging in scope and will be a catalyst for discussion and analysis for years to come. In this article, we take a look at the parts of the Act most likely to be relevant to the creative industries and when the various requirements are set to come into force.

Who does the Act apply to?

The Act applies to businesses based within the EU, as well as those outside the EU that have a customer base within it – a crucial point for UK businesses.

Article 2 sets out the scope of the Act, which applies, among others, to “providers placing on the market or putting into service AI systems irrespective of whether they are in the EU or are in a third country”, as well as “providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union” and “affected persons that are located in the Union”.

Article 3 defines a “provider” as an entity that “develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge” and defines a “deployer” as an entity “using an AI system under its authority”, in a professional context.

The breadth of these definitions, alongside Article 2, means that most global organisations that use AI in their processes will be captured in some way by the AI Act, regardless of whether they are established in the EU, whether the technology is for internal use only, or whether it was procured from a third party.

The comfort for many organisations is that the Act’s most stringent obligations fall on those that provide AI systems classified by the Act as “high-risk” (see further below). All organisations will need to be aware of the Act, however, to ensure that they are complying with the more general obligations surrounding transparency.

How does the Act work?

The Act classifies AI systems into four risk categories, with an additional regime for “general-purpose AI” models, and regulates them by category.

1. Unacceptable risk

These are AI systems that are manipulative and exploitative, and which may cause significant harm. They are prohibited under the Act. They include AI that might:

  • deploy “subliminal”, “purposively manipulative” or “deceptive” techniques that cause a change to a person's decision-making and that are reasonably likely to cause significant harm (e.g. by adding deceptive elements to virtual reality);[4]
  • conduct social scoring (i.e. rating an individual according to their characteristics or social behaviour, for the purposes of job applications or similar);
  • perform untargeted scraping of facial images from the internet or CCTV to create databases;
  • biometrically categorise persons based on sensitive characteristics (such as gender, ethnicity, or sexual orientation); or
  • exploit vulnerabilities to alter behaviour in order to cause significant harm.[5]

Any breach of those provisions may result in a fine of up to €35 million or 7% of total worldwide annual turnover for the preceding financial year (whichever is higher).[6]
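By way of illustration only, the “whichever is higher” wording means the ceiling scales with the size of the organisation. The short sketch below (in Python, using hypothetical turnover figures) simply takes the higher of the two amounts:

```python
# Illustrative only: the maximum fine for breaching the prohibited-practice rules
# is the higher of EUR 35 million or 7% of total worldwide annual turnover.
def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

print(maximum_fine_eur(200_000_000))    # EUR 35,000,000 (the fixed amount is higher)
print(maximum_fine_eur(1_000_000_000))  # EUR 70,000,000 (7% of turnover is higher)
```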

2. High risk

These are AI systems that are permitted, but regulated the most. They are AI applications that might negatively affect the health and safety of people, their fundamental rights or the environment.

Annex I and Annex III (which the EU Commission may update[7]) set out the various circumstances in which AI systems will be deemed high-risk. Annex I provides that, if a product is subject to certain EU product safety regulations, it will be deemed high-risk. Annex III sets out additional areas in which the use of an AI system may be deemed high-risk, including:

  • employment (e.g. for recruitment, decision-making around promotions, or terminations);
  • biometrics (e.g. to verify a person’s identity, or for emotion recognition); or
  • educational or vocational training (e.g. to determine admission to an institution, to evaluate learning outcomes, or to assess the level of training required for an individual).

There is a carve-out to the listed areas in Annex III for systems that “do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons”. This is self-assessed and must be documented and registered with a database to be created by the EU Commission.[8]

3. Limited risk

The Act also deals with some AI systems that are neither prohibited nor high-risk. These are referred to in the Act as “certain AI systems” (but colloquially have been referred to as “limited-risk” in commentary), and the requirements apply depending on the AI system’s function, as set out in Article 50.

4. Minimal risk

These are systems, like email spam filters, that do not constitute high-risk or prohibited systems and are not captured by Article 50. As such, they attract no obligations under the Act, except for the broad obligation around AI literacy, which applies to all AI systems.[9]

General-purpose AI

Beyond the four categories listed above, there is an additional grouping for general-purpose AI models (GPAI models) that applies regardless of whether the system is high-risk, limited-risk or minimal-risk, and which imposes additional stringent requirements. GPAI models are multi-purpose AI models that can usually be integrated into a variety of systems, use a high degree of computational power and are trained with a “large amount of data using self-supervision at scale”.[10] It is not entirely clear which specific AI systems will be affected, and the EU Commission may additionally determine this on a case-by-case basis based on factors listed in Annex XIII. GPAI models are likely to cover large language models that might be used to generate text, audio, images or video, and will probably include popular systems like ChatGPT or Google Gemini.

When thinking about use cases in the creative industries, the most relevant AI systems are generative AI systems that may be used to augment the creative process, to generate prototypes or to assist existing work streams. In this context, many such AI systems seem likely, depending on their use cases and implementation, to constitute limited-risk AI systems, and they may also be GPAI models.

How are high-risk systems regulated?

Articles 8 to 27 of the Act provide a lengthy list of requirements related to high-risk AI systems. Those include establishing a risk-management system, conducting data governance, achieving appropriate levels of accuracy, robustness and cybersecurity and assigning appropriately trained human oversight.

Recital 67 sets out what datasets ought to look like for high-risk systems, including that they ought to be high-quality, sufficiently representative and, as far as possible, free from errors, and complete in view of the system’s intended purpose. They should also reflect appropriate statistical properties (e.g. by reflecting different types of persons) in order to mitigate potential bias. To be compliant with EU data protection legislation, they should also be transparent about the original purpose of the data collection.[11]

How are limited-risk systems regulated?

Article 4 of the Act contains a general obligation for all providers and deployers of AI systems to ensure that their staff are suitably trained to a sufficient level of “AI literacy”. 

Additionally, the transparency requirements under Article 50 apply based on the specific functionality of the system. There are some carve-outs, including relating to law enforcement activities. As a general overview:

  • AI functionality: intended to interact directly with natural persons (e.g. chatbots).
    Disclosure requirement: the natural persons are informed that they are interacting with an AI system (unless this is obvious).
    Exception relevant to creative industries: N/A.

  • AI functionality: generating synthetic audio, image, video or text content.
    Disclosure requirement: outputs are marked in a machine-readable format and are detectable as being artificially generated or manipulated.
    Exception relevant to creative industries: the disclosure requirement does not apply insofar as the AI system performs an assistive function for standard editing or does not substantially alter the input data provided by the deployer or the semantics of that data.

  • AI functionality: emotion recognition systems or biometric categorisation systems.
    Disclosure requirement: the natural persons are informed of the operation of the system, and the usage complies with data protection legislation.
    Exception relevant to creative industries: N/A.

  • AI functionality: generating or manipulating image, audio or video content constituting a deep fake.
    Disclosure requirement: disclose that the content has been artificially generated or manipulated.
    Exception relevant to creative industries: limited to “disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work” for content which forms part of a work or programme which is “evidently artistic, creative, satirical, fictional or analogous”.

The disclosure requirements listed above must be presented “in a clear and distinguishable manner at the latest at the time of the first interaction or exposure”. At the moment it is not clear exactly what wording the EU Commission would encourage users of AI systems to adopt. The Act notes that the EU AI Office will “encourage and facilitate” industry Codes of Conduct, which the Commission has the option to approve officially.[12]

Any failure to comply with certain provisions relating to high-risk systems or with the transparency obligations under Article 50 may result in penalties of up to €15 million or 3% of worldwide turnover (whichever is higher).[13]

How are GPAI models regulated?

In addition to any obligations imposed on them if they are deemed high-risk or fall into one of the limited-risk use cases, GPAI providers must notify the Commission within two weeks of a model under development meeting the criteria to be designated a GPAI model with systemic risk.[14] Providers of GPAI models are also responsible for:

  • maintaining technical documentation for the model relating to training and testing, as more specifically detailed in Annex XI;
  • maintaining documentation to assist any downstream provider that intends to integrate the GPAI model into its own AI systems, as more specifically detailed in Annex XII – this notably includes “information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies”;
  • establishing a public policy to respect EU copyright law;
  • publishing a sufficiently detailed summary about the content used to train the model (according to a template to be provided by the EU AI Office); and
  • where based in a third country, appointing a representative established within the EU.[15]

Free and open-source GPAI model providers are exempt from certain duties under the Act, including those relating to providing the technical documentation described above.[16]

In particular, providers of GPAI models based in the UK should be aware of the requirement to appoint an EU representative: an EU-based entity mandated to act as an intermediary between the provider and the EU AI Office.

Under Article 52(2), a provider may argue only by way of exception that a GPAI model meeting the relevant criteria does not present a systemic risk, so in many cases the model will be deemed to present “systemic risks”. Whether a GPAI model presents a systemic risk will ultimately be determined by the EU Commission. If the model is deemed to present systemic risks, then providers must also:

  • perform model evaluations, including adversarial testing, to identify and mitigate systemic risks;
  • assess and mitigate possible systemic risks at an EU-wide level;
  • track “serious incidents”, their corrective measures and report those to the AI Office or national authorities; and
  • ensure that there is an adequate level of cybersecurity protection.[17]

Any failure to comply will result in penalties of up to €15 million or 3% of worldwide turnover (whichever is higher).[18]

Which GPAI transparency requirements are most relevant to the creative industries?

Transparency of datasets

Recital 107 sets out that, to increase transparency for data used to train GPAI models, providers must draw up and make publicly available a “sufficiently detailed summary” of the content used. It should be “generally comprehensive” in scope, to enable parties to enforce their rights under EU law.[19] That will probably be cited by the Commission when determining compliance with Article 53. Creatives and artists will welcome this provision, which should assist them when trying to assess whether an AI system has been trained on their copyright works.

Output

Under Article 50, all generated content produced by such models must be marked in a machine-readable format and be detectable as artificially generated or manipulated.
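The Act does not prescribe a particular marking technology, and in practice providers are likely to rely on industry provenance standards and watermarking. Purely by way of illustration, and assuming a provider chose to embed a simple provenance flag in image metadata (the key names below are hypothetical), a minimal sketch using the Pillow library in Python might look like this:

```python
# Illustrative sketch only: embedding a simple "AI-generated" flag in PNG metadata.
# The metadata keys are hypothetical and not a format mandated by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512), "white")  # stand-in for a model's output

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-image-model")  # hypothetical model name

image.save("output.png", pnginfo=metadata)

# A downstream tool could then detect the marking:
reloaded = Image.open("output.png")
print(reloaded.text.get("ai_generated"))  # -> "true"
```

Plain metadata of this kind is easily stripped, so more robust approaches (such as watermarking or cryptographically signed provenance records) are likely to be needed to satisfy the “detectable” requirement in practice.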

Any user who is generating or manipulating image, audio or video content that constitutes a “deep fake” (defined by the Act as an “AI generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful”[20]) must clearly disclose that the content has been artificially generated or manipulated.

There are key exceptions, as set out in the overview above – sometimes referred to as the “right to freedom of the arts”.[21]

The AI Office will facilitate the drawing up of Codes of Practice to co-ordinate the application of the GPAI model regime, which will cover the obligations for providers and for those GPAI models with systemic risks.[22]

How does the Act interact with current copyright legislation?

As noted above, GPAI providers must adhere to EU copyright legislation and adopt a policy that enables this.

Article 53(1)(c) also ties into the Text and Data Mining (TDM) exception under EU law.[23] Under EU law, it is generally permitted to reproduce and extract lawfully accessible works and other material for TDM purposes (including to train AI systems).

Importantly, however, the rights-holder can opt out of the exception. It is anticipated that, if rights-holders do opt out (which is typically done via a website’s terms and conditions or by machine-readable means such as metadata or robots.txt directives), providers of AI systems must have a policy in place explaining how they will respect that opt-out, irrespective of whether the training takes place within the EU or elsewhere.[24]
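Neither the Act nor the TDM exception prescribes a technical mechanism for honouring an opt-out. One machine-readable signal commonly used in practice is a website’s robots.txt file. The sketch below (in Python, using the standard library, with a hypothetical crawler name) shows how a training-data crawler might check for such a reservation before collecting content; a real opt-out policy would also need to take account of terms and conditions, metadata and other reservation mechanisms:

```python
# Illustrative sketch only: checking robots.txt before collecting content for
# text and data mining. "ExampleAIBot" is a hypothetical crawler name.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

target_url = "https://example.com/gallery/artwork-123"
if parser.can_fetch("ExampleAIBot", target_url):
    print("No robots.txt reservation found for this crawler and URL.")
else:
    print("Reservation detected: exclude this URL from TDM/training collection.")
```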

What is the timeline for implementation?

The implementation is staggered:

  • 2 February 2025. Bans on systems deemed to pose unacceptable risks come into force.
  • 2 August 2025. The regime for GPAI models comes into force, as do potential penalties (except penalties for GPAI models).
  • 2 August 2026. The Act becomes generally applicable. The regulations for high-risk systems under Annex III come into force, as do the transparency obligations for limited-risk systems and the penalties relating to GPAI models.
  • 2 August 2027. The regulations for high-risk systems under Annex I relating to EU product safety legislation come into force.

Comment

The Act is a landmark piece of legislation, and its roll-out and impact will be closely watched globally by both lawmakers taking their own legislative approaches, and the providers, deployers and users of AI systems alike.

Some tech start-ups are concerned that emerging industry and innovation will be stifled by red tape, while others have expressed concerns that a rush to get the legislation over the line has led to a lack of clarity over key details. For example, the definition of “provider” under the Act might overlap in some cases with that of “deployer”, making it unclear which entity in the supply chain is responsible for certain transparency obligations. There is also little detail in the Act on the management of copyright infringement within the datasets used, or on how the content produced by certain AI systems might be protected and monetised – both issues that have been particularly contentious in the AI space in recent years, and that will need commercial industry engagement.[25]

Meta has already chosen to withhold the advanced multimodal version of its latest AI model, Llama 3, from the EU, citing the “unpredictable nature of the EU regulatory environment”. Yet there is a line of thought that this may be less to do with the EU AI Act and more to do with an inability to train Llama 3 on customers’ data while remaining compliant with the EU GDPR.[26]

Businesses will take comfort from the implementation timeline: now that the text of the Act is set, organisations have six months to start categorising the AI systems they use and phasing out banned systems. Businesses should bear in mind that high-risk systems could arise in more mundane processes, particularly in HR functions like sorting through applications or assisting with performance reviews. Additionally, industry bodies should now be working with the EU Commission to provide Codes of Conduct for specific use cases on an industry-by-industry basis.

Ultimately, as with data protection issues in the 2010s, the EU has led the way on the global stage for legislating in a complex area.  It won’t be perfect – complexities and grey areas are certain to come out in the wash. Yet, by being a “first mover” in this space, it is more likely that other jurisdictions will use this Act as a roadmap for their own efforts, and businesses globally would be well advised to keep abreast of its requirements, as it may apply directly to them and set the stage for future developments. This is certainly true for the UK, given the current pressure on the new Labour government to take a nuanced approach to AI-dedicated legislation that is pro-innovation but also protects rights-holders in the wake of accelerated change.  

Article written for Entertainment Law Review.

Andrew Wilson-Bushell
Associate

Sophie Neal
Associate
