
The EU AI Act: regulation vs innovation

September 2025
by Beatrice Bacci

The European Union is eager to be at the forefront of technological innovation, always striving to keep up with the United States’ technological capabilities. An interesting trend in the current political landscape is that while the US develops technology that pushes the boundaries of what is possible in human society, the EU works to regulate that technology, making it safer for users and creating a framework the rest of the world can use to govern how data and technology are used.

A great example of this trend is of course GDPR, which came into force in 2018 and set the direction for other jurisdictions seeking to regulate how users’ data is collected and stored. The European Commission is now hoping to follow up that success with another groundbreaking piece of regulation: the AI Act.

The AI Act entered into force in August 2024, but its main obligations only began to apply a year later, despite companies lobbying for a postponement. Over the next two years, companies will have to achieve compliance or face fines, with different timelines depending on the type of system and the date it was first deployed. The Act applies to AI systems and general-purpose AI (GPAI) models placed on the EU market, wherever the provider is established, and to companies located outside the EU where the output of the AI system is used in the EU. It does not apply to systems used for military, defence, or national security purposes, nor to “models, including their output, which are specifically developed and put into service for the sole purpose of scientific research and development.”

Classifying AI risk

The Act classifies AI systems based on the level of risk inherent in their deployment. Some systems are considered an ‘unacceptable risk’: manipulating human behaviour, social scoring, biometric categorisation, and real-time biometric identification such as facial recognition in public spaces will all be banned, although some exceptions may be allowed for law enforcement purposes. ‘High risk’ systems, such as those used in the production of cars and medical devices, or in employment, law enforcement, and border control management, will have to be registered and assessed.

The third tier contains systems with a ‘transparency risk’: “AI systems intended to interact with natural persons or to generate content [which] may pose specific risks of impersonation or deception,” the prime example being deepfakes. These systems will be subject to disclosure requirements: deployers will have to make clear that content has been artificially generated or manipulated (for example, with a watermark). Finally, systems that pose a ‘minimal risk’, such as spam filters, will be exempt from any requirements under the Act.

GPAI systems are classified separately; their requirements include respecting EU copyright law, publishing a detailed summary of the content used to train their models, and appointing an EU representative if the provider is located outside the Union. Free and open-source models, however, will be exempt from some of these obligations.

Regulation vs innovation

The narrative around the AI Act has focused on the balance between regulation and innovation. The European Parliament states that “The law aims to support AI innovation and start-ups in Europe, allowing companies to develop and test general-purpose AI models before public release.” While various political and civil society actors have stressed the importance of regulating new AI systems, companies have expressed concern that such regulations might not achieve the desired objectives, and instead risk stifling innovation within the European Union, deepening dependence on American systems.

The bigger picture, of course, is that AI needs regulation, and needs it fast; this is likely why the Commission pressed ahead with enforcing the Act, as aspects of AI systems are already causing harm to users. In 2021, Kate Crawford published Atlas of AI, a comprehensive account of the socio-political and environmental challenges posed by AI, from labour conditions in mines, to the energy consumed by data centres, to racially biased recidivism-prediction algorithms. The nonprofits SaferAI and the Future of Life Institute have recently published a study highlighting the risks associated with each of today’s leading AI companies.

Data and environmental factors

A considerable component of the Act focuses on the environmental aspects of AI development: the water consumed, the carbon emitted, the rare earth minerals extracted, and the land on which data centres and processing plants are built. Regulation of such a fundamental change in the fabric of our economy will play a key part in ensuring AI systems are developed sustainably.

How can this possibly work? Simply put, the European Union has things the Big Tech companies want: a market, and data. Developing an AI product that cannot be deployed in the EU would be a huge missed opportunity for companies that could otherwise tap into that market. The other bargaining chip is the vast amount of data stored in the EU, thanks in no small part to GDPR; such data is essential for building training sets for AI systems.

We will see how compliance with the Act develops over the buffer period before all systems must fall in line. Perhaps the EU’s project of leading tech regulation will succeed without stifling innovation – we will find out over the next two years.
