The EU’s “Brave New World” of Regulated Trustworthy AI

12 May 2021

With the inexorable rise of Artificial Intelligence (AI) systems, used in everything from planning national defence strategies to driving recruitment campaigns, the proposed introduction of a European Regulation on AI may just have come in the “nick of time”.

Our Comment

Given the seemingly ubiquitous trend of integrating AI technology into the strategising, decision-making and operational functions of organisations around the world, it was perhaps inevitable that focused regulatory oversight of some kind would be introduced – a need to go beyond reliance on general principles for cooperation and the piecemeal existing sector legislation broadly embraced by governments and regulators to date. The European Commission may now have taken that a step further with its new proposed regulatory framework specifically intended to regulate AI (the EU Draft Regulation on Artificial Intelligence, issued on 21 April 2021). If even Elon Musk, the well-known digital tech innovator, has gone on record warning of the potential dangers of an unregulated AI industry, can policymakers and regulators afford to overlook the public policy issues at stake?

Of course, some will argue that the AI technology market is still nascent and therefore needs a light or deregulatory touch to encourage innovation. Arguably, though, the draft EU Regulation meets these concerns by focusing its attention on the risk profile of relevant AI use cases and systems. This risk-based approach to regulation – designed to enhance transparency, safety and trust in the use of AI – should be welcomed as a positive foundation on which to continue to innovate. That said, the Regulation is bound to prompt further debate about its feasibility as it moves to the European Parliament for the next stage of the approvals process.

The Commission’s Proposed Risk-Based Approach

The proposed EU Regulation is a clear move away from the principles-based approach, adopted by many countries to date to address potential challenges in this fast-moving area, towards a standalone, risk-based AI regulatory framework. That said, there is recognition of the overlap, and the need for consistency, with existing sector legislation and digital economy policies – for example, in the context of the use and protection of data (the General Data Protection Regulation (GDPR)) and adjacent sector legislation (such as automotive rules in the context of connected cars).

The EU Regulation seeks to cover four tiers of potential risk in the development and use of AI, which may not be mutually exclusive, typically represented in the form of a pyramid:

  • Unacceptable risk – prohibited practices
  • High risk – permitted subject to extensive compliance obligations
  • Limited risk – permitted subject to transparency obligations
  • Minimal or no risk – permitted, with voluntary codes of conduct encouraged.

The Commission expects the majority of AI systems to fall within the bottom tier of the pyramid, and thus outside the main regulatory thrust of the Regulation. It nevertheless proposes a level of industry self-regulation, in the form of voluntary codes of conduct, to maintain appropriate standards of trust and transparency even in the production and development of these minimal- or no-risk AI systems.

What AI systems/technology is regulated?

Both machine learning and more traditional expert systems fall within the scope of the EU Regulation. The Commission has adopted a deliberately tech-neutral definition of an AI system, aligned with the OECD’s approach, in order to future-proof the application of the Regulation:

“software that is developed with one or more of the techniques and approaches listed… and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

As with all broad and general definitions used in legislation, there is bound to be debate about what’s in/what’s out, particularly in terms of the functionality and application of legacy systems.

Regulatory “sandboxes” are provided for, reflecting the evolutionary nature of AI technologies and the Commission’s objective to create a legal framework that is innovation-friendly, future-proof and resilient to disruption. These AI regulatory sandboxes would establish a controlled environment to test innovative technologies for a limited time on the basis of a testing plan agreed with competent national authorities.

Who is regulated?

In a complex multi-stakeholder AI world, the onus is placed on AI “providers” (developers of AI) and AI “users” (those who procure and use AI systems in the course of a professional activity) to comply with the primary obligations of the Regulation. Other intermediaries in the supply chain, such as importers and distributors of AI systems, will have more limited obligations. Measures to reduce the regulatory burden on SMEs and start-ups are included.

It is also worth noting that, as with the GDPR, the EU Regulation will have extra-territorial effect, impacting AI providers placing systems on the EU market wherever they are established, as well as providers and users (and other relevant intermediaries) located outside the EU where the output produced by their AI systems is used in the EU.

What are the sanctions for non-compliance?

Member States will appoint competent authorities to supervise compliance, with the power to issue fines and other penalties as determined nationally. The draft Regulation nevertheless sets harmonised maximum fines: up to €30 million or 6% of annual global turnover (whichever is higher) for engaging in prohibited practices; up to €20 million or 4% for other non-compliance; and up to €10 million or 2% for supplying incorrect or misleading information to authorities. Given the potential for such significant fines, AI providers and AI users will need to consider carefully the implications of the Regulation for their businesses.
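As an illustration, the tiered maxima in Article 71 of the draft (each the higher of a fixed cap and a percentage of worldwide annual turnover) amount to a simple calculation. The sketch below is our own; the function name and tier labels are illustrative, not terms from the Regulation:

```python
def max_admin_fine(annual_global_turnover_eur: float, tier: str) -> float:
    """Illustrative maximum administrative fine under Article 71 of the
    draft Regulation (COM(2021) 206 final). For each tier, the applicable
    maximum is the HIGHER of a fixed cap and a share of worldwide annual
    turnover. Tier labels are our own shorthand."""
    tiers = {
        "prohibited_practices": (30_000_000, 0.06),   # e.g. Art. 5 breaches
        "other_noncompliance": (20_000_000, 0.04),    # other requirements
        "incorrect_information": (10_000_000, 0.02),  # misleading info to authorities
    }
    cap, pct = tiers[tier]
    return max(cap, pct * annual_global_turnover_eur)

# For a firm with €1bn global turnover, a prohibited-practice breach could
# attract a fine of up to €60m (6% of turnover exceeds the €30m cap).
print(max_admin_fine(1_000_000_000, "prohibited_practices"))
```

For smaller firms the fixed caps bite instead: at €100m turnover, 6% is only €6m, so the €30m cap applies.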

What is prohibited?

There are clear “red lines” drawn in the EU Regulation in relation to unacceptable-risk areas, where AI systems involving any of the following practices may not be placed on the market, put into service or used in the EU:

  • Manipulative subliminal practices
  • Discriminatory or exploitative practices
  • Social scoring practices (by or on behalf of public authorities).

In addition, the EU Regulation prohibits the use of “real-time” Remote Biometric Identification (RBI) systems – such as live facial recognition via public CCTV – in publicly accessible spaces for law enforcement purposes, other than in narrowly defined circumstances, and subject in any event to existing rules under the GDPR and on fundamental human rights and freedoms. This is a controversial area, and the Commission has drawn a tight perimeter around it whilst acknowledging that existing legislation may still allow some RBI systems to be used.

High-risk AI systems

This is the central focus of the Regulation, which identifies particular types of AI system it deems “high-risk” and thus subject to additional obligations. The high-risk category includes both embedded AI components and standalone AI systems (such as credit scoring, EdTech, MedTech and employment management systems). The Regulation includes a lengthy list of standalone high-risk AI systems which the Commission believes pose significant health and safety concerns or severely impact fundamental human rights and freedoms; the list is intended to be updated from time to time to reflect technological change. Whilst this is a good way of ensuring that the Regulation is not outpaced by innovation while providing some legal certainty, it could impact the investment climate for new AI products and services if any such updating is arbitrary or disproportionate.

Key requirements are shown below – largely imposed on the AI providers:

Harmonised CE Testing & Marking

New CE testing and marking assessments will be introduced, in line with existing safety standards for distribution of products and software across EU markets.

Risk management processes

These will need to be established based on the intended purpose of the AI system, and will cover data and data governance, documentation and record keeping, transparency and the provision of information to users, human oversight, robustness, accuracy and cybersecurity.

Key obligations

AI providers will need to establish a quality management system, undertake the conformity assessment, affix CE marking, register the AI system in an EU database, collaborate with market surveillance authorities and conduct post-market monitoring to ensure continued compliance/updating of the system.

Users are, importantly, required to ensure human oversight when using an AI system.

The Commission believes that its proposed minimum requirements already reflect the state of the art for many diligent operators, following two years of preparatory work piloted by more than 350 organisations. It also states that its proposals are largely consistent with other international recommendations and principles already in existence. Moreover, AI providers are to be given flexibility in how they meet the requirements, taking into account the state of the art and technological and scientific progress. The precise technical solutions for achieving compliance may be provided by standards or other technical specifications, or otherwise be developed at the discretion of the AI provider.

The operational and commercial feasibility of compliance with these extensive proposals will nevertheless need further consideration, particularly for the smaller AI companies involved in developing AI products and systems.

Interactive AI systems

Even where AI systems are not high-risk, additional requirements on algorithmic transparency will apply if they are intended to interact with individuals. In particular, providers will need to ensure that the software is designed to notify users that they are interacting with an AI system. An obvious example would be impersonation systems, such as chatbots used for consumer helpline functions.

The Future

The next stage of the approvals process for the EU Regulation will be in the European Parliament. It will be interesting to see the detail of the proposals further debated and examined in the context of the technical, ethical and commercial nuances of the different types and uses of AI systems and technologies. The Parliament will seek to balance regulation as both a driver of AI innovation and a favourable investment climate – through the creation of legal and regulatory certainty – and a safeguard of fundamental rights and freedoms.

The Commission’s approach to creating a safe and trustworthy AI climate will undoubtedly feed into the thinking and approaches of other markets around the world, and the ongoing global dialogue on the fostering of AI – indeed, in a consistent move, the US FTC has recently announced that it will rely on existing powers to enforce against discriminatory algorithmic practices.

Dhana Doobay
Partner – Telecoms, Media & Technology
Dhana Doobay is a Partner at Spencer West. She specialises in Telecoms, Media & Technology; Cloud and digital services; network-sharing and infrastructure projects; international “best practice” regulatory strategy; mobile ecosystems; and strategic partnerships.