
https://rtau.blog.gov.uk/2021/05/11/the-european-commissions-artificial-intelligence-act-highlights-the-need-for-an-effective-ai-assurance-ecosystem/

The European Commission’s Artificial Intelligence Act highlights the need for an effective AI assurance ecosystem

Categories: Artificial intelligence, Responsible innovation

The European Commission released its highly anticipated Artificial Intelligence (AI) Act on 21 April 2021. It represents the most ambitious attempt to regulate AI technologies to date, setting out a cross-sectoral regulatory approach to the use of AI systems across the European Union (EU) and its Single Market.

With this regulation, the Commission aims to make the rules for AI consistent across the EU and thereby ensure legal certainty, encourage investment and innovation in AI, and build public trust that AI systems are used in ways that respect fundamental rights and European values. It is based on a four-tiered risk framework. Each tier aims to set proportionate requirements and obligations for providers and users of AI systems, recognising the range of potential risks to health, safety, and fundamental rights posed by different types of AI systems in various contexts.

In this blog, we summarise key elements of the proposed regulation, focusing in particular on how its proposal for AI ‘conformity assessments’ highlights the need for an ecosystem of effective AI assurance, which gives citizens and businesses confidence that the use of AI technologies conforms to a set of agreed standards and is trustworthy in practice. Such an ecosystem is yet to be developed, and international collaboration will be required to facilitate the scale-up and interoperability of AI assurance approaches and services across jurisdictions.

Definition of AI

The current regulation contains a very broad definition of AI:

‘Any software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing the environments they interact with.’ 

While the list in Annex I covers techniques that fall under the umbrella of machine learning, it also includes ‘statistical approaches’ in general among the suite of AI techniques. This is wider in scope than most definitions of AI and would mean that software not commonly thought of as AI would be covered by the regulation.

Regulatory scope

The regulation will apply to private and public sector actors that are providers and/or users of an AI system. It will extend beyond the contours of the EU due to its extraterritorial scope, applying to providers in third countries who place AI systems on the market or put them into service in the EU’s Single Market, as well as to providers whose AI systems produce outputs which are used in the EU. This has led a number of commentators to anticipate the Brussels Effect of this regulation; that is, like the GDPR, the EU’s rules for AI could extend to much of the world as international companies comply with the Single Market’s rules and standardise these practices in other jurisdictions.

The proposed regulation has been designed to be complementary to cross-sectoral EU legislation, notably the GDPR, and sectoral safety legislation which is currently harmonised under the New Legislative Framework. The AI regulation is complemented by an updated Regulation for Machinery Products – a revision of the 2006 Machinery Directive – which classifies AI systems that are used as a safety component as ‘high risk’. All such systems will need to undergo mandatory third-party conformity assessments. 

AI systems developed or used exclusively for military purposes will be excluded from the regulation’s scope, and very few references are made to AI research.

Risk framework

The regulation has shifted from the binary ‘low risk’ vs. ‘high risk’ framework proposed in the Commission’s White Paper on AI (2020) to a four-tiered risk framework, which recognises the varying levels of risk posed by AI systems to people’s health, safety, and/or fundamental rights and sets out proportionate requirements and obligations for each risk level.

AI systems that pose ‘minimal or no risk’, such as spam filters, will be permitted with no restrictions but providers will be encouraged to adhere to voluntary codes of conduct. The Commission envisages that most AI systems will fall into this category.

Providers of AI systems that pose a ‘limited risk’, such as chatbots, will be subject to transparency obligations (e.g. technical documentation on function, development, and performance) and may similarly choose to adhere to voluntary codes of conduct. Amongst other things, the transparency obligations will allow users to make informed decisions about how they integrate an AI system into their products and/or services.

AI systems that pose a ‘high risk’ to fundamental rights will be required to undergo ex-ante and ex-post conformity assessments. These are split into two categories:

  • As per existing sectoral safety legislation, AI systems used as a product safety component, such as in medical devices and cars, will be subject to third-party, ex-ante conformity assessments (i.e. performed by an external organisation before the systems can be placed on the market or put into service). Remote biometric identification applications will also have to undergo third-party assessments.
  • Standalone high risk AI systems specified in Annex III, in sectors ranging from education to law enforcement, will be subject to first-party, ex-ante conformity assessments (i.e. self-assessments prior to their use), as well as ex-post quality and risk management assessments and post-market monitoring.

AI systems that pose an ‘unacceptable risk’ to the safety, livelihoods, and rights of people, such as social scoring, exploitation of children, or distortion of human behaviour where physical or psychological harm is likely to occur, will be banned. However, research on banned AI systems ‘for legitimate purposes’ will be permitted on condition that it follows ethical standards for scientific research and does not cause harm to real persons.

Enforcement

Member States will be required to appoint one or more national competent authorities to enforce the regulation at the national level, in coordination with existing sectoral legislation. Member States will also be tasked with laying down rules on penalties, which should be proportionate to company and market size, dissuasive, and take account of the interests of SMEs and start-ups. The maximum fines will be either 30,000,000 EUR or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.
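
As a purely illustrative sketch of this ‘whichever is higher’ rule (the function name and turnover figures below are hypothetical, not taken from the regulation), the penalty ceiling for a given provider can be computed as the larger of the two amounts:

    # Illustrative only: upper bound on fines under the proposed penalty rules.
    # The ceiling is the higher of 30,000,000 EUR or 6% of total worldwide
    # annual turnover for the preceding financial year.
    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

    # A hypothetical provider with 1,000,000,000 EUR turnover faces a ceiling
    # of 60,000,000 EUR, since 6% of turnover exceeds the 30,000,000 EUR floor.
    print(max_fine_eur(1_000_000_000))  # 60000000.0

The actual fine in any given case would of course depend on the infringement and the factors listed above; the sketch only captures the upper bound.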

Member States will be encouraged to launch AI regulatory sandboxes to promote the safe testing and adoption of AI systems under the direct guidance and supervision of national competent authorities, with preferential treatment for SMEs and start-ups to support innovators with fewer resources. Competent authorities should also provide tailored guidance to support SMEs and start-ups to ensure the regulation does not stifle innovation.

A new European AI Board will be established to facilitate the consistent implementation of the regulation, comprising representatives from Member States’ national competent authorities, the European Data Protection Supervisor, and the Commission. The Board will be influential in determining which AI technologies are classified as ‘high risk’. It will be complemented by a public database of high risk AI systems managed by the Commission.

While the proposed AI regulation will still need to go through the EU’s ordinary legislative procedure before entering into force, the legislative financial statement indicates that implementation costs are budgeted for fiscal year 2023.

Crossovers to the CDEI’s programme of work on AI assurance 

The new AI regulation will require a well-developed ecosystem of AI assurance approaches and tools to ensure that its various requirements and transparency obligations are fulfilled to agreed standards, in particular given the regulation’s reliance on first-party conformity assessments for standalone high risk AI systems.

However, an AI assurance ecosystem which standardises approaches does not yet exist. The CDEI’s work programme on AI assurance responds to this problem, setting out the need for effective AI assurance spanning a range of assurance types and user needs. In the CDEI’s AI assurance framework, we distinguish between two types of assurance: compliance assurance and risk assurance. Compliance assurance aims to test or confirm whether a system, organisation, or individual complies with a standard; methods include audits, certification, and verification. Risk assurance complements compliance assurance by asking open-ended questions about how a system works, requiring judgement, expertise, and context-specific guidance to ensure that AI systems are trustworthy in practice. Methods for AI risk assurance include impact assessments, evaluation, and bias audits.

The proposed regulation’s reliance on first-party conformity assessments for standalone high risk AI is closer to risk assurance than compliance assurance in the CDEI’s framework. These assessments include a number of specified requirements that will require significant judgement on the part of the provider, such as whether appropriate levels of accuracy, robustness, and cybersecurity have been met or whether training/testing data was ‘sufficiently representative’.

Although this high degree of self-assessment is intended to support innovation, the combination of ambiguous requirements and strict compliance penalties has the potential to stifle innovation, in particular by burdening SMEs and start-ups with fewer resources, and underlines the urgent need for an effective AI assurance ecosystem. While the CDEI’s upcoming AI assurance roadmap will principally focus on the development of such an ecosystem in the UK, it will be important to work with international partners to facilitate the scale-up and interoperability of AI assurance services and approaches across jurisdictions.

If you want to know more about our latest work on AI assurance, have a look at our recent blog posts or get in touch via ai.assurance@cdei.gov.uk.
