Categories
Information Governance

Exploring the upcoming (EU) Artificial Intelligence Act

UPDATE: on 13 March 2024, the European Parliament approved the Artificial Intelligence Act: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html

At least one thing is clear in this period of fluctuation, and that is the direction of travel in our industry. Artificial Intelligence (AI) is the future of technology and will play a significant role in almost every private and public sector, be it education, IT, healthcare, finance, marketing, automotive or legal – the list goes on! However, every new piece of tech has a flip side too. With the increasing dominance of AI, concerns have emerged around its ethics and trust standards.

Against this backdrop, the European Commission has proposed an Artificial Intelligence Act (AIA, or the Act) to govern the development and use of AI. The proposed Act seeks to set out a common regulatory and legal framework for artificial intelligence – a gold standard on ethical considerations, safety, and trustworthiness. The Act also seeks to ensure that AI systems are accountable, transparent, and compliant with fundamental rights such as privacy and data protection, and is intended to create clear parameters. With this proposal, the European Union will again emerge as a leader in putting protections around technology, safeguarding human interests and rights while building trust around the ethical implications of AI.

How the AIA will interact with legal technology, especially areas like e-discovery and data analytics, is yet to be seen, because the data used (for example, in Technology Assisted Review) is generally privileged as part of legal proceedings (the GDPR, for example, contained several exemptions for legal proceedings). In this article we will look at some of the big-ticket items relating to the AIA; while the full construction is still evolving, this gives us a good foundation.

Introduction to Artificial Intelligence (AI) 

There are many definitions of AI but typically, it encompasses the development of systems that display ‘intelligent’ behavior by analyzing their environment and then taking actions that maximize their chances of success, based on that analysis. It includes machine learning approaches such as supervised, unsupervised, and reinforcement learning, and logic- and knowledge-based approaches like knowledge representation, inductive logic programming, and inference and deduction engines. Additionally, AI involves using statistical methods, Bayesian estimation, search and optimization techniques, and probabilistic graphical models. Finally, it also covers commonsense reasoning, natural language processing, dialogue systems, multi-agent systems, and robotics.

As for the purposes of the AIA, let’s suppose a software product features these approaches and its outputs influence the environments it interacts with. In that case, it falls into one of four categories for the purposes of the new legislation: unacceptable risk (government overwatch or manipulation of human behavior), high risk (selection using personal or biometric data), limited risk (like a chatbot), or minimal risk (spam filters). MEPs are working on the mechanisms that will provide the Commission with assessment, consultation, and revision powers, incorporating new developments as time goes on. Recently, for example, there has been a wave of interest in ChatGPT, which interacts with the user in a detailed conversational way.
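The four risk tiers and the obligations attached to each can be sketched as a simple lookup. This is an illustrative paraphrase only: the tier names follow the Act, but the example use cases and the mapping logic are our own simplification, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and registration required"
    LIMITED = "transparency obligations (e.g. disclose the AI to users)"
    MINIMAL = "no additional obligations"

# Hypothetical examples; real classification turns on the Act's annexes
# and the system's intended purpose, not on a keyword lookup.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of the tier and its consequence."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"
```

Calling `obligations("customer-service chatbot")`, for instance, surfaces the limited-risk transparency duty discussed further below.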

Just like GDPR, the AIA will be extraterritorial 

If you are wondering why this is a hot topic at the moment, it is because the AIA’s extraterritorial remit means that its impact is far-reaching, affecting any provider, user or distributor of AI whose services or products are made available in the EU – see the current scope clause below:

  1. providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
  2. users of AI systems located within the Union;
  3. providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union;

The above will therefore affect the United Kingdom, which still wishes to be part of European data flows and to interact with the EU market; in any event, the UK is already putting its own AI legislation in place (the Data Protection & Digital Information Bill and the Online Harms Bill). In the meantime, the AIA applies to any output of an AI system used within the EU, regardless of where the system itself is located.
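The three limbs of the scope clause quoted above can be condensed into a rough decision helper. This is a hypothetical paraphrase for illustration, not legal advice; the function name and parameters are our own.

```python
def in_aia_scope(role: str, located_in_eu: bool, output_used_in_eu: bool) -> bool:
    """Rough paraphrase of the draft AIA's three scope limbs.

    role            -- "provider" or "user" of an AI system
    located_in_eu   -- whether the actor is established/located in the Union
    output_used_in_eu -- whether the system's output is used in the Union
    """
    if role == "provider" and (located_in_eu or output_used_in_eu):
        # Limbs 1 and 3: establishment in a third country is irrelevant
        # once the system is placed on the EU market or its output is used there.
        return True
    if role == "user" and (located_in_eu or output_used_in_eu):
        # Limbs 2 and 3: users in the Union, or third-country users
        # whose system output is used in the Union.
        return True
    return False
```

Note that a provider established entirely outside the EU is still caught the moment its system’s output is used in the Union, which is the point being made about the UK above.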

First things first: Is my company website still legit? What about my favorite online games? And what about my voice assistant apps at home?

Whilst these common systems do interact with people, they are likely to be classified as minimal or limited risk. Therefore, although they will be subject to some transparency requirements set out in the Act, and also under the existing GDPR, these rules will be less onerous: for example, notifying users that they are interacting with an AI system and what personal data is being collected in the process.

Your online games use plenty of AI but are unlikely to be affected much (so gamers needn’t worry!). However, the Act does impose disclosure obligations for deepfakes. According to the Commission’s website, the vast majority of AI systems currently used in the EU fall into the low-risk categories.

As for your digital voice assistant at home or on your phone, it will be very interesting to see how these interact with the AIA, which will likely be tested in due course. In the meantime, these are regulated in Europe by legislation within the Digital Services Package.

Prohibited AI Practices

The Act contains a section usefully named PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES. It covers the use of manipulative or exploitative practices, like those utilizing subliminal techniques (NB there is already chatter about broadening this to include non-subliminal techniques), to materially distort the behavior of individuals and cause psychological or physical harm. In particular, vulnerable groups, such as children and persons with disabilities, are protected from such practices. Additionally, existing data protection, consumer protection, and digital services legislation is in place to ensure that adults have the right to be informed and to choose not to be subject to profiling or other behavior-affecting practices. Furthermore, general social scoring by authorities and the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for law enforcement are (generally) prohibited.

High Risk AI and the Establishment of an EU Wide Database to increase transparency

Providers of high-risk AI will be well aware of all this. As usual, ‘high risk’ seems to have garnered most of the attention, so we will spend some time unpacking it here. Looking at the draft Annex, we can see that AI systems deployed in critical infrastructure, education, employment, essential services (both private and public), law enforcement, migration (border control), and the administration of justice are deemed high-risk. This includes any AI products that have the potential to affect the outcome for a ‘plurality of persons’, as mentioned in the amendments to the Annex of the draft Act, where it is not reasonably possible to opt out of that outcome.

The AIA requires organizations to develop and maintain an AI risk management system tailored to the organization’s particulars. This framework must include, among other things, a governance plan to demonstrate oversight, an analysis of technical and ethical risks, mitigation steps, and review procedures for monitoring, testing, and validating the system’s performance. Organizations must also ensure that any data used to develop or train the AI system is collected, stored, and processed in accordance with applicable data privacy laws and regulations. They should also provide appropriate transparency to users of the AI system, and ensure that users are aware of the risks associated with its use.
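The elements of such a risk management system can be pictured as a structured record that an organization keeps current. A minimal sketch follows; the class and field names are our own invention for illustration, not terminology from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskManagementPlan:
    """Illustrative record of what an AIA-style risk management
    framework might track: ownership, risks, mitigations, and a
    review cadence. Field names are hypothetical."""
    governance_owner: str
    identified_risks: list = field(default_factory=list)
    mitigations: dict = field(default_factory=dict)
    monitoring_schedule: str = "quarterly"

    def unmitigated(self) -> list:
        """Risks with no documented mitigation step yet."""
        return [r for r in self.identified_risks if r not in self.mitigations]

plan = AIRiskManagementPlan(
    governance_owner="Chief AI Officer",
    identified_risks=["biased training data", "model drift"],
    mitigations={"biased training data": "dataset audit before each release"},
)
```

Here `plan.unmitigated()` would flag "model drift" as still needing a documented mitigation before the next review cycle, which is the kind of gap the Act’s monitoring and validation procedures are meant to surface.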

The establishment of an EU-wide database is one of the most substantial pieces of the AIA: providers of high-risk AI products and services will be responsible for completing a conformity assessment and filing it with the regulator. The draft language says that “AI providers will be obliged to provide meaningful information about their systems and the conformity assessment carried out on those systems” (e.g. descriptions of the purpose, function, potential risks, mitigation plan, and evidence of compliance with the legislation). The introduction of these hoops to jump through will create a barrier for any companies putting themselves out there as AI creators when they are not.

Enforcement Summary

The European AI Board (with the Commission) and member state regulators will be responsible for implementation and enforcement. Interestingly, this means member states will need to allocate adequate resources to regulate AI within their jurisdictions and avoid disparity in enforcement between website-hosting countries. Users interacting with AI via the internet, for example, may not be aware of the host country’s standards, and it should also not appear that some places have additional barriers to entry for tech innovators over others.

Conclusion

Balance is always a good thing. Some queries have been raised about the AIA’s proportionality, and there has even been commentary about stifling growth in the AI sector. For the most part, however, the AIA seems to be well understood and could make Europe the global hub for trustworthy AI development. The EU has again (see GDPR) set global standards for regulatory compliance and concern for citizens’ fundamental rights. This proposal is a step in the right direction, as it will help ensure that AI systems are used responsibly and ethically. It will also help to create trust between users, companies, and governments. Industries significantly impacted by the AIA include financial services, education, employment, human resources, law enforcement, healthcare, automotive, and manufacturing. Looking forward, the EU is already rolling out the AI Liability Directive, which expands the scope of liability/proof for AI controllers and will work in parallel with the AIA.

One final note from your friends at Knovos: you may recall there was a last-minute scramble around GDPR compliance, when your inbox was flooded with consent requests – it seemed like it was deadline day by the time everyone cottoned on to the implications of the legislation. The AIA may seem a little narrower, but keep in mind its extraterritorial scope and its placement of responsibility on providers, distributors, importers, and other participants in this long technology chain. So, this time, let’s get ahead of it. Now is a good point for both private and public sector technology players to start preparing for the future. Further information, including conformity assessments, free practical guides, and tools, can easily be found online to help you in this process. AI affects many parts of your life and business, so it is worth taking an interest – and if all that is still not enough, non-compliance with AI regulations can result in action from the regulators, including hefty fines with upper limits ranging from 10M to 30M euros.
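For a sense of scale, the draft penalty tiers work like GDPR’s: the regulator may take the higher of a fixed cap or a percentage of worldwide annual turnover. The sketch below assumes the draft’s top tier of €30M or 6% of turnover; the figures and function are illustrative only and the final tiers may differ.

```python
def max_fine_eur(tier_cap_eur: int, annual_turnover_eur: int, turnover_pct: float) -> float:
    """Greater of a fixed cap or a share of worldwide annual turnover,
    mirroring the draft AIA's penalty structure (illustrative figures)."""
    return max(tier_cap_eur, annual_turnover_eur * turnover_pct)

# A company with €1bn turnover facing the draft's top tier (€30M or 6%):
# the turnover-based figure (€60M) exceeds the fixed cap, so it applies.
exposure = max_fine_eur(30_000_000, 1_000_000_000, 0.06)
```

For smaller companies the fixed cap dominates, which is why the headline "10M to 30M euros" figures above are quoted as upper limits rather than typical outcomes.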

The text of the proposed legislation and explanatory memorandum can be found here.