The EU Parliament has come together to decide how best to regulate Artificial Intelligence (AI) in order to boost innovation, ethical standards and trust in technology.

The proposals cover the following three key areas:

1. An ethics framework for AI 

The proposal includes a new draft regulation setting out the framework of ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including software, algorithms and data.

This framework is:

  • A binding legal charter: Laws should be made in accordance with several guiding principles, including: human-centric and human-made AI; safety, transparency and accountability; safeguards against bias and discrimination; the right to redress; social and environmental responsibility; and respect for privacy and data protection.
  • Human centric: High-risk AI technologies, such as those with self-learning capacities, should be designed to allow for human oversight at any time. If a functionality is used that would result in a serious breach of ethical principles and could be dangerous, the self-learning capacities should be disabled and full human control should be restored.

Our thoughts: In practice, the proposed text asks for safeguards similar (if not identical) to those required under the GDPR: impact assessments, transparency, "ethics" by design and by default, and cybersecurity all sound awfully familiar.

2. Liability for AI causing damage

A key concern when using AI is who pays when it goes wrong. This is why the EU wants to introduce a clear legal framework that would stimulate innovation by providing businesses with legal certainty, while protecting citizens and promoting their trust in AI technologies by deterring activities that might be dangerous.

  • The rules are for everyone: The rules should apply to physical or virtual AI activity that harms or damages life, health, physical integrity, property, or that causes significant immaterial harm if it results in “verifiable economic loss”. 

  • Insurance is a must: Operators of high-risk AI technologies should hold insurance similar to that required for motor vehicles.

Our thoughts: This is a general concern in the industry, and while the proposal is a good step forward, in practice it will be difficult to determine where the buck stops, especially when the risk is very high. Say we are insuring an automated bus: do we insure the hardware, the software, TfL, the subcontractor or the leasing company? And what if the traffic lights are also automated? The list goes on. We expect that this specific topic will depend on how the technology is rolled out in practice and the level of automation ultimately accepted by society as a whole.

3. Intellectual property rights

The EU considers that, to achieve effective global leadership in AI, it must have an effective intellectual property rights system and safeguards for the EU’s patent system to protect innovative developers, while stressing that this should not come at the expense of human creators’ interests or the European Union’s ethical principles.

  • AI is not a person: The proposals specify that AI should not have legal personality; thus, ownership of IPRs should only be granted to humans.

Our thoughts: Aside from wondering what the creators of Bicentennial Man and Steven Spielberg might have to say about the statement above, the general concept seems sound, but the practice will be, to say the very least, a Gordian knot for IP lawyers to look forward to.

Things to look forward to in 2021

Privacy pros have a lot to look forward to in 2021: despite the Covid crisis and the Brexit negotiations, the Commission's AI legislative proposal is expected early next year, just after the new Model Clauses. It is clear the EU is keen to pave the way to becoming a global leader in the development of AI, and to uphold privacy rights while it's at it. Excellent news, surely!