
Joanna DeBiase at the TVCC’s Windsor Debates: AI is already here!

AI is already being used in many applications we rely on every day, from Google-style search engines to the GPS location systems on our mobile phones. The technology is leaping forward at a great pace.

The technology is showing great promise in improving accuracy in monitoring and analysing data sets. It can eliminate human error, but it can also replicate and amplify bias in data sets and reinforce negative predictions if the data used to train it is not carefully screened.

Deciding when and how to adopt AI is a pressing challenge for UK business leaders – we want to innovate and reap the benefits, but many of us do not understand this powerful technology and, understandably, feel nervous about deploying it.

The purpose of this article is to explore some of the regulatory and legal aspects of adoption. The aim is to reassure, to signpost the legal pitfalls and to prepare business leaders for what is to come, so they can plan with confidence.

Are there any laws or regulations governing the development, adoption or use of AI in the UK?

The UK Government has published a National AI Strategy (publishing.service.gov.uk) with no plans to create further regulatory rules or legislation – although it does warn it will legislate if necessary!

Instead, the UK Government sets out five principles as guidance for organisations and individuals developing or adopting AI:

• Safety, security and robustness
• Appropriate transparency and explanation
• Fairness
• Accountability and governance
• Contestability and redress

The stated aim is that UK regulators will apply these principles to the rules governing the various industries across England and Wales. Not ideal (in my view), as this is likely to lead to fragmented rule-making and confusion over which regulator’s rules apply in industries covered by more than one regulator.

For more information, have a look at your regulator’s website to see how it has interpreted the principles.

EU Artificial Intelligence Act

On the other hand, the European Commission has declared that it will regulate AI.

The purpose of the legislation is to protect the rights of citizens in EU Member States, and it is derived from the application of EU ethical principles. The Act is still in a state of limbo, but agreement has been reached on the structure and timetable for the new legislation. It is planned that the AI Act will come into force in 2026.

The EU AI Act takes a risk-based approach to regulation: the obligations on developers and users of AI systems will depend on how high the risk is of something going wrong.

Sam Altman of OpenAI recently acknowledged the risks when he told the US Congress:

“When this technology goes wrong it goes very wrong!”

For example, AI adoption or development in the highest category of risk defined by the EU will be banned, whereas adoption in the medium-risk categories will be subject to documentation and reporting requirements. This is not a new approach to technology risk: the pharmaceutical industry is subject to a testing and reporting regime before new drugs can be licensed for human use, and then there are further hurdles to overcome as those drugs are distributed by licensed practitioners – medically qualified health professionals.

The lowest-risk categories of AI adoption and development will attract little or no reporting requirements. These will include systems that automate administrative processes, on the basis that any data used in them is managed in accordance with data protection legislation and common law rules.

The Act will contain stiff powers to fine organisations and individuals, with fines tougher than those that currently exist under the General Data Protection Regulation (GDPR).

The AI Act will have extraterritorial effect – meaning it will apply to organisations and individuals developing or selling products using AI systems that are aimed at or used in EU Member States. If your organisation is operating in the EU or selling products and services using AI systems, you need to keep an eye on how the regulations develop and check with your advisers at regular intervals over the next 12-18 months.

Other existing laws to watch out for in the UK:

• Intellectual property protection in the common law and the Copyright, Designs and Patents Act 1988
• Passing off and defamation
• Data protection law contained in the Data Protection Act 2018 and the GDPR

A good way to begin thinking about how current UK law applies to AI is to imagine that AI is a hungry beast – you need to feed it data: data to train it, and then data in which to seek and test predictions and patterns to derive insight.

Data is already regulated under the Data Protection Act 2018 in the UK and the GDPR in the EU. Data in the form of creative output is protected under the Copyright, Designs and Patents Act 1988, under the common law (judge-made case law) prohibition on “passing off”, and by the system of patent and trademark registration in England and Wales.

Accordingly, organisations and individuals that abide by the law and take their responsibilities for good governance under these laws seriously are on the right road to using AI ethically and safely.

Audit trails showing how data protection policies have been complied with will be essential, along with the policies themselves and risk assessments carried out when new data management systems are set up and used.

Organisations will need to show they have checked for bias in their data and that the systems themselves are not making decisions about humans that cannot be explained. Humans will need to design these checks and balances into their systems. The current data protection/GDPR rules guide organisations to use a framework to identify sensitive or personal data and protect it.

The Information Commissioner’s Office (ICO) website has some excellent material: Guidance on AI and data protection | ICO.

The material is very dense, but it comprehensively explains how data is regulated and the intersection between the deployment of AI and data protection regulation. If you cannot get through the ICO guidance, you can of course ask an adviser to help you develop policies and procedures.

It may be worthwhile creating a multi-disciplinary internal team, including your IT systems architects and developers or data scientists, to re-imagine your data collection, requirements and uses, and to re-organise the way data is managed in your organisation to minimise risks and prepare for greater deployment of AI tools.

Email me if you would like some more ideas about this topic.

There is not enough room in this article to discuss all the ways in which AI systems are being used to push the boundaries of intellectual property law and other areas of the law. Over the past year (2023), dozens of writers, musicians, visual artists and software developers have filed copyright infringement claims and other commercial disputes in multiple courts against OpenAI and its rival start-ups. Here are some very brief examples:

Example: the ongoing case of Getty Images (US) Inc & Ors v Stability AI Ltd [2023] EWHC 3090 (Ch) (01 December 2023) (bailii.org), in which Getty Images alleges that Stability AI has unlawfully scraped copyrighted images from its carefully curated database of watermarked images.

Example: John Grisham alleges OpenAI failed to secure writers’ approval to use copyrighted works to train its language models.

Example: programmers allege that Microsoft, its GitHub subsidiary and OpenAI pooled resources to launch the Codex and Copilot AI coding tools but did not program them to “treat attribution, copyright notices and licence terms.”

Example: Universal Music, the world’s largest music group, sued OpenAI’s rival Anthropic, alleging that its AI-based platform Claude generates near word-for-word copies of copyrighted lyrics.

Example: visual artists have targeted the AI ventures Stability AI, Midjourney and DeviantArt for copyright infringement, claiming their platforms were trained on the plaintiffs’ works and styles without first seeking permission or offering credit or compensation.

In the US, some AI developers defending against copyright claims have relied in part on the “fair use” doctrine, which was deployed by Google in 2015 to defeat a claim by the Authors Guild, the writers’ organisation, that its online book-searching function violated writers’ copyrights.

Defamation – OpenAI has been sued by a US radio host (Mark Walters), who alleges ChatGPT generated a complete fabrication of a lawsuit against him.

Defamation – aerospace author Jeffrey Battle alleges Microsoft’s AI-assisted Bing conflated him with a convicted felon. That lawsuit seeks to treat Bing as a publisher or speaker of information provided by itself.

Undoubtedly, these cases will start a process of shaping the law in these areas of AI. Lawyers often find novel ways to make the law work for their clients, and we might see new licensing arrangements or insurance policies emerge to protect owners.

AI developers are hoping that copyright infringement concerns do not discourage businesses from buying their services. In September 2023, Microsoft committed itself to paying any legal costs for commercial customers that are sued for using its AI tools or any output they generate.

Very interesting, but I am not in the creative industries – what else can I do to protect myself or my organisation?

The interesting point about these cases is that some of these emerging AI tools are being used to break existing laws, often under the misapprehension that AI is a new frontier where no laws apply. Beware of using existing tools for a novel purpose without thinking through the consequences.

Make sure you risk-assess the deployment of any new systems. Some very dull administrative tasks can be transformed using AI where absolutely no personal or sensitive human data is involved. Look for these isolated quick wins to be both innovative and efficient while you learn about the technology.

Do review your terms of business if you are collecting and using data. Make sure your customers and users know what their data is being used for and how it is being processed.

Think about good corporate governance and the movement toward ethical investing with Environmental, Social and Governance (ESG) scoring. Recently, Norway’s wealth fund and LGIM backed a resolution asking Apple Inc to outline the risks associated with AI and to report on the ethical guidelines it has adopted regarding the use of AI technology. By coincidence (or not), 21% of Microsoft shareholders supported a resolution demanding more information about the risks posed by misinformation generated and disseminated through AI.

Finally, make sure the people in your organisation understand the risks involved in using public Large Language Models such as ChatGPT. You don’t want them feeding the hungry beast your confidential or proprietary information!

Joanna DeBiase

Managing Partner, IBB Law.

Keynote Speaker for the Hillingdon Chamber of Commerce Windsor Debates March 2024, held at Windsor Castle.

Debates Topic: Artificial Intelligence as a Force for Good. Session 2: AI – Regulation and Ethics.