The rise of AI and its challenges: How can regulation ensure that Artificial Intelligence benefits innovation in UK law firms?

Introduction 

Artificial intelligence (AI) is undeniably a tool that will assist law firms in streamlining administrative tasks, providing valuable analytical insight, and carrying out more efficient legal research. While such benefits will allow legal professionals to focus on complicated issues that require ethically inquisitive care, AI use simultaneously demands strict and meticulously developed regulation to maximise the benefits, and minimise the pitfalls, of technological advancement. Such concerns are not new to legal discourse: they have arisen before, whenever emerging technology has been rapidly adopted within the corporate sphere, as the Post Office Horizon scandal discussed later in this article highlights. This article will therefore question whether the exponential increase in AI use is premature relative to our understanding of it. To do so, it will analyse the extent to which algorithmic bias impacts the accuracy of legal data, as well as diversity within law firms, before seeking to address these challenges by exploring the UK and EU regulations currently in place.

The rate of adoption of AI and its use in law firms

Between July 2023 and September 2024, the proportion of lawyers using AI nearly quadrupled, from 11% to a staggering 41%, while the proportion of lawyers with no plans to use AI fell sharply over the same period, from 61% to 15%. LexisNexis provides valuable insight into where Artificial Intelligence is being deployed, noting that it is used in five key areas of the workplace: recruitment; employee engagement; learning and development; employee management; and strategic workforce planning and analytics. Its deployment in these areas has significantly cut down administrative work, from responding to HR-related employee queries to allocating tasks, reducing the time spent drafting standard documents such as contracts and determining billable hours. This streamlining extends to analytical insight. For example, the Artificial Intelligence programme Clio Duo can extract key details from cases and documents within seconds, reducing the time taken to draft contracts. Similarly, programmes such as CoCounsel can draw on multiple cases when tackling legal queries, surfacing relevant legislation and/or case law so that work is completed more efficiently. This has the potential to give legal professionals up to an extra four hours per week to address the more complex aspects of a case or to build client relationships with a focus on rapport.

Technological advances such as Clio Duo, CoCounsel and ChatGPT have been highly anticipated in the legal field, and are already proving to be valuable and cost-effective tools for firms. A recent example is Clifford Chance's loan portfolio migration, where, with the right technology implemented, a 40% cost reduction was achieved for the client in a due diligence project. Traditionally, documents are allocated to solicitors, who review them and complete due diligence questionnaires (DDQs) in Microsoft Word templates. With the specialisation of the Global Delivery Centre team, however, once a DDQ was agreed with the client, a digital questionnaire could be built on a secure platform with access for all parties involved in the due diligence, ensuring quicker communication and file-sharing and minimising delays. Yet although these programmes and tools have so far proven effective at reducing inefficient hours and optimising HR and recruitment departments, adopting such advanced technology has inevitable downsides. These were seen in the past with the Post Office Horizon scandal, and persist today in the challenge of overcoming algorithmic bias.

The Post Office Horizon IT System Scandal 

Although Horizon was not an AI system, the effects of implementing such contemporary technology across all departments of the Post Office were devastating for the entire chain of command, from sub-postmasters to executives. Faulty Horizon data resulted in 900 convictions of sub-postmasters for theft, fraud and false accounting, and countless bankruptcies with permanent impacts on the lives of those convicted. It goes without saying that any IT system will have faults, much as Horizon did from its inception in 1996, with what were initially minor bugs soon causing widespread chaos. One of these was the Dalmellington bug: every time an employee pressed enter on a frozen interface, the entered withdrawal was repeated in the ledger. This seemingly inconsequential bug made withdrawals appear larger than they actually were, without the user's knowledge, leading to suspicions of theft. Many sub-postmasters reported problems with the system, yet their valid concerns were consistently dismissed. Following these false convictions, a group of MPs and the Justice for Subpostmasters Alliance (JFSA) pushed for an independent review by Second Sight, whose interim report was released in July 2013. It concluded that, although there was no direct evidence of systemic problems with the Horizon system, hardware issues had caused account imbalances to which the Post Office had failed to respond appropriately.
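
To make this failure mode concrete, below is a minimal, hypothetical Python sketch (the names and figures are invented; this is not Horizon's code) of how a non-idempotent submission handler duplicates a withdrawal when a frozen interface re-sends the same keypress, and how a unique transaction identifier guards against exactly this.

```python
# Illustrative sketch only (hypothetical names, not Horizon code): a
# non-idempotent submit handler records a fresh withdrawal on every
# keypress, so a frozen screen that re-sends "Enter" inflates the ledger.

class Ledger:
    def __init__(self):
        self.entries = []
        self.seen_ids = set()

    def submit_naive(self, amount):
        # Every call appends a new entry: repeated Enter presses on an
        # unresponsive interface each record a separate withdrawal.
        self.entries.append(amount)

    def submit_idempotent(self, amount, transaction_id):
        # A unique transaction ID makes duplicate submissions harmless:
        # the same withdrawal is only ever recorded once.
        if transaction_id not in self.seen_ids:
            self.seen_ids.add(transaction_id)
            self.entries.append(amount)


ledger = Ledger()
for _ in range(3):              # the user presses Enter three times
    ledger.submit_naive(100)    # on a frozen interface
print(sum(ledger.entries))      # 300 recorded, though only 100 was withdrawn

safe = Ledger()
for _ in range(3):
    safe.submit_idempotent(100, transaction_id="txn-001")
print(sum(safe.entries))        # 100: duplicates are ignored
```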

It is, however, important to note that although Horizon's rapid and largely unregulated adoption certainly played a part in its unfortunate outcome, there were other, equally disruptive factors: neglectful management, ineffective chains of command, and the Post Office IT department's irresponsible decision to adopt Horizon despite its known past failures. These factors, of course, were independent of the quality of the system itself and the speed of its implementation. More than twenty-five years after Horizon's introduction, the quality of technological development and understanding has improved a great deal, from specialist in-house IT teams in firms and in government to extensive employee training on new systems. As a result of these developments, it could be argued that underdeveloped technological systems are, generally, an issue of the past. This, however, does not quash the challenges inherent to the creation and use of Artificial Intelligence: developing specialised regulation, and the potential threat posed by algorithmic bias.

The current challenges posed by Artificial Intelligence: Algorithmic Bias

Setting aside the risks posed by the pace at which AI programmes are being adopted, a key regulatory challenge still exists: algorithmic bias. Bias is an aspect of decision-making in humans and AI alike, and discrimination is the adverse effect of that bias. Because Artificial Intelligence systems learn from data, they are naturally susceptible to reproducing biased or discriminatory outcomes. This section outlines how imbalanced training data, as well as training data that reflects discrimination, makes such bias difficult to avoid in Artificial Intelligence systems. Taking these factors into account, it highlights that implementing AI systems free of algorithmic bias will be a challenging task. With effective, clear regulation on the use of AI, however, this new technology can revolutionise the legal field.

Artificial Intelligence systems are built from human intelligence and the data they receive. Large Language Models (LLMs), for example, identify patterns in the data they are given and generate output based on those patterns. Because bias is inherent to human nature, through factors such as survival instinct, social conditioning and the efficiency of cognitive processing, for many datasets it is laborious to determine which results rest on unbiased fact and which are a product of bias. As a result, AI systems such as LLMs are often fed biased data and consequently produce biased results. A 2023 survey of 640 business and IT professionals found that 65% believed data bias was currently present in their organisation.
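
As a purely illustrative sketch of this mechanism, the toy model below (its data and frequency-counting "training" are invented for the example, and it is vastly simpler than a real LLM) learns from a skewed corpus and simply reproduces the skew in what it generates.

```python
# Hypothetical illustration: a toy "model" that, like an LLM, generates
# output from patterns in its training data. If the data is skewed, the
# output is skewed; the model cannot tell bias from fact.
from collections import Counter
import random

# Invented training corpus: 80% of examples pair "leader" with "he".
training_data = ["he is a leader"] * 80 + ["she is a leader"] * 20

def train(corpus):
    # "Training" here is just counting how often each pronoun appears.
    return Counter(sentence.split()[0] for sentence in corpus)

def generate(model):
    # Sample a completion in proportion to the learned frequencies.
    pronouns, weights = zip(*model.items())
    return random.choices(pronouns, weights=weights)[0] + " is a leader"

model = train(training_data)
samples = [generate(model).split()[0] for _ in range(1000)]
print(Counter(samples))  # roughly 800 "he" vs 200 "she": the bias is reproduced
```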

There remains real potential for algorithmic bias to cause significant harm, particularly in legal recruitment through facial analysis AI. The development of Artificial Intelligence brings the opportunity to minimise cost and increase the efficiency of HR and recruitment departments, particularly in the interview process. Immediate analysis of a candidate's micro-expressions, and even tone of voice, can reduce the time a firm takes to decide whether someone is a suitable candidate and to compare them with previously successful candidates. However, as previously discussed and seen in surveys, bias may play a significant role in how the tone and meaning of a facial expression are deciphered depending on a candidate's race, gender or cognitive ability, consequently discriminating against those from minority ethnic backgrounds or those who identify as disabled.

The proportion of BAME solicitors has risen from 14% in 2015 to 19% in 2023, an overall strength for the legal field. A 2018 paper by Lauren Rhue on racial influence analysed the difference in how the facial analysis programme Face++ interpreted white and black basketball players. It found that the programme perceived black players as twice as angry and three times as scared as their white counterparts. The worry is that similarly inaccurate interpretations made by AI in interviews would drive the proportion of BAME solicitors back down. On the other hand, given the rate at which AI is developing, data that is seven years old, while it may still hold some truth, may not reflect today's risks as accurately as it first appears.
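
As a hedged illustration of how such a disparity could be checked for, the short sketch below compares average emotion scores across two groups and reports the ratio between them; the scores are invented for the example and are not Rhue's data.

```python
# Hypothetical audit sketch: comparing average "anger" scores that some
# facial-analysis tool assigns to photos of two groups. All numbers are
# invented; a disparity ratio well above 1 flags potential bias.
from statistics import mean

scores = {
    "group_a": [0.10, 0.12, 0.08, 0.11],
    "group_b": [0.22, 0.20, 0.24, 0.21],
}

averages = {group: mean(vals) for group, vals in scores.items()}
ratio = averages["group_b"] / averages["group_a"]
print(averages)
print(f"disparity ratio: {ratio:.1f}x")  # ~2.1x: group B read as twice as angry
```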

What regulation do we have in place to manage the potential risks of Artificial Intelligence? 

So far, this article has outlined the considerable strengths of employing Artificial Intelligence in firms. It is likely to reduce costs in Human Resources and recruitment departments, cut the time spent on repetitive tasks such as drafting contracts, and fast-track innovation and client rapport by freeing employees to tackle challenging legal queries. It has also been highlighted, however, that these time- and cost-effective AI programmes do not come without fault or risk: challenges such as algorithmic bias and over-reliance will need effective regulation, and understanding gained through training. This section analyses the regulation in place in the UK and in the EU, and what the UK might adopt from the EU AI Act.

Although Artificial Intelligence has been adopted exponentially by corporations in the last few years, serious discussion of AI regulation only began around 2017. The most concrete regulation to date is therefore the EU AI Act, first proposed in 2021 and in force since August 2024. The Act covers the entire lifecycle of Artificial Intelligence, from development to deployment. Because its goal is to minimise the risk caused by AI, the higher the risk of a programme or type of AI, the stricter the regulation, which makes the Act one of a kind; consequently, it is setting the global standard for Artificial Intelligence regulation. Risk is split into four levels (unacceptable, high, limited and minimal), with a separate category for generative AI such as ChatGPT. As the strength of the regulation reflects the potential risk, types of AI in the unacceptable category are banned outright. Were these strict rules adopted by the UK, the risks of AI would be greatly minimised: the unacceptable category includes social scoring AI (which categorises people based on race, ethnicity and socio-economic status) as well as the biometric identification and categorisation of people. Furthermore, before deployment, high-risk AI systems must undergo third-party conformity assessments and be registered in a European Commission database. As incorrect categorisation can result in significant regulatory penalties, European firms are likely to face increased operational costs for AI professionals who can audit and monitor these systems.
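
By way of an illustrative summary only, the Act's tiered structure as described above can be sketched as a simple mapping; the example systems and obligations are abbreviated paraphrases, not the statutory text, and some examples (chatbots, spam filters) are commonly cited illustrations rather than quotations from the Act.

```python
# Simplified, illustrative mapping of the EU AI Act's risk tiers as
# described in the text above. Obligations are abbreviated summaries.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "biometric categorisation of people"],
        "obligation": "banned outright",
    },
    "high": {
        "examples": ["systems affecting rights, e.g. recruitment tools"],
        "obligation": "third-party conformity assessment; EU database registration",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "transparency duties",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "largely unregulated",
    },
    "generative": {
        "examples": ["ChatGPT"],
        "obligation": "separate disclosure and documentation duties",
    },
}

def obligations_for(tier: str) -> str:
    # Look up the summarised obligation attached to a given risk tier.
    return RISK_TIERS[tier]["obligation"]

print(obligations_for("unacceptable"))  # banned outright
```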

These rules are highly reflective of the views of professionals in the financial, legal and tax industries on the ethics of using AI in certain situations. In the second annual Future of Professionals report from the Thomson Reuters Institute, professionals from three fields (Legal; Risk, Fraud and Compliance; and Tax and Trade) recorded their ethical stances on AI. Within the legal industry specifically, 96% of lawyers agreed that representing clients is too much power for AI to have. Such use would arguably fall within the high-risk category of the EU AI Act, and the systems involved would therefore have to be registered in an EU database.

In the United Kingdom, by contrast, the Information Commissioner's Office (ICO), working alongside the Digital Regulation Cooperation Forum (DRCF), updated its AI and data protection guidance in March 2023 after requests from UK industry to clarify the requirements for fairness in AI. This is a more flexible approach to Artificial Intelligence than the EU's, and it likewise carries both benefits and drawbacks. The EU approach is centralised: it categorises Artificial Intelligence into risk levels and assigns regulation accordingly. It is rights-based in the sense that the regulation and categorisation of programmes are fundamentally rooted in human rights. If monitoring, auditing or third-party assessment reveals that systems have been mis-categorised, regulatory penalties are imposed, drawing a clear and protective boundary for European citizens.

In contrast, the UK has adopted a 'pro-innovation' approach to AI, as set out in the March 2023 White Paper, based on five cross-sectoral principles for regulators to apply to their respective sectors using government guidance, their own interpretation of the legislation and their judgement. These principles are safety, security and robustness; appropriate transparency and explainability; fairness (to avoid algorithmic bias); accountability and governance; and contestability and redress. A more flexible approach to Artificial Intelligence regulation leaves room for innovation, offers the potential for further cost-cutting in UK firms by avoiding excessive monitoring and auditing, and, it goes without saying, maintains the UK's reputation as a hub for excellence and innovation in both legal and financial services. It is reasonable to say that the UK's approach is the more innovation-friendly; however, very little has been done to put it on a statutory footing. The most recent development is a Private Member's Bill introduced in September 2024, which has since been tabled and moved to committee stage.

Conclusion 

It is a challenging task to predict what kind of regulation will be appropriate for advanced technology that is developing, and being adopted by firms and clients alike, at such an exponential rate. As we have seen, regulator-specific standards have at times been insufficient to manage the risks of advanced technology effectively, though arguably this has been a result of regulation failing to reflect the nature of the field it regulates, as well as of the bureaucracy involved in regulation and the potential for regulatory capture. It is only a matter of time before an optimal middle ground between EU stringency and UK flexibility is identified, and Artificial Intelligence is established as a meticulously regulated pioneer of technological advancement and a continued exemplar of excellence in UK law firms.


Annabel Ross

Bibliography

UK case law

Bates and Others v Post Office Limited [2019] EWHC 3408 (QB); [2019] 12 WLUK 208

UK Publications

Secretary of State for Science, Innovation and Technology, A pro-innovation approach to AI regulation (CP 815, 2023)

International Regulation

European Parliament, ‘EU AI Act: First Regulation on Artificial Intelligence’ <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 3 March 2025.

Secondary sources

BCLP Law, 'Will the UK's AI Regulation Keep Up or Be Left Behind?' <https://perspectives.bclplaw.com/emerging-themes/creating-connections/technology/ai-in-2025-will-the-UKs-regulation-keep-up-or-be-left-behind/#:~:text=A%20new%20Private%20Member's%20Bill,governing%20AI%20in%20the%20UK> accessed 4 March 2025.

Brown D, ‘AI Adoption Soars Across UK Legal Sector’ <https://www.lexisnexis.co.uk/blog/future-of-law/ai-adoption-soars-across-uk-legal-sector#:~:text=Exponential%20growth%20in%20AI%20adoption&text=The%20survey%20findings%20paint%20a,impressive%2041%25%20in%20September%202024> accessed 1 March 2025. 

Clifford Chance, ‘Legal Technology Capabilities’ <https://www.cliffordchance.com/innovation-hub/innovation/capabilities/legal-technology.html> accessed 1 March 2025.

Clifford Chance, ‘Loan Portfolio Migration Case Study’ <https://www.cliffordchance.com/innovation-hub/innovation/innovation-insights/case-study/loan-portfolio.html> accessed 1 March 2025.

Clio, ‘AI Tools for Lawyers’ <https://www.clio.com/resources/ai-for-lawyers/ai-tools-for-lawyers/> accessed 1 March 2025.

Davies V, Cyber Magazine, ‘65% of Organisations Suffer from Data Bias’ <https://cybermagazine.com/articles/65-of-organisations-suffer-from-data-bias> accessed 1 March 2025.

Hern A, ‘How the Post Office’s Horizon System Failed: A Technical Breakdown’ (The Guardian, 9 January 2024) <https://www.theguardian.com/uk-news/2024/jan/09/how-the-post-offices-horizon-system-failed-a-technical-breakdown> accessed 17 March 2025.

Justice for Subpostmasters Alliance, Interim Report (March 2021) <https://www.jfsa.org.uk/uploads/5/4/3/1/54312921/pol_interim_report_signed.pdf> accessed 17 March 2025, section 7.

Kennedys Law, 'AI Regulation in the UK and EU' <https://kennedyslaw.com/en/thought-leadership/article/2024/financier-worldwide-ai-regulation-in-the-uk-and-eu/#> accessed 4 March 2025.

Lexis, ‘UK Document’ <https://plus.lexis.com/uk/document/?pdmfid=1001073&crid=e3addb08-f490-4039-9f09-c46bb4b08427&pdactivityid=2533dc59-d229-4f7a-863a-b033c9ac73e9&pdtargetclientid=-None-&ecomp=5t8k> accessed 1 March 2025.

Prodger M, BBC, 'Bug found in Post Office row computer system' <https://www.bbc.co.uk/news/uk-23233573> accessed 21 March 2025.

Race M and Jones L, BBC, 'Post Office scandal: The ordinary lives devastated by a faulty IT system' <https://www.bbc.co.uk/news/business-67956962> accessed 21 March 2025.

Rhue L, ‘Racial Influence on Automated Perceptions of Emotions’ (2018) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3281765> accessed 17 March 2025.

Solicitors Regulation Authority, ‘Diverse Legal Profession’ <https://www.sra.org.uk/sra/equality-diversity/diversity-profession/diverse-legal-profession/#:~:text=minority%20ethnic%20groups.-,All%20lawyers,1%25%20of%20lawyers%20are%20Other.> accessed 17 March 2025.

Spiceworks, ‘Facial Analysis Technology for Recruitment’ <https://www.spiceworks.com/hr/recruitment-onboarding/articles/facial-analysis-tech-for-recruitment/> accessed 1 March 2025.

Thomson Reuters, 'CoCounsel' <https://www.thomsonreuters.com/en/cocounsel> accessed 17 March 2025.

Thomson Reuters, 'Making the most of AI's potential time savings for corporate counsel' <https://www.thomsonreuters.com/en-us/posts/corporates/lawyers-ai-time-savings/#:~:text=According%20to%20the%20recently%20published,roughly%20200%20hours%20per%20year.> accessed 17 March 2025.

The Post Office Horizon Inquiry, Baljit Sethi, First Statement (2022) <https://www.postofficehorizoninquiry.org.uk/sites/default/files/2022-02/WITN02000101%20-%20Baljit%20Sethi%20-%20First%20Statement%20-%20Exhibit.pdf> accessed 17 March 2025.

Thomson Reuters, ‘AI Act: The World’s First Comprehensive Laws to Regulate AI’ <https://legalsolutions.thomsonreuters.co.uk/blog/2024/08/08/ai-act-the-worlds-first-comprehensive-laws-to-regulate-ai/#:~:text=In%20April%202021%2C%20the%20European,first%20comprehensive%20regulation%20for%20AI> accessed 1 March 2025.
