AI and the law: Imperative need for regulatory measures

Friday, 24 November 2023

Using AI technology without the laws and policies needed to understand and monitor it can be risky 


“The advent of super intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals and if those goals aren’t aligned with ours we’re in trouble.”1


Generative AI, the best-known example of which is ChatGPT, has surprised many around the world because its responses to queries are remarkably human-like. Its impact on industries and professions, including the legal profession, will be unprecedented. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection. 

Firstly, how does one define Artificial Intelligence? AI systems can be considered information-processing technologies that integrate models and algorithms, producing the capacity to learn and to perform cognitive tasks, leading to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to become far more independent than one can ever imagine.

As AI has migrated from Machine Learning (ML) to Generative AI, the risks have also grown exponentially. The release of generative technologies has not been human-centric: these systems produce results that cannot be exactly proven or replicated, and they may even fabricate and hallucinate. Science fiction writer Vernor Vinge speaks of the concept of ‘technological singularity’, where one can imagine machines with superhuman intelligence “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short term impact depends on who controls it, the long-term impact depends on whether it cannot be controlled at all”2.

 

The EU AI Act and other developments

Laws and regulations are in the process of being enacted in some developed jurisdictions, such as the EU and the USA. The EU AI Act (“Act”) is one of the main regulatory statutes being scrutinised. The approach that MEPs (Members of the European Parliament) have taken with regard to the “Act” has been encouraging. On 1 June, MEPs voted to endorse new risk-management and transparency rules for AI systems, primarily to promote the human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people and are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term ‘AI’ will also have a uniform, technology-neutral definition, so that it applies to the AI systems of today and tomorrow. 

Co-rapporteur Dragos Tudorache (Renew, Romania) stated, “We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement3.”

The “Act” has also adopted a ‘Risk-Based Approach’ in categorising AI systems, and has made recommendations accordingly. The four levels of risk are: 

  • Unacceptable risk (e.g., remote biometric identification systems in public), 

  • High risk (e.g., use of AI in the administration of justice and democratic processes), 

  • Limited risk (e.g., using AI systems in chatbots) and 

  • Minimal risk (e.g., spam filters).

Under the “Act”, AI systems categorised as ‘Unacceptable Risk’ will be banned. For ‘High Risk’ AI systems, the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For ‘Limited Risk’ systems, the “Act” requires certain transparency features which allow a user to make informed choices regarding its usage. Lastly, for ‘Minimal Risk’ AI systems, a voluntary code of conduct is encouraged.
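For readers of a technical bent, the tiered scheme described above can be pictured as a simple lookup from risk tier to obligation. The sketch below is purely illustrative, assuming paraphrased tier labels and obligation summaries drawn from this article, not the statutory text of the Act:

```python
# Illustrative sketch only: the EU AI Act's four risk tiers as summarised
# in this article, mapped to an example system and the obligation each
# tier attracts. Wording is paraphrased, not quoted from the Act.
RISK_TIERS = {
    "unacceptable": {
        "example": "remote biometric identification systems in public",
        "obligation": "banned outright",
    },
    "high": {
        "example": "AI in the administration of justice",
        "obligation": "rigorous testing, documentation, accountability framework",
    },
    "limited": {
        "example": "chatbots",
        "obligation": "transparency features enabling informed user choices",
    },
    "minimal": {
        "example": "spam filters",
        "obligation": "voluntary code of conduct encouraged",
    },
}

def obligation_for(tier: str) -> str:
    """Return the summarised obligation for a given risk tier."""
    return RISK_TIERS[tier.lower()]["obligation"]

print(obligation_for("Unacceptable"))  # banned outright
```

The point of the structure is that the Act regulates by category rather than by individual system: classify once, and the applicable obligations follow mechanically.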

Moreover, in May 2023, an order4 was issued in the USA (Northern District of Texas) requiring all attorneys to file a certificate containing two statements: that no part of the filing was drafted by Generative AI, and that any language drafted by Generative AI has been verified for accuracy by a human being. A New York attorney had earlier used ChatGPT, which had cited non-existent cases. Judge Brantley Starr stated, “[T]hese platforms in their current states are prone to hallucinations and bias….on hallucinations, they make stuff up – even quotes and citations.” As ChatGPT and other Generative AI technologies are used more and more, including in the legal profession, it is imperative that professional and other regulatory bodies draw up appropriate legislation and policies to cover the usage of these technologies.

 

UNESCO

On 23 November 2021, UNESCO published a document titled ‘Recommendation on the Ethics of Artificial Intelligence5’. It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, the UNESCO recommendations state that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, which enable the assessment of algorithms, data and design processes, as well as an external review of AI systems. The 10 principles highlighted in the document are:

  • Proportionality and Do Not Harm

  • Safety and Security

  • Fairness and Non-Discrimination

  • Sustainability

  • Right to Privacy and Data Protection

  • Human Oversight and Determination

  • Transparency and Explainability

  • Responsibility and Accountability

  • Awareness and Literacy

  • Multi Stakeholder and Adaptive Governance and Collaboration.

 

Conclusion

The level of trust citizens have in AI systems will be a factor in determining how widely such systems are used in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of “respect, protection and promotion of human rights, fundamental freedoms and ethical principles”6. UNESCO Director-General Audrey Azoulay stated, “Artificial Intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate.”

Multiple stakeholders in every state need to come together to advise on and enact the relevant laws. Using AI technology without the laws and policies needed to understand and monitor it can be risky; on the other hand, not using available AI systems for the tasks at hand would be a waste. In conclusion, in the words of Stephen Hawking7: “Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure wisdom wins.”

Footnotes:

1 Pg 11/12; ‘Will Artificial Intelligence outsmart us?’ by Stephen Hawking; essay taken from ‘Brief Answers to the Big Questions’, John Murray (2018)

2 Ibid

3https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4https://www.theregister.com/2023/05/31/texas_ai_law_court/

5 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6 Ibid; Pg 22

7 ‘Will Artificial Intelligence outsmart us?’ by Stephen Hawking; essay taken from ‘Brief Answers to the Big Questions’, John Murray (2018)


(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincoln’s Inn), UK. She obtained a Certificate in AI Policy at the Centre for AI Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of Lawyers using AI, Legal Technology and Big Data and was a participant at the IGF Conference 2023 in Kyoto, Japan.)
