Libertatem Magazine

Legal issues concerning the unregulated use of Artificial Intelligence (‘AI’)


Artificial Intelligence (“AI”) is not a new concept; the term was coined in 1956 by the famous American computer scientist John McCarthy. Another famous American scientist, Marvin Minsky, defined AI as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning.” AI has gradually become a part of our daily lives, and its great potential has accelerated its rate of acceptance. It would not be premature to say that this is the decade of AI.

In recent times, generative AI-based software and applications have exploded around the globe. Generative AI produces results based on the volumes of information with which it is fed; it can further analyse that information for patterns and behaviours, yielding remarkable output.

A generative AI can write scripts and novels, take competitive exams, generate images and sound, and much more. However, with generative AI, more is not always merrier. The fact that generative AI can produce output based on the minutest detail is something to ponder. Incidents have been reported across the globe where generative AI has been misused to commit, or attempt to commit, heinous crimes, while in other cases the output generated by AI has resulted in various breaches. The question which arises is: should we control and regulate such AI? An even bigger question is: how can we control and regulate it?

The first question has a legal inclination, and the answer should be a YES. The second question is more technical in nature. Some of the legal issues involved with the unregulated use of generative AI are enumerated hereinbelow:


Privacy and data protection

  • Generative AI, including large language models (LLMs) like Generative Pre-trained Transformers (‘GPT’), is trained on data sets that sometimes include Personally Identifiable Information (PII) about individuals.
  • This data can sometimes be elicited with a simple text prompt, and compared to traditional search engines, it can be more difficult for a consumer to locate the information and/or request its removal, which could lead to a violation of the ‘right to be forgotten’ or of the right to seek details of the end use of personal data.
  • Companies that build or fine-tune LLMs must ensure that PII isn’t embedded in the language models and that it’s easy to remove PII from these models in compliance with privacy and data protection laws.
  • London’s Royal Free Hospital failed to comply with the Data Protection Act when it handed over the personal data of 1.6 million patients to DeepMind, a Google subsidiary, according to the Information Commissioner’s Office. The data transfer was part of the two organisations’ partnership to create the healthcare app Streams, an alert, diagnosis, and detection system for acute kidney injury.[1]
  • Recently, a proposed class action lawsuit was filed before a California federal court, claiming that OpenAI has scraped the personal data of millions of people from the internet. The plaintiffs prayed for injunctive relief temporarily freezing further commercial use of OpenAI’s products, and for financial compensation to the people whose information was used by OpenAI.[2]


Cloning of voice and face

  • Generative AI has the capability to adopt and analyse the behavioural patterns of human beings and can also clone voices or even faces.
  • As per a report in Firstpost, a woman in the USA received a call in which the caller claimed to have abducted her daughter and demanded a ransom. What was surprising is that the woman could hear her daughter’s voice in the background, which made her believe the claim. Upon enquiry, it was discovered that her daughter’s voice had been cloned using generative AI.[3]
  • In a recent case reported by India Today, a man from Kerala was duped of Rs. 40,000 through the use of deepfake technology. The scammer impersonated a friend of the victim on a WhatsApp call and requested the amount for the treatment of a relative. The scam was eventually discovered when the scammer tried to extort an additional Rs. 35,000. The Kerala Police’s Cyber Crime Wing was able to retrieve the money; however, the miscreant is yet to be identified.[4]


Fake news and misinformation

  • Generative AI can usher in a new era of fake news and online misinformation. In seconds, sophisticated generative AI algorithms can produce cloned human voices as well as hyper-realistic pictures, videos, and audio. When combined with sophisticated social media algorithms, forged and digitally manufactured information may spread quickly and target highly specialised groups, possibly lowering campaign dirty tricks to new lows.
  • Recently, images purporting to show the arrest of Donald Trump began circulating on the internet, in connection with the case pertaining to the payment of hush money to a woman with whom he allegedly had an affair.[5] It was noted that such fake news has the potential to spread false narratives and tarnish the image of politicians, or of any person for that matter.
  • Recently, an image of a bomb blast at the Pentagon, USA, was circulated on social media, creating major panic around the globe; it was later discovered to be a fake.[6]
  • Recently, Steven Schwartz, a lawyer in the USA, used ChatGPT for research in a case. It turned out that six of the cases produced by ChatGPT were bogus and non-existent.[7]

Breach of sensitive/confidential data

  • Generative AI works on the basis of the information/data it is fed, and provides output by analysing the behaviour and patterns in that data. In such a scenario, however, generative AI has a tendency to reproduce sensitive/confidential data in its output, which could prejudice the user or any other person to whom the data belongs.
  • As per a report by Bloomberg, one of the giants of the electronics and tech business banned the use of a GPT application by its employees after sensitive code was uploaded by employees, creating a vulnerability in view of the advanced mechanism by which generative AI works, which could have resulted in a data breach.[8]

Human bias

  • A generative AI gives results based on the information fed to it by human beings, which it further analyses using algorithms. There is a high chance that human bias present in that information will be adopted by the generative AI and reflected in its results.
  • Recently, an AI was asked to generate images of a CEO. None of the images it generated was of a woman, even though around 15% of CEOs around the world are female.[9]
  • Another tech giant’s computer algorithms, after reviewing resumes for a decade, detected commonalities in individuals’ applications. The majority of resumes came from men, reflecting the industry’s male dominance, and the algorithm learned that male candidates were favoured. As a result, it penalised resumes that suggested the candidate was female, and downgraded applications from women who had attended either of two all-female universities. These programs were later changed to be neutral to such keywords.[10]

Copyright issues

  • According to the World Intellectual Property Organization (WIPO), “AI is a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence.”[11] Popular generative AI tools are trained on massive image and text databases from multiple sources, including the internet. When these tools create images or generate lines of code, the source of the data could be unknown. The reputational and financial risks could also be massive if one company’s product is based on another company’s intellectual property.
  • The South African Patent Office recently granted a patent on an application relating to a “food container based on fractal geometry”, listing an Artificial Intelligence (AI) system named “DABUS” (Device for Autonomous Bootstrapping of Unified Sentience) as the inventor. The application listing DABUS as the inventor was filed in the name of a natural person as the applicant. DABUS purportedly came up with two distinct inventions without any human help and, as a result, was listed as the inventor on patent applications for both inventions.
  • Of the jurisdictions where the DABUS applications were filed, the South African Patent Office and the Federal Court of Australia are the two that have so far recognised and accepted DABUS as an inventor.[12]

In India, an artwork named “Suryast”, inspired by Vincent Van Gogh’s “Starry Night” and created by an AI application named “RAGHAV”, was granted copyright in the year 2020 with the AI listed as a co-author. According to the Copyright Office’s website, the artwork is still registered (ROC No. A-135120/2020).

Access to unauthorized/harmful/illegal content

  • Generative AI has the capability to automatically provide prompt responses as per its algorithms, which improves productivity. However, in the absence of any control over such output, users may well gain access to unauthorized/illegal/harmful content. In particular, access to such content would be detrimental to minors/children using generative AI.
  • Recently, the Wall Street Journal reported that another tech/phone-manufacturing giant had blocked updates of a GPT application over concerns that it could generate content inappropriate for children.[13]
  • In another recent development, AI is being used to generate child sexual abuse material online. The Internet Watch Foundation (IWF) has warned that such images could normalise real-life child sexual abuse. Real victims could fall between the cracks, and opportunities to prevent real-life abuse could be missed.[14]

Imagine a world where AI governs our decisions, autonomous weapons walk among us, and devastating cyber-attacks are launched regularly. It may sound dystopian, but without proper regulation of AI, this dystopia may sooner or later become a reality. As AI becomes increasingly pervasive, the need for regulation becomes paramount. Striking a balance between promoting innovation and ensuring ethical and responsible AI use is crucial. By addressing the ethical concerns, privacy issues, liability challenges, safety risks, and economic disruptions associated with AI, regulations can guide the development and deployment of AI technologies in a manner that benefits society while minimizing potential harm. Any laws which will govern the use of AI should be based on the principles of accountability, bias prevention, transparency, and self-control, and on the protection of human life and rights.

Overall, establishing comprehensive AI regulations will create a framework that fosters innovation, protects individuals’ rights, and ensures the responsible and beneficial use of AI across industries and borders. Both the law-making authorities and the Big Tech giants involved in the development of AI must come together to propose and formulate a legal framework which not only protects human life but also helps make human life valuable and safe. The risks associated with AI are being witnessed globally, and laws regulating AI are currently in the development stage throughout the world. Countries such as China, Brazil, the United Kingdom and Canada, as well as the European Union, have already started working to bring in legislation to regulate AI. In India, by contrast, there is no separate law in development for regulating AI beyond the Information Technology Act, 2000. There is no doubt that all countries across the globe need to develop laws to regulate AI in order to mitigate the risks that may scale up from its misuse.

**The views expressed are solely those of the author and should not be attributed to the author’s firm or clients or any other entity or organization or person.

[9] AI’s Got Some Explaining to Do – Techopedia

About the Author