AI: Criminal Liability and Financial Crimes

While people, knowingly and unknowingly, enjoy the myriad benefits of Artificial Intelligence (AI) in their everyday lives, they are also exposed to criminal activities on an unprecedented scale. As AI continues to advance, three sets of people are relentlessly at work in this sphere: researchers and developers creating ever more sophisticated and (mostly) ethical AI systems that can benefit society; regulators and policymakers who seek to minimise the potential risks; and those who want to use AI to manipulate others and obscure their unlawful activities, making detection and prevention more challenging.

Deepfakes: Identity Theft & Money Laundering

As AI models have become more sophisticated, their potential for misuse in financial crimes, particularly money laundering, has increased alarmingly.

The new kid on the block is AI-generated fake content such as deepfakes, which can manipulate evidence, tarnish reputations and spread misinformation. Such content significantly impacts the integrity of legal proceedings and raises new challenges for the legal system.

Additionally, AI voice deepfakes have emerged as a threat in the context of financial fraud. The New York Times article “Voice Deepfakes Are Coming for Your Bank Balance”[1] highlights the escalating sophistication of technology capable of replicating and manipulating human voices with remarkable accuracy.

Identity theft poses consequential risks even to mundane processes: online banking, where accounts are opened with minimal oversight (video KYC); job interviews conducted virtually; the creation of counterfeit or artificial identities and businesses; and the automation and disguising of financial transactions, which makes tracing illicit funds challenging, especially where cryptocurrencies are involved. The impact is therefore not just financial but geographical as well.

Even people like Sam Altman, who are rapidly releasing into commercial use the very systems responsible for these harms, display concern themselves. That concern rests on the belief that what is being built has enormous potential but comes with enormous danger.

Challenges in Attributing Criminal Liability to AI

One of the most significant challenges faced by the legal system is the detection and attribution of AI-driven crimes. Who is responsible if an AI algorithm engages in criminal activities?

  1. Should the developer or operator bear exclusive responsibility?
  2. Should users who trained the AI with data resulting in criminal behavior also be considered accountable?

One of the fundamental elements of criminal liability is the concept of mens rea, which refers to the mental state or intention of the accused at the time of committing the offense. How does one attribute liability to AI on the basis of mens rea? Several approaches are currently being discussed:

  1. Strict liability: The developer or operator is held liable for AI-driven actions, regardless of whether there was any intent or knowledge of the offense. This approach prioritizes the accountability of the human actors involved in deploying AI, emphasizing the need for responsible AI development and deployment practices to prevent AI-driven criminal activities.
  2. Causation-based approach: Criminal liability is attributed to the human actors who directly caused the AI to commit the criminal act.
  3. Legal identity approach: Legal personhood is granted to AI systems, so that the AI has its own identity, as in the case of corporations, and the individual who manages or controls the AI bears responsibility for any offense the AI commits.

AI Regulation

The Indian government tasked the NITI Aayog, its apex public policy think tank, with establishing guidelines and policies for the development and use of responsible AI. The NITI Aayog released the National Strategy for Artificial Intelligence in 2018, followed in 2021 by the papers Principles for Responsible AI and Operationalizing Principles for Responsible AI.

Additionally, India is a member of the Global Partnership on Artificial Intelligence (GPAI), which aims to produce deliverables that can be integrated into members’ national strategies to ensure the inclusive and sustainable development of AI. India currently lacks dedicated legislation addressing Artificial Intelligence but has been applying existing legislation, which offers both civil and criminal remedies.

For instance, the Information Technology Act, 2000 addresses deepfake crimes relating to privacy violations and the malicious use of communication devices or computer resources, and enables the prosecution of those who publish or transmit obscene deepfakes. The Act also mandates that social media platforms swiftly remove such content, failing which they risk losing their ‘safe harbour’ protection[2], i.e. the coveted immunity from liability for illegal user-generated content.

Further, the Indian Penal Code provides recourse for deepfake-related cybercrimes under provisions on insulting the modesty of a woman, criminal defamation and spreading hatred on communal lines, among others. The Copyright Act and the Digital Personal Data Protection Act, 2023, in turn, address cases involving the unauthorized use of copyrighted material for creating deepfakes, which breaches personal data and violates an individual’s right to privacy.

The Indian judiciary has paved the way for controlling the misuse of deepfakes in Anil Kapoor v. Simply Life India[3], where the Indian actor Mr Anil Kapoor was granted an injunction restraining the defendants from using technological tools such as AI, machine learning, deepfakes and face morphing, whether for monetary gain or otherwise.

In the most recent instance, Scarlett Johansson said OpenAI copied her voice for its generative AI model after she had turned down an invitation to voice it.[4] Though OpenAI said the voice was not designed to mimic Johansson’s, it removed the voice immediately, making it very clear that AI companies are a worried lot too.

The EU now has the distinction of being the first jurisdiction to pass comprehensive legislation governing AI. Passed on 13 March 2024, the EU AI Act covers AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes. Even there, however, the most rigorous protections relating to the use of biometrics ultimately failed to make it into the law.

The EU AI Act has tried to live by the motto “Innovation has to be Responsible, and Regulation has to be by Design” by regulating AI according to the risk category it falls within: unacceptable risk, high risk, general-purpose AI, limited risk and minimal risk. While the Act has been highly appreciated for its risk-based approach to managing AI liability and may well be replicated by other countries, much will depend on the implementation of the legislation and its impact on the ground.

Conclusion

We need not only AI regulations that cater to AI development, but also regulations that address its growing misuse in every sphere. The intersection of AI and criminal liability presents a complex and evolving landscape that demands careful consideration from legal experts, policymakers and technologists. As AI technology continues to advance, its involvement in criminal activities poses significant legal implications and challenges that require proactive and adaptive responses.

Digitization in finance is a trend that cannot be reversed. While it opens the door to fraud and, consequently, money laundering, digitization also offers tools and solutions to these very problems. Technologies such as analytics-based risk monitoring and AI-powered KYC are powerful tools for combating money laundering, but the appropriate regulatory posture is equally important. In India, regulators such as the RBI and SEBI must bolster innovation while balancing it against customer protection to build a comprehensive defence against the ills of digitization. The recent PMLA amendments are a good example of such regulation.[5]

Given the transnational nature of cybercrimes and AI-related offenses, collaboration between countries is critical for exchanging information, evidence and intelligence related to AI crimes. This was seen recently when the “pig butchering” scam was uncovered in Thailand, which involved not just people being duped but also people of various nationalities being trafficked to work for the scammers.[6] AI-created images were used to gain people’s trust and lure them into various investment schemes.

Sam Altman has been vocal that if this technology goes wrong, it can go quite wrong, and hence believes that AI, since it transcends borders, should be regulated at a global level by an agency like the Atomic Energy Commission, which would issue licences for the development of large-scale AI models and set safety regulations and tests that AI models must pass before being released to the public.

Fears regarding AI can be allayed through ethical considerations at the time of AI development, by promoting explainability in AI systems through clear documentation and audit trails, and by ensuring data privacy and security.

What is important is that AI, in its many avatars, has got people talking, and that is what is needed right now to help us move from questions to answers when it comes to thinking about where, when and how AI should (not merely can) change our lives.

Written by Manjushree Somasundara, Partner – Banking Law and Practice, Risk Management.

[1] Voice Deepfakes Are the Latest Threat to Your Bank Balance – The New York Times (nytimes.com)

[2] Section 79 of the Information Technology Act, 2000

[3] Anil Kapoor vs Simply Life India & Ors on 20 September, 2023 (indiankanoon.org)

[4] Scarlett Johansson was ‘shocked, angered’ by ChatGPT voice that sounded like her (nbcnews.com)

[5] India Announces Amended Rules for Prevention of Money Laundering (india-briefing.com)

[6] How South-East Asia’s pig butchering scammers are using artificial intelligence technology – ABC News
