
Exploring the Legal and Ethical Dimensions of AI in Criminal Justice

Abstract

Artificial Intelligence (AI) is transforming various sectors, including the criminal justice system, by enhancing efficiency and decision-making processes. However, the integration of AI into criminal justice raises significant legal and ethical concerns that must be thoroughly examined. This article delves into the multifaceted legal and ethical dimensions of AI applications in criminal justice, focusing on predictive policing, judicial decision-making, and forensic analysis. The discussion begins with an overview of current AI applications in criminal justice, highlighting their potential benefits such as increased accuracy, reduction of human bias, and improved resource allocation. It then transitions to the legal dimensions, exploring existing legislation, privacy concerns, and issues of accountability and liability associated with AI-generated decisions. Ethical considerations are also critically analysed, with emphasis on the risks of algorithmic bias, the necessity for transparency and explainability in AI processes, and the importance of maintaining human oversight. Through detailed case studies, the article illustrates real-world examples of AI implementation and the accompanying legal and ethical challenges.

Moreover, the article addresses the broader challenges and controversies, including resistance to AI integration and the technical limitations of current AI technologies. Finally, it offers future directions and recommendations, advocating for robust policy frameworks, comprehensive ethical guidelines, and continued research and development to ensure that AI serves justice equitably and responsibly.

Introduction

Artificial Intelligence (AI) is transforming the criminal justice system, offering innovative solutions for predictive policing, judicial decision-making, and forensic analysis. However, these advancements come with significant legal and ethical challenges that need careful consideration. This article delves into the complexities of integrating AI into criminal justice, examining its benefits, the potential for bias, privacy concerns, and the need for transparent, accountable systems. As AI continues to evolve, understanding these dimensions is crucial for ensuring that technology enhances justice while upholding ethical standards.

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. AI systems are designed to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. The different applications of AI span across various sectors, including:

Healthcare: AI is used for diagnosing diseases, personalizing treatment plans, and enhancing medical imaging.

Finance: AI helps in fraud detection, algorithmic trading, and personalized banking services.

Retail: AI powers recommendation engines, inventory management, and customer service chatbots.

Transportation: AI is crucial for the development of autonomous vehicles and traffic management systems.

Manufacturing: AI optimizes production lines, predictive maintenance, and quality control.

Criminal Justice: AI assists in predictive policing, risk assessment, and forensic analysis.

These applications demonstrate AI’s transformative impact, enhancing efficiency and decision-making across industries.

How Is AI Used in the Criminal Justice System?

Artificial Intelligence (AI) is revolutionizing the criminal justice system by providing innovative tools that enhance efficiency and decision-making. Some of its most significant applications are:

  1. Predictive Policing: AI algorithms analyse crime data to predict potential crime hotspots, allowing law enforcement to allocate resources more effectively and prevent crimes before they occur.
  2. Judicial Decision-Making: AI-driven risk assessment tools assist judges by evaluating the likelihood of a defendant reoffending, which helps in making more informed decisions regarding bail, sentencing, and parole.
  3. Forensic Analysis: AI technologies, such as facial recognition and DNA analysis, improve the accuracy and speed of forensic investigations, aiding in the identification and prosecution of criminals.
  4. Surveillance: AI-powered surveillance systems monitor public spaces and analyse video feeds in real-time to detect suspicious activities and identify suspects more quickly.
  5. Legal Research: AI helps legal professionals by automating the research process, quickly sifting through vast amounts of legal documents and case law to find relevant information.
  6. Fraud Detection: AI helps identify and prevent fraudulent activities by analysing patterns and anomalies in data, which is particularly useful in financial crimes and cybercrime investigations.

These applications of AI in the criminal justice system not only enhance operational efficiency but also aim to reduce human biases, ensure fairer outcomes, and improve public safety. However, they also raise important legal and ethical questions that need to be carefully addressed to ensure justice is served responsibly and equitably.
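To make the predictive-policing idea above concrete, here is a minimal sketch of hotspot detection: bucketing incident coordinates into grid cells and ranking the densest cells. Real systems use far richer models; all coordinates and parameters below are hypothetical and purely illustrative.

```python
from collections import Counter

def crime_hotspots(incidents, cell_size=0.01, top_n=3):
    """Bucket (lat, lon) incident points into grid cells and rank by count."""
    cells = Counter(
        (int(lat / cell_size), int(lon / cell_size))
        for lat, lon in incidents
    )
    return cells.most_common(top_n)

# Hypothetical incident coordinates (lat, lon).
incidents = [
    (12.971, 77.594), (12.972, 77.595), (12.971, 77.596),  # dense cluster
    (12.935, 77.624), (12.936, 77.625),                    # smaller cluster
    (13.010, 77.550),                                      # isolated incident
]
for cell, count in crime_hotspots(incidents):
    print(cell, count)  # densest grid cells first
```

Even this toy version makes the policy question visible: the "hotspots" it reports are only as good as the incident data fed in, which is exactly where the bias concerns discussed later arise.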

Advantages and Disadvantages of AI in Criminal Justice System

Artificial Intelligence (AI) offers both advantages and disadvantages when applied to the criminal justice system. These include:

Pros:

Increased Efficiency: AI streamlines processes, such as predictive policing and case management, leading to faster resolution of cases and improved resource allocation.

Enhanced Accuracy: AI algorithms analyse vast amounts of data with precision, aiding in evidence analysis, risk assessment, and decision-making, potentially reducing errors and wrongful convictions.

Bias Reduction: AI has the potential to mitigate human biases in decision-making by relying on data-driven analysis rather than subjective judgments, fostering fairness and impartiality.

Cost Savings: Automation of tasks, such as document processing and analysis, can lead to cost savings for criminal justice agencies, allowing them to allocate resources more efficiently.

Improved Safety: AI-powered surveillance systems and predictive analytics help identify potential threats and prevent crimes, enhancing public safety and security.

Cons:

Algorithmic Bias: AI systems can inherit biases present in the data used to train them, leading to discriminatory outcomes, particularly against marginalized communities.

Lack of Transparency: The complexity of AI algorithms makes it challenging to understand how decisions are made, raising concerns about transparency, accountability, and the right to due process.

Privacy Concerns: The use of AI for surveillance and data analysis raises privacy concerns, as individuals’ personal information may be collected and analysed without their consent, potentially infringing on civil liberties.

Legal and Ethical Dilemmas: The application of AI in criminal justice raises complex legal and ethical questions regarding liability for AI-generated decisions, the right to fair trial, and the use of predictive analytics in sentencing.

Overreliance on Technology: Excessive reliance on AI systems without adequate human oversight may lead to errors, misuse of technology, and erosion of trust in the criminal justice system.

Understanding these advantages and disadvantages is essential for policymakers, legal professionals, and stakeholders as they navigate the ethical and legal implications of integrating AI into the criminal justice system responsibly.
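The algorithmic-bias concern listed above can be illustrated with a toy calculation (all figures hypothetical): when one area is patrolled more heavily, raw arrest counts make it look riskier even if underlying offending is identical, and a model trained on those counts inherits the skew.

```python
# Hypothetical arrest counts for two areas. Area "A" was patrolled twice
# as heavily, so twice as many arrests were recorded there even though
# the underlying offending rate is assumed identical.
arrests = {"A": 200, "B": 100}
patrols = {"A": 2.0, "B": 1.0}  # relative patrol intensity

total = sum(arrests.values())

# A naive model that scores areas by raw arrest share inherits the
# enforcement bias baked into the data...
naive_score = {area: n / total for area, n in arrests.items()}

# ...whereas normalising by patrol intensity reveals equal underlying rates.
adjusted_score = {area: arrests[area] / patrols[area] for area in arrests}

print(naive_score)     # area A appears twice as "risky"
print(adjusted_score)  # identical once enforcement intensity is removed
```

The point of the sketch is not the arithmetic but the feedback loop: if the naive score then directs more patrols to area A, the disparity in the next round of data grows.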

Ethics and Fairness in AI Criminal Liability in India

When we talk about ethics and fairness in AI and criminal liability in India, we are looking at how artificial intelligence is used in law enforcement and the legal system, and how the Indian Penal Code, 1860 (IPC) is framed to address AI-related issues. The IPC is a set of laws that defines crimes such as theft, fraud, assault, murder, and rape, and prescribes their punishments in India. It provides guidelines on what is considered criminal behaviour and how offenders should be punished.

AI technology can be a powerful tool, helping police predict crime or judges make decisions about bail or sentencing. But there are important ethical questions to consider. For example, how do we make sure AI systems are fair and don’t discriminate against certain groups? How do we ensure that these systems are transparent and can be understood by everyone, not just experts?

Another concern is about who is responsible if something goes wrong. If an AI system makes a mistake that leads to someone being wrongly accused or punished, who should be held accountable? These are complex issues that require careful thought and consideration to ensure that AI is used ethically and fairly in our criminal justice system.

Navigating Challenges in Assigning Criminal Liability to AI in India

In India, assigning criminal liability to artificial intelligence (AI) poses several challenges due to the unique nature of AI technology. AI operates based on complex algorithms, making it difficult to comprehend how it arrives at decisions. This complexity poses a challenge in determining who should be held responsible if the AI system makes a mistake or commits a crime. Currently, there are no specific laws in India addressing the criminal liability of AI systems. This absence of a legal framework makes it challenging to hold AI accountable for its actions within the existing legal system. While AI systems may operate autonomously, they are created, programmed, and managed by humans. Determining the level of human involvement and responsibility in AI-related crimes presents a challenge in attributing liability. Criminal liability often requires proving intent or mens rea, which refers to the guilty mind or intention to commit a crime. With AI, proving intent becomes complicated as AI lacks consciousness or subjective intentions.

AI systems can inherit biases from the data used to train them, leading to discriminatory outcomes. Identifying and addressing bias within AI algorithms poses a challenge in ensuring fairness and accountability. AI systems often rely on vast amounts of personal data for training and decision-making. Protecting individuals’ privacy rights while using AI in criminal justice processes is a challenge because of the risk of unauthorized access to, or misuse of, sensitive information. The lack of transparency in AI systems makes it difficult to understand how they arrive at decisions. Ensuring transparency and explainability in AI processes is crucial for establishing accountability and trust. Navigating these challenges requires a comprehensive approach involving legal, technological, and ethical considerations to ensure that AI is used responsibly and fairly within the Indian legal system.
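One concrete way to see what "explainability" demands is to contrast a black-box score with a model whose output decomposes into per-factor contributions. The sketch below uses a hypothetical linear risk score; the factor names and weights are invented for illustration and do not reflect any real assessment tool.

```python
# Hypothetical factor weights for a transparent linear risk score.
# A linear model's output can be decomposed factor by factor, which is
# exactly the kind of explanation a black-box model cannot offer.
weights = {"prior_offences": 0.5, "age_under_25": 0.2, "stable_employment": -0.3}

def explain_score(defendant):
    """Return the total score and each factor's contribution to it."""
    contributions = {f: w * defendant[f] for f, w in weights.items()}
    return sum(contributions.values()), contributions

score, why = explain_score(
    {"prior_offences": 2, "age_under_25": 1, "stable_employment": 0}
)
print(round(score, 2))
for factor, contribution in why.items():
    print(f"{factor}: {contribution:+.2f}")  # each factor's share of the score
```

A court or defendant can interrogate each line of such an explanation; with an opaque deep model, no comparable breakdown exists, which is the core of the transparency challenge described above.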

International approaches to criminal liability for artificial intelligence

Internationally, countries are exploring various approaches to address criminal liability in the context of artificial intelligence (AI). Some countries are developing new laws and regulations specifically adapted to govern AI use. These regulations outline the responsibilities of individuals and organizations involved in developing, deploying, and overseeing AI systems. For example, they may specify that developers are liable for any harm caused by their AI systems, regardless of intent. International organizations and industry groups are creating guidelines and principles to promote responsible AI use. These documents offer recommendations for developers and users to follow, emphasizing transparency, fairness, and accountability in AI systems. Many countries stress the importance of human oversight in AI decision-making. This means that, even as AI systems become more autonomous, humans should retain ultimate control and responsibility for the actions of AI. Given the global nature of AI, international cooperation is crucial in addressing AI-related crimes. Countries are collaborating to share information, harmonize regulations, and establish common standards for AI use. Ethical principles such as fairness, transparency, and non-discrimination are key factors in determining AI criminal liability. Countries are taking ethical considerations into account when developing laws and regulations related to AI.

Example of Countries with Strict Liability for AI:

Germany: Germany has implemented strict liability for AI under its Product Liability Act. This means that if an AI system causes harm, the manufacturer or operator of the AI system can be held liable for damages, regardless of fault.

France: France has introduced strict liability for AI through its Civil Code. Under French law, AI developers and users can be held liable for any damage caused by AI systems they have created or deployed.

United Kingdom: The UK has proposed legislation to establish strict liability for AI under its AI Regulation Act. This legislation would hold AI developers and operators accountable for any harm caused by their AI systems, regardless of intent.

These examples demonstrate how countries are implementing strict liability for AI to ensure accountability and protect individuals from potential harm caused by AI systems.

Existing Legal Frameworks and Regulations for AI in the Criminal Justice System

India, like many other countries, is now beginning to integrate artificial intelligence (AI) into its criminal justice system. However, the legal frameworks and regulations specifically addressing AI in this context are still at a nascent stage. India's current legal landscape includes:

  1. General Legal Frameworks

Indian Penal Code (IPC), 1860: The IPC provides the foundational legal framework for defining crimes and prescribing punishments in India. While it does not explicitly address AI, its principles apply to actions performed using AI technologies.

Information Technology Act, 2000: This act regulates cyber activities and electronic data management. It addresses issues such as data protection, privacy, and cybercrimes, which are relevant when AI systems handle sensitive information.

2. Data Protection and Privacy

Personal Data Protection Bill, 2019: This proposed bill aims to protect individual privacy by regulating the collection, storage, and processing of personal data. It has implications for AI systems that use personal data in criminal justice, ensuring that these systems comply with privacy standards.

3. AI and Machine Learning Guidelines

NITI Aayog’s National Strategy for AI: NITI Aayog, the government’s policy think-tank, has outlined a strategy for AI adoption in India, including its use in law enforcement. While not legally binding, these guidelines encourage ethical AI development and use, emphasizing fairness, transparency, and accountability.

4. Judicial Oversight

Supreme Court Judgments: The Indian judiciary has started to acknowledge the role of AI in legal contexts. For example, the Supreme Court has emphasized the importance of fairness and transparency in the use of technology in legal proceedings.

5. Sector-Specific Regulations

Law Enforcement Agencies: Individual law enforcement agencies, such as the police, are beginning to adopt AI tools for tasks like predictive policing and forensic analysis. These agencies operate under general legal principles but lack specific AI regulations.

6. Ethical Considerations

Ethics Guidelines: Various governmental and non-governmental organizations are developing ethical guidelines for AI use. These guidelines focus on preventing biases, ensuring transparency, and maintaining accountability in AI-driven decisions.

Challenges and Future Directions

While India has several general legal frameworks that indirectly govern the use of AI in the criminal justice system, there is a pressing need for specific regulations and guidelines to address the unique challenges posed by AI technologies. Key challenges include:

Lack of Specific Legislation: Currently, there is no comprehensive legislation specifically governing the use of AI in criminal justice. This creates challenges in addressing accountability, transparency, and bias in AI systems.

Need for AI-Specific Laws: There is a growing recognition of the need for AI-specific laws and regulations to address unique challenges posed by AI technologies in the criminal justice system.

Interdisciplinary Approach: Effective regulation will require collaboration between technologists, legal experts, ethicists, and policymakers to create a robust framework that ensures the ethical and fair use of AI in criminal justice.

Conclusion

The integration of artificial intelligence (AI) into the criminal justice system offers significant potential for enhancing efficiency, accuracy, and fairness in various processes, from predictive policing to judicial decision-making. However, it also brings forth complex legal and ethical challenges that require careful consideration and regulation.

One of the foremost concerns is the potential for AI systems to perpetuate or even exacerbate existing biases within the criminal justice system. Ensuring that AI technologies are developed and deployed in a manner that is free from bias is crucial for maintaining public trust and upholding justice. Many AI systems work in ways that are hard to understand, which makes it difficult to see how decisions are made. We need laws that require these systems to be clear and understandable, so people can trust and verify their fairness.

The current legal frameworks in India and globally are often inadequate to address the unique issues posed by AI. Developing specific AI regulations that address these challenges while promoting the ethical use of AI is essential. This includes creating laws that ensure fairness, non-discrimination, and respect for human rights. The judiciary must also play a proactive role in overseeing the deployment of AI within the criminal justice system, scrutinizing AI-based decisions to ensure they meet the standards of fairness and justice.

The use of AI in India’s criminal justice system thus presents both opportunities and challenges. While AI can enhance efficiency and decision-making, it also raises significant legal and ethical issues. Recent discussions emphasize the need for comprehensive regulations, transparency, and accountability to ensure that AI technologies are used responsibly and justly. As AI continues to evolve, it is crucial for India to develop robust legal frameworks that address these challenges and uphold the principles of justice and fairness.

“PRIME LEGAL is a full-service law firm that has won a National Award and has more than 20 years of experience in an array of sectors and practice areas. Prime legal fall into a category of best law firm, best lawyer, best family lawyer, best divorce lawyer, best divorce law firm, best criminal lawyer, best criminal law firm, best consumer lawyer, best civil lawyer.”

Written By- Antara Ghosh


Impact of Technology on Access to Justice in India: Opportunities and Challenges.

Abstract:
Technology and law have developed considerably in recent times. Not long ago it was impractical even to think of holding court hearings without seeing the concerned parties and counsels in person. As time passed, the Indian judiciary joined hands with technology to enhance the process of conducting trials and delivering judgements. Though the practicality of this was once questioned, the COVID pandemic made all of us sit up and realise that technological innovations in the legal system can indeed be practically applied.
Key words:
e-courts, e-filing, technology, virtual court hearings, e-courts portal, e-payment, electronic display system (EDS), document management system (DMS), block-chain method.
Introduction:
In today’s fast-paced world, technology and innovation have played a vital role in our lives, and the legal world is no exception. Legal technology, also called legal tech or law tech, has been playing a significant role in the Indian legal system, and the Indian judiciary has joined the present wave of technological advancement. For a very long time the Indian judiciary has been grappling with a significant backlog and pendency of litigation. The National Judicial Data Grid (NJDG) gives us a picture of the backlog of cases haunting India: about 4.38 crore cases lay pending before the Taluka and District courts, while 60.9 lakh cases (60,90,891 cases) were pending before the High Courts. Thus, the backlog of cases has crossed the 5 crore mark, with 5,00,39,981 cases pending before the various Courts/Tribunals across the country as of 1 June 2023. This issue is being addressed through e-courts and Digital India: the Indian judiciary is amalgamating technology into the traditional courts to reduce pendency and hasten justice delivery.
Use of Technology in Justice Delivery Mechanism-
In India the relationship between law and technology has been growing rapidly and has gained considerable importance, as innovative technologies have been changing the country’s justice delivery mechanism. The legal fraternity has also duly benefitted from this technological entry into the legal fora, as technology enables access to case law, legislation, and legal commentary through online platforms. Lawyers have been able to communicate with their clients, stakeholders, and co-counsel through these means, thereby decreasing in-person meetings. Technology allows legal practitioners and judicial stakeholders to operate more effectively by ensuring that time-consuming activities like document management, scheduling, and legal research are handled through technology.
Through the various initiatives enunciated below, technology has played a key role in reducing administrative costs, increasing productivity, and developing the ability to manage caseloads:
e-Courts Project
The e-Courts Mission Mode Project is a pan-India project, monitored and funded by the Department of Justice, Ministry of Law and Justice, Government of India, for the District courts across the country. Its vision is to transform the Indian judiciary through ICT (Information and Communications Technology) enablement of the courts.
The Development of e-Courts:
e-Courts have become a new tool for access to justice in the Indian legal system. Previously, we could not even comprehend court hearings without seeing the counsels and the parties concerned in person. Though the pandemic brought the world to a standstill, these technological innovations ensured that court proceedings were carried out without hampering the judicial process, including the hearing of the parties, thereby upholding the essential principles of natural justice. Indian e-courts aim to provide efficient and transparent services to litigants. Given below are some of the initiatives:
1) Virtual court systems- In the said system, court proceedings are conducted virtually by means of video conferencing. This ensures easy access to justice and reduces the pendency of cases.
2) e-Courts portal- The e-Courts portal encompasses the interest of all the litigants, advocates, government agencies, police and citizens. In fact, this system is so helpful that being anywhere, anybody can access the portal and get the details of the cases as first-hand information.
3) e-filing- The facility of filing court cases electronically ensures benefits such as saving time and money, automatic digitization thereby reducing the paper consumption which is a necessary step, to be taken as an environmental cause. The step towards e filing reduces the physical hardship of being physically present before the Courts.
4) e-payment of court fees and fines- The ability to make online payments of court fees and fines reduces the need for carrying physical cash, stamps and cheques etc. thereby, integrating with the state’s specified vendors for convenience.
5) Court Management System (CMS): This is a web-based system that manages the whole court process, from case filing to judgement, ensuring an easy flow of information across various stakeholders and departments.
6) Document Management System (DMS): In this system, the documents can be digitally saved from any location and anytime, thereby saving the physical spaces.
7) Electronic Display System (EDS): This system displays court procedures, such as case status, case lists and cause lists, on electronic screens around the court complex, eliminating needless physical travel.
8) Court Recording and Transcription System (CRTS): This system records and transcribes court proceedings for use as evidence, thereby reducing reliance on handwritten note-taking.
9) Use of AI and Machine Learning: Artificial intelligence and machine learning help in analysing vast amounts of data, identifying patterns, and predicting outcomes, which enhances the efficiency of the judicial delivery system; for instance, the SUVAS and SUPACE tools implemented by the Supreme Court and High Courts.
10) Blockchain for Secure Record-Keeping: This technology helps ensure the security and transparency of court records. The blockchain method prevents records from being tampered with and ensures that court records remain secure.
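The tamper-evidence property mentioned in point 10 can be sketched with a minimal hash chain, where each record stores the hash of the previous entry. This is a simplified illustration of the idea, not a production blockchain; the order texts are invented examples.

```python
import hashlib
import json

def add_record(chain, record):
    """Append a record linked to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; any tampered record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
add_record(chain, "Order dated 01-01-2024: bail granted")
add_record(chain, "Order dated 15-01-2024: next hearing listed")
print(verify(chain))  # the untouched chain verifies

chain[0]["record"] = "Order dated 01-01-2024: bail denied"
print(verify(chain))  # tampering with an old order is detected
```

Because each entry's hash covers both its content and the previous hash, altering any earlier record invalidates every link after it, which is what makes such records tamper-evident.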

Challenges with Application of Emerging Technologies in the Judicial System:
1) Data security: As the data involved is highly sensitive, it is very important to keep it secure.
2) Bias and discrimination: Emerging AI carries bias and discrimination within its algorithms, which poses a high risk of inequality in the judicial system.
3) Privacy concerns: The use of technology carries high risks to privacy rights, which can result in violation of an individual’s private rights.
4) Cost: The high costs involved in implementing technological reforms need to be managed, as judicial systems may not have sufficient resources.
5) Lack of understanding: Many legal professionals may not have sufficient understanding of the emerging technology, which may result in inequalities in the justice system.
Way Forward:
Data Privacy and Security: Technology relies heavily on data collection; hence it is important to ensure that the data collected and used remains secure and private.
Accessibility: The judicial system must ensure that there are no barriers to the accessibility of data.
Transparency and Accountability: The judicial system should ensure that there is transparency and accountability with the emerging technology to ensure just and fair usage.
Training and Education: The judiciary should ensure that lawyers, stakeholders and judges are properly trained to keep pace with the emerging technology.
Conclusion:
In conclusion, this article discusses the way the Indian judicial system has developed in recent times with the advent of technology, and the way the judiciary has enhanced court proceedings through technological means such as e-filing, the e-Courts portal, and e-payment. Through these, the courts have also ensured accessibility and transparency of court proceedings for citizens. There are positive aspects to this, but there are also negative aspects: technology can widen the divide, particularly for marginalized communities. This erodes the idea of equal justice and worsens the unequal allocation of legal services. The digital divide also negatively impacts attorneys, who often are neither technologically literate nor equipped with access to digital tools and resources. This article gives a bird’s-eye view of all these techno-legal developments.
Written by- Parvathy P.V.








Deepfakes and AI: How the Government Plans to Execute a Regulatory Framework under the IT Rules and the IT Act, 2000

Introduction

Deepfakes are manipulated images or videos in which techniques like morphing are used to falsely impersonate someone else. They are a tool of misrepresentation carrying an abundance of consequences. Recently, many popular personalities have become victims of deepfakes.

Currently, there are no dedicated regulations to stop this type of misuse. One of the primary legislations for preventing and prohibiting deepfakes currently in India is the IT Act, 2000. Circulating or publishing a person’s images in mass media without consent falls under the scope of violation of privacy[1].

However, the IT Act is not sufficient to meet the specific need for deepfake or AI regulation in the country.

Existing Regulation:

The IT Act, 2000[2] and IT Rules[3] specify provisions for the violation of privacy against an individual and also the appropriate punishments.

Punishments:

Section 66A provided that any person who sends offensive, false or misleading information through a message is punishable with imprisonment of up to three years along with a fine. (Notably, Section 66A was struck down as unconstitutional by the Supreme Court in Shreya Singhal v. Union of India (2015).)

Section 66C of the Act provides that any person who impersonates another person by using their signature or a unique identification feature shall be punished with imprisonment of up to three years.

Section 66E provides for violation of privacy: capturing or publishing obscene images of a person without consent is punishable with imprisonment of up to three years alongside a fine of up to Rs. 2,00,000.

The punishment for publishing any form of obscene material is provided under Section 67, while Section 67A provides punishment for publishing any sexually explicit act. Section 67B punishes any person for transmitting or publishing any obscene media involving children.

IT Rules:

Rule 3(1)(b) gives directions to intermediaries. Intermediaries are the controllers of data and store data for their internet applications or websites.

Intermediaries are defined under Section 2(w) as:

“intermediary, with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes”

Rule 3(1)(b) states that:

No intermediary shall host, display, modify, publish, transmit, store, update or share any information that:

  • Belongs to another person
  • Is obscene, invasive of the bodily privacy of another person, encourages money laundering, etc.
  • Is harmful to a child
  • Infringes any intellectual property
  • Misleads the viewer about the origin of the message or communicates any misinformation through its interface
  • Impersonates another person
  • Is a threat to the security and sovereignty of India.

However, Section 79(1) of the IT Act provides that intermediaries will not be liable:

“Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3), an intermediary shall not be liable for any third party information, data, or communication link made available or hosted by him.”

MeitY Advisory Notification:

The Ministry of Electronics and Information Technology (MeitY) issued an advisory notification directing intermediaries to follow the IT Rules to prevent the circulation of deepfakes.

The advisory notification mentions that users need to be specifically informed about the types of content prohibited under the IT Rules. The communication should be in precise language and easy to interpret. The ministry also advised setting up regular reminders to users about the prohibited content, for example at every login or while registering a new account on the interface[4].

Users must also be informed about the penal provisions attracted by a violation of Rule 3(1)(b) of the IT Rules; the penal laws applicable to such violations are the IPC and the IT Act. The advisory specifies that, in their terms and conditions, intermediaries/platforms must clearly highlight that they are under an obligation to report legal violations to the law enforcement agencies under the relevant Indian laws applicable to the context[5].

The advisory also emphasised Rule 3(1)(b)(v). Under that rule, any content which:

“deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature”

must be removed by the intermediary as part of its duty.

Furthermore, it added that it is the responsibility of platforms to take reasonable measures to stop users from hosting, displaying, uploading, altering, publishing, sending, storing, updating, or distributing any content that is forbidden on digital intermediaries, or any information connected to any of the 11 listed user harms[6].

Do AI and deepfakes need separate legislation?

The IT Act and the IT Rules no doubt provide extensive scope for covering AI and deepfake violations. However, these are ex-post regulations: they do not cover the prevention of such issues beforehand, and provide a remedy only after the damage has been done.

Innovation and new technological advancement are not, in themselves, the drawback of AI and deepfakes; it is their misuse that is creating a wide gap between privacy and technological progress. The mechanism of “cure after damage” should be changed.

It is advised that the regulatory framework be ex-ante, i.e. one that operates to prevent a wrong before it occurs.

It has also been pointed out that MeitY's advisory carries no legal enforcement power, and therefore large companies and intermediaries are not legally bound to follow it[7].

Conclusion:

Things are not what they seem. With rapidly developing technology, it is increasingly hard to identify the origin of a particular piece of media. For a country like India, with its enormous number of internet users, it is essential to have specific legislation that extends the scope of regulation to artificial intelligence and deepfakes.

It is important to recognise the need for artificial intelligence in the country's existing work culture; however, this should not come at the expense of one's right to privacy, which is protected under the Constitution as well as the new DPDP Act, 2023.

PRIME LEGAL is a full-service law firm that has won a National Award and has more than 20 years of experience in an array of sectors and practice areas.

Written by Sanjana Ravichandran

[1] Abha Shah and Nitika Nagar, The Deepfake Dilemma: Navigating Truth and Deception in Today's Digital Era, MONDAQ (Dec 14, 2023), https://www.mondaq.com/india/new-technology/1401876/the-deepfake-dilemma-navigating-truth-and-deception-in-todays-digital-era#:~:text=Deepfake%20technology%20refers%20to%20a,of%20deep%20learning%20and%20fake.

[2] The Information Technology Act, 2000 (Act No. 21 of 2000).

[3] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, G.S.R. 139(E), published in the Gazette of India.

[4] Deepfake Issue: IT Ministry Tells Social Media Platforms to Comply with Rules or Face Action, MINT (Dec 26, 2023), https://www.livemint.com/technology/tech-news/govt-ministry-deepfake-advisory-content-not-permitted-it-rules-must-be-clearly-communicated-to-users-11703598291391.html

[5] PIB Delhi, MeitY Issues Advisory to All Intermediaries to Comply with Existing IT Rules, PIB (Dec 26, 2023, 6:34 PM), https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542#:~:text=The%20directive%20specifically%20targets%20the,clearly%20and%20precisely%20to%20users.

[6] India: MeitY Set to Introduce Regulations on Deepfakes, ONETRUST DATAGUIDANCE (Nov 23, 2023), https://www.dataguidance.com/news/india-meity-set-introduce-regulations-deepfakes

[7] Aaratrika Bhaumik, Regulating Deepfakes and Generative AI in India | Explained, THE HINDU (Dec 4, 2023), https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece