Legal Safeguards against ‘Deepfakes’ and AI-Generated Misinformation:
Introduction:
The rapid evolution of Artificial Intelligence has transformed modern communication, enabling the creation of highly realistic digital content. Among the most controversial developments is the emergence of deepfakes—synthetic media in which a person’s likeness is digitally altered to depict actions or speech that never occurred. While such technology may have legitimate uses in entertainment and innovation, its misuse presents serious legal and constitutional challenges. Deepfakes are increasingly being used to spread misinformation, commit financial fraud, damage reputations, and violate individual privacy.
In India, the legal response to deepfakes is still developing. There is no single codified legislation specifically governing AI-generated misinformation. Instead, the regulatory framework consists of a combination of existing statutes such as the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, 2023, intermediary guidelines, and constitutional protections. This article provides an in-depth analysis of these legal safeguards, the role of courts, procedural remedies available to victims, and the constitutional implications of regulating deepfakes.
Conceptual Understanding of Deepfakes in Law:
Deepfakes are created using advanced machine learning algorithms, particularly deep learning techniques such as Generative Adversarial Networks (GANs). These systems analyze large datasets of images, videos, or voice samples to produce realistic but fabricated content. From a legal standpoint, deepfakes are problematic not because of the technology itself, but because of their potential misuse.
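The adversarial dynamic described above can be illustrated with a deliberately tiny sketch. This is not a real deepfake pipeline: the "data" are one-dimensional numbers, the generator is a single learnable shift, the discriminator is a logistic classifier, and gradients are estimated by finite differences. All names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples centred on 4.0 (a stand-in for genuine images or voices).
real = lambda n: rng.normal(4.0, 0.5, n)

# Generator: shifts random noise by a learnable offset theta.
# Discriminator: logistic classifier with parameters w, b.
theta, w, b = 0.0, 0.1, 0.0

def d_loss(w, b, theta, z, x):
    """Discriminator loss: score real samples high, generated samples low."""
    fake = z + theta
    return (-np.mean(np.log(sigmoid(w * x + b) + 1e-9))
            - np.mean(np.log(1 - sigmoid(w * fake + b) + 1e-9)))

def g_loss(theta, w, b, z):
    """Generator loss: make the discriminator score fakes as real."""
    fake = z + theta
    return -np.mean(np.log(sigmoid(w * fake + b) + 1e-9))

eps, lr = 1e-4, 0.05
for step in range(500):
    z, x = rng.normal(0, 1, 64), real(64)
    # Finite-difference gradients keep the sketch free of autodiff machinery.
    gw = (d_loss(w + eps, b, theta, z, x) - d_loss(w - eps, b, theta, z, x)) / (2 * eps)
    gb = (d_loss(w, b + eps, theta, z, x) - d_loss(w, b - eps, theta, z, x)) / (2 * eps)
    w, b = w - lr * gw, b - lr * gb          # discriminator learns real vs fake
    gt = (g_loss(theta + eps, w, b, z) - g_loss(theta - eps, w, b, z)) / (2 * eps)
    theta = theta - lr * gt                  # generator learns to fool it

print(f"generator shift after training: {theta:.2f} (real data centred on 4.0)")
```

Production GAN systems replace these toy functions with deep neural networks trained by backpropagation over millions of samples, but the alternating fool-versus-detect loop is the same, which is why the output becomes progressively harder to distinguish from genuine material.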
The misuse of deepfakes can give rise to multiple legal wrongs. For instance, creating a fake video of a person making defamatory statements may constitute defamation. Similarly, using someone’s likeness without consent may amount to a violation of privacy or personality rights. When deepfakes are used for financial fraud, they fall within the domain of cybercrime. Thus, deepfakes are not a separate category of offence but rather a technological tool that facilitates existing offences in new and more dangerous ways.
Categories of Deepfakes
Deepfake content broadly falls into the following categories:
- Face Manipulation Videos – Altering facial features or expressions
- Voice Cloning – Replicating a person’s speech patterns
- Synthetic Political Content – Fabricated speeches or statements
- Non-Consensual Explicit Content – Often targeting women
AI-generated misinformation extends beyond deepfakes to include fabricated news articles, manipulated images, and misleading narratives designed to influence public opinion. The primary concern lies in their credibility and virality, which make them difficult to detect and control once circulated.
India’s Legal Framework:
India currently does not have a dedicated statute regulating deepfakes. Instead, it relies on a combination of cyber law, criminal law, and intermediary regulations.
1. The Information Technology Act, 2000: The Core Cyber Law
The Information Technology Act, 2000 forms the backbone of India’s cyber regulatory regime.
Identity Theft and Impersonation
Sections 66C and 66D criminalize identity theft and cheating by personation through electronic means. Deepfakes that replicate a person’s face or voice for fraudulent purposes fall squarely within these provisions.
Privacy Violations
Section 66E penalizes the violation of privacy, including the unauthorized capture or transmission of private images. Deepfake pornography is a direct violation of this provision.
Obscenity and Explicit Content
Sections 67 and 67A prohibit the publication and transmission of obscene and sexually explicit material in electronic form. These provisions are increasingly invoked in deepfake abuse cases.
2. Intermediary Liability and IT Rules, 2021: Holding Platforms Accountable
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, issued under the IT Act, 2000, represent a major shift from passive to active compliance for digital platforms in India. These rules require intermediaries to perform specific “due diligence” to retain their safe-harbour protection, the legal immunity that prevents them from being held liable for content posted by their users.
The Core of Intermediary Liability (Section 79)
Under Section 79 of the IT Act, intermediaries are generally not liable for third-party information they host. However, the 2021 Rules make this immunity conditional on strictly following these mandates:
- Due Diligence: Intermediaries must not only inform users about prohibited content but also take “reasonable efforts” to prevent its upload.
- Takedown Timelines:
- 36 Hours: Access to unlawful content must be disabled within 36 hours of a court or government order.
- 24 Hours: Content depicting nudity, sexual acts, or impersonation (including morphed images) must be removed within 24 hours of a user complaint.
- Data Retention: Intermediaries must now preserve user registration records and deleted information for 180 days (increased from 90) to assist in investigations.
Significant Social Media Intermediaries (SSMIs)
Platforms with over 5 million (50 lakh) users in India are classified as SSMIs and face heightened obligations:
- India-Based Officers: They must appoint a Chief Compliance Officer (liable for any non-compliance), a Nodal Contact Person (24×7 law enforcement liaison), and a Resident Grievance Officer.
- Monthly Compliance Reports: They must publish reports detailing complaints received and actions taken, including proactive content removals.
- Traceability (Rule 4(2)): Messaging platforms must be able to identify the “first originator” of a message if ordered by a court or the government for serious offences (punishable with imprisonment of five years or more). WhatsApp has notably challenged this provision for potentially breaking end-to-end encryption.
- Proactive Filtering: SSMIs are expected to use automated tools to identify and filter content like Child Sexual Abuse Material (CSAM).
Regulatory Framework for Digital Media (Part III)
Administered by the Ministry of Information and Broadcasting (MIB), this part extends rules to news publishers and OTT platforms:
- Code of Ethics: Digital news publishers must follow Press Council of India standards.
- OTT Self-Classification: Content must be classified into five age categories (U, U/A 7+, U/A 13+, U/A 16+, and A), with mandatory parental locks for content rated U/A 13+ and above and age verification for “A”-rated content.
- Three-Tier Redressal:
- Level I: Self-regulation by the publisher (appointing a Grievance Officer).
- Level II: Self-regulation by an industry body headed by a retired judge.
- Level III: An Oversight Mechanism by the Central Government via an Inter-Departmental Committee.
3. Bharatiya Nyaya Sanhita, 2023 (BNS)
The Bharatiya Nyaya Sanhita replaces the Indian Penal Code and modernizes criminal law in India. It includes provisions that can be applied to deepfake-related offences, such as cheating, forgery, and defamation.
Forgery provisions are particularly relevant when deepfakes are used to create false evidence or documents. Defamation provisions apply when deepfake content harms a person’s reputation. The law recognizes that digital acts can have real-world consequences, thereby expanding the scope of criminal liability.
Relevant provisions include:
- Cheating (Section 318 BNS)
- Defamation (Section 356 BNS)
- Forgery (Sections 336–340 BNS)
Deepfakes used for deception, reputation harm, or fraud fall within these provisions.
4. Indian Evidence Act and Digital Evidence
Deepfakes present unique challenges in the law of evidence. Courts must determine whether digital content is authentic or manipulated. The admissibility of electronic evidence depends on compliance with procedural requirements, including certification under Section 65B of the Indian Evidence Act, 1872 (now carried forward in Section 63 of the Bharatiya Sakshya Adhiniyam, 2023).
Expert testimony plays a critical role in identifying deepfakes. Digital forensic analysis is increasingly necessary to verify the integrity of audio-visual material. This highlights the need for judicial training and technological expertise within the legal system.
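A core forensic step behind the verification described above is cryptographic hashing: investigators record a hash of seized media at collection, and any later alteration changes the digest. The sketch below is a minimal illustration of that workflow, assuming in-memory byte strings in place of actual video files; the sample data and names are hypothetical.

```python
import hashlib

def sha256_of(data: bytes, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a byte stream, hashed in chunks
    (the chunked loop mirrors how large video files are hashed from disk)."""
    h = hashlib.sha256()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()

# At seizure: the investigator records the hash of the original media.
original = b"frame-data-of-seized-video"
hash_at_seizure = sha256_of(original)

# Later: the copy produced in court is re-hashed and compared.
tampered = b"frame-data-of-seized-video-EDITED"
assert sha256_of(original) == hash_at_seizure    # integrity intact
assert sha256_of(tampered) != hash_at_seizure    # manipulation detected
```

A matching digest shows the file was not altered after seizure; it does not, by itself, prove the content is genuine rather than synthetic, which is why expert deepfake-detection testimony remains necessary alongside hash-based chain-of-custody records.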
Major International Standpoints & Legal Frameworks
- European Union (EU): The EU AI Act, with transparency obligations taking effect in August 2026, classifies deepfakes as “limited risk” and mandates they be labeled. It requires AI providers to mark content in machine-readable formats and deployers to disclose when content is synthetic. Additionally, the Digital Services Act (DSA) compels platforms to quickly remove illegal deepfakes (such as defamation or non-consensual pornography) to avoid fines up to 6% of global turnover.
- United States: The U.S. approach is a patchwork of federal and state laws.
- DEFIANCE Act & Take It Down Act (2025/2026): These create civil causes of action for victims of non-consensual intimate imagery (NCII) and mandate reporting and removal processes on social platforms.
- State Laws: Several states (e.g., Texas, California) have enacted laws specifically prohibiting deceptive AI in election campaigns.
- China: Under the Deep Synthesis Regulations (2022), China requires services to verify user identity, label synthesized content, and maintain records to prevent the spread of illegal information.
- South Korea: Maintains some of the world’s strictest laws, criminalizing even the possession and viewing of non-consensual deepfake pornography; penalties for producing or distributing such content reach seven years in prison.
Legal Safeguards Against Misinformation
- Mandatory Watermarking/Labeling: Regulations commonly require that AI-generated audio or video be labeled “prominently” or include a spoken disclaimer, making it hard to pass off synthetic content as real.
- Platform Liability (Safe Harbor Loss): Intermediaries (Meta, X, Google) that fail to detect or remove deepfakes after receiving notice can lose their “safe harbor” protections, making them legally liable for the user-generated content.
- Provenance Metadata: New standards, such as C2PA, are becoming mandated to embed technical provenance data that survives editing, proving the content’s origin.
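The provenance idea above can be sketched as a signed manifest that binds a content hash to an AI-generation disclosure. This is not the actual C2PA format (which uses JUMBF containers and X.509 certificate chains rather than the HMAC shortcut used here); every field name, key, and function in this sketch is an illustrative assumption.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret-key"  # hypothetical shared key; real C2PA uses certificates

def make_manifest(content: bytes, generator: str) -> dict:
    """Bind a content hash and an AI-generation disclosure into a signed record."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, manifest: dict) -> bool:
    """Re-derive the signature and re-hash the content; both must match."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    good_hash = record["content_sha256"] == hashlib.sha256(content).hexdigest()
    return good_sig and good_hash

video = b"synthetic-video-bytes"
m = make_manifest(video, "example-model-v1")
assert verify(video, m)             # untouched content verifies
assert not verify(video + b"x", m)  # any edit breaks provenance
```

The design point the regulations rely on is that both the content and the disclosure are covered by the signature: stripping the “AI-generated” label or editing the media invalidates the record, so honest provenance survives while tampering is detectable.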
International Trends & Challenges
- Cross-border Enforcement: A major gap exists where a deepfake is produced in a country with weak laws but harms someone in another, prompting discussions for international treaties.
- Balancing Act: Laws strive to distinguish between malicious deepfakes and legitimate artistic, satirical, or creative expression, which are usually exempted if clearly labeled.
- Focus on Gender-Based Violence: Because over 90% of deepfakes are non-consensual pornography, laws are increasingly treating this as a severe form of digital sex crime.
Constitutional Framework and Deepfakes in India
Right to Privacy under Article 21
The right to privacy, recognized as a fundamental right, is central to the regulation of deepfakes. Unauthorized use of a person’s image or voice violates their autonomy and dignity. Deepfakes can distort an individual’s identity, leading to psychological harm and social stigma.
The Supreme Court has emphasized that privacy includes the right to control personal information. Deepfakes undermine this control by creating false representations without consent.
Freedom of Speech under Article 19(1)(a)
The regulation of deepfakes must be balanced against the right to freedom of speech. Not all AI-generated content is harmful. Satire, parody, and artistic expression may involve synthetic media and are protected forms of speech.
However, Article 19(2) permits reasonable restrictions on grounds including defamation, public order, decency or morality, and the security of the State. Deepfakes that spread misinformation or harm individuals fall within these exceptions.
Doctrine of Proportionality
Courts apply the doctrine of proportionality to ensure that restrictions on speech are justified. Any regulation must be necessary, suitable, and the least restrictive means of achieving the objective. This doctrine is essential in evaluating laws aimed at controlling deepfakes.
The test requires answering three questions:
- Is the restriction necessary?
- Is it the least restrictive means available?
- Does the benefit of the restriction outweigh the harm to the right?
Judicial Approach: Cases
1. Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1
- Supreme Court declared privacy a fundamental right under Article 21.
- Recognized informational self-determination.
- Held that individuals control their personal data and identity.
- Any unauthorized use violates constitutional rights.
- Forms the foundation for deepfake-related privacy claims.
2. Shreya Singhal v. Union of India, (2015) 5 SCC 1
- Struck down Section 66A IT Act for vagueness.
- Affirmed importance of free speech online.
- Distinguished between discussion, advocacy, and incitement.
- Allowed regulation of harmful online content under valid laws.
- Guides limits of deepfake regulation.
3. Modern Dental College v. State of Madhya Pradesh, (2016) 7 SCC 353
- Established the Doctrine of Proportionality.
- Any restriction must be reasonable and necessary.
- Applied to fundamental rights limitations.
- Ensures balanced regulation of speech.
- Relevant for evaluating AI regulation laws.
4. Anil Kapoor v. Simply Life India, 2023 SCC OnLine Del 6914
- Delhi High Court protected personality rights of the actor.
- Restrained misuse of name, image, and voice using AI tools.
- Recognized commercial exploitation via deepfakes.
- Extended protection beyond traditional IP law.
- Landmark for celebrity deepfake protection.
5. Amitabh Bachchan v. Rajat Nagi, 2022 SCC OnLine Del 4116
- Court granted injunction against unauthorized use of persona.
- Recognized right to control one’s identity.
- Covered voice, image, and likeness.
- Prevented digital misuse and impersonation.
- Strengthened personality rights jurisprudence.
6. State of Tamil Nadu v. Suhas Katti (2004)
- One of India’s first cybercrime convictions.
- Accused posted obscene messages online.
- Court applied IT Act provisions effectively.
- Established precedent for online harassment cases.
- Relevant for early digital abuse jurisprudence.
7. Rashmika Mandanna Deepfake Incident (2023)
- Viral deepfake video triggered public outrage.
- Government acknowledged regulatory gap.
- Led to stronger enforcement discussions.
- Highlighted vulnerability of women online.
- Became a catalyst for policy reform.
8. Rituparna Sengupta Deepfake Case (2023)
- Actress targeted with AI-generated content.
- Raised issues of consent and dignity.
- Showed misuse of publicly available images.
- Reinforced need for stricter cyber laws.
- Demonstrated real-world impact of deepfakes.
9. US Deepfake Fraud Case (2019)
- AI voice cloning used to impersonate CEO.
- Fraudulent transfer of large funds executed.
- Court treated AI impersonation as fraud.
- Highlighted financial risks of deepfakes.
- Influenced global regulatory thinking.
10. UK Deepfake Pornography Case (2021)
- Non-consensual deepfake sexual content prosecuted.
- Court treated it as serious sexual exploitation.
- Emphasized consent and dignity.
- Expanded interpretation of existing laws.
- Set precedent for strict penalties.
Complaint Filing and Legal Remedies (Step-by-Step):
Victims of deepfakes have multiple legal remedies in India.
1. Filing a Cyber Complaint
Step 1: Visit National Cyber Crime Portal
Website: https://cybercrime.gov.in
Choose category: “Women/Child related crime” or “Other cybercrime”
Step 2: Submit Evidence
- Screenshots
- URLs
- Device details
Step 3: FIR Registration. The police may register an FIR under:
- IT Act
- BNS provisions
- IPC (for acts committed before the BNS took effect)
2. Filing an FIR at the Police Station
You can directly approach:
- Local police station
- Cyber crime cell
Mention:
- Nature of deepfake
- Harm caused
- Identity of accused (if known)
3. Platform Complaint Mechanism
File a complaint with:
- Social media grievance officer
Under IT Rules:
- Platforms must respond within 24 hours (in urgent cases)
- Remove content quickly
4. Civil Remedies
The victim may:
- File a defamation suit
- Seek an injunction to stop circulation
- Claim damages
5. Approach the High Court (Writ Petition)
Under Article 226, the victim can seek:
- Immediate takedown
- Protection of fundamental rights
- Directions to authorities
6. Criminal Proceedings
Charges may include:
- Identity theft
- Cheating
- Obscenity
- Defamation
Punishment may range from a fine to imprisonment, depending on the provision invoked.
Procedural Challenges and Enforcement Issues
Despite the availability of legal remedies, several challenges hinder effective enforcement. Identifying the originator of a deepfake is often difficult due to anonymity and cross-border dissemination. Digital evidence can be easily altered, making verification complex.
There is also a lack of technical expertise among law enforcement agencies, which affects investigation and prosecution. Delays in content takedown can result in irreversible harm, as deepfakes spread rapidly across platforms.
The Way Forward: Towards a Comprehensive Framework
India must adopt a proactive approach to regulate deepfakes.
- Dedicated Legislation: A specific law addressing AI-generated content is essential.
- Technological Solutions: Investment in detection tools and digital verification systems is crucial.
- Platform Accountability: Stricter obligations on intermediaries can reduce the spread of harmful content.
- Public Awareness: Educating users is key to preventing misinformation.
Conclusion:
Deepfakes challenge the very concept of reality in the digital world. They blur the line between truth and fabrication, threatening individual rights and societal trust. While India’s current legal framework provides a foundation for addressing these issues, it must evolve to keep pace with technological advancements.
The ultimate goal is to strike a balance—protecting innovation and free speech while safeguarding dignity, privacy, and truth. In this battle between technology and law, the preservation of human rights must remain paramount.
Bibliography
A. Cases
- Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 S.C.C. 1 (India).
- Shreya Singhal v. Union of India, (2015) 5 S.C.C. 1 (India).
- Modern Dental Coll. & Research Ctr. v. State of Madhya Pradesh, (2016) 7 S.C.C. 353 (India).
- Anil Kapoor v. Simply Life India & Ors., 2023 SCC OnLine Del 6914.
- Amitabh Bachchan v. Rajat Nagi & Ors., 2022 SCC OnLine Del 4116.
- State of Tamil Nadu v. Suhas Katti, C.C. No. 4680 of 2004 (Addl. C.M.M., Egmore, Chennai).
B. Statutes and Rules
- Information Technology Act, 2000 (India).
- Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (India).
- Bharatiya Nyaya Sanhita, 2023 (India).
- Indian Evidence Act, 1872 (India).
- Constitution of India.
C. Government Publications
- Press Information Bureau, Government of India, Measures to Combat Deepfakes and AI-Generated Misinformation (2025).
- Press Information Bureau, Government of India, Advisories on Safe and Trusted AI (2025).
D. Books and Academic Literature
- SSRN, Regulating AI-Generated Content and Deepfakes (2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5153296
- Asian Institute of Research, Deepfake Technology in India and the World: Legal Challenges and Responses (2024).
E. Articles and Online Sources
- DataSecure India, Deepfake Technology in India: Legal Risks and Safeguards, https://datasecure.ind.in/blogs/deepfake-india/
- LiveLaw, AI-Generated Content and Deepfakes: Legal Issues and Regulatory Challenges, https://www.livelaw.in/articles/ai-generated-content-deepfakes-524064
- Indian Express, Rashmika Mandanna Deepfake Controversy Raises Legal Concerns (2023).
- Times of India, Deepfake Misuse and Celebrity Cases in India (2023).
- BBC News, UK Plans to Criminalise Deepfake Pornography (2023).
- Wall Street Journal, AI Voice Scam Highlights Deepfake Fraud Risks (2019).
F. Web Resources
- National Cyber Crime Reporting Portal, Government of India, https://cybercrime.gov.in
- Ministry of Electronics and Information Technology (MeitY), Government of India, https://www.meity.gov.in