
Deepfake pornography

Deepfake pornography refers to computer-generated explicit videos or images produced using artificial intelligence algorithms, such as generative adversarial networks, to superimpose a targeted individual's facial features onto the body of another person engaged in sexual acts, almost invariably without the target's consent or knowledge. The technology uses deep learning models trained on large datasets of images to create hyper-realistic fabrications, enabling mass production of such content via accessible software tools, including those derived from open-source projects. Emerging prominently in 2017 through anonymous online forums, deepfake pornography rapidly proliferated as user-friendly applications democratized the creation process, shifting from niche experimentation to a widespread phenomenon driven by demand for customized explicit material. The content overwhelmingly targets women, with studies indicating that 96–98% of all deepfakes constitute non-consensual intimate imagery and that 99–100% of victims in such cases are female, reflecting patterns of sexual objectification amplified by algorithmic scalability. Perpetrators, often motivated by gratification, harassment, or extortion, leverage marketplaces and forums to distribute these materials, exacerbating harms including severe emotional distress, social ostracism, and erosion of personal autonomy for victims ranging from celebrities to ordinary individuals. Empirical analyses find no comparable volume of male-targeted deepfakes, an asymmetry attributable to consumer demand within pornography markets rather than to any bias in the technology itself. Legal responses have accelerated amid documented proliferation, with U.S. federal legislation such as the 2025 TAKE IT DOWN Act enabling victims to demand removal of explicit deepfakes and imposing criminal penalties on creators and distributors, supplementing state-level prohibitions in jurisdictions such as Florida and New York that classify non-consensual deepfake pornography as a felony. These measures address evidentiary challenges posed by the medium's near-indistinguishability from reality, though enforcement lags behind technological evolution, highlighting ongoing tensions between AI innovation and protections against digitally facilitated exploitation.

Definition and Technology

Core Concepts of Deepfakes in Pornography

Deepfake pornography involves the application of deep learning algorithms to superimpose a target's facial likeness onto the body of a performer in existing sexually explicit videos or images, typically without consent, producing realistic non-consensual depictions of sexual acts. This form of synthetic media differs from traditional image editing by dynamically handling motion, expressions, and lighting across video frames, achieving a level of seamlessness that challenges human perception. Empirical analyses have found that 96% of deepfake videos circulating online consist of pornography, with the majority targeting female celebrities or public figures using publicly available footage for training data. The foundational techniques rely on deep neural networks, including autoencoders and generative adversarial networks (GANs). Autoencoders function by encoding a source face into a compressed latent space via an encoder network, then decoding it through a target-specific decoder trained on the victim's images, enabling face reconstruction and swapping while preserving pose and expression. GANs augment this pipeline with an adversarial process in which a generator network creates forged frames and a discriminator network evaluates their authenticity against real data, iteratively refining outputs to minimize detectable artifacts such as unnatural blending at face edges. In pornographic applications, models are trained on datasets of 1,000 to 10,000 frames of the target face sourced from videos or photos, requiring hours to days of computation on consumer-grade GPUs, after which the swapped face is composited onto the source video using post-processing for color matching and temporal smoothing. Key limitations in realism stem from difficulty generalizing to varied angles, occlusions, or rapid movements, often leaving telltale signs such as mismatched eye reflections, inconsistent teeth textures, or desynchronized lip movements despite algorithmic alignment. Unlike static forgeries, deepfake pornography exploits video's temporal continuity, training recurrent layers to predict frame sequences and mimic micro-expressions, which enhances immersion but amplifies harm through perceived authenticity. These concepts underscore the causal mechanism: accessible open-source implementations, such as those built on the TensorFlow or PyTorch frameworks, lower barriers to entry, enabling non-experts to generate content that evades casual scrutiny.
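
The identity/pose separation described above can be made concrete in a few lines. The following is a minimal, illustrative PyTorch sketch of the shared-encoder, dual-decoder scheme; the layer sizes, the 64x64 crop size, and the random tensors standing in for aligned face batches are assumptions for brevity, not a production recipe.

```python
# Minimal sketch of the shared-encoder / dual-decoder face-swap scheme
# described above. Architecture sizes are illustrative, not tuned.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),  # assumes 64x64 input crops
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()    # shared: learns pose, expression, lighting
decoder_a = Decoder()  # trained to reconstruct identity A
decoder_b = Decoder()  # trained to reconstruct identity B

# Training: each decoder reconstructs its own identity from the shared code.
# faces_a, faces_b stand in for batches of aligned 64x64 crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The swap: encode a frame of identity B, decode with A's decoder, so that
# A's identity is rendered with B's pose and expression.
swapped = decoder_a(encoder(faces_b))
```

Production tools add adversarial losses, masking, and far deeper networks, but the swap step, encoding one identity and decoding with the other's decoder, is the same.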

Creation Techniques

Deepfake pornography is predominantly created through face-swapping techniques that leverage deep learning models to superimpose a target's facial features onto the body of a performer in existing adult videos. The core methods rely on artificial neural networks trained on large datasets of images or video frames, enabling realistic manipulations that alter identities while preserving body movements and expressions. Early deepfake techniques, originating around 2017, primarily utilized autoencoders, which consist of an encoder-decoder architecture that compresses facial data into a latent representation and reconstructs it. In face-swapping applications, two autoencoders sharing a single encoder are trained, one decoder on the source face (e.g., a celebrity's images) and one on the target video (e.g., a pornographic clip); at conversion time the decoders are swapped, so frames encoded from the target video are reconstructed with the source identity. This approach requires hundreds to thousands of source images for training to capture variations in lighting, angles, and expressions, often sourced from public media. Limitations include artifacts from imperfect reconstruction, particularly in dynamic video sequences. Subsequent advancements incorporated generative adversarial networks (GANs), which improve realism through an adversarial process involving a generator that produces synthetic faces and a discriminator that critiques them until fakes become indistinguishable from real ones. GANs outperform basic autoencoders in handling occlusions, poses, and fine details, making them prevalent in high-quality deepfake pornography. Hybrid models, such as those in DeepFaceLab, the leading open-source tool for deepfakes, combine stacked autoencoders (e.g., the SAEHD architecture) with optional GAN components to enhance edge definition and texture. Training these models demands significant computational resources, often taking days on consumer GPUs, with parameters such as resolution (up to 640 pixels) and batch size (4-16) tuned for output quality. The creation workflow typically follows these steps (a code sketch of the first two stages appears after the list):
  1. Data Acquisition and Preprocessing: Collect source images/videos of the target face and a destination adult video; extract and align faces using landmark detection to create datasets of 5,000+ frames per set.
  2. Face Extraction and Masking: Isolate faces via automated tools, applying semantic segmentation (e.g., XSeg masks) to define boundaries and exclude non-facial elements like hair or shadows.
  3. Model Training: Train the neural network on paired datasets, iterating over epochs to minimize reconstruction errors or adversarial losses; source and destination faces should match in shape and demographics for optimal blending.
  4. Synthesis and Merging: Generate swapped frames by applying the trained model, then blend with original video using overlay modes to match skin tones and lighting.
  5. Post-Processing: Refine outputs with video editing software to correct artifacts, synchronize lip movements if needed, and export as MP4.
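
As referenced above, the first two workflow stages reduce to extracting frames and cropping detected faces. The sketch below uses OpenCV's bundled Haar cascade purely for illustration; real pipelines use landmark-based detectors, alignment, and XSeg-style masking, and the function name and parameter values here are assumptions.

```python
# Illustrative sketch of steps 1-2 (frame extraction and face cropping)
# using OpenCV's bundled Haar cascade. Production pipelines instead use
# landmark-based detectors and alignment; this shows only the data flow.
import cv2
import os

def extract_face_crops(video_path, out_dir, size=256, every_n=5):
    os.makedirs(out_dir, exist_ok=True)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # subsample frames to limit dataset size
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5, minSize=(80, 80))
            for (x, y, w, h) in faces:
                crop = cv2.resize(frame[y:y + h, x:x + w], (size, size))
                cv2.imwrite(os.path.join(out_dir, f"face_{saved:06d}.png"), crop)
                saved += 1
        idx += 1
    cap.release()
    return saved
```
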
Accessible software like DeepFaceLab or Faceswap lowers barriers, requiring only basic programming knowledge and free tutorials, though professional results demand expertise in hyperparameter tuning. In pornography contexts, these methods exploit abundant source material from celebrities or private individuals, with 96% of deepfakes online reported as pornographic in 2019 analyses, though proportions may have shifted with broader AI adoption. Detection challenges arise as creators adapt models to evade forensic tools, perpetuating an arms race.

Detection and Verification Methods

Detection of deepfake pornography relies primarily on machine learning classifiers trained to identify artifacts introduced during synthesis, such as inconsistencies in facial geometry, lighting reflections, and motion blur. Convolutional neural networks (CNNs) like ResNet50 and hybrid models combining EfficientNet with gated recurrent units (GRUs) analyze spatial and temporal patterns in video frames, achieving detection accuracies exceeding 90% on benchmark datasets like the Deepfake Detection Challenge (DFDC) under controlled conditions. These methods exploit generative adversarial network (GAN) limitations, including unnatural eye-blinking frequencies or remote photoplethysmography (rPPG) signals that fail to match authentic heart rate variability. Forensic techniques further scrutinize pixel-level anomalies, such as frequency-domain discrepancies revealed by discrete cosine transforms or edge-detection mismatches in swapped facial regions, which are particularly evident in pornographic content where body proportionality and skin texture blending often reveal seams. Audio-video desynchronization analysis, using lip-sync metrics, complements visual checks in explicit videos, as synthetic speech cloning introduces subtle phase shifts detectable through spectrogram comparisons. However, these approaches falter against advanced diffusion models post-2023, which minimize artifacts, with real-world detection rates dropping below 60% under cross-dataset generalization. Verification of authenticity emphasizes provenance tracking over post-hoc detection, employing standards like the Coalition for Content Provenance and Authenticity (C2PA) specification, promoted by the Content Authenticity Initiative, which embeds metadata during capture to certify unaltered media via cryptographic signatures. Watermarking techniques, including invisible perceptual hashes or blockchain-ledgered hashes, enable traceability, though adoption remains low in amateur pornography production. In forensic contexts, side-channel analysis compares suspect media against original sources for hash mismatches or compression artifacts, but the private nature of victim footage often precludes access to originals, amplifying verification challenges in non-consensual cases. Human-assisted verification yields inconsistent results, with studies showing detection accuracy near chance (around 50%) even among trained observers, due to perceptual adaptation to synthetic realism. Ensemble methods integrating multiple detectors mitigate single-model vulnerabilities to adversarial perturbations, yet ongoing arms-race dynamics, in which generators evolve faster than discriminators, underscore the need for multimodal forensics combining visual, audio, and behavioral biometrics. Peer-reviewed evaluations highlight that while laboratory accuracies impress, deployment in pornography moderation faces domain shifts from compressed web uploads, reducing efficacy by up to 30%.
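
A minimal sketch of the classifier approach described above fine-tunes a ResNet50 backbone for frame-level binary real/fake prediction; the data tensors, label convention, and single training step shown are placeholder assumptions, not a benchmarked DFDC recipe.

```python
# Sketch of a frame-level deepfake classifier of the kind described above:
# a ResNet50 backbone fine-tuned for binary real/fake prediction.
import torch
import torch.nn as nn
from torchvision import models

def build_detector():
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, 1)  # single fake-probability logit
    return model

detector = build_detector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

# One illustrative training step: frames stands in for a batch of face
# crops; labels are 1.0 for synthetic frames and 0.0 for authentic ones.
frames = torch.rand(4, 3, 224, 224)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0]).unsqueeze(1)
loss = criterion(detector(frames), labels)
loss.backward()
optimizer.step()
```

Video-level systems typically aggregate such per-frame scores or feed backbone features into a temporal model such as a GRU, as noted above.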

Historical Development

Origins in AI Research (Pre-2017)

The development of deepfake technology originated from foundational advances in machine learning and computer vision, particularly generative models capable of synthesizing realistic human faces. In June 2014, Ian Goodfellow and colleagues at the University of Montreal published the seminal paper introducing Generative Adversarial Networks (GANs), a framework comprising a generator neural network that produces synthetic data and a discriminator that evaluates its authenticity against real samples. This adversarial training process improved the fidelity of generated images, marking a pivotal shift toward scalable, high-quality synthetic media production, though initial applications targeted general image synthesis rather than targeted face manipulation. Building on earlier neural network concepts like autoencoders—which compress and reconstruct data for tasks such as dimensionality reduction—researchers in the mid-2010s adapted these for facial feature learning. Autoencoders, formalized in the 1980s but revitalized with deep architectures around 2010, enabled encoding identity-specific traits from face datasets, a core mechanism later exploited for swapping. Pre-2017 experiments demonstrated their use in reconstructing facial images from latent representations, providing the blueprint for separating facial identity from expressions or poses. These efforts, documented in academic venues like NeurIPS and CVPR, emphasized efficiency in unsupervised learning but did not yet integrate video dynamics at scale. Key demonstrations of facial manipulation predating widespread deepfake misuse included non-deep-learning systems that influenced AI pipelines. In 2016, Justus Thies and colleagues presented Face2Face at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), a real-time method for reenacting target faces in RGB videos by transferring source expressions via dense correspondence tracking and optimization. This work, requiring specialized hardware like depth sensors in prototypes, achieved photorealistic results for short clips and highlighted challenges in preserving identity while altering expressions—issues later addressed through deep networks. Similarly, lip-synchronization techniques, such as those explored in audio-driven facial animation, laid groundwork for seamless video forgery by aligning synthetic mouths to arbitrary speech. These pre-2017 innovations, driven by goals in film visual effects and virtual reality, were published in peer-reviewed outlets like SIGGRAPH and prioritized technical fidelity over ethical misuse, with no documented applications to non-consensual pornography at the time.
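
The adversarial dynamic introduced in the 2014 GAN paper reduces to two alternating optimization steps. The sketch below uses toy multilayer perceptrons and random data solely to show the loop; real image GANs substitute convolutional networks and large datasets.

```python
# Minimal sketch of the 2014 GAN training dynamic: the discriminator D
# learns to separate real from generated samples, while the generator G
# learns to fool it. MLPs and flat vectors stand in for image models.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, data_dim) * 2 - 1   # stand-in for a real data batch
z = torch.randn(32, latent_dim)

# Discriminator step: real samples labeled 1, generated samples labeled 0.
fake = G(z).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make D assign label 1 to generated samples.
g_loss = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```
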

Emergence of Pornographic Applications (2017-2019)

In late 2017, an anonymous Reddit user under the handle "deepfakes" developed and publicly shared open-source code utilizing generative adversarial networks (GANs) to swap faces in video footage, initially applying it to superimpose celebrities' faces onto performers in existing pornographic videos. The first widely noted example emerged in December 2017, when a video featuring actress Gal Gadot's face swapped onto a pornographic scene circulated online, as reported by Motherboard. This marked the initial public demonstration of deepfake technology for non-consensual pornography, driven by hobbyist experimentation with accessible AI tools rather than institutional research. The r/deepfakes subreddit, created around the same time in late 2017, rapidly became a hub for users to share tutorials, datasets, and resulting videos targeting high-profile women such as Scarlett Johansson and Kristen Bell, amassing tens of thousands of subscribers within months. By early 2018, the community had grown to nearly 100,000 members, fostering the exchange of deepfake pornographic content that emphasized realism through iterative training on large image sets of targets' faces. The proliferation was facilitated by the release of user-friendly graphical interfaces like FakeApp in early 2018, which simplified the process for non-experts by automating much of the computationally intensive face-swapping workflow that previously required custom scripting. Platform responses intensified in 2018 amid growing ethical concerns over involuntary pornography. On February 7, 2018, Reddit banned r/deepfakes and related communities for violating updated policies against non-consensual explicit content involving real individuals' likenesses, citing the potential for harassment and abuse. Shortly after, Pornhub and Twitter implemented bans on deepfake pornography, though enforcement challenges persisted due to the technology's ease of replication across decentralized forums. By September 2019, an analysis by the firm Deeptrace found that 96% of all deepfake videos online were pornographic and predominantly non-consensual, with the total volume reaching approximately 14,678 videos, nearly double the 7,964 counted in December 2018, and overwhelmingly featuring female celebrities as subjects. This period underscored the technology's primary application in sexual exploitation, with empirical data indicating minimal non-pornographic uses at the time.

Acceleration and Widespread Adoption (2020-2025)

The period from 2020 to 2025 marked a sharp escalation in deepfake pornography production and dissemination, driven by advancements in accessible AI tools and open-source models that lowered barriers to entry for creators. In October 2020, researchers documented over 100,000 computer-generated non-consensual nude images of women, highlighting early surges facilitated by refined generative adversarial networks (GANs) and face-swapping software. This growth coincided with heightened online activity during the COVID-19 pandemic, enabling platforms like dedicated deepfake forums to proliferate content with minimal technical expertise required. By 2022, the advent of diffusion-based models, such as Stable Diffusion released in August of that year, further accelerated adoption by allowing users to generate hyper-realistic images from simple text prompts and source photos, with deepfake-specific models downloaded nearly 15 million times since November 2022. Empirical data underscore the scale of this expansion: deepfake video files ballooned from approximately 500,000 in 2023 to a projected 8 million by 2025, with 96-98% comprising non-consensual intimate imagery predominantly targeting women. Over 90% of online deepfakes by 2022 were pornographic clips of women, amassing over 57 million instances, reflecting widespread hosting on sites cataloging thousands of victims, including nearly 4,000 female celebrities. Adoption extended beyond elites to everyday users, with incidents recorded in Q1 2025 alone exceeding the total for all of 2024 by 19%, fueled by user-friendly apps and web-based generators that bypassed prior computational hurdles. Legislative responses, such as New York's November 2020 law addressing sexually explicit deepfakes and earlier 2019 measures in Virginia and Texas, acknowledged the threat but failed to stem proliferation, as creators shifted to decentralized tools and jurisdictions with lax enforcement. The era's causal drivers, improved AI fidelity, reduced costs, and anonymity on dark web and mainstream-adjacent sites, entrenched deepfake pornography as a democratized form of image-based abuse, with production volumes outpacing detection capabilities.

Prevalence and Empirical Scale

Production and Distribution Volumes

A 2023 analysis by cybersecurity firm Home Security Heroes identified 95,820 deepfake videos circulating online, marking a roughly 550% increase from the 14,678 videos documented in 2019. Of these, 98% were pornographic, with 99% featuring the likeness of women without consent. This dominance of pornography in deepfake content aligns with prior findings from Sensity AI, which reported 90-95% of deepfakes as non-consensual sexual material in analyses spanning 2018 to 2021. Production volumes for deepfake pornography specifically grew by 464% between 2022 and 2023, driven by accessible AI tools enabling rapid face-swapping and image generation. These figures derive from scans of dedicated websites and platforms, where creators upload content using open-source models like Stable Diffusion or proprietary apps, often with minimal technical expertise required. Distribution relies heavily on specialized pornography sites and forums, with the top 10 such platforms attracting 34,836,914 visits in 2023 alone. Content proliferates via peer-to-peer sharing, social media embeds, and dark web repositories, evading moderation through encryption and rapid re-uploads. Projections indicate an exponential trajectory, with total deepfake files anticipated to surge from 500,000 in 2023 to 8 million by the end of 2025, predominantly maintaining the pornographic skew observed historically. The headline growth figure is consistent with the underlying counts, as the check below shows.
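
As referenced above, the ~550% figure follows directly from the two video counts, assuming simple percentage growth over the 2019 baseline:

```python
# Consistency check on the cited counts: growth from 14,678 videos (2019)
# to 95,820 (2023) corresponds to roughly a 550% increase.
videos_2019, videos_2023 = 14_678, 95_820
increase = (videos_2023 - videos_2019) / videos_2019 * 100
print(f"{increase:.0f}% increase")  # -> 553% increase, reported as ~550%
```
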

Demographic Patterns Among Victims and Creators

Victims of deepfake pornography are predominantly female, with estimates indicating that 99-100% of such content targets women. Women and girls face heightened vulnerability, particularly public figures and celebrities: thousands of female targets have been cataloged across major deepfake pornography sites. Female lawmakers are targeted at markedly higher rates than male ones, with over two dozen U.S. politicians affected and women disproportionately represented among them. In a qualitative study of 15 victims, 11 (73%) were women, with ages ranging from 18 to 49 years (mean 33.5). Among broader populations, victimization patterns vary. A multinational survey of 16,693 adults found an overall deepfake pornography victimization rate of 2.2%, with men more likely to report threats of victimization (relative risk 1.91). For youth aged 13-20, prevalence was around 6-8%, with roughly equal overall rates for boys and girls (7% each), although boys aged 13-14 reported 10% victimization versus 4% for girls of the same age, and young women aged 18-20 also reported 10%. Creators of deepfake pornography are predominantly male. In the same multinational survey, men were 2.31 times more likely to engage in creation, against an overall perpetration rate of 1.8%. A scoping review of image-based sexual abuse, including deepfakes, confirmed higher perpetration rates among men (e.g., 21.1% versus 8.9% for women in one study). Qualitative interviews with 10 perpetrators found that 8 (80%) were men, aged 22-53 years (mean 36.9). Younger adults (16-39 years) exhibit elevated perpetration risk, often linked to familiarity with the technology. Youth creator demographics differ slightly: 2% of 13-20-year-olds admitted to creation, split evenly by gender, with lower rates among 13-17-year-olds (1%) than 18-20-year-olds (2%); 74% of youth-created deepfakes targeted females, including 36% depicting minors. Perpetrators of related non-consensual imagery are often heterosexual and come from diverse ethnic backgrounds, though males predominate across studies.
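
For readers unfamiliar with the survey statistics cited here, a relative risk such as the 2.31 figure is simply the ratio of two group rates. The counts below are hypothetical, chosen only to reproduce that ratio and illustrate the computation:

```python
# How a relative risk like the 2.31 cited above is computed; the counts
# here are hypothetical, chosen only to illustrate the formula.
men_creators, men_total = 150, 8_000        # hypothetical survey cells
women_creators, women_total = 65, 8_000

risk_men = men_creators / men_total          # 0.01875
risk_women = women_creators / women_total    # 0.008125
relative_risk = risk_men / risk_women
print(f"RR = {relative_risk:.2f}")           # -> RR = 2.31
```
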

Notable Cases

High-Profile Celebrity Incidents

In late January 2024, explicit AI-generated deepfake images depicting singer Taylor Swift in pornographic poses proliferated across social media platforms, originating from anonymous posts on 4chan and rapidly spreading to X, where they garnered tens of millions of views within hours. The content, created using accessible AI tools to superimpose Swift's face onto existing pornography, prompted X to temporarily block search results for her name and remove offending posts, though users bypassed restrictions via variations like "Taylor Swift AI". This incident highlighted the ease of production, as the images were generated in minutes using free software, and fueled bipartisan calls in the U.S. Congress for federal legislation criminalizing non-consensual deepfakes. Actress Scarlett Johansson faced similar exploitation as early as 2018, when deepfake videos superimposing her face onto performers in explicit adult content circulated widely online, with dozens of such clips available on dedicated sites. Johansson publicly addressed the issue, noting the technology's inevitability given the internet's scale, stating that "nothing can stop someone from cutting and pasting my image" into fabricated scenarios, and emphasizing the lack of effective recourse at the time. These videos, among the first high-profile examples of celebrity-targeted deepfakes, relied on early algorithms like those from the Reddit community r/deepfakes, which popularized face-swapping techniques before the subreddit's 2018 ban. Other prominent cases include actress Emma Watson, whose likeness appeared in deepfake pornography alongside Johansson's in promotional ads and videos as early as 2017, with content swapping their faces into sexualized footage advertised on platforms like Facebook and Instagram as recently as 2023. Such incidents underscore a pattern where female celebrities in entertainment are disproportionately targeted, comprising the vast majority of victims in documented deepfake porn outputs.

Cases Involving Non-Celebrities and Minors

In the United States, incidents of deepfake pornography targeting minors have frequently occurred in school settings, where peers use accessible AI tools to generate non-consensual nude images of classmates. In March 2024, five male students at Beverly Vista Middle School in Beverly Hills, California, were expelled after creating and distributing AI-generated nude images of female classmates using applications that superimposed faces onto pornographic bodies. Similar cases emerged in New Jersey high schools in late 2023, where a 14-year-old girl discovered AI-altered nude images of herself and other female students circulating among peers, prompting the victim and her family to advocate for federal legislation. These peer-driven episodes, often involving boys targeting girls, have led to emotional distress, school disruptions, and calls for AI-specific policies, with reports indicating dozens of such incidents across U.S. states by mid-2024. Beyond schools, adults have faced prosecution for creating or possessing deepfake child sexual abuse material depicting minors. In April 2024, David Tatum, a child psychiatrist in Charlotte, North Carolina, was sentenced to 40 years in federal prison for using generative AI to alter clothed photographs of identifiable children into explicit nudes, amassing a collection of more than 30,000 files. Tatum's case highlighted the technology's role in evading traditional child pornography detection, as the images mimicked real abuse material without requiring new physical contact with victims. In May 2024, a recidivist sex offender in another federal case was sentenced for possessing deepfake CSAM in which AI tools fabricated explicit content from minor victims' likenesses, underscoring law enforcement's increasing focus on synthetic imagery under existing child pornography statutes. For non-celebrity adults, deepfake pornography often stems from personal vendettas or opportunistic misuse of social media images, though documented cases are less publicized than those involving minors due to privacy concerns. In December 2024, 15-year-old Elliston Berry testified before Congress about her experience as a victim after a publicly posted Instagram photo was AI-manipulated into pornographic content and shared online, illustrating how everyday users' images become targets without consent. Such incidents, prevalent since AI tools like Stable Diffusion became widely available around 2022, frequently involve ex-partners or acquaintances, with victims reporting long-term psychological harm including anxiety and social withdrawal, though prosecutions remain rare absent distribution to third parties. Legal responses have relied on general non-consensual pornography laws, as seen in state-level suits against deepfake-hosting sites, and federal gaps for purely synthetic content targeting non-celebrity adults persisted until the 2025 TAKE IT DOWN Act.

Motivations and Behavioral Drivers

Incentives for Creation and Sharing

Creators of deepfake pornography are primarily driven by sexual gratification, seeking to fulfill personal fantasies by superimposing the likenesses of desired individuals—often non-consenting women—onto pornographic content. A 2025 qualitative study of 10 perpetrators found that curiosity about technological capabilities provided a "God-like buzz" during creation, blending experimentation with arousal from customizing explicit imagery of ex-partners or acquaintances. Similarly, a content analysis of 390 Reddit posts on AI-generated pornography revealed that producers generated deepfakes of known persons for personal sexual pleasure, emphasizing the ease of accessing tailored erotic content without real-world constraints. Revenge and power dynamics motivate many instances, particularly in cases targeting former romantic partners after perceived slights like breakups or rejection. Perpetrators in the aforementioned study described creation as retribution, aiming to humiliate victims and reassert control, with one noting the target "deserved this" for emotional wrongs. This aligns with broader patterns in image-based sexual abuse, where deepfakes extend traditional revenge pornography by enabling scalable, anonymous degradation without physical evidence of the victim's involvement. Empirical data indicate perpetrators are overwhelmingly male (80% in sampled cases), heterosexual, and aged 22-53, reflecting gendered asymmetries in targeting female victims for dominance. Sharing incentives often stem from hedonistic enjoyment and social reinforcement within online communities, where creators distribute content for validation and entertainment value. The Reddit analysis highlighted sharing commissioned deepfakes for economic gain, with some producers monetizing custom requests on platforms dedicated to synthetic pornography. Perpetrators reported peer acclaim as a driver, gaining status when associates praised the technical prowess or explicit novelty, normalizing the behavior in male-dominated forums like deepfake marketplaces focused on celebrity targets. In celebrity cases, dissemination amplifies notoriety, leveraging viral potential for indirect profit through traffic to illicit sites, though personal vendettas remain prevalent in non-public incidents.

Consumption Patterns and Psychological Factors

Surveys conducted across multiple countries reveal that consumption of deepfake pornography remains relatively low but detectable among general populations. In a 2023 international study of 16,693 respondents from 10 countries, 6.2% reported viewing deepfake pornography featuring celebrities, while 2.9% reported viewing such material involving non-celebrities. A UK survey of 1,403 adults found higher exposure rates, with 18.8% reporting encounters with non-consensual deepfake pornographic images or videos. These figures indicate sporadic rather than widespread habitual use, often encountered incidentally online rather than actively sought in large volumes. Demographic patterns show consumption skewing toward males and younger individuals. Men were 3.5 times more likely than women to report viewing celebrity deepfake pornography and 2.66 times more likely for non-celebrity content in the international survey. Similarly, in the UK study, men and those aged 18-25 exhibited the highest exposure rates to deepfake pornography. Perpetration rates for creation or sharing remain lower, at around 1.8% globally, suggesting viewing outpaces production among users. Psychological factors influencing consumption include gender-based differences in harm perception and traits like psychopathy. Men tend to view deepfake pornography as less deserving of criminalization than women do, with mean agreement scores on harm statements differing by 0.58 points (Cohen's d = 0.41). Higher psychopathy scores correlate with greater proclivity toward deepfake-related activities, including more lenient judgments of such content, particularly when victims are celebrities or males. Belief in a just world also predicts tolerance of deepfakes, potentially rationalizing consumption as harmless fantasy. Despite broad public concern, with 90.4% of the UK sample expressing worry over deepfake proliferation, viewers may prioritize sexual gratification or novelty, akin to drivers of general pornography use, over ethical qualms. Women were 127.4% more likely than men to report fear of deepfake pornography, underscoring divergent psychological responses by gender.
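
The reported effect size and mean difference jointly imply the variability of the underlying harm-agreement scale, since Cohen's d is the mean difference divided by the pooled standard deviation; the back-calculation below uses only the figures quoted above.

```python
# Cohen's d is the mean difference divided by the pooled standard
# deviation. From the reported d = 0.41 and mean gap of 0.58 points,
# the implied pooled SD of the harm-agreement scale follows directly.
mean_difference, cohens_d = 0.58, 0.41
pooled_sd = mean_difference / cohens_d
print(f"implied pooled SD = {pooled_sd:.2f}")  # -> 1.41
```
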

Impacts and Causal Effects

Direct Consequences for Individuals

Victims of deepfake pornography endure profound psychological trauma, with 93% reporting severe emotional and psychological damage akin to that experienced in cases of image-based sexual abuse. This includes high rates of depression, anxiety, shame, humiliation, and self-doubt, often manifesting immediately as nausea, heart palpitations, or tonic immobility upon discovery. Long-term effects encompass intrusive nightmares, altered self-perception, and social withdrawal, with victims describing the harm as "torture for the soul" due to the perpetual, uncontrollable dissemination of altered imagery. Approximately 51% of affected individuals report suicidal ideation, underscoring the intensity of the distress. Social repercussions involve reputational harm and relational strain, as deepfakes erode trust and provoke humiliation within personal networks; for instance, victims report family members viewing the content, leading to isolation and severed ties. In some survivor cohorts, doxing accompanied every documented case, amplifying vulnerability to harassment, rape threats, and unsolicited sexual advances. The non-consensual nature fosters a loss of bodily and identity autonomy, with women comprising over 90% of targets and experiencing continuous abuse compared to episodic incidents for men. Professionally, 57% of victims fear career derailment, with documented instances of job loss, such as a teacher dismissed after students accessed her deepfake video. Concentration difficulties and challenges in customer-facing roles exacerbate employment instability, particularly in service work. Sextortion emerges as a direct tactic, in which perpetrators threaten dissemination unless demands, often financial, are met, compounding economic vulnerability. These effects persist because of the technology's scalability, with deepfake pornography comprising 93% of all deepfakes as of 2020 data.

Broader Societal and Cultural Ramifications

Deepfake pornography contributes to the erosion of public trust in visual media, as the proliferation of convincingly fabricated sexual content raises doubts about the authenticity of online imagery and videos more broadly. A 2019 analysis highlighted that deepfakes, including pornographic variants, could undermine societal confidence in evidence-based verification, potentially extending to non-sexual contexts like journalism and legal proceedings by fostering a "liar's dividend" where genuine content is dismissed as fake. Empirical assessments indicate limited but growing evidence of this effect, with studies noting that while detection technologies exist, public skepticism toward digital media has increased amid deepfake exposure, though causal links remain under-researched due to the technology's recency. On gender dynamics, deepfake pornography disproportionately targets women, with reports estimating that 95% to 100% of such content features female victims, often superimposing their faces onto explicit material without consent, thereby amplifying existing patterns of online sexual harassment and reinforcing patriarchal power imbalances. This has been linked to deepened societal gender divides, as seen in South Korea where deepfake porn incidents in 2023-2024 correlated with heightened anti-feminist backlash and reduced female participation in digital spaces, according to cybersecurity analyses identifying the country as the most affected globally. Culturally, it perpetuates the objectification of women by normalizing synthetic non-consensual depictions, which some scholars argue entrenches misogynistic tropes in media consumption, though critics note that underlying drivers stem from broader pornographic industry incentives rather than AI alone. Broader cultural ramifications include a potential normalization of blurred boundaries between consent and fantasy, as AI-generated content—comprising 98% pornographic deepfakes per 2023 data—facilitates scalable production of imagery that evades traditional accountability, influencing perceptions of privacy and bodily autonomy in digital societies. Victim testimonies and policy reviews describe psychological harms akin to real sexual violence, scaling to societal levels through viral dissemination, which may deter public engagement by women and minorities fearing targeted fabrication. However, quantitative longitudinal studies on these cultural shifts remain sparse, with existing research emphasizing individual harms over aggregate societal transformation, underscoring the need for causal analysis beyond anecdotal reports.

Controversies and Viewpoint Spectrum

Free Speech Protections Versus Claims of Harm

The debate over regulating deepfake pornography centers on the tension between First Amendment protections for expressive content and asserted harms to individuals from non-consensual depictions. In the United States, deepfakes, including those of a sexual nature, are broadly shielded as forms of speech, encompassing fabricated representations that do not inherently qualify as unprotected categories such as obscenity, defamation, or true threats. Courts have historically protected even false or misleading speech absent direct ties to criminal conduct, viewing outright bans on deepfake creation as risking broader censorship of satirical, artistic, or political expressions enabled by advancing technology. Legal scholars argue that content-based restrictions on deepfake pornography would face strict scrutiny and likely fail, given precedents safeguarding offensive or simulated sexual content short of obscenity. Proponents of restrictions emphasize harms including psychological distress, reputational damage, and erosion of personal autonomy, with surveys indicating that affected individuals perceive such content as profoundly invasive. For instance, non-consensual deepfake videos are rated as highly harmful in public opinion studies, potentially exacerbating gender-based violence through digital means. However, empirical evidence remains sparse, relying largely on self-reported experiences or perceptual analyses rather than longitudinal causal studies linking deepfakes specifically to measurable outcomes like increased suicide rates or long-term mental health declines, distinct from harms of traditional non-consensual imagery. Critics of expansive harm claims note that existing tort remedies for invasion of privacy, defamation, or intentional infliction of emotional distress could address individual cases without categorical speech suppression. Legislative efforts, such as the federal TAKE IT DOWN Act passed in 2025, criminalize the knowing distribution of non-consensual intimate deepfake images, aiming to mandate platform removals while carving exceptions for consensual or protected speech. Yet, free speech advocates have successfully challenged overbroad state laws, including a 2025 California ruling striking down restrictions on deepfake dissemination as violative of expressive freedoms due to vagueness and chilling effects on innovation. Opponents warn that equating deepfakes with unprotected categories like child pornography overlooks doctrinal limits, as virtual or simulated adult content enjoys robust safeguards, potentially inviting slippery slopes toward regulating other AI-generated media. This viewpoint spectrum underscores ongoing constitutional scrutiny, with regulations surviving only if narrowly tailored to imminent harms rather than speculative societal impacts.

Debates on Gender Dynamics and Regulatory Overreach

Deepfake pornography disproportionately targets women, with studies indicating that approximately 98% of deepfake videos are pornographic and 90% of those depict women without consent. This imbalance has fueled debates framing the phenomenon as an extension of misogyny, where non-consensual synthetic imagery reinforces gendered power imbalances and objectification, particularly affecting female celebrities, politicians, and ordinary individuals. For instance, women in the U.S. Congress face deepfake victimization at rates 70 times higher than men, often tied to efforts to harass or silence female public figures. Proponents of this view, including researchers examining AI ethics, argue that such content normalizes harm against women by blending real identities with fabricated explicit scenarios, exacerbating online gender-based violence. Counterarguments emphasize that deepfake technology is gender-neutral in capability, with harms stemming from misuse rather than inherent misogyny, and note that while female victims predominate, male targets exist, albeit in smaller numbers. From a first-principles perspective, the synthetic nature of deepfakes distinguishes them from physical assault or traditional revenge porn, raising questions about whether psychological distress—such as reputational damage or anxiety reported by victims—warrants equating them to real-world violence without stronger causal evidence linking consumption to offline behaviors. Critics, including those wary of victimhood narratives amplified by biased media coverage, contend that overemphasizing gender dynamics risks pathologizing male sexual expression or fantasy, potentially ignoring broader patterns in consensual pornography where demand drives supply across genders. Empirical data on real-world effects remains limited, with reviews finding insufficient proof of widespread desensitization or escalation to physical harm, though individual cases document severe emotional tolls like suicidal ideation among teen victims. Regulatory responses, such as the U.S. TAKE IT DOWN Act passed in 2025, which mandates removal of non-consensual intimate deepfakes, have sparked concerns over overreach into protected speech. Free speech advocates, including the Electronic Frontier Foundation, warn that broad mandates for content takedowns could enable platforms to censor satirical or artistic works preemptively, chilling First Amendment rights under the guise of harm prevention. Legal analyses highlight the tension: while non-consensual deepfakes may qualify as unprotected defamation or privacy invasions if they cause identifiable injury, synthetic media's fictional essence complicates strict scrutiny, risking slippery slopes toward regulating other AI-generated content like political satire or virtual erotica. In jurisdictions like Texas, amendments to deepfake laws have narrowed scope to sexual content to mitigate free speech risks, yet skeptics argue that enforcement relies on subjective "harm" assessments prone to abuse, particularly given institutional biases in content moderation favoring progressive sensibilities over neutral liberty. Proposals for outright bans, as seen in some state legislatures, face criticism for failing to address root causes like accessible AI tools while potentially expanding government surveillance of digital expression, with little evidence that prohibitions reduce creation rates compared to targeted civil remedies. 
These debates underscore a core tradeoff: curbing verifiable individual harms without eroding expressive freedoms essential to technological innovation and discourse.

Countermeasures and Responses

Technical Innovations for Mitigation

AI-based detection systems represent a primary technical innovation for identifying deepfake pornography, leveraging machine learning models to analyze visual and temporal inconsistencies in media. These models examine artifacts such as unnatural blinking patterns, mismatched lighting reflections, or irregular heartbeat signals derived from facial blood flow variations, which generative adversarial networks (GANs) and diffusion models often fail to replicate perfectly. For example, the LightFakeDetect model, a lightweight convolutional neural network, achieved 98.2% accuracy on the Deepfake Detection Challenge dataset and Celeb-DF v2, outperforming heavier architectures in resource-constrained environments suitable for platform-scale deployment. Similarly, specialized tools like Sensity AI employ multimodal analysis of videos and images to detect synthetic alterations, including those used in non-consensual pornography, by cross-referencing facial geometry and audio-visual synchronization. Proactive watermarking techniques embed imperceptible digital signatures into original images or videos, enabling forensic verification of authenticity and detection of manipulations specific to deepfake creation workflows. Identity watermarking, as proposed in research from the Winter Conference on Applications of Computer Vision, proactively protects source media by associating fragile markers with facial identities; these markers survive benign edits but degrade under the face-swapping operations common in pornographic deepfakes, allowing downstream detection with minimal false positives. Complementary approaches, such as semi-fragile watermarking frameworks, integrate with social media pipelines to authenticate user-uploaded content while flagging tampering, robust to compression but sensitive to adversarial edits. The U.S. Government Accountability Office has noted that such authentication methods, including cryptographic hashing of media provenance, help trace alterations but require widespread adoption to counter evolving generation techniques. Despite these advances, detection efficacy remains challenged by adversarial training in deepfake generators, with state-of-the-art open-source detectors experiencing performance drops of up to 50% on novel variants as of 2024. Hybrid systems combining biological signal analysis (e.g., eye vergence or micro-expressions) with blockchain-ledgered content provenance are emerging to enhance robustness, though empirical validation on pornography-specific datasets lags behind general deepfake benchmarks. Peer-reviewed systematic reviews emphasize that while pixel-level fingerprints from generation models aid detection, real-time mitigation demands integrated hardware-software solutions, such as edge-computed forensics in devices. Ongoing research prioritizes scalable, low-latency innovations to address the fourfold rise in deepfake incidents from 2023 to 2024, particularly in fraud-adjacent applications like explicit content manipulation.
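
The fragile-watermarking idea described above can be illustrated with a deliberately simple toy: a signature written into pixel least-significant bits verifies untouched media and breaks under manipulation. Real identity-watermarking schemes are learned and survive benign processing such as recompression, which this sketch does not; the key string and function names are assumptions for illustration.

```python
# Toy illustration of the fragile-watermark principle discussed above:
# a signature embedded in pixel least-significant bits verifies intact
# media but is destroyed by edits such as face swaps or requantization.
import hashlib
import numpy as np

def embed(image: np.ndarray, key: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(hashlib.sha256(key).digest(), dtype=np.uint8))
    flat = (image.flatten() & 0xFE).astype(np.uint8)  # clear LSBs
    flat[:bits.size] |= bits                           # write 256 signature bits
    return flat.reshape(image.shape)

def verify(image: np.ndarray, key: bytes) -> bool:
    expected = np.unpackbits(np.frombuffer(hashlib.sha256(key).digest(), dtype=np.uint8))
    return bool(np.array_equal(image.flatten()[:expected.size] & 1, expected))

original = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
marked = embed(original, b"capture-device-key")
print(verify(marked, b"capture-device-key"))    # True: watermark intact

tampered = marked.copy()
tampered[:32] = 0                               # simulate a manipulated region
print(verify(tampered, b"capture-device-key"))  # False: fragile mark destroyed
```
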

Legal Frameworks and Enforcement

In the United States, the federal TAKE IT DOWN Act, enacted on May 19, 2025, represents the first comprehensive national framework criminalizing the non-consensual distribution of intimate visual depictions, including AI-generated deepfakes, with penalties encompassing mandatory restitution to victims and potential imprisonment. This legislation addresses a prior gap in which federal law focused mainly on real imagery under revenge porn statutes, extending prohibitions to synthetic content that depicts identifiable individuals in sexually explicit acts without consent. By 2025, 47 states had implemented specific deepfake-related legislation, often building on existing non-consensual pornography laws to impose civil and criminal liabilities, such as fines up to $50,000 and prison terms for offenses involving minors. For instance, New Jersey's April 2025 law mandates removal of such content and creates a dedicated deepfake technology unit for investigations. Internationally, regulatory approaches vary: the European Union's AI Act, effective from August 2024, classifies deepfake pornography generators as high-risk systems subject to transparency requirements and bans on manipulative uses, though enforcement relies on member-state implementation of broader directives against non-consensual image abuse. In the United Kingdom, the Online Safety Act 2023 empowers Ofcom to require platforms to remove non-consensual deepfake intimate images, treating them akin to revenge pornography with criminal sanctions of up to two years' imprisonment, while Australia's 2024 amendments to image-based abuse laws explicitly include AI-altered content. These frameworks emphasize victim redress and platform accountability, yet lack uniformity, complicating cross-border cases where content originates in jurisdictions with laxer rules, such as certain Asian countries hosting deepfake tools. Enforcement faces substantial hurdles, including the anonymity afforded by tools like VPNs and decentralized platforms, which obscure perpetrator identities and hinder attribution in over 90% of reported cases according to law enforcement analyses. Prosecution rates remain low, with U.S. federal cases under prior statutes numbering fewer than a dozen annually before the 2025 expansions, due to evidentiary challenges in proving intent and in distinguishing synthetic from authentic media amid evolving AI sophistication. Jurisdictional fragmentation exacerbates this, as deepfakes often traverse international servers, overwhelming under-resourced agencies like the FBI, which report organizational strain from the need for specialized AI forensics training. Additionally, platforms' inconsistent moderation, despite self-imposed policies, delays content takedowns, leaving victims exposed for prolonged periods before judicial remedies and underscoring a gap between legislative intent and practical deterrence.

Platform and Industry Self-Regulation

In response to rising concerns over non-consensual deepfake pornography, major online platforms have implemented policies prohibiting such content, though enforcement varies. Pornhub's Community Guidelines explicitly ban uploads depicting an individual's likeness without consent, including deepfakes or other AI-generated or manipulated media, as part of broader restrictions on non-consensual intimate imagery. Similarly, X (formerly Twitter) maintains a policy against sharing intimate photos or videos produced or distributed without consent, encompassing synthetic media like deepfakes. These measures build on earlier actions, such as Pornhub and Twitter's 2018 bans on face-swap deepfake porn videos, prompted by public backlash over non-consensual alterations of celebrities' images. Despite these policies, empirical audits reveal significant gaps in implementation. A 2024 study auditing X found that deepfake nudes reported under the non-consensual nudity policy often remained online, while identical content flagged for copyright infringement was promptly removed, suggesting prioritization of legal liabilities over victim protection. Platforms' reliance on user reports and automated detection struggles with the scale and sophistication of AI-generated content, leading to persistent availability of deepfake pornography even after policy announcements. Reddit, for instance, shuttered deepfake-specific subreddits in 2018 but continues to face challenges with scattered uploads under revised community rules. Industry-wide self-regulation remains fragmented, with no centralized body coordinating tech or adult content sectors. Tech firms providing AI tools, such as those enabling image synthesis, have introduced voluntary safeguards—like content filters in generative models—but these do little to curb downstream distribution on third-party sites. Adult industry players emphasize performer verification for consensual uploads, yet deepfakes evade these by mimicking real footage without involving originals. Critics argue this patchwork approach inadequately addresses the technology's accessibility, as open-source deepfake software proliferates despite platform bans, underscoring reliance on reactive moderation over proactive standards.
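
One building block of the reactive moderation described above is perceptual hashing, which lets a platform match re-uploads of previously removed media despite re-encoding. The average-hash toy below (file paths and the distance threshold are assumptions for illustration) is far weaker than deployed fingerprinting systems but shows the principle.

```python
# Toy perceptual "average hash" of the kind used to catch re-uploads of
# known abusive media: visually identical re-encodes map to nearby hashes.
# Platform systems use far more robust fingerprints; this shows the idea.
import numpy as np
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> np.ndarray:
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()   # 64-bit boolean fingerprint

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

# A small Hamming distance (e.g., <= 5 of 64 bits) flags a likely
# re-upload of previously removed content despite recompression:
# distance = hamming(average_hash("reported.png"), average_hash("reupload.jpg"))
```
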

References

  1. [1]
    Science & Tech Spotlight: Deepfakes | U.S. GAO
    Feb 20, 2020 · Deepfakes are usually pornographic and disproportionately victimize women. However, deepfakes can also be used to influence elections or incite ...Missing: history | Show results with:history
  2. [2]
    [PDF] Characterizing the MrDeepFakes Sexual Deepfake Marketplace
    Aug 15, 2025 · lished in the peer-reviewed journal Pattern Recognition in. 2023. The delta between the two versions largely focuses on potential abuses of ...
  3. [3]
    [PDF] DEEPFAKE PORNOGRAPHY AND THE PATH TO LEGAL ...
    The first widespread application of deepfake technology, nonconsensual pornography, has proliferated rapidly since its emergence in 2017. 16 According to a ...Missing: key | Show results with:key
  4. [4]
    Deepfake Statistics & Trends 2025 | Key Data & Insights - Keepnet
    Sep 24, 2025 · 96–98% of all deepfake content online consists of non-consensual intimate imagery (NCII). 99–100% of victims in deepfake pornography are female.<|control11|><|separator|>
  5. [5]
    Sexualized Deepfake Abuse: Perpetrator and Victim Perspectives ...
    Sep 9, 2025 · To the best of our knowledge, this is the first peer-reviewed research to report on interviews with perpetrators and victims of sexualized ...Missing: statistics | Show results with:statistics
  6. [6]
    The Role of Deepfake Pornography in the Perpetuation of Digital ...
    May 21, 2025 · Therefore, for this comment piece we reviewed existing peer-reviewed articles and data from websites to show that deepfakes have become a new ...Missing: prevalence | Show results with:prevalence
  7. [7]
    Victims of explicit deepfakes can now take legal action ... - CNN
    May 19, 2025 · But laws protecting adult victims varied by state and didn't exist nationwide. ... deepfake porn. Here's how to protect yourself. The law passed ...
  8. [8]
    Florida makes deepfake A.I. porn a felony as teen victim shares her ...
    Oct 2, 2025 · In the case of “deepfake porn,” an innocent photo can be manipulated to strip away clothing or place a person's face onto explicit material.
  9. [9]
    [PDF] Not Her Fault: AI Deepfakes, Nonconsensual Pornography, and ...
    May 21, 2025 · Some states have passed laws that address deepfake pornography, but a federal statute is necessary because victims should have access to civil ...Missing: "peer | Show results with:"peer
  10. [10]
    Deepfakes, explained | MIT Sloan
    Jul 21, 2020 · ... pornographic videos that used open source face-swapping technology. ... The study found that 96% of deepfake videos are pornography, and nearly ...
  11. [11]
    Social, legal, and ethical implications of AI-Generated deepfake ...
    ... deepfake pornography. Study Design, Original peer-reviewed research articles published in English. Non-peer-reviewed sources such as conference papers ...
  12. [12]
    What Is Deepfake: AI Endangering Your Cybersecurity? | Fortinet
    A major threat that deepfake poses is nonconsensual pornography, which accounts for up to 96% of deepfakes on the internet. Most of this targets celebrities.Missing: core | Show results with:core
  13. [13]
    Deepfake and AI: To Be or Not To Be - Copperpod IP
    Feb 11, 2020 · Deepfakes Uses ML Techniques known as Autoencoder and Generative Adversarial Networks (GANs). An autoencoder is an artificial neural network ...
  14. [14]
    The Rise of Generative Adversarial Networks (GANs) in Deepfake ...
    Feb 13, 2024 · GANs employ two neural networks - a generator and a discriminator - that work against each other to refine the creation of fake images or videos ...
  15. [15]
    Face Deepfakes - A Comprehensive Review - arXiv
    The principal aim of this survey is to contribute a thorough theoretical analysis of state-of-the-art face deepfake generation and detection methods.
  16. [16]
    The workflow of Autoencoders and GAN in the creation of Deepfakes
    A breakthrough in the emerging use of machine learning and deep learning is the concept of autoencoders and GAN (Generative Adversarial Networks).
  17. [17]
    How Deepfakes Are Made: AI Technology, Process & Detection Guide
    Jun 16, 2025 · GAN technology powers most deepfake creation, using two competing neural networks: one generates fake content while another tries to detect it, ...Missing: techniques | Show results with:techniques
  18. [18]
    Journalist Emanuel Maiberg Addresses AI and the Rise of Deepfake ...
    Apr 22, 2024 · ” Deepfake porn is synthetic pornography created using AI “deep ... percent of online deepfake videos were pornographic and nonconsensual.Missing: Sensity | Show results with:Sensity
  19. [19]
    [PDF] GAO-20-379SP, Science & Tech Spotlight: Deepfakes
    A deepfake is a video, photo, or audio recording that seems real but has been manipulated with AI. The underlying technology can replace faces, manipulate ...
  20. [20]
    Deepfakes: Face synthesis with GANs and Autoencoders - AI Summer
    Jun 2, 2020 · Deepfakes are usually based on Generative Adversarial Networks (GANs), where two competing neural networks are jointly trained. GANs have had ...
  21. [21]
    DeepFaceLab 2.0 Guide - DeepfakeVFX.com
    DeepFaceLab (DFL) is the leading deepfake creation software. Most high-quality deepfakes are made using DeepFaceLab. DFL provides an end-to-end solution for ...
  22. [22]
    Explainable Deepfake Video Detection using Convolutional Neural ...
    ... pornography was used, which defamed many celebrities. According to the Deeptrace report, 96% of deepfake videos online were pornographic. Unfortunately, the ...
  23. [23]
    Science & Tech Spotlight: Combating Deepfakes | U.S. GAO
    Mar 11, 2024 · Deepfakes are videos, audio, or images that have been manipulated using artificial intelligence (AI), often to create, replace, or alter faces ...
  24. [24]
    Advancements in detecting Deepfakes: AI algorithms and future ...
    May 7, 2025 · This paper discussed Deepfake detection using Neural Networks (NNs), such as ResNet50 and LSTM, testing their accuracy, and developing a ...
  25. [25]
    Deepfake video detection methods, approaches, and challenges
    Deepfake technology creates highly realistic manipulated videos using deep learning models, which makes distinguishing between authentic and fake content ...
  26. [26]
    a survey of digital forensic methods for multimodal deepfake ... - NIH
    May 27, 2024 · This research tackles this knowledge gap by providing an up-to-date systematic survey of the digital forensic methods used to detect deepfakes.
  27. [27]
    A Comprehensive Review of Deepfake Detection Methods and ...
    May 3, 2025 · This study offers a thorough examination of deepfake detection techniques and how they are used in digital forensics. We examine the most recent ...
  28. [28]
    Unmasking digital deceptions: An integrative review of deepfake ...
    Presents an in-depth review of deepfake generation and detection, highlighting AI methods such as GANs, face synthesis, and speech cloning.
  29. [29]
    What Journalists Should Know About Deepfake Detection in 2025
    Mar 11, 2025 · These studies make one thing clear: deepfake detection tools cannot be trusted to reliably catch AI-generated or -manipulated content.
  30. [30]
    Deepfake Video Traceability and Authentication via Source Attribution
    Jul 13, 2025 · Artificial intelligence (AI) techniques are used to create convincing deepfakes. The main counter method is deepfake detection. Currently, most ...
  31. [31]
    [PDF] NONCONSENSUAL DEEPFAKES: DETECTING AND REGULATING ...
    This paper surveys the emerging threat of deepfake technology, largely in relation to nonconsensual deepfake pornography. Part I of this Article.
  32. [32]
    Human performance in detecting deepfakes: A systematic review ...
    Overall deepfake detection rates (sensitivity) were not significantly above chance because 95% confidence intervals crossed 50%.
  33. [33]
    Deepfake Media Forensics: State of the Art and Challenges Ahead
    Traditional Deepfake detection methods may fall short as they often focus on either audio or video data in isolation. However, Deepfakes may involve ...
  34. [34]
    [PDF] A Comprehensive Evaluation of Deepfake Detection Methods
    3 Deepfake detection: A review of methods and techniques. This chapter presents an overview of mainstream deepfake detection methods, including those based ...
  35. [35]
    What Is a Deepfake? Definition & Technology | Proofpoint US
    2014: Ian Goodfellow introduced Generative Adversarial Networks (GANs), a breakthrough in deep learning that would eventually enable sophisticated deepfakes.
  36. [36]
    A Brief History of Deepfakes - Reality Defender
    The concept of deepfakes (or deepfaking) can be traced back to efforts starting in the 1990s, when researchers used CGI in attempts to create realistic images ...
  37. [37]
  38. [38]
    [PDF] Increasing Threat of DeepFake Identities - Homeland Security
    When deepfakes were first developed several years ago, their creation required a high level of skill in AI, training, and technology, along with advanced ...
  39. [39]
    Reddit bans 'deepfakes' AI porn communities - The Verge
    Feb 7, 2018 · The r/deepfakes subreddit was created after Motherboard reported on the phenomenon of AI-generated porn late last year. Around the time of ...
  40. [40]
    Reddit bans 'deepfakes' face-swap porn community - The Guardian
    Feb 8, 2018 · Social news site blocks subreddit where fake AI-created clips were first created, which had almost 100,000 users.
  41. [41]
    Reddit Bans Community for Deepfake Sex Tape Software - Variety
    Feb 16, 2018 · Reddit has banned a community dedicated to Fakeapp, an artificial intelligence (AI) video editing application that has gained some notoriety ...
  42. [42]
    Reddit Just Shut Down the Deepfakes Subreddit - VICE
    Feb 7, 2018 · As of Wednesday around 1PM EST, Reddit appears to have suspended r/deepfakes, the subreddit dedicated to creating fake porn videos using a machine learning ...
  43. [43]
    The rise of accessible non-consensual deepfake image generators
    Jun 23, 2025 · These deepfake models have been downloaded almost 15 million times since November 2022, with the models targeting a range of individuals from ...
  44. [44]
    Deepfake Statistics 2025: AI Fraud Data & Trends - DeepStrike
    Sep 8, 2025 · ... deepfake pornography" or "revenge porn." Estimates consistently show that 96–98% of all deepfake videos online fall into this category. This ...
  45. [45]
    Deepfakes: A Real Threat to a Canadian Future - Canada.ca
    Jul 14, 2025 · Such examples of deepfake porn are not uncommon. Over 90 per cent of deepfakes available online are non-consensual pornographic clips of women; ...
  46. [46]
    Deepfake Statistical Data (2023–2025) - Views4You
    May 27, 2025 · Major deepfake pornography sites have cataloged thousands of victims (almost 4,000 female celebrities were found across the top deepfake porn ...
  47. [47]
    New York's Right to Publicity and Deepfakes Law Breaks New Ground
    Dec 17, 2020 · On November 30, 2020, New York Governor Andrew Cuomo signed a path-breaking law addressing synthetic or digitally manipulated media.
  48. [48]
    2023 State Of Deepfakes: Realities, Threats, And Impact
    Our 2023 report explores deepfake tech & aims to empower responsible navigation through thorough research on 95,820 videos, 85 channels, and 100 websites.
  49. [49]
    [PDF] state-of-deepfake-infographic-2023.pdf - Security Hero
    Total traffic across top 10 dedicated deepfake porn websites: 34,836,914. Total deepfake videos online in 2023: 95,820. Deepfake pornography grew 550% from 2019 to 2023 ...
  50. [50]
    [PDF] Children and deepfakes - European Parliament
    Misuse of deepfake technology includes financial crimes, extortion, harassment and the creation of pornographic deepfakes. Moreover, deepfakes pose particular ...
  51. [51]
    Dozens of lawmakers victims of sexually explicit deepfakes: Report
    Dec 11, 2024 · More than two dozen lawmakers have been the victims of deepfake pornography, with female lawmakers significantly more likely to be targeted.
  52. [52]
    Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes ...
    AI-generated NSII is more colloquially known as “deepfake pornography.” The consumer creation of deepfakes (a portmanteau of “deep learning” and “fake” [72, 81]) ...
  53. [53]
    [PDF] Deepfake Nudes & Young People - Thorn.org
    Among the subsample, deepfake creators reported being most likely to have created deepfake nude imagery of an adult (62%), with roughly 1 in 3 (36 ...
  54. [54]
    Image-Based Sexual Abuse Perpetration: A Scoping Review - PMC
    Jul 30, 2024 · Scholarly and gray empirical literature (quantitative and qualitative studies) on IBSA perpetration, including peer-reviewed articles, theses, ...
  55. [55]
    Taylor Swift deepfakes spark calls in Congress for new legislation
    Jan 27, 2024 · US politicians have called for new laws to criminalise the creation of deepfake images, after explicit faked photos of Taylor Swift were ...
  56. [56]
    Inside the Taylor Swift deepfake scandal: 'It's men telling a powerful ...
    Jan 31, 2024 · AI-generated porn, fuelled by misogyny, is flooding the internet, with Taylor Swift the latest high-profile casualty ... deepfake pornography ...
  57. [57]
    ABC NEWS: Taylor Swift and No AI Fraud Act: How Congress plans ...
    Jan 30, 2024 · Taylor Swift's likeness was used for nonconsensual, seemingly AI-generated, deepfake pornography, which spread across the Internet like wildfire ...
  58. [58]
    Scarlett Johansson on fake AI-generated sex videos: 'Nothing can ...
    Dec 31, 2018 · Johansson, one of the world's highest-paid actresses, spoke to The Washington Post in an exclusive interview: "The Internet is just another ...
  59. [59]
    Scarlett Johansson fights back against 'deep-fake' porn - ABC News
    Jan 3, 2019 · What women should know about the new technology that superimposes women's faces on porn stars' bodies.
  60. [60]
    Sexual deepfake ads using Emma Watson's face ran ... - NBC News
    Mar 7, 2023 · A deepfake app advertised itself on Meta platforms using the faces of actresses Emma Watson and Scarlett Johansson swapped into sexual ...
  61. [61]
    Nearly 4000 celebrities found to be victims of deepfake pornography
    Mar 21, 2024 · In the first three-quarters of 2023, 143,733 new deepfake porn videos were uploaded to the 40 most used deepfake pornography sites – more than ...
  62. [62]
    Beverly Hills school expels students over deepfake nude photos
    Mar 8, 2024 · Five middle school students in Beverly Hills who were accused of using AI to create fake nude images of their classmates have been expelled.
  63. [63]
    Teen girls are being victimized by deepfake nudes. One N.J. family ...
    Dec 4, 2023 · A mother and her 14-year-old daughter are advocating for better protections for victims after AI-generated nude images of the teen and other female classmates ...
  64. [64]
    Spurred by Teen Girls, States Move to Ban Deepfake Nudes
    Apr 22, 2024 · Legislators in two dozen states are working on bills, or have passed laws, to combat A.I.-generated sexually explicit images of minors.
  65. [65]
    Charlotte Child Pornography Case Shows 'Unsettling' Reach of AI ...
    Apr 29, 2024 · Child psychiatrist David Tatum was sentenced to 40 years in prison for using generative artificial intelligence to digitally alter clothed ...
  66. [66]
    Recidivist Sex Offender Sentenced for Possessing Deepfake Child ...
    May 1, 2024 · ... deepfake child sexual abuse material (CSAM) ... child pornography and one count of accessing with the intent to view child pornography.
  67. [67]
    Teen victim of AI-generated "deepfake pornography ... - CBS News
    Dec 18, 2024 · Elliston Berry's life was turned upside down after a photo she posted on Instagram was digitally altered online to be pornographic.
  68. [68]
    Minors Are On the Frontlines of the Sexual Deepfake Epidemic
    Oct 10, 2024 · ... of deepfake abuse involving private citizens (or non-celebrities) to date. ... of pornography involving juveniles, and two counts of ...
  69. [69]
    City Attorney sues most-visited websites that create nonconsensual ...
    Aug 15, 2024 · Worse yet, victims of nonconsensual deepfake pornography have found virtually no recourse or ability to control their own image after ...
  70. [70]
    Experiences with AI-Generated Pornography: A Quantitative Content ...
    Sep 18, 2025 · A content analysis of 36 websites for the generation of AI pornography found that most enable the generation of still images (80.6%), and many ...
  71. [71]
    [PDF] Artificial Intelligence-Altered Videos (Deepfakes), Image-Based ...
    Mar 22, 2023 · This means there are varied motivations behind image-based abuse which could span from revenge by different perpetrators like ex-partners to ...
  72. [72]
    Characterizing the MrDeepFakes Sexual Deepfake Marketplace
    The prevalence of sexual deepfake material has exploded over the past several years. ... peer-reviewed journal Pattern Recognition in 2023. The delta between the ...
  73. [73]
    [PDF] Behind the Deepfake: 8% Create; 90% Concerned
    Jun 5, 2024 · First, on average, 15% of people report exposure to harmful deepfakes, including deepfake pornography, deepfake frauds/scams and other ...
  74. [74]
    Celebrity status, sex, and variation in psychopathy predicts ...
    Deepfake judgements are generally more lenient if victims are celebrities and/or male. Psychopathy and belief in a just world predict proclivity to generate ...
  75. [75]
    [PDF] Sexual Deepfakes and Image-Based Sexual Abuse: Victim-Survivor ...
    Mar 12, 2023 · In her documentary, Deepfake Porn: Could You Be Next? Jess Davies said these words to describe the real and present dangers presented by AI ...
  76. [76]
  77. [77]
    Full article: The tensions of deepfakes - Taylor & Francis Online
    Jul 13, 2023 · In fact, it has been reported that 100% of those targeted and harmed in deepfake pornography are women, the main reason being that the algorithms ...
  78. [78]
  79. [79]
    Deepfake videos could destroy trust in society – here's how to restore it
    Feb 6, 2019 · ... deepfake revenge porn purporting to show people cheating on their partners won't be far behind. But more than becoming a nasty tool for ...
  80. [80]
    Can deepfakes manipulate us? Assessing the evidence via a critical ...
    May 2, 2025 · This study is a scoping review of peer-reviewed literature on the topic of deepfake effects on beliefs, memories and behaviours and therefore ...
  81. [81]
    'Another Body' documentary exposes harm of deepfake technology
    Jan 25, 2024 · A 2019 report by Sensity, a company that detects and monitors sexual deepfakes, found that 95% of all online deepfake videos are nonconsensual ...
  82. [82]
    In South Korea, rise of explicit deepfakes wrecks women's lives and ...
    Oct 3, 2024 · The U.S. cybersecurity firm Security Hero called South Korea “the country most targeted by deepfake pornography” last year.
  83. [83]
    The Alarming Rise of Deepfake Porn and Its Devastating Effects
    Apr 13, 2025 · The study also found that 99% of all deepfake porn targeted females. The Harms of Deepfake Pornography. The increase of this ...
  84. [84]
    How Deepfake Pornography Violates Human Rights and Requires ...
    Aug 13, 2025 · Testimonies of victims of deepfake pornography often show that they experience similar psychological harms to those affected by sexual abuse.
  85. [85]
    What we know and don't know about deepfakes - Sage Journals
    May 22, 2024 · The term “deepfake” first appeared in 2017, coined by a Reddit user to describe pornographic content apparently featuring the faces of famous ...
  86. [86]
    Dealing with deepfakes: What the First Amendment says
    Jul 10, 2024 · Deepfakes are protected under the First Amendment as a form of free expression. Deepfakes are essentially lies, which, without criminal behavior, are protected ...
  87. [87]
    Why the First Amendment Likely Protects the Creation of ...
    The First Amendment likely protects the creation of pornographic deepfakes by Bradley Waldstreicher, Volume 42 Issue 2.
  88. [88]
    [PDF] Deepfakes, Synthetic Media, and the First Amendment
    Jun 24, 2025 · ... will examine exceptions to free speech to determine if they could uphold synthetic media legislation, including obscenity, defamation ...
  89. [89]
    Adverse human rights impacts of dissemination of nonconsensual ...
    If the pornography performer in the deepfake can be identified, nonconsensual sexual deepfake can undermine their rights, as well as those of the person ...
  90. [90]
    Deepfake Privacy: Attitudes and Regulation
    In our main study, a representative sample of the U.S. adult population perceived nonconsensually created pornographic deepfake videos as extremely harmful and ...
  91. [91]
    Are Deepfakes Protected by the First Amendment? - Freedom Forum
    May 21, 2024 · The prevailing view is that the First Amendment protects the people who use AI to create deepfakes. The First Amendment protects speech, not speakers.
  92. [92]
    Free Speech Advocates Express Concerns As TAKE IT DOWN Act ...
    Feb 21, 2025 · Tech policy experts have debated the First Amendment implications of anti-deepfake legislation for years, but concerns about regulatory ...
  93. [93]
    Free speech rights secure a legal victory over California's restrictive ...
    Aug 21, 2025 · Free speech rights recently secured an important legal win against one of California's overly broad deepfake laws. The case underscores the ...
  94. [94]
  95. [95]
    Gender bias, AI, and deepfakes are promoting misogyny online
    Jan 9, 2025 · Currently, AI, deepfake pornography, and internet culture exacerbate gender bias and misogyny. However, these challenges also present ...
  96. [96]
    Cyberbullying girls with pornographic deepfakes is a form of misogyny
    Nov 28, 2023 · ... issues as well. Deepfake porn cyberbullying. In the Almendralejo incident ...
  97. [97]
    Women lawmakers are 70 times more likely to be deepfake victims
    Dec 17, 2024 · Women in Congress are more likely to be victims of deepfake pornography than their male counterparts, according to a national organization that combats ...
  98. [98]
    How Model Personalization Normalizes Gendered Harm - arXiv
    May 7, 2025 · From Avocado Chairs to Deepfake Porn: The Evolution of Text-to-Image Technologies ... Multimodal datasets: Misogyny, pornography, and malignant ...
  99. [99]
    New Law Regarding Deepfakes Says, “Take It Down”
    Sep 16, 2025 · While the distribution of child pornography is already criminalized under U.S. federal law, proponents of TAKE IT DOWN argue that the law fills ...
  100. [100]
  101. [101]
    Texas amends non-consensual sexual deepfake law to include ...
    Jun 10, 2025 · Unlike broadly worded political deepfake laws that may infringe on free speech, bills that limit their scope to non-consensual sexual content ...
  102. [102]
  103. [103]
    LightFakeDetect: A Lightweight Model for Deepfake Detection in ...
    The model is evaluated using the Deepfake Detection Challenge (DFDC) and Celeb-DF v2 datasets, demonstrating impressive performance, with 98.2% accuracy and a ...
  104. [104]
    Top 10 AI Deepfake Detection Tools to Combat Digital Deception in ...
    These videos have been altered using four primary deepfake techniques: DeepFakes, Face2Face, FaceSwap, and NeuralTextures. Additionally, it hosts the Deep Fake ...
  105. [105]
    [PDF] Proactive Deepfake Defence via Identity Watermarking
    The watermark acts as the anti-Deepfake label to protect the user's authenticity of these images. Once images with similar identities to the watermarked ...
  106. [106]
    Social Media Authentication and Combating Deepfakes Using Semi ...
    Dec 9, 2024 · Our proposed watermarking framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing ...
  107. [107]
    A systematic review of deepfake detection and generation ...
    Oct 15, 2024 · TITLE-ABS-KEY ((“synthetic media” OR “Deep Fake” OR “Deepfakes” OR “Deep fakes” OR “Deepfake Images” OR “Fake video*” OR “Deepfake video*” OR “ ...
  108. [108]
    Trump signs bill cracking down on explicit deepfakes - NBC News
    May 19, 2025 · The Take It Down Act makes publishing such content illegal, subjecting violators to mandatory restitution and criminal penalties such as prison, ...
  109. [109]
    The State of Deepfake Regulations in 2025 - Reality Defender
    Jun 18, 2025 · The TAKE IT DOWN Act, signed into law on May 19, 2025, criminalizes knowingly publishing or threatening to publish non-consensual intimate ...
  110. [110]
    The TAKE IT DOWN Act: A Federal Law Prohibiting ... - Congress.gov
    May 20, 2025 · On April 28, 2025, Congress passed S. 146, the TAKE IT DOWN Act, a bill that criminalizes the nonconsensual publication of intimate images.
  111. [111]
    Complete Guide to U.S. Deepfake Laws: 2025 State and Federal ...
    Sep 2, 2025 · Federal TAKE IT DOWN Act signed by President Trump in May 2025 creates first federal framework; 47 states now have enacted deepfake legislation ...
  112. [112]
    Threats and regulatory challenges of non-consensual pornographic ...
    The proliferation of non-consensual pornographic deepfakes has raised ethical, legal, and social concerns worldwide. This form of gender-based violence ...
  113. [113]
    [PDF] Facing reality? Law enforcement and the challenge of deepfakes
    In a December 2020 study, Sensity, an Amsterdam-based company that detects and tracks deepfakes online, found 85,047 deepfake videos on popular streaming ...
  114. [114]
    Organisational Challenges in US Law Enforcement's Response to AI ...
    The rapid rise of AI-driven cybercrime and deepfake fraud poses complex organisational challenges for US law enforcement, particularly the Federal Bureau of ...
  115. [115]
    Community Guidelines - Pornhub Help
    Depicts an individual's likeness without their consent (including "deepfakes" or other forms of AI-generated or manipulated content). Exposes private and ...
  116. [116]
    X's non-consensual nudity policy - Help Center
    You may not post or share intimate photos or videos of someone that were produced or distributed without their consent. Sharing explicit sexual images or videos ...
  117. [117]
    Study: Reports of nonconsensual nude images are ignored on X
    X removed deepfake nudes when researchers reported them for copyright, but they remained up when reported for nonconsensual nudity.
  118. [118]
    Reporting Non-Consensual Intimate Media: An Audit Study of ... - arXiv
    Sep 18, 2024 · The non-consensual nudity policy prohibits posting or sharing intimate photos or videos produced or distributed without consent, including ...
  119. [119]
    When non-consensual intimate deepfakes go viral: The insufficiency ...
    Jul 4, 2024 · An industry report based on the analysis of 14,678 deepfake videos online indicates that 96% of them were non-consensual intimate content and ...
  120. [120]
    Tech Bros, Big Platforms, and Poor Regulation: Who Enables ...
    Oct 5, 2025 · “How can society effectively stop the epidemic of deepfake porn videos, which constitute 96% of deepfakes and target 99% of women?” He answered ...
  121. [121]
    Governing Image-Based Sexual Abuse: Digital Platform Policies ...
    The extent to which platform policies and guidelines explicitly or implicitly cover “deepfakes,” including deepfake pornography, is a relatively new governance ...