
Elsagate

Elsagate refers to a class of videos produced primarily between 2016 and 2018 that targeted young children by featuring familiar characters from children's media, such as Elsa from Disney's Frozen or Spider-Man, engaged in disturbing scenarios including gore, scatological humor, simulated surgeries, consumption of bodily fluids, and theft. These videos often evaded YouTube's content filters by incorporating child-friendly thumbnails, titles, and keywords while embedding age-inappropriate themes, leading to algorithmic recommendations that exposed millions of viewers to potentially harmful material. Empirical analyses of large video corpora indicate that such content constituted a small but persistent fraction of children's uploads, with one study finding disturbing elements in approximately 1.1% of over 233,000 Elsagate-related videos. The phenomenon highlighted vulnerabilities in platform moderation, where creators exploited recommendation algorithms for profit through ad revenue, often from low-cost overseas production hubs with minimal oversight. Research employing deep learning models has demonstrated the feasibility of automated detection, achieving high accuracy (e.g., 92.6%) by training on visual and textual cues like unnatural character behaviors or mismatched audio-visual elements, underscoring the scalability of machine-based interventions over human review alone. Despite YouTube's subsequent policy adjustments, including enhanced classifiers and stricter guidelines for kids' content, residual concerns persist regarding algorithmic "hypes" that amplify borderline material, as evidenced by ongoing studies of viewer commentary and content evolution. Elsagate's broader implications extend to causal factors in digital culture, where opaque incentives—prioritizing engagement metrics over child welfare—foster boundary-pushing genres that challenge traditional distinctions between educational and exploitative media. Academic scrutiny, drawing from corpus analyses rather than anecdotal reports, reveals patterns of sparse but impactful inappropriateness, prompting interdisciplinary efforts in machine learning, digital forensics, and content moderation to mitigate recurrence. While not indicative of systemic predation, the phenomenon empirically validates critiques of profit-driven platforms' self-regulation, emphasizing the need for transparent, evidence-based safeguards in algorithmically curated environments.
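The automated-detection approach summarized above combines visual and textual signals. The following minimal sketch illustrates that general idea in PyTorch; it is not the published architecture from the cited studies, and the feature dimensions, mean-pooling over frames, and layer sizes are illustrative assumptions.

```python
# A minimal sketch (not the published architecture) of a multimodal
# Elsagate-style classifier: visual embeddings from sampled frames are
# fused with a text embedding of the title/tags, then scored jointly.
import torch
import torch.nn as nn

class ElsagateClassifier(nn.Module):
    def __init__(self, frame_dim=512, text_dim=128, hidden=256):
        super().__init__()
        # Hypothetical dimensions: frame_dim for per-frame CNN features,
        # text_dim for a summary embedding of title and tags.
        self.frame_proj = nn.Linear(frame_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, 1),  # single logit: disturbing vs. benign
        )

    def forward(self, frame_feats, text_feats):
        # frame_feats: (batch, n_frames, frame_dim) -> mean-pool over frames
        v = self.frame_proj(frame_feats).mean(dim=1)
        t = self.text_proj(text_feats)
        return self.head(torch.cat([v, t], dim=-1))

model = ElsagateClassifier()
logit = model(torch.randn(2, 16, 512), torch.randn(2, 128))
prob = torch.sigmoid(logit)  # probability a video is flagged as disturbing
```

In practice, the cited systems derive frame features from pretrained vision models and text features from title and tag embeddings, then train on labeled corpora such as the Elsagate-related datasets discussed in this article.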

Overview and Definition

Core Phenomenon and Terminology

Elsagate denotes a category of videos ostensibly designed for young children, featuring licensed characters from popular media such as Elsa and Anna from Disney's Frozen, Spider-Man, or Peppa Pig, but depicting them in scenarios involving simulated violence, sexualized situations, medical trauma like injections or tooth extractions, or other elements unsuitable for minors. These videos employ deceptive thumbnails and titles mimicking legitimate children's entertainment, such as toy unboxings or play reenactments, to attract views from algorithmic recommendations tailored to search patterns and viewing histories. The nomenclature "Elsagate" originated as a portmanteau of "Elsa" and the "-gate" suffix, evoking scandals like Watergate, due to the prevalence of Elsa-centric content in early examples; the term emerged in online discussions and media reports circa 2017 as awareness of the pattern grew. Content creators, often operating from regions like Vietnam or China, produced these low-budget animations or live-action skits without official licensing, prioritizing rapid monetization through ad revenue over child welfare or narrative quality. This phenomenon differs from conventional children's media, which typically adheres to age-appropriate educational or whimsical themes under regulatory oversight, by instead leveraging YouTube's recommendation engine to inadvertently expose unsupervised minors to content evoking fear or confusion without parental intent. It also contrasts with adult-oriented shock videos, such as creepypastas or gore, which lack child-targeted branding and do not exploit family-friendly keywords to infiltrate kid-centric feeds. The core issue lies in the causal mismatch between surface-level appeal and underlying disturbance, driven by profit motives rather than deliberate predation, though empirical studies note potential psychological impacts like desensitization from repeated exposure.

Scope and Scale

Elsagate videos proliferated extensively on YouTube before major platform interventions in late 2017, with hundreds of channels generating thousands of such uploads targeted at young audiences. These channels often featured low-barrier production methods, such as basic animation software and repurposed popular thumbnails, enabling rapid output without significant investment. Top Elsagate channels collectively amassed billions of views, fueled by YouTube's recommendation algorithms that prioritized high-engagement content for children. The phenomenon extended beyond English-language content, with variants produced by creators in regions including China and other non-Western countries, often in local languages to evade detection and expand reach. Despite YouTube's filters, these videos frequently appeared in the YouTube Kids app, which was intended to curate age-appropriate material but struggled with algorithmic loopholes allowing disguised uploads. By mid-2017, reports highlighted the scale's acceleration, attributing it to minimal oversight and the platform's monetization incentives that rewarded volume over quality scrutiny. Global distribution amplified the issue, as channels operated from various jurisdictions with differing content regulations, complicating enforcement.

Historical Development

Early Emergence (2013–2016)

The phenomenon of Elsagate-style videos began emerging on YouTube around 2014, coinciding with the massive popularity of Disney's Frozen following its November 2013 theatrical release, which introduced the character Elsa to a global audience of young children. Early content typically featured unlicensed mashups of Elsa with superheroes like Spider-Man, often in simplistic narratives such as romantic encounters or adventures, as seen in the video "Frozen Elsa Dates Spiderman!" uploaded to the DisneyCarToys channel in 2014. These videos capitalized on evergreen intellectual properties without official licensing, produced via low-cost animations that required minimal resources, allowing rapid proliferation by creators seeking ad revenue through high-engagement children's views. YouTube's recommendation system during this period heavily favored metrics like watch time and click-through rates, which children's content naturally amplified due to repetitive viewing patterns among young audiences. Searches for popular characters like Elsa or Spider-Man would surface these bootleg videos alongside official clips, driving organic discovery and monetization without platform intervention, as moderation priorities focused more on overt violations than subtle thematic escalations. By late 2016, users noted the ubiquity of such content, including live-action skits with adults in costumes depicting superheroes in bizarre scenarios, further fueled by the algorithm's emphasis on session duration over suitability. Much of this early production originated from overseas "content farms," particularly in regions like Vietnam and China, where teams generated high volumes of videos using cheap animation software to exploit trending keywords and evade detection. These operations prioritized quantity for monetization via YouTube's Partner Program, which rewarded views from automated or looped plays, leading to unmonitored saturation of kids-targeted feeds. The lack of age-gating or algorithmic filters for emerging genres allowed the content to evolve from benign crossovers to increasingly provocative themes, setting the stage for later excesses, all while generating substantial revenue—some channels amassed millions of views per video—before drawing scrutiny.

Public Awareness and Peak Controversy (2017)

In early 2017, Elsagate transitioned from niche online discussions to widespread public scrutiny following high-profile media investigations into YouTube's child-targeted content. On March 27, 2017, the BBC published an exposé detailing thousands of videos that superficially resembled innocuous cartoons—featuring characters like Elsa from Disney's Frozen or Spider-Man—but depicted graphic scenarios, including injections with syringes, dismemberment, and other violent or pseudo-medical acts unsuitable for young viewers. These videos, often produced by low-effort content farms, evaded detection by mimicking legitimate children's programming while exploiting YouTube's recommendation algorithms to reach impressionable audiences, prompting parental outrage over unintended exposure. Awareness escalated through viral parent testimonials on platforms like Reddit and Facebook, where accounts of toddlers encountering the material led to organized backlash. Independent YouTubers, including creators like Matt Watson (of the channel MattsWhatItIsLike), released investigative videos dissecting specific channels, such as those racking up millions of views with skits involving character torture or mutilation, further amplifying the issue to mainstream audiences. This momentum translated into formal complaints to regulators, including the U.S. Federal Trade Commission, highlighting concerns over child safety and platform liability, though substantive investigations materialized later. YouTube's initial handling reflected internal hesitancy, with public statements recognizing problematic content but lacking immediate algorithmic overhauls or mass removals. By November 2017, under sustained pressure, YouTube announced updates to flag, review, and restrict videos breaching child safety guidelines without violating broader terms—such as those simulating harm to branded characters—resulting in the deletion of over 150,000 clips deemed exploitative. Critics, including affected parents and media analysts, pointed to delays in prioritizing the issue, attributing them to reliance on ad revenue from high-viewership kids' content and challenges in automated moderation for borderline material.

Decline and Adaptation (2018–2019)

In response to mounting scrutiny, YouTube escalated enforcement against Elsagate-style content through demonetization and channel terminations, which curtailed the profitability and visibility of many overt producers. By early 2018, these measures, combined with automated flagging and human reviews, resulted in the removal of thousands of videos and accounts featuring disturbing child-targeted material, though exact figures specific to Elsagate remain undisclosed in platform reports. The introduction of refined parental controls in the YouTube Kids app on April 25, 2018, enabled users to curate approved channels and topics, effectively filtering out flagged inappropriate videos and contributing to a correlated drop in exposure to such content within the app ecosystem. Subsequent policy updates, including broader child safety reviews, further suppressed recommendations of borderline videos, though enforcement remained incomplete: data indicate that only about 20.5% of identified disturbing videos were proactively removed by mid-2019. Creators adapted by rebranding and softening themes to evade detection, employing innocuous thumbnails, titles mimicking legitimate children's programming (e.g., references to Peppa Pig or Mickey Mouse), and overlapping tags with benign content to blend into algorithmic feeds. This shift toward subtler variants—featuring less explicit gore or injection scenarios but retaining disturbing elements—allowed persistence, as evidenced by a 2018 crawl uncovering 2,447 inappropriate videos among 233,337 Elsagate-related uploads, many of which had remained online for over two years on average by May 2019. Such tactics underscored the limitations of platform moderation in fully eradicating incentivized content farms.

Resurgence with AI and New Formats (2020s)

In 2025, Elsagate-style content reemerged on YouTube through generative AI, enabling the mass production of disturbing animations that mimic child-targeted videos while incorporating violent and grotesque elements. A WIRED investigation on May 2, 2025, identified dozens of channels using AI tools to generate videos featuring popular characters like Minions, Elsa, and anthropomorphic cats subjected to beatings, mutilation, dismemberment, and killings, often set to nursery rhymes or upbeat music to appeal to young viewers. These outputs, characterized by surreal distortions and rapid iteration, frequently bypass legacy moderation systems trained on human-created media, as the synthetic visuals lack familiar patterns of production flaws. Content farms leveraged automation for scaled deployment, integrating Elsagate tropes with AI-slop-style formats that repurpose gaming-adjacent IPs and viral memes to exploit algorithmic recommendations. Tubefilter's analysis on May 2, 2025, framed this as YouTube's "next Elsagate," noting generative AI's role in flooding feeds with low-effort videos of Elsa, Minions, and animal proxies in violent scenarios, produced at volumes unattainable pre-AI due to automation of scripting, voicing, and rendering. Channels often operate from regions with lax oversight, uploading hundreds of variants daily to accumulate views from unsupervised child audiences, prioritizing monetization via ads over content ethics. By mid-2025, independent analyses confirmed the trend's acceleration, with creators documenting "rampant" returns of pseudo-educational gore clips disguised as kid-friendly entertainment, amplified by AI's ability to evade demonetization through subtle variations. This phase underscores AI's dual-edged impact: democratizing creation but intensifying risks from unfiltered, profit-driven outputs that evade detection longer than manual equivalents.

Characteristics of Elsagate Videos

Content Themes and Tropes

Elsagate videos recurrently feature popular children's characters, such as Elsa from Disney's Frozen and Spider-Man, subjected to simulated medical procedures including injections via syringes filled with brightly colored liquids or mock surgeries using improvised tools. These motifs often escalate to depict accidents causing graphic injuries, like falls or collisions leading to exposed wounds or dismemberment simulations, integrated into otherwise familiar cartoon settings. Horror pranks form another core trope, where characters prank each other with sudden scares, chases, or horror elements, such as Elsa being pursued by monstrous versions of familiar characters or antagonists wielding weapons. Violence is stylized but explicit, including beatings, stabbings, or assaults with everyday objects repurposed as threats, frequently involving crossovers between unrelated franchises like the Joker tormenting Elsa. Subtle adult-oriented undertones appear disguised within child-targeted visuals, such as characters with torn clothing exposing undergarments while being leered at by antagonists like the Joker, or situations mimicking fetishistic acts under the guise of play. These elements blend with bodily function exaggerations, like excessive vomiting, defecation, or pregnancy simulations on childlike figures, creating incongruous parodies of innocent narratives. Content structures emphasize repetition over coherent storytelling, with looping sequences of setup-shock-resolution designed for brief, high-impact viewing, often lacking dialogue or resolution to exploit autoplay retention. Tropes recycle across videos, prioritizing visceral reactions through shock, gross-out humor, or taboo breaches rather than educational or adventurous plots typical of licensed children's media.

Production Techniques and Quality

Elsagate videos exhibit markedly low production values, relying on rudimentary techniques that prioritize rapid output over aesthetic or technical refinement, in stark contrast to the polished, studio-backed animations of legitimate children's programming. Creators frequently utilized basic animation and editing software to assemble scenes with luridly colored, neon-lit visuals and simplistic character movements, resulting in "hideously cheap" aesthetics that evoke an uncanny, disjointed quality. These efforts often involved cobbling together generic elements like nursery-rhyme segments or toy unboxings into incoherent narratives, yielding grainy, semi-automated animations that lack narrative coherence or visual polish. Audio elements further underscore the amateurish execution, with garbled dialogue, mismatched synchronization, and simplistic voiceovers—sometimes employing text-to-speech synthesis—producing robotic or unintelligible speech that amplifies the eerie, low-effort feel. Unlike professional voice acting, which features trained performers with lip-sync precision, these videos' audio appears hastily overlaid, contributing to an overall effect where characters' expressions and movements fail to align naturally. The scale of production relied on batch methods by content-farm operators, enabling thousands of uploads per channel through automated or template-based workflows, which facilitated high-volume dissemination without substantial investment in originality or quality. This approach, often evading stringent oversight, allowed for the unlicensed repurposing of character likenesses via ripped or freely available digital assets, bypassing the resource-intensive modeling and rendering processes of commercial studios.

Targeted Characters and Branding

Elsagate videos frequently exploited intellectual properties from established children's media franchises to capitalize on brand familiarity among young audiences. Prominent examples include characters such as Elsa from Frozen, Peppa Pig, and Mickey Mouse, as well as Marvel's Spider-Man and characters from smaller independent series. These selections targeted high-search-volume terms associated with popular cartoons, enabling creators to hijack organic traffic from parents and children seeking official content. Thumbnails played a central role in deceptive branding by replicating the visual style of legitimate trailers or episodes, often depicting characters in innocuous or adventurous poses to entice clicks. For instance, images might show Elsa or Spider-Man in playful settings reminiscent of official trailers or toy promotions, concealing the videos' subsequent inappropriate elements. This mimicry extended to production aesthetics, with low-budget animations aping the color palettes and character designs of originals to foster initial trust and algorithmic promotion. Titles and descriptions employed keyword-stuffing tactics, incorporating repeated references to targeted IPs alongside misleading descriptors like "funny" or "surprise" to optimize for YouTube's search and recommendation systems. Examples include phrases such as "FROZEN ELSA HUGE SNOT" or tag lists blending character names with unrelated terms (e.g., "surprise egg, eggs"). Such practices amplified visibility by aligning with common queries for educational or entertaining kids' videos, thereby sustaining view counts despite the content's divergence from branded expectations.
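As a rough illustration of the keyword-stuffing pattern just described, the heuristic below scores a title and tag list by how much of the text consists of high-traffic bait terms and verbatim repetition. The watchlist and weighting are invented for this sketch and are not derived from any platform's actual ranking or spam systems.

```python
# Illustrative heuristic (not YouTube's actual system) for flagging
# keyword-stuffed metadata: measure how much of the text is made of
# high-traffic character names and bait terms, and how repetitive it is.
from collections import Counter

BAIT_TERMS = {"elsa", "spiderman", "frozen", "peppa", "surprise",
              "egg", "eggs", "funny", "kids"}  # assumed watchlist

def stuffing_score(title: str, tags: list[str]) -> float:
    tokens = (title + " " + " ".join(tags)).lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    bait_ratio = sum(c for t, c in counts.items() if t in BAIT_TERMS) / len(tokens)
    # Repetition: fraction of tokens that duplicate earlier ones.
    repetition = 1 - len(counts) / len(tokens)
    return 0.5 * bait_ratio + 0.5 * repetition  # 0 = clean, 1 = heavily stuffed

# A stuffed example in the style quoted above scores around 0.5,
# while an ordinary descriptive title scores near zero.
print(stuffing_score("FROZEN ELSA HUGE SNOT funny",
                     ["elsa", "elsa", "spiderman", "surprise egg", "eggs"]))
```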

Underlying Mechanisms

YouTube Algorithm Dynamics

YouTube's recommendation system prioritizes metrics like watch time—the total duration viewers spend on videos—and click-through rates from thumbnails and titles, favoring content that maximizes session length and engagement over thematic suitability. This design amplified Elsagate videos, which incorporated repetitive, hypnotic animations and taboo elements (e.g., injections, surgeries) to sustain children's attention without prompting skips, creating auto-play loops that extended viewing by hours. Elsagate content evaded algorithmic silos by cross-pollinating recommendations from benign children's videos; producers replicated the tags, titles, and thumbnails of popular series like Peppa Pig, achieving heavy keyword overlap (e.g., shared keyword stems appearing in up to 82.6% of disturbing videos, with the stem "funni" in 47.8%). As a result, searches for toddler-appropriate terms led to chains where disturbing videos surfaced via "related" suggestions, with a 3.5% probability of reaching inappropriate material within ten recommendation hops from neutral queries. Pre-2017, the system's reliance on automated matching allowed unrated uploads—those not explicitly flagged for children's categories—to infiltrate feeds intended for kids, as newly uploaded videos garnered hundreds of thousands of views through engagement signals alone, without oversight filters. A targeted crawl found that from Elsagate-adjacent benign videos, 0.6% of top-ten recommendations directly transitioned to disturbing ones, underscoring how probabilistic matching over content review perpetuated exposure.
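The hop probabilities quoted above can be made concrete with a toy simulation. The sketch below assumes, for simplicity, an independent per-hop chance that a top recommendation is disturbing, using the 0.6% direct-transition figure from the cited crawl; real recommendation chains are not independent, which is one reason the measured ten-hop figure (3.5%) is lower than the independence estimate.

```python
# A small simulation in the spirit of the "random walk" analyses cited
# above: given a per-hop chance that an autoplay recommendation is
# disturbing, estimate the probability of hitting disturbing content
# within k hops. The 0.6% rate comes from the study quoted above; the
# independence assumption is a simplification.
import random

P_BAD_PER_HOP = 0.006   # chance a top recommendation is disturbing
HOPS = 10               # autoplay chain length
TRIALS = 200_000

def hits_disturbing(p=P_BAD_PER_HOP, hops=HOPS):
    return any(random.random() < p for _ in range(hops))

estimate = sum(hits_disturbing() for _ in range(TRIALS)) / TRIALS
# Analytically: 1 - (1 - p)^hops = 1 - 0.994^10 ≈ 5.8% here; the paper's
# measured 3.5% is lower because real transitions are not independent.
print(f"P(reach disturbing within {HOPS} hops) ≈ {estimate:.3%}")
```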

Economic Incentives for Creators

Children's content on YouTube, including Elsagate videos, generated substantial revenue potential due to extended watch sessions and high repeat viewership among young audiences, which increased ad impressions per viewer. Prior to stricter enforcement of the Children's Online Privacy Protection Act (COPPA) in 2019, creators often avoided designating videos as "made for kids" to enable personalized advertising, thereby accessing higher revenue per mille (RPM) rates comparable to general audience content, estimated at $1–$5 per 1,000 views rather than the post-COPPA reduced rates of $0.30–$0.50 for child-directed material. Production barriers were minimal, relying on low-cost tools such as basic animation software, recycled public-domain assets, and inexpensive or outsourced labor, allowing rapid output without significant investment in originality or quality. This model emphasized volume production to accumulate views and qualify for YouTube's Partner Program via monetization thresholds, where eligibility could be met through sheer quantity of uploads rather than creative merit. Many operations exploited global economic disparities, with content farms in low-wage regions like parts of Asia employing cheap labor to generate videos targeting high-value advertising markets, where CPM rates were elevated due to premium advertiser demand. Before the 2017 crackdowns, individual Elsagate-style channels with millions of views reportedly earned thousands of dollars monthly, underscoring the profitability before platform interventions curtailed such practices.
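A back-of-the-envelope calculation illustrates the scale of the incentive gap described above, using the RPM ranges quoted in this section; the view count is a hypothetical mid-sized channel, not reported data.

```python
# Sketch of the pre- vs. post-COPPA revenue gap using the RPM ranges
# quoted in this section (assumed representative, not exact payout data).
def monthly_revenue(views: int, rpm: float) -> float:
    """Ad revenue for a month of views at a given RPM ($ per 1,000 views)."""
    return views / 1_000 * rpm

views = 10_000_000  # hypothetical monthly views for a mid-sized channel
pre_coppa  = monthly_revenue(views, rpm=3.00)   # midpoint of $1-$5 range
post_coppa = monthly_revenue(views, rpm=0.40)   # midpoint of $0.30-$0.50 range

print(f"pre-COPPA:  ${pre_coppa:,.0f}/month")   # $30,000
print(f"post-COPPA: ${post_coppa:,.0f}/month")  # $4,000
```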

Content Farm Operations

Content farm operations producing Elsagate videos typically involve coordinated efforts to generate high volumes of content across multiple channels, prioritizing quantity over quality to capitalize on ad revenue from children's viewership. These operations often employ low-cost methods, including outsourced animation and templated scripts, enabling the rapid output of videos that mimic popular children's programming while incorporating disturbing elements. During the 2017 peak, such farms flooded the platform with thousands of similar videos, maintaining presence through decentralized channel management. To evade platform moderation, operators frequently utilize multiple accounts, proxies, and reuploads of banned material onto newly created channels, ensuring continuity despite removals or demonetization. For instance, channels reported for violations are quickly replaced with variants featuring slight modifications to thumbnails, titles, or tags, often including innocuous keywords like "#familyfun" to bypass automated filters. This tactic allows persistent operation, as evidenced by individual channels accumulating millions of views before intervention. In the 2020s, these operations have evolved toward AI-assisted production for greater scalability, leveraging generative tools to automate animation and scenario creation, reducing reliance on manual labor and enabling even higher output rates. Investigations identified over 70 such AI-driven channels echoing Elsagate themes, with examples like "Go Cat" garnering nearly 25,000 subscribers and 7 million views through rapid deployment of surreal, violent content disguised as child-friendly. This shift has amplified the challenge of detection, as AI facilitates endless variations without proportional increases in overhead.

Impacts and Effects

Exposure Patterns Among Children

Elsagate videos predominantly target toddlers aged 1-5 years, with content designed to exploit the viewing habits of this demographic through familiar characters and simple animations. Exposure occurs primarily via unsupervised device access by young children, who often initiate sessions with innocent searches for established children's media, such as "Peppa Pig" or similar keywords. These queries trigger YouTube's recommendation algorithm, which surfaces Elsagate-related videos through successive suggestions and autoplay sequences, simulating extended "random walks" in toddler browsing patterns. In algorithmic simulations, benign starting videos lead to inappropriate content with a 3.5% probability within 10 recommendation hops, highlighting the ease of chain exposure from legitimate entry points. Such patterns persist due to the platform's vast scale, where approximately 1.1% of Elsagate-related videos were found to be disturbing amid millions of uploads. Initial documentation of exposure patterns emerged in 2017-2018 reports centered on U.S. and U.K. users, though production and dissemination occur globally, including bot-generated content from regions like Vietnam, contributing to ongoing availability beyond primary English-speaking markets.

Psychological and Developmental Concerns

Parental accounts frequently describe children experiencing nightmares and heightened anxiety following exposure to Elsagate videos, where familiar characters from shows like Frozen or Peppa Pig engage in graphic scenarios involving syringes, surgeries, or dismemberment, creating a dissonance that undermines feelings of security associated with these figures. Such content, often viewed unsupervised in high volumes due to algorithmic recommendations, has been linked anecdotally to sleep disturbances persisting for days, with children verbalizing fears tied directly to video elements like injection simulations or body modifications. Mimicry behaviors represent another reported risk, as young viewers imitate depicted actions without grasping their dangers; for instance, children have been observed attempting to replicate injections on toys, siblings, or themselves using household objects, sometimes resulting in minor injuries or the normalization of invasive pretend play. These imitations stem from the videos' structure, which presents hazardous acts as routine within playful narratives, potentially embedding maladaptive scripts in impressionable minds during critical developmental windows for social learning and impulse control. From a developmental standpoint, repeated unfiltered viewing may foster desensitization to violence and bodily harm, as low-stakes portrayals—lacking narrative consequences or moral framing—diminish emotional reactivity over time, mirroring patterns seen in broader media effects research on habitual exposure to aversive stimuli. This could erode boundaries between fantasy and reality, particularly in preschool-aged children whose cognitive schemas for danger are still forming, leading to distorted perceptions of medical procedures or interpersonal aggression as commonplace. Experts caution that such normalization risks long-term attenuation of empathy or fear responses to genuine threats, though these outcomes remain inferred from case reports rather than large-scale longitudinal tracking.

Empirical Evidence and Skeptical Perspectives

Empirical investigations into the psychological impacts of Elsagate content on children have yielded limited rigorous data, with few longitudinal or experimental studies directly linking exposure to measurable long-term harm. Most available evidence consists of parental anecdotes, journalistic reports, and correlational analyses of content exposure patterns, which fail to isolate causation amid variables such as family dynamics, overall screen time, or heightened parental awareness from news coverage. Skeptical analyses highlight the absence of causal proof for widespread trauma, noting parallels to historical panics over media like video games, where predicted epidemics of violence or desensitization did not materialize despite decades of research and interventions. No epidemiological data indicate a surge in Elsagate-attributable disorders, such as post-traumatic stress or behavioral epidemics, in pediatric populations after the 2017 exposure peaks. Developmental psychology research supports counterarguments that young children, typically by ages 3 to 5, demonstrate capacity to differentiate fantasy from reality, reducing the likelihood of conflating cartoonish depictions with real threats. This distinction, evidenced in tasks involving pretense, testimony evaluation, and possibility judgments, suggests many Elsagate videos—featuring exaggerated, non-literal scenarios—may not induce profound confusion or fear in cognitively mature preschoolers. From an evolutionary standpoint, controlled exposure to mild aversive stimuli, akin to thrilling play or fictional scares, may foster anti-phobic adaptation by simulating environmental challenges, enhancing emotional regulation without necessitating avoidance. Empirical observations of children's risky play behaviors align with this, showing adaptive benefits in fear inoculation rather than inevitable trauma, though direct application to screen media remains underexplored.

Responses and Interventions

YouTube's Policy Changes and Enforcement

In November 2017, YouTube updated its child safety policies to address content featuring disturbing themes targeted at children, such as simulated violence or adult scenarios disguised in kid-friendly animations, by expanding guidelines to demonetize, age-restrict, or remove videos that exploited family entertainment characters without violating explicit prohibitions. This followed public scrutiny of Elsagate videos, prompting the platform to terminate over 50 channels and remove thousands of related videos in late 2017, alongside hiring thousands of additional human moderators to review flagged material. AI systems were deployed to detect gore, violence, and other inappropriate elements in purportedly child-directed content, though initial reliance on automated filters revealed limitations, leading to a hybrid approach incorporating more human oversight by early 2018. By February 2019, amid renewed criticism over predatory comments and exploitative videos, YouTube escalated enforcement by terminating more than 400 channels and disabling comments on millions of videos featuring minors to curb child endangerment risks. These measures coincided with broader algorithm tweaks to deprioritize borderline harmful content in recommendations. In September 2019, following a $170 million settlement over COPPA violations, YouTube mandated "made for kids" labeling for applicable content, effective January 6, 2020, which disabled personalized ads, comments, and notifications on such videos to limit exposure to inappropriate material. Post-2020, YouTube expanded Content ID-like tools and AI-human hybrid reviews for proactive detection of violating kids' content, contributing to annual removals exceeding tens of millions of videos under child safety policies. Enforcement data indicate substantial reductions, with algorithm changes in 2019 limiting views of flagged harmful videos across platforms and transparency reports documenting over 99 million child safety-related comment removals in a single period, alongside billions of comments actioned cumulatively. However, efficacy remains partial, as Elsagate-style content persists in evolved forms, including AI-generated variants evading filters, suggesting policies curbed overt abuses but failed to eradicate underlying incentives or detection gaps.

Creator and Channel Consequences

In November 2017, YouTube terminated the Toy Freaks channel, operated by Gregory Chism, which featured videos of Chism's young daughters in simulated distressing situations such as fainting, seizures, and medical emergencies, and had amassed over 8.5 million subscribers and billions of views prior to shutdown. The termination resulted from violations of updated child endangerment policies, leading to immediate loss of monetization and ad income for the creator, who had relied on the channel's popularity for earnings. Similar repercussions affected other Elsagate-associated channels producing content with themes of injection, restraint, or distress involving child-like figures, including permanent strikes that escalated to full bans and demonetization, severing access to YouTube's Partner Program. Creators faced substantial financial hits, as high-view Elsagate videos had generated significant ad revenue through algorithmic promotion before enforcement, with some channels losing millions in potential earnings upon removal. While many banned creators ceased operations on YouTube, others adapted by shifting to less provocative children's content or migrating to alternative platforms with laxer moderation, though such pivots often yielded lower viewership and revenue compared to pre-ban peaks. Criminal prosecutions remained rare, limited to isolated extreme instances potentially involving inducement of actual harm beyond simulated scenarios, with no widespread legal actions reported against Elsagate producers for content creation alone.

Parental and Educational Strategies

Parents employ YouTube's Restricted Mode as a built-in filter to screen out potentially mature videos by analyzing metadata such as titles, descriptions, and age restrictions, though it relies on algorithmic signals and is not infallible in blocking all inappropriate content. Supervised accounts via Google Family Link enable setting content levels and time limits on devices, allowing monitored access to YouTube without necessitating outright bans. Co-viewing sessions provide direct parental oversight, facilitating immediate intervention and discussion of observed content to reinforce discernment. Third-party monitoring applications, such as those offering activity reports and app-specific restrictions, support granular control over usage while preserving some child autonomy, emphasizing proactive rather than reactive measures. Establishing device-level boundaries, including autoplay disablement and playlist curation of pre-approved videos, further mitigates algorithmic drift toward low-quality recommendations. Educational approaches center on fostering critical thinking from an early age, instructing children to evaluate online content through criteria like reliability, factual accuracy, and source credibility, which cultivates independent judgment over passive consumption. Empirical guidelines from the American Academy of Pediatrics (AAP) advocate balanced screen exposure, limiting non-educational use to approximately one hour daily for children aged 2-5 years with parental co-engagement to promote comprehension and quality selection, while avoiding absolute prohibitions that may hinder adaptive digital navigation skills. For older children, consistent limits aligned with sleep and activity needs—such as 9-12 hours of sleep and more than one hour of daily exercise—prioritize holistic development without over-reliance on screens.
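For illustration only, the sketch below mimics the metadata-level screening and channel-curation ideas described above with a parent-maintained blocklist and allowlist. YouTube's actual Restricted Mode classifier is proprietary and far more sophisticated; every term and channel name here is hypothetical.

```python
# A deliberately naive sketch of metadata-level screening: block a video
# when its title or description matches a parent-maintained blocklist,
# and optionally require an allowlisted channel, mirroring the
# channel-curation feature noted above. All terms/channels are assumed.
BLOCKLIST = {"injection", "syringe", "scary", "prank"}       # assumed terms
ALLOWED_CHANNELS = {"Official Kids Channel"}                  # parent-approved

def permit(title: str, description: str, channel: str,
           allowlist_only: bool = True) -> bool:
    text = f"{title} {description}".lower()
    if any(term in text for term in BLOCKLIST):
        return False
    if allowlist_only and channel not in ALLOWED_CHANNELS:
        return False
    return True

print(permit("Learn Colors with Blocks", "Counting fun", "Official Kids Channel"))  # True
print(permit("Elsa SYRINGE Prank", "So funny", "Random Uploads"))                   # False
```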

Controversies and Debates

Platform Liability vs. User Responsibility

Platforms enjoy legal protections under Section 230 of the Communications Decency Act of 1996, which immunizes them from civil liability for third-party content, a safeguard designed to foster the growth of online services without imposing distributor liability for unforeseeable harms. This immunity has been pivotal for platforms like YouTube, where Elsagate-style videos emerged from creators exploiting algorithmic recommendations rather than direct platform curation, underscoring that liability for vast, decentralized uploads would require preemptive review of billions of items—an infeasible task given the scale of daily uploads exceeding 500 hours of video per minute as of 2019. Critics advocating stricter platform accountability argue that such protections enable negligence, yet empirical analyses indicate that imposing liability distorts incentives, often leading to over-censorship rather than targeted safety improvements, as platforms err toward removal to avoid risk. In contrast, emphasizing user and parental responsibility aligns with the causal mechanisms of dissemination, where primary gatekeepers—parents—hold direct control over device access and viewing habits, rendering top-down mandates less effective than localized vigilance. Studies on online child safety highlight that voluntary parental mediation, such as device restrictions and monitoring tools available since the early 2010s, correlates with reduced exposure when utilized, whereas regulatory mandates like age verification have yielded mixed results, often circumvented or underenforced due to costs and enforcement gaps. For instance, data from child-safety initiatives show that empowered parental engagement—through education and filtering tools—outperforms blanket platform obligations, as evidenced by lower harm rates in households employing active mediation over passive reliance on algorithms. Market signals further decentralize accountability: reports, dislikes, and advertiser boycotts have historically prompted voluntary platform adjustments without eroding the open ecosystem, demonstrating that consumer-driven feedback loops address problematic content more nimbly than litigation or statutes. This paradigm shift toward distributed responsibility mitigates moral hazard, where offloading oversight to platforms incentivizes creators to game systems while absolving families; real-world declines in similar exploitative content post-2017 owe more to heightened parental vigilance and selective viewing than to regulatory threats, affirming that distributed accountability—via vigilance, tools, and economic pressures—yields sustainable outcomes over centralized edicts prone to unintended overreach.

Censorship and Free Market Critiques

Critics of YouTube's post-Elsagate moderation escalation argue that measures like hiring 10,000 additional human reviewers and revising contributor guidelines to prohibit inappropriate uses of children's characters foster an environment of over-caution, potentially chilling expression among creators producing legitimate educational or entertainment content for youth. These interventions, while targeting exploitative videos, mirror broader patterns in platform enforcement where algorithmic and manual processes generate false positives, erroneously flagging or restricting innocuous material such as context-dependent educational discussions or stylized visuals in kids' videos. In child-focused content specifically, expanded policies—such as the 2019 blanket suspension of comments on videos featuring minors to combat predation—have led to unintended collateral effects, including suppressed engagement on valid channels and heightened self-censorship to avoid algorithmic errors. Such outcomes exemplify critiques of big tech's centralized moderation, where uniform top-down controls prioritize risk aversion over nuanced creator expression, echoing documented inefficiencies in scaling moderation without proportional accuracy losses. Free market advocates counter that competitive dynamics among platforms, coupled with consumer-driven selection via reputation and parental tools, provide superior self-regulation mechanisms compared to dominant firms' internal bureaucracies. This perspective holds that diversified moderation ecosystems—where users migrate to alternatives offering lighter-touch policies—better align content ecosystems with varied preferences, mitigating monopoly-induced overreach and spurring adaptive innovations absent in paternalistic monopolies.

Media Amplification and Moral Panic Claims

Media coverage of Elsagate intensified in late 2017, beginning with James Bridle's essay "Something is Wrong on the Internet," published on Medium in November 2017, which highlighted disturbing videos featuring characters like Elsa from Frozen and Peppa Pig in inappropriate scenarios, garnering widespread attention alongside reports from outlets including The New York Times on November 4 and a wave of follow-up coverage later that month. This amplification, characterized by vivid descriptions of content involving simulated injections, surgeries, and violence, contributed to public outrage and advertiser pullbacks, pressuring YouTube to remove over 150,000 videos and terminate 270 channels by late 2017. Critics, however, contend that such reporting exaggerated the phenomenon's scale and intent, portraying algorithmic recommendations as predatory while overlooking that many videos were low-effort parodies or algorithm-exploiting content rather than coordinated malice, with prevalence inflated by viral sharing absent rigorous quantification of affected children. Skeptical analyses frame Elsagate as a moral panic, akin to 1980s fears over Dungeons & Dragons and heavy metal music, where media-fueled alarms of occult corruption or behavioral harm prompted hearings and restrictions but lacked empirical validation through longitudinal studies demonstrating causal links to real-world damage. In Elsagate's case, allegations of pedophilic grooming or ritual abuse, amplified on platforms like Reddit's r/ElsaGate subreddit (which grew to over 27,000 members by late 2017), often veered into unsubstantiated conspiracies involving mind control or color-coded signaling, with little evidence beyond anecdotal parental reports. Observers note that while some videos evaded filters, the panic overlooked children's innate curiosity for taboo topics, as evidenced by persistent viewership of edgy content predating algorithmic pushes, and failed to produce data on measurable developmental harm beyond isolated cases, such as distress in neurodiverse children. From conservative and parental-responsibility perspectives, media emphasis on YouTube's algorithms as the primary vector diverts scrutiny from underlying familial factors, including diminished supervision amid rising single-parent households and reliance on screens as digital babysitting, which enabled unchecked access—issues unaddressed in mainstream coverage favoring platform accountability over examination of why millions of hours of such content accrued billions of views without universal intervention. This critique posits that the panic reflects symptoms of broader societal breakdowns in child-rearing structures, where empirical risks from such content stem more from home environments than isolated video producers, as YouTube's post-2017 purges and "made for kids" labeling reduced prevalence without resolving parental abdication of oversight.

References

  1. [1]
    Combating the Elsagate phenomenon: Deep learning architectures ...
    Apr 18, 2019 · Elsagate is a phenomenon that depicts childhood characters in disturbing circumstances (e.g., gore, toilet humor, drinking urine, stealing).
  2. [2]
    [PDF] Combating the Elsagate Phenomenon: Deep Learning Architectures ...
    Elsagate is a phenomenon that depicts childhood characters in disturbing circumstances (e.g., gore, toilet humor, drinking urine, stealing).
  3. [3]
    [PDF] Characterizing and Detecting Inappropriate Videos Targeting Young ...
    (Ishikawa, Bollis, and Avila 2019) studied the Elsagate phenomenon and they propose a deep learning model for detecting Elsagate content on YouTube trained on a ...
  4. [4]
    [PDF] Disturbed YouTube for Kids: Characterizing and Detecting ...
From our analysis on different subsets of the collected videos, we find that 1.1% of the 233,337 Elsagate-related, and 0.5% of the 154,957 other children- ...
  5. [5]
    [PDF] EXAMINING THE “ELSAGATE” PHENOMENON
    The analysis reveals that across the reportage on the Elsagate phenomenon, anxieties constellate around the opaque motivations and intentions behind the ...
  6. [6]
    (PDF) What are kids watching at youtube? Elsagate detection ...
    PDF | Despite YouTube's efforts to block violent and pornographic content from its platform, it is not prepared to deal with the Elsagate phenomenon. As..
  7. [7]
    [PDF] The Elsagate Corpus - ACL Anthology
    While most research targeting YouTube focuses on either sentiment analysis or hate speech detection, since the rise of the Elsagate phenomenon in 2016, there ...
  8. [8]
    [PDF] The technological downside of algorithms: an 'ElsaGate' case study
Aug 3, 2020 · Another phenomenon that YouTube must consider, are the current trends, often referred to as 'hypes' or 'viral videos'. Examples of hypes on ...
  9. [9]
    [PDF] EXAMINING THE “ELSAGATE” PHENOMENON
    Contemporary children are turning to online video streaming as an “alternative for TV” (Ha. 2018, 1) in increasing numbers (see Australian Communications ...
  10. [10]
    [PDF] Disturbed YouTube for Kids: Characterizing and Detecting ...
[21] studied the Elsagate phenomenon and they propose a deep learning model for detecting Elsagate content on YouTube trained on a unannotated dataset ...
  11. [11]
    [PDF] Soustas, P., & Edwards, M. (2024). The Elsagate corpus
While most research targeting YouTube focuses on either sentiment analysis or hate speech detection, since the rise of the Elsagate phenomenon in 2016 ...
  12. [12]
    The disturbing YouTube videos that are tricking children - BBC News
Mar 27, 2017 · Thousands of videos on YouTube look like versions of popular cartoons but contain disturbing and inappropriate content not suitable for children.
  13. [13]
    YouTube's "Elsagate" Illuminates The Unintended Horrors Of The ...
Nov 28, 2017 · If you're a parent to young children, and also an owner of an iPad, then I'd wager that your kids not only recognize the YouTube logo, ...
  14. [14]
    ElsaGate: The Problem With Algorithms
    Elsagate videos depict popular animation characters like Peppa Pig, Mickey Mouse, Elsa from Frozen, and many others engaging in bizarre and obscene acts...
  15. [15]
    The Elsagate situation-What is it and how it escalates today?
Sep 6, 2021 · Elsagate is a phenomenon on Youtube of reoccurring themes, animations and videos of inappropriate topics, available and targeted at children.
  16. [16]
    YouTube: why kids become glued to inane amateur videos
May 10, 2020 · I experienced my first full-on moral panic as a parent thanks to a hypnotically inane toy channel on YouTube. My son, Teddy, was coming up ...
  17. [17]
    YouTube Nearly Pulled Its Most Profitable Ads Over Freaky Elsagate ...
    Sep 6, 2022 · According to the new book "Like, Comment, Subscribe," a senior executive proposed a drastic solution to the eerie kids content raking in ...
  18. [18]
    China to clean up #Elsagate videos disguised as cartoons Chinese ...
Jan 22, 2018 · ... English and children videos to attract more views. A Guangzhou-based Chinese company on Monday apologized for making some of the videos. The ...
  19. [19]
    How YouTube's Obsession with Coupling Elsa and Spider-Man ...
Jun 28, 2017 · The Elsa and Spider-Man trend seemed to start in 2014 with the video "Frozen Elsa Dates Spiderman!" which was uploaded onto DisneyCarToys. The ...
  20. [20]
    What's the deal with these random videos with people in superhero ...
    Dec 2, 2016 · The real unifying theme is that they all star very popular characters in children's media, especially Disney characters and superheroes.
  21. [21]
    The Ballad Of Elsa And Spiderman. Behind the YouTube pranksters…
    Feb 23, 2017 · A man in Vietnam was recently fined for uploading sexually suggestive videos of Elsa and Spiderman onto YouTube. Despite their obviously ...
  23. [23]
    YouTube to clamp down on disturbing kids' videos such as dark ...
Nov 10, 2017 · Site announces measures to flag, review and restrict content that is inappropriate for children but doesn't breach wider guidelines.
  24. [24]
    YouTube Has Deleted Hundreds Of Thousands Of Disturbing Kids ...
Nov 28, 2017 · The company said it deleted 150,000 videos after the discovery of exploitative content depicting children in revealing clothing, distress, ...
  25. [25]
    An update on our efforts to protect minors and families - YouTube Blog
Jun 3, 2019 · See how YouTube is working to keep you safe. Get the latest update on their efforts to combat harmful content and protect users.
  26. [26]
    YouTube Kids, Criticized for Content, Introduces New Parental ...
    Apr 25, 2018 · Parents will be able to handpick the channels and topics their children can view on the app, which has been criticized for allowing ...
  27. [27]
    Dozens of YouTube Channels Are Showing AI-Generated ... - WIRED
May 2, 2025 · A WIRED investigation found that dozens of YouTube channels are using generative AI to depict cartoon cats and minions being beaten, ...
  28. [28]
    Will gen AI make YouTube's next Elsagate? - Tubefilter
May 2, 2025 · Elsagate. Sparked by a single 2016 article from The Guardian about a trend of disturbing animated videos aimed at kids, Elsagate snowballed into ...
  29. [29]
    ElsaGate is BACK - (In 2025) - YouTube
May 6, 2025 · "ElsaGate" (a primarily 2016 and 2017 shock video phenomenon) was a pivotal moment in YouTube history. When the YouTube Kids app became ...
  30. [30]
    Elsagate: The disturbing YouTube trend that might be terrifying your ...
    Nov 23, 2017 · Dubbed Elsagate, the world's biggest video-sharing site has been flooded with videos featuring child favourites like Spider-Man, Peppa Pig and Elsa from Disney ...
  31. [31]
    What makes YouTube's surreal kids' videos so creepy? - The Verge
    Nov 21, 2017 · ” A member of Reddit's “Elsagate” forum expressed uneasy bafflement ... But alongside predictable choices like Spider-Man and Elsa, the ...
  32. [32]
    'Disturbing' children's YouTube genres and the algorithmic uncanny
    Oct 10, 2021 · Since late 2017, journalists, advocacy groups, and policy-makers have expressed serious concerns about popular genres of video content on ...
  33. [33]
    How Peppa Pig became a video nightmare for children - The Guardian
Jun 17, 2018 · James Bridle's essay on disturbing YouTube content aimed at children went viral last year. Has the problem gone away – or is it getting worse?
  34. [34]
    YouTube steps up enforcement of content aimed at children - CBC
Nov 23, 2017 · A forum on the Reddit internet platform dubbed ElsaGate, based on ...
  35. [35]
    Unboxing, bad baby and evil Santa: how YouTube got swamped ...
Sep 13, 2022 · The long read: When children first started flocking to YouTube, some seriously strange stuff started to appear – and after much outcry, ...
  36. [36]
    Behind the algorithm: YouTube's Recommendation System - Paul Kim
    Feb 28, 2019 · ... Elsagate was amplified through what was commonly referred to as YouTube's 'algorithm'. Many of these video recommendations were made in ...
  37. [37]
    Everyone knows what YouTube is — few know how it really works
    Sep 13, 2022 · Elsagate was the popular name. In late 2017, there were ... The shift they made in the recommendation algorithm to reward watch time ...
  38. [38]
    YouTube Kids. A Popular and Profitable Niche - Popsters
    Jun 2, 2025 · Millions of children worldwide actively watch children's content, including games, animations, educational videos, and more on YouTube. High ...
  39. [39]
    What is the click per rate for a YouTube kids channel? What ... - Quora
Sep 11, 2021 · Switching to a kids channel brought the RPM down to $0.30–$0.50. You must decide, will your kids channel get millions of views per video?
  40. [40]
    How Much Can Children Make on YouTube Channel for Kids
    May 16, 2025 · For content classified as “Made for Kids,” the revenue generated per thousand views (RPM) is significantly lower than for general audience ...
  41. [41]
    Content Moderation Case Study: YouTube Deals With Disturbing ...
    Aug 25, 2021 · YouTube videos for kids are filled with low-effort, low-cost content – videos that use familiar songs, bright colors, and pop culture fixtures to attract and ...
  42. [42]
    10 reasons why your YouTube Kids channel can be demonetized
Mar 18, 2025 · 9. Mislabeling Content Under COPPA ... We've seen it happen: a creator mislabels content as "Not Made for Kids" to access higher RPMs, only for ...
  43. [43]
    Photographer steps inside Vietnam's shadowy 'click farms' - CNN
    not the country's sprawling plantations or rice terraces but its “click farms.
  44. [44]
    Elsagate: The Disturbing Phenomenon Targeting Children on ...
    Elsagate refers to a wave of controversial YouTube videos that use popular characters from children's shows, such as Elsa from Disney's “Frozen,” to engage ...
  45. [45]
    Kids Are Being Tricked By Graphic, Disturbing Videos On YouTube
    Nov 26, 2017 · “Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatize, and abuse children ...
  46. [46]
    EXAMINING THE “ELSAGATE” PHENOMENON
    Oct 31, 2019 · The paper examines the journalistic commentary that constitutes the "Elsagate" phenomenon - the neologism used to describe public ...
  47. [47]
    Barbarians at the ElsaGate: Six Year Autopsy of a Moral Panic
    ElsaGate, in its truest form, refers to an exploitative ecosystem of weird, shoddy, creepy videos designed to milk views out of children. But we also have to be ...
  48. [48]
    50 Years of Video Games & Moral Panics
    Jul 18, 2019 · Moral panics about video games include concerns about violence, leading to political activism, congressional hearings, and calls for censorship ...
  49. [49]
    Revisiting the Fantasy-Reality Distinction: Children as Naïve Skeptics
    Mar 15, 2013 · This paper reviews research on children's reality status judgments, testimony use, understanding of possibility, and religious cognition.
  50. [50]
    [PDF] Do monsters dream? Young children's understanding of the fantasy ...
    Young children are often thought to confuse fantasy and reality. This study took a second look at preschoolers' fantasy/reality differentiation.
  51. [51]
    (PDF) Children's Risky Play from an Evolutionary Perspective
    This theoretical article views children's risky play from an evolutionary perspective, addressing specific evolutionary functions and especially the anti- ...
  52. [52]
    Children's Risky Play from an Evolutionary Perspective
    This theoretical article views children's risky play from an evolutionary perspective, addressing specific evolutionary functions and especially the anti- ...
  53. [53]
    YouTube is cracking down on videos and comments that exploit ...
YouTube is taking steps to prevent disturbing videos from reaching children, after a wave of media reports showed how the platform was failing to keep ...
  54. [54]
    YouTube announces changes to Kids platform after Elsa Gate
Apr 25, 2018 · YouTube has now scaled back its use of machines to filter out drugs, violence, and sex from the kids platform, and added a new feature where ...
  55. [55]
    YouTube terminates more than 400 channels following controversy
    Feb 21, 2019 · YouTube has responded to a new controversy regarding child exploitation videos and predatory comments by deleting more than 400 channels and ...
  56. [56]
    Google and YouTube Will Pay Record $170 Million for Alleged ...
Sep 4, 2019 · The settlement requires Google and YouTube to pay $136 million to the FTC and $34 million to New York for allegedly violating the Children's ...
  57. [57]
    YouTube Community Guidelines enforcement
We have Community Guidelines that set the rules of the road for what we don't allow on YouTube. For example, we do not allow pornography, incitement to ...
  58. [58]
    YouTube: Algorithm change stopped harmful videos from spreading
    Oct 28, 2021 · In 2019, YouTube altered its recommendation algorithm to stop promoting videos the company deemed harmful, which limited how much they were viewed even on ...
  59. [59]
    YouTube Terminates Toy Freaks Channel Amid Broader ... - Variety
Nov 17, 2017 · YouTube has shut down Toy Freaks, a channel that featured videos of a single dad and his two daughters in odd and upsetting situations.
  60. [60]
    YouTube Terminates Controversial Kids Channel With Over 8.5 ...
Nov 17, 2017 · It has terminated several channels aimed at kids, including the controversial and heavily-trafficked Toy Freaks.
  61. [61]
    YouTube terminates exploitive 'kids' channel ToyFreaks, says it's ...
Nov 17, 2017 · Following consumer outrage over YouTube's handling of disturbing videos aimed at children on its network, the company has now banned one of ...
  63. [63]
    YouTube Restricted Mode: Parental controls to protect your kids
Jul 30, 2024 · Specifying your child's age is what really determines what your child sees. It is not 100% effective in the regular YouTube app, but it does seem ...
  64. [64]
    Complete Guide to YouTube Parental Controls - Protect Young Eyes
    May 1, 2024 · Recommendation: create a Supervised Account for the child, then download the Family Link app and select the content level you want for YouTube.
  65. [65]
    Screen time and children: How to guide your child - Mayo Clinic
    For children ages 2 to 5, limit screen time to one hour a day of high-quality programming. As your child grows, a one-size-fits-all approach doesn't work as ...
  66. [66]
    How to Keep Kids Safe on YouTube in 2024: Complete Guide
    Apr 11, 2024 · The best way to protect kids on YouTube is to use a premium third-party parental app that can monitor and restrict children's access to YouTube ...
  67. [67]
    How to Limit Access to Negative Content on YouTube: Tips for Parents
    Aug 12, 2025 · By leveraging tools like YouTube Kids, enabling Restricted Mode, disabling autoplay, and using specialized parental control solutions such as ...
  68. [68]
    How to teach students critical thinking skills to combat ...
    Sep 1, 2024 · Efforts to improve digital literacy among youth will help protect the next generation from the spread of false information online and guide ...
  70. [70]
    Where We Stand: Screen Time - HealthyChildren.org
    Dec 13, 2023 · The American Academy of Pediatrics (AAP) recommends minimizing or eliminating media exposure, other than video chatting, for children under the age of 18 ...
  72. [72]
    Screen Time Guidelines for Kids, at Every Age: CHLA Experts Weigh ...
    Jul 11, 2024 · Experts recommend establishing clear boundaries around screen time that prioritize adequate sleep (9-12 hours) and physical activity (more than one hour).
  73. [73]
    Interpreting the ambiguities of Section 230 - Brookings Institution
Oct 26, 2023 · Section 230 arose because of idiosyncrasies in how courts applied the common law of distributor liability to defamation claims against online ...
  74. [74]
    DEPARTMENT OF JUSTICE'S REVIEW OF SECTION 230 OF THE ...
The US Department of Justice analyzed Section 230 of the Communications Decency Act of 1996, which provides immunity to online platforms from civil liability.
  75. [75]
    The Future of Section 230 | What Does It Mean For Consumers?
Jul 21, 2023 · In short, by imposing liability on platforms for hosting certain content, the law, in effect, has led to widespread censorship online and other ...
  76. [76]
    Who Should Protect Children Online: Parents or the Government?
    Dec 10, 2024 · Parents, not governments, should lead the charge in protecting children online. Empowering families with the tools, resources, and knowledge to navigate ...
  77. [77]
    How to Address Children's Online Safety in the United States | ITIF
Jun 3, 2024 · Protecting children from online harms requires a careful balance between ensuring safety and safeguarding free speech, user privacy, and parents' rights.
  78. [78]
    Age Verification Laws vs. Parental Controls: Why the Legislatures ...
    Feb 5, 2025 · A balanced, multifaceted approach to protecting children online is more effective than relying on age-verification laws.
  79. [79]
    Balance, Not Mandates, Needed To Keep Kids Safe Online: Report
    Jun 4, 2024 · An effective approach to children's online safety needs to strike a balance between protecting kids' user privacy and free speech, ...
  80. [80]
    How Does YouTube Moderate Content? - NeoWork
    Feb 25, 2025 · 1. Establishing Clear Community Guidelines · 2. Leveraging Automated Moderation Systems · 3. Human Review and Decision-Making · 4. User Reporting ...
  81. [81]
    (PDF) Labeling in the Dark: Exploring Content Creators' and ...
    May 28, 2025 · ... false positives and ... In particular, recent studies have investigated the effectiveness of content moderation systems for children's ...
  82. [82]
    YouTube bans comments on all videos of children - Hacker News
    Feb 28, 2019 · Youtube catches a ridiculous amount of comments as false positives in their spam filter. I can't imagine this new category is going to be ...
  83. [83]
    Why Big Tech Can't Solve The Content Moderation Problem - Forbes
Aug 28, 2024 · The White House's position reflects a paternalistic approach to content distribution, one that a different administration could easily exploit ...
  84. [84]
    Competition and Content Moderation | Cato Institute
Jan 31, 2022 · Although the law applies to millions of websites of all sizes, critics often misconstrue it as a special exemption for "big tech" companies, ...
  86. [86]
    On YouTube Kids, Startling Videos Slip Past Filters
Nov 4, 2017 · The app has more than 11 million weekly viewers. But some disturbing knockoff videos have reached children, upsetting parents.
  87. [87]
    Elsagate - RationalWiki
    Elsagate is a phenomenon on YouTube involving supposed children's videos which actually contain content inappropriate for children.
  88. [88]
    Inside Elsagate, the conspiracy-fueled war on creepy YouTube videos
    most notably Elsa from Frozen, but also Spider-Man ...