
Comments section

A comments section is an interactive digital feature integrated into online platforms, such as news websites, blogs, and video-sharing sites, where users submit and view textual responses to primary content like articles or videos, enabling real-time public debate and user-generated feedback. These sections emerged prominently with the rise of web interactivity in the early 2000s, transforming passive consumption of media into participatory environments that amplify diverse viewpoints and collective sense-making. While they have democratized access to debate—allowing non-experts to challenge narratives and foster dialogue—comments sections often devolve into arenas of hostility, with empirical studies documenting high rates of incivility, including insults, harassment, and polarized rhetoric that undermine civil exchange. Moderation challenges persist, as algorithmic and human interventions struggle to balance free expression against harms like hate speech, with research indicating that unmoderated sections correlate with disinhibition effects rooted in anonymity and low accountability, exacerbating echo chambers and outrage over reasoned argument. Despite closures by some outlets citing irredeemable toxicity, evidence suggests that well-managed comments can enhance engagement and counter elite-driven narratives, though systemic biases in platform algorithms and institutional oversight frequently skew toward suppressing dissenting content under vague "misinformation" labels.

History

Origins in Pre-Web Internet Communities

The earliest precursors to modern comments sections emerged in bulletin board systems (BBS), which facilitated asynchronous user interactions through message posting and replies. The first BBS, known as CBBS (Computerized Bulletin Board System), was developed by Ward Christensen and Randy Suess and went online on February 16, 1978, during a Chicago blizzard that inspired its creation as a means to exchange computer hobbyist updates. Users connected via dial-up modems at speeds like 300 baud, posting short messages in designated areas that others could read and respond to, effectively creating threaded or sequential discussions without real-time chat. By the early 1980s, thousands of BBSes operated worldwide, often run by individual sysops (system operators) on personal computers, with message forums dedicated to topics such as software, announcements, or local events, where replies built upon original posts much like comments under articles today. Usenet, launched in 1979 by Duke University graduate students Tom Truscott and Jim Ellis, extended these concepts into a distributed network of newsgroups using the Unix-to-Unix Copy Protocol (UUCP) for message propagation across academic and research computers. The system organized discussions into hierarchical newsgroups, starting with the inaugural net.general group, where users posted articles—initial messages on a topic—and others replied directly, generating threaded conversations that linked responses to parent posts for context. This structure supported moderated and unmoderated groups, with over 100 newsgroups by the mid-1980s, enabling broader, cross-site participation in debates on technology, politics, and culture, distinct from the BBS's localized dial-up access. FidoNet, introduced in 1984 by Tom Jennings, further networked independent BBSes via periodic "echomail" exchanges, allowing messages and replies to propagate between systems without constant connectivity, thus scaling discussion threads across geographically dispersed communities.
These pre-web systems prioritized text-based, permissionless replies to foster community feedback, laying the groundwork for the reply mechanisms central to later comments sections, though limited by constraints like single-line modems and storage capacities that capped participation to hundreds or thousands of users per system.

Emergence in Early Web Platforms

Comments sections first appeared on web platforms in the late 1990s, transitioning discussion from pre-web bulletin boards to browser-accessible features embedded under static or semi-dynamic pages. Slashdot, founded in September 1997 by Rob Malda, integrated user comments as a core element from its inception, allowing registered users to debate technology news stories through a moderation system that scored contributions for insightfulness and relevance. This setup, powered by early Perl scripts, enabled threaded replies and community-driven filtering, attracting a niche audience of programmers and enthusiasts who generated thousands of responses per article. In October 1998, Open Diary launched as an online diary service and promptly added reader commenting capabilities, permitting public responses to personal entries and pioneering feedback in proto-blogging environments. Unlike Slashdot's news-focused model, Open Diary emphasized interpersonal exchange, with comments fostering connections among diarists and readers through simple form submissions stored in basic databases. These early implementations often utilized the Common Gateway Interface (CGI), established in 1993, where web servers executed scripts—typically in Perl—to handle form data, validate inputs, and dynamically generate comment displays without requiring advanced content management systems. Traditional media outlets soon followed, with some newspapers enabling comments in 1998 to solicit digital equivalents of letters to the editor, expanding audience participation beyond print submissions. This proliferation reflected growing web adoption and server capabilities, though initial systems lacked robust moderation, leading to unfiltered exchanges that mirrored Usenet's intensity while introducing anonymity via pseudonyms. By 1999, commenting spread to emerging blogging tools like Blogger, solidifying its role in interactive web publishing.

Expansion During Web 2.0 and Peak Usage

The advent of Web 2.0, characterized by user-generated content and interactive platforms following Tim O'Reilly's 2004 Web 2.0 Conference, significantly propelled the integration of comments sections across websites, transforming passive reading into participatory experiences. Blogging platforms exemplified this shift, with tools like Blogger (launched 1999 and enhanced after its 2003 acquisition by Google) and WordPress (debuted 2003) embedding comment functionality as standard, enabling readers to append threaded discussions directly to posts. By the mid-2000s, blogging surged, with thousands of new blogs created daily, fostering vibrant comment ecosystems that amplified discourse on politics, culture, and personal narratives. News websites saw rapid adoption during this period, driven by the desire to replicate letters-to-the-editor traditions in digital form. In 2007, only 26% of newspaper sites featured comments on stories, but this jumped to 56% by 2008, reflecting a more than doubling in implementation amid broader uptake across news sites from 24% to 58%. Third-party services like Disqus, launched on October 30, 2007, further accelerated expansion by offering embeddable, spam-resistant systems with user accounts and moderation tools, appealing to site owners lacking in-house capabilities. Video-sharing sites contributed to the boom; YouTube, founded in 2005, incorporated comments from its inception to facilitate viewer feedback on uploads, aligning with Web 2.0's emphasis on communal content curation. Peak usage materialized in the late 2000s, as comments sections became ubiquitous on high-traffic sites, generating substantial engagement volumes. The New York Times, for instance, amassed over 9.6 million comments from October 30, 2007, onward, underscoring the scale of interaction on major outlets. By 2008, approximately 75% of the top 100 circulating U.S. news sites included comments, up from prior years, with platforms leveraging them for audience retention and real-time debate.
This era's proliferation stemmed from accessible web technologies and broadband growth, peaking before moderation challenges prompted reforms, yet it solidified comments as a cornerstone of online interactivity.

Backlash and Attempts at Closure or Reform

The proliferation of comments sections during the Web 2.0 era facilitated widespread user interaction but also amplified incivility, including harassment, misinformation, and personal attacks, prompting significant backlash from publishers and users alike. Studies have linked exposure to negative comments with adverse effects, such as increased anxiety and diminished well-being, due to the prevalence of hostile language in unmoderated spaces. A Pew Research Center survey in 2020 found that 64% of Americans viewed social media's overall impact negatively, citing misinformation and online harassment as primary concerns, effects exacerbated in comment threads where users often disseminated false claims or targeted individuals. Researchers have further documented how comment-driven hostility correlates with real-world harms, including escalated polarization and, in extreme cases, violence, as unchecked narratives spread rapidly without correction. In response, numerous news organizations opted for outright closure of comments sections to mitigate these issues and reduce moderation burdens. A notable wave occurred in 2015, with outlets including Mic, The Verge, and USA Today's FTW disabling reader comments, citing persistent toxicity and low-quality discourse that outweighed benefits. The Daily Dot followed on July 27, 2015, attributing the decision to unmanageable abusive content. NPR terminated comments on NPR.org effective August 23, 2016, after hosting them since 2008, with no plans for revival announced in 2017 due to sustained moderation challenges. More recently, Gannett shut down comments across most of its U.S. newspapers on February 1, 2023, reflecting broader industry fatigue with "garbage" inputs from polarized users. Such closures often led to measurable declines in user engagement, as one 2020 study observed reduced site engagement after publishers switched away from comment systems like Facebook's.
Where full closure was avoided, platforms pursued reforms through enhanced moderation protocols, including mandatory user registration, human oversight, and algorithmic interventions to curb anonymity-fueled abuse. Third-party systems like Disqus introduced granular controls, such as flagging and threading, to foster civil exchanges, though implementation varied by site. By the early 2020s, AI-driven tools gained traction for toxicity detection, analyzing text for hate speech, harassment, and threats at scale, processing millions of comments daily to filter harmful content before publication. These systems employ models trained on vast labeled data sets to identify patterns of abuse, balancing removal rates with false positives, as platforms like X experimented with user-generated "community notes" to contextualize disputed claims without relying solely on top-down moderation. Despite these advances, challenges persist, as AI can overlook nuanced intent or amplify biases in training data, underscoring ongoing tensions between openness and safety.
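As a rough illustration of such a pre-publication filtering pipeline, the sketch below gates each comment on a 0–1 toxicity score before it is published. The thresholds, the triage routes, and the keyword heuristic standing in for a real classifier (such as a Perspective-style API call) are all hypothetical, chosen only to show the shape of the flow:

```python
REVIEW_THRESHOLD = 0.5   # hypothetical cut-off for "likely toxic"
BLOCK_THRESHOLD = 0.9    # hypothetical cut-off for near-certain violations

def score_toxicity(text):
    """Stand-in for a real toxicity classifier.

    A production system would call a trained model; this crude keyword
    heuristic exists purely so the example is runnable.
    """
    insults = {"idiot", "stupid", "moron"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return 0.95 if words & insults else 0.1

def triage(comment):
    """Route a comment before publication: publish, queue, or hold."""
    score = score_toxicity(comment)
    if score >= BLOCK_THRESHOLD:
        return "hold"                      # withheld outright
    if score >= REVIEW_THRESHOLD:
        return "queue_for_human_review"    # held pending a moderator
    return "publish"

print(triage("Thanks, this was informative."))  # publish
print(triage("You absolute idiot."))            # hold
```

The two-threshold design reflects the balancing act described above: a high bar for automatic removal keeps false positives down, while the lower bar routes borderline text to humans instead of deleting it.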

Types and Implementations

Threaded and Hierarchical Formats

Threaded comment formats organize responses to initial posts or comments in a linear sequence with explicit reply linkages, while hierarchical formats display these replies in a nested, tree-like structure using visual cues such as indentation or branching to indicate depth and parent-child relationships. In threaded systems, replies are typically appended below their parent but may appear chronologically flat unless sorted by relevance or recency, whereas hierarchical displays enforce multi-level nesting to mimic conversational branching. Both approaches emerged from early online forums to address the limitations of purely chronological, flat lists, which often buried context in long discussions. Implementation of these formats relies on database schemas where each comment includes a parent identifier, enabling recursive queries to build the tree structure for rendering. Websites render hierarchies through CSS indentation, collapsible sections for deep nests (e.g., limiting visibility to two or three levels before expansion prompts), and sorting options like "best" or "top" to prioritize high-engagement subthreads. For instance, Reddit employs indentation-based threading, where replies shift rightward under parents, supporting unlimited depth but often collapsing beyond five levels to prevent visual overload. Similarly, Slashdot pioneered hierarchical threading in the late 1990s, using it to score and nest user discussions under news articles, fostering focused debates. Adoption of threaded and hierarchical systems gained traction on news platforms around 2012, as sites sought to enhance engagement amid rising comment volumes. One major news site rolled out a nesting system on October 18, 2012, displaying direct responses indented beneath parents to improve readability over flat timelines.
The Guardian introduced single-level threading on October 29, 2012, for its environment section, expanding it site-wide by November 22, which correlated with increased comment counts and thread lengths as replies became more targeted and sustained. Empirical analysis of this rollout showed threading boosted user retention and reply rates by preserving conversational context, reducing off-topic noise. Despite these benefits, hierarchical formats face challenges, particularly in deep threads exceeding three levels, where users struggle to track parentage amid visual clutter or fragmentation across display modes. Developers mitigate this via collapse/expand toggles and paginated views, but critics argue pure hierarchies can disjoint discussions, favoring flat chronological lists for simpler scanning in high-volume scenarios. Platforms like Disqus integrate hierarchical threading as an embeddable widget, allowing sites to toggle between nested and linear views, though nested modes predominate for forums emphasizing debate over announcements. Overall, these formats excel in structuring asynchronous discourse but require careful depth limits to maintain readability.
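The parent-identifier schema described above can be sketched in a few lines of Python. This is a minimal illustration, not any platform's actual implementation: the row layout, the `build_tree`/`render` names, and the three-level collapse limit are all assumptions for the example.

```python
from collections import defaultdict

# Flat rows as stored in a typical comments table: (id, parent_id, text).
# A parent_id of None marks a top-level comment.
ROWS = [
    (1, None, "Great article."),
    (2, 1, "Agreed, especially the history section."),
    (3, None, "I disagree with the premise."),
    (4, 2, "The BBS dates check out."),
]

def build_tree(rows):
    """Group child comment ids under their parent id."""
    children = defaultdict(list)
    for cid, parent, _ in rows:
        children[parent].append(cid)
    return children

def render(rows, max_depth=3):
    """Render the tree with indentation, collapsing below max_depth."""
    text = {cid: body for cid, _, body in rows}
    children = build_tree(rows)
    lines = []

    def walk(cid, depth):
        if depth >= max_depth:
            lines.append("    " * depth + "[collapsed]")
            return
        lines.append("    " * depth + text[cid])
        for reply in children[cid]:
            walk(reply, depth + 1)

    for top in children[None]:   # walk each top-level comment in order
        walk(top, 0)
    return "\n".join(lines)

print(render(ROWS))
```

Real systems do the same grouping with a recursive SQL query or an application-side pass like this one, then hand the tree to the frontend, where the depth limit becomes the "expand replies" prompt.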

Third-Party and Embedded Systems

Third-party comment systems provide website publishers with external platforms to host and manage user comments, typically integrated via embedded widgets or iframes that load dynamically without requiring native backend implementation. These systems offload comment storage, moderation, and processing to the provider's servers, allowing sites to embed functionality with minimal code, such as a single script tag. Popular examples include Disqus, which powers comments on over 1.64% of the top 1 million websites as of September 2021, and the formerly independent Livefyre, acquired by Adobe in 2016. Disqus, founded in 2007 and officially launched on October 30, 2007, exemplifies the model with features like threaded replies, social media integration, real-time notifications, and built-in spam detection using machine learning algorithms. Publishers benefit from centralized moderation tools, including flagging, pre-moderation queues, and analytics on engagement metrics, which reduce server load and simplify anti-abuse efforts compared to in-house systems. However, reliance on third-party hosting introduces dependencies: if the service experiences downtime, comments become inaccessible, and data export can be challenging due to proprietary formats. Embedded systems often overlap with third-party ones, as widgets are rendered via asynchronous scripts that fetch comments from remote servers, enabling cross-platform identity persistence (e.g., users logging in once for multiple sites). Facebook Comments, introduced around 2011, allowed embedding via the Facebook SDK but faced declining adoption due to privacy scandals and was effectively phased out for new integrations by 2019, with remaining plugins deprecated amid GDPR compliance pressures.
Drawbacks include performance degradation from additional HTTP requests and script execution, potentially increasing page load times by 100-500 milliseconds, and privacy risks from user tracking for advertising, prompting alternatives like self-hosted options such as Commento or Cusdis, which prioritize data privacy but require more setup. These systems gained traction during the Web 2.0 era for their ease in fostering cross-site communities, but criticisms persist over reduced publisher control—comments are not owned outright—and potential for platform-specific biases in moderation algorithms, which may flag content unevenly based on undisclosed rules. Privacy-focused variants, like Hyvor Talk, mitigate tracking by avoiding ads and third-party trackers, appealing to users concerned with data handling under regulations like the CCPA. Overall, while third-party and embedded approaches streamline deployment, they trade autonomy for convenience, with adoption varying by site scale: small blogs favor simplicity, while large publishers weigh costs against custom solutions.

Platform-Specific Variations

On Reddit, comments form a tree-like structure with unlimited nesting levels, where each reply attaches to a specific parent comment, enabling complex branching discussions; the system retrieves comments from a flat database and renders them hierarchically on the frontend, with sorting based on net upvotes minus downvotes that dynamically collapses or promotes threads for community-curated visibility. This pseudonymous format supports features like editable comments within time limits and user flair, fostering in-depth, topic-specific exchanges moderated by subreddit volunteers rather than centralized algorithms. YouTube structures comments as threads comprising a top-level comment and its replies, displayed in a semi-nested view sorted by relevance, top, or newest; replies appear indented beneath parents, but until 2025 updates, the platform lacked deep visual threading, limiting conversation flow to flat lists with manual expansion. Recent implementations for Premium users introduced Reddit-inspired threading with connected reply chains and improved readability, including algorithmic sorting of replies within threads to prioritize relevance, though without unlimited depth or voting-based ranking. X (formerly Twitter) eschews traditional comment sections for a reply mechanism where responses to posts create interconnected threads, often linear but branching via quoted or direct replies, constrained by a 280-character limit per post and sortable by relevance, latest, or author likes to surface pertinent discourse amid high-volume interactions. Threads can span multiple connected posts for longer-form replies, emphasizing real-time, concise exchanges over deep nesting, with algorithmic promotion favoring verified or engaged users in default views.
Facebook employs multi-level nested comments under posts, allowing replies to replies with indentation for context, introduced in 2013 to enable direct @mentions and conversation tracking without requiring prefix tags; nesting depth is practically limited to avoid UI clutter, typically 3-5 levels before flattening, and integrates with real-name policies for accountability. Many independent websites, particularly news outlets and blogs, adopt third-party systems like Disqus for threaded, nested comments with features such as cross-site logins, upvote sorting, and built-in spam detection, contrasting with native platform implementations by offloading moderation and data storage to external providers, which can enhance portability but introduce load-time delays and privacy trade-offs compared to integrated social media natives.
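The vote-driven ordering used by Reddit-style systems can be approximated as below. Note that this is a simplification: real platforms rank by confidence-adjusted scores (e.g., Wilson intervals) rather than raw net score, and the collapse threshold here is invented for illustration.

```python
comments = [
    {"text": "Insightful breakdown.", "up": 120, "down": 4},
    {"text": "First!", "up": 2, "down": 30},
    {"text": "Source for that claim?", "up": 45, "down": 3},
]

COLLAPSE_BELOW = -5  # hypothetical net score under which a thread is hidden

def order_and_collapse(items):
    """Sort by net score (upvotes minus downvotes) and mark low-scoring
    threads as collapsed, mimicking community-curated visibility."""
    ranked = sorted(items, key=lambda c: c["up"] - c["down"], reverse=True)
    for c in ranked:
        c["collapsed"] = (c["up"] - c["down"]) < COLLAPSE_BELOW
    return ranked

for c in order_and_collapse(comments):
    marker = "[-]" if c["collapsed"] else "[+]"
    print(marker, c["text"])
```

The effect is the one described above: highly voted threads float to the top while heavily downvoted ones are demoted and hidden by default, leaving readers to expand them manually.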

Functions and Benefits

Enabling Public Discourse and Feedback

Comments sections enable users to contribute directly to ongoing discussions surrounding published content, extending the scope of public discourse beyond the author's initial presentation. By allowing threaded replies and responses, they facilitate the exchange of diverse viewpoints, including challenges to the article's premises or supplementary evidence, thereby simulating a deliberative public sphere where ideas are tested through debate. This structure promotes argumentative engagement on substantive issues, heightening awareness of and participation in civic matters. In terms of feedback, comments provide immediate mechanisms for readers to signal inaccuracies, offer overlooked context, or demand clarifications from content creators, creating iterative loops that refine informational accuracy over time. Journalists and publishers utilize this input to gauge resonance, adjust coverage priorities, and foster loyalty among engaged communities. For instance, active reporter participation in comment threads—such as posing questions or amplifying insightful replies—has been shown to generate an average of 4.48 comments per day on associated posts, compared to lower volumes without such involvement, thereby amplifying constructive dialogue. Empirical surveys reveal widespread utilization for expressive and receptive purposes: approximately 55% of online news users have posted news-related comments, with 56% citing opinion expression as a primary motive, while 77.9% read comments to ascertain others' perspectives, especially on political topics. These interactions underscore comments' role in democratizing access to public discourse, as users perceive heightened interactivity and personal relevance when comment options are visible, motivating broader involvement. Overall, by aggregating unvetted public input, comments sections counteract potential institutional echo chambers, enabling corrections and viewpoint diversity that might otherwise remain suppressed in top-down media ecosystems.
This feedback dynamic not only informs individual users but also signals aggregate sentiment to platforms, influencing algorithmic prominence and editorial evolution.

Community Engagement and Informational Value

Comments sections facilitate community engagement by enabling users to contribute feedback, debate topics, and interact with authors and peers, which correlates with elevated platform metrics such as time on site, shares, and repeat visits. Empirical analysis of over 157,000 news messages on social media platforms revealed that posts eliciting comments generate higher engagement across likes, shares, and reactions compared to those without, as comments signal deeper user investment and stimulate ongoing interaction. This fosters a sense of participation, where users perceive their input as influencing content evolution, thereby strengthening loyalty to the platform or publication. Beyond mere interaction, comments add informational value by contributing supplementary details, insights, and factual corrections that extend the original article's scope. Studies on news-related comments demonstrate that users often share motivations tied to information dissemination, including highlighting overlooked data or challenging inaccuracies, which can refine collective understanding when verified by participants. For example, crowdsourced fact-checking within comment threads recruits diverse users to scrutinize circulating claims, yielding evaluations that rival professional verification in accuracy for straightforward assertions, as evidenced by experiments where participant assessments aligned closely with expert judgments. Such contributions leverage distributed expertise, potentially elevating article utility—particularly in domains like science news, where reader comments have been shown to modulate perceptions of technological risks and benefits through added context. However, the informational benefits hinge on comment quality; constructive threads amplify value by integrating minority perspectives absent from editorial content, promoting a broader evidential base. Research on news discussions indicates that diverse commenter inputs enhance perceived informativeness, countering echo chambers when platforms surface varied opinions, though this requires mechanisms to prioritize substantive over emotive replies.
In practice, community platforms such as Reddit demonstrate this through user-driven flagging and discussion, where community-sourced refinements to posts improve factual accuracy over time via iterative feedback. Overall, these dynamics underscore comments' role in transforming static articles into dynamic knowledge repositories, provided engagement prioritizes evidence-based exchanges.

Economic and Algorithmic Advantages

Comments sections provide economic advantages to online platforms by driving key metrics that enhance monetization opportunities, particularly through advertising and subscriptions. Active commenters, as analyzed by engagement technology provider Viafoura across its client data, generate 5.3 times more page views than non-commenting users, increasing ad exposure and impression-based revenue. These users also demonstrate 45 times greater likelihood of subscribing to premium content, contributing to diversified revenue streams beyond ads. Furthermore, comments, as user-generated content, augment site pages with additional text, improving search engine optimization through increased content volume, keyword diversity, and signals of site authority, which elevate organic traffic rankings. From an algorithmic perspective, comments furnish rich textual data that refines recommendation engines beyond binary metrics like likes or views. Social media algorithms treat comments as strong indicators of resonance and user interest, prioritizing material that elicits such interactions to optimize feed relevance and retention. Research on integrating comments into recommendation models shows they enable sentiment analysis and topic modeling, yielding more accurate suggestions, for example by associating comment themes with user queries in news forums to surface contextually aligned articles. This enhances predictive capabilities, as comments reveal nuanced preferences, including affective tones that correlate with deeper engagement patterns. These algorithmic gains compound economic benefits by extending session durations and repeat visits, creating feedback loops where improved recommendations sustain high-value user cohorts. Platforms like YouTube and Facebook incorporate comment signals to boost algorithmic performance, indirectly amplifying ad revenue through prolonged platform stickiness.
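As a toy illustration of mining comment text for recommendations, the sketch below ranks candidate articles by vocabulary overlap (Jaccard similarity) with a user's past comments. Production systems use trained topic and sentiment models rather than raw word overlap, and every name and data value here is hypothetical:

```python
def tokens(text):
    """Lowercase words longer than three characters, punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split() if len(w) > 3}

def recommend(user_comments, candidates):
    """Rank candidate articles by Jaccard overlap with the user's comment vocabulary."""
    profile = set()
    for comment in user_comments:
        profile |= tokens(comment)

    def overlap(article):
        t = tokens(article)
        union = profile | t
        return len(profile & t) / len(union) if union else 0.0

    return sorted(candidates, key=overlap, reverse=True)

user = ["Battery chemistry matters more than charging speed.",
        "Solid-state cells could change electric vehicles."]
articles = ["Quarterly earnings roundup",
            "New battery chemistry promises denser cells",
            "Celebrity chef opens restaurant"]

print(recommend(user, articles)[0])
```

Even this crude signal surfaces the battery article first, which is the point made above: comment text carries topical preference information that likes and views alone do not.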

User Behaviors and Dynamics

Constructive Participation Patterns

Constructive participation in comments sections refers to user behaviors that enhance discussion quality, such as providing evidence-based arguments, offering clarifications, or posing substantive questions that advance collective understanding. Studies analyzing platforms like Reddit and news sites indicate that these patterns often emerge in moderated environments where users prioritize factual contributions over emotional venting; for instance, one analysis of 1.2 million comments on scientific articles found that constructive replies, defined by the inclusion of references or citations, received 2.5 times more upvotes than neutral or hostile ones. Such patterns foster iterative refinement of ideas, as seen in collaborative threads on Stack Exchange, where question-answer dynamics lead to verified solutions in over 70% of cases. Key patterns include evidence-sharing, where commenters link to primary sources or datasets to support claims, thereby elevating the thread beyond opinion. On platforms like Hacker News, this manifests in discussions where users dissect data or studies, with a 2022 review showing that threads with multiple cited links averaged 40% higher engagement duration than uncited ones. Another pattern is constructive critique, involving polite disagreement backed by counter-evidence, which empirical data from YouTube comment analyses (2019-2021) links to reduced polarization, as critiquing comments garnered 15-20% more replies focused on merit rather than attacks. Question-driven participation, such as seeking elaboration on ambiguities, also prevails in academic forums; an analysis of arXiv comments from 2018-2023 revealed that inquisitive posts prompted author responses in 28% of instances, often yielding clarifications or errata. These patterns are more prevalent among users with higher domain expertise, per a 2021 Pew Research survey of U.S. users, which found that 62% of frequent commenters on news sites engaged constructively when motivated by learning, compared to 34% in entertainment sections. Platform design influences adoption; threaded formats on sites like Reddit enable pattern-building by allowing direct replies, with data from a 2019 implementation study showing an 18% uptick in chained constructive exchanges versus flat lists. However, constructive participation remains minority-driven, comprising under 20% of total comments in unmoderated sections across major sites, underscoring the challenge of scaling these behaviors amid dominant negativity.

Prevalence of Conflict and Toxicity

Toxicity in comments sections manifests as harassing, insulting, or demeaning language, with empirical studies revealing its persistence across platforms, particularly in protracted or polarizing discussions. Analyses employing Google's Perspective API, which scores comments on a 0-1 scale (with scores above 0.5 indicating likely toxicity), demonstrate that toxicity escalates in longer threads, where extended interactions correlate with higher overall toxic content. On Reddit, toxic comments elevate the probability of subsequent toxic replies, fostering chains of antagonism in deep conversations. Similarly, Wikipedia talk pages show toxic comments reducing volunteer editor activity by 0.5 to 2 active days per user in the short term, underscoring behavioral impacts. Prevalence varies by platform and topic, but rates often reach double digits in contentious areas. A 2025 study of news comments found toxicity comprising 24.8% of comments on religion-related articles and 25.9% on crime-related discussions, highlighting domain-specific spikes. On YouTube, anti-vaccine videos—analyzed across 414,436 comments—exhibit toxicity scores in the top 20th percentile averaging 0.29, with highly liked early toxic comments amplifying fear and discord in replies. Platform transparency reports further indicate removal of toxic content, such as Meta's 0.14-0.15% of views involving toxic posts in 2021, though underreporting and algorithmic detection limits may underestimate true incidence. User-level patterns reveal that a minority of users drives disproportionate toxicity: high-activity users display elevated toxicity scores over time, especially post-2013, while low-activity contributors paradoxically produce the most toxic content. Surveys quantify exposure, with a 2021 Pew Research Center report indicating 41% of U.S. adults have faced online harassment, and 22% of incidents occurring in comments sections per a separate analysis. These dynamics persist despite moderation, as toxicity not only correlates with reduced participation but also sustains conflict in echo chambers or adversarial exchanges.

Psychological and Social Drivers

Users engage in online comments sections driven by a combination of psychological mechanisms that lower inhibitions and fulfill intrinsic needs. The online disinhibition effect, characterized by reduced self-restraint due to factors such as dissociative anonymity and invisibility to others, prompts individuals to post more aggressive, honest, or impulsive content than they would offline, often manifesting as flaming or trolling in comment threads. This effect is amplified by deindividuation, where immersion in anonymous online crowds erodes personal accountability, leading to conformity with group norms that favor hostility over restraint. Empirical surveys of news commenters reveal motivations rooted in personality traits like extraversion, with participants citing desires for opinion expression, social interaction, and entertainment as primary drivers, though these can veer into provocation when self-control wanes. Social drivers further propel commenting dynamics through identity-based processes and group reinforcement. Social identity theory posits that individuals derive self-esteem from affiliations with online groups, fostering in-group bias and out-group derogation in discussions, which escalates conflicts in polarized comment sections. In toxic exchanges, disembodiment—lacking nonverbal cues—combined with minimal accountability and rapid coordination among responders, creates contagion effects where initial aggression spreads, sustaining high-toxicity patterns across platforms and topics. Studies indicate that while toxic comments deter some participation, they paradoxically boost engagement for others seeking validation or power through provocative posts, with power motives correlating to higher dissemination of contentious content. These drivers interact causally: psychological disinhibition provides the initial spark for unrestrained input, while social mechanisms like echo chambers and reciprocal escalation perpetuate cycles of conflict, often overriding constructive intent.
Research distinguishes between discussion-oriented commenting, aimed at deliberation, and provocation-driven participation, which thrives on emotional arousal and audience reactions, with the latter prevailing in high-stakes threads. Topic sensitivity heightens conflict, as identity threats on divisive issues trigger defensive responses aligned with group norms rather than evidence-based dialogue.

Moderation Strategies

Manual and Community-Based Methods

Manual moderation entails human reviewers evaluating user-submitted comments against predefined guidelines, typically classifying content as permissible or violative based on criteria such as hate speech, harassment, or off-topic irrelevance. This process occurs in two primary forms: pre-moderation, where comments are held pending approval before visibility, and post-moderation, permitting initial publication followed by potential removal or editing. Pre-moderation ensures higher initial quality but delays engagement, while post-moderation fosters rapid discourse at the risk of transient harmful content. At major news outlets such as The New York Times, moderation teams conduct manual post-publication reviews during the first 24 hours of a story's comment section activity, prioritizing emerging contentious topics, verifying dubious claims through cross-referencing, and synthesizing community signals to inform ongoing oversight. Such practices demand contextual judgment that automated systems often lack, enabling nuanced decisions on sarcasm, cultural references, or evolving debates, though they remain labor-intensive and susceptible to moderator bias or inconsistent application.

Community-based methods decentralize enforcement by empowering users to report or flag suspicious comments, which queues them for elevated scrutiny or collective voting mechanisms. Flagging systems serve as the foundational signal in moderation pipelines, with users identifying toxic behaviors like harassment or hate speech, prompting platform intervention. Empirical analysis of Reddit subreddits from 2015 to 2021 demonstrates a statistically significant positive correlation between user-initiated community moderation actions—such as downvoting or reporting—and subsequent comment removals, particularly in moderated versus unmoderated spaces, suggesting self-regulation enhances removal efficacy across multiple years.
In news comment sections, German moderators interviewed in 2021 reported relying on user flags to delineate boundaries, though definitional ambiguities led to varied enforcement, underscoring the method's dependence on participant judgment and its potential for subjective escalation. These approaches promote user investment in discourse quality but can amplify echo chambers or mob dynamics if flags cluster around ideological lines rather than objective violations. Hybrid implementations combine manual oversight with community input for scalability; for instance, flagged comments undergo expedited human review to mitigate false positives inherent in user reports. Research highlights manual review's superiority over purely reactive flagging in capturing intent and context, with human intervention essential for high-stakes accuracy in diverse linguistic environments. However, resource constraints limit widespread adoption, as evidenced by platforms outsourcing moderation to underpaid contractors, which correlates with inconsistent outcomes and moderator burnout. Overall, these methods prioritize human discernment for equitable enforcement but require robust training protocols to counter biases in judgment.
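The flag-then-review pipeline described above can be expressed as a minimal queue. The sketch below is an illustrative toy, not any platform's actual system; the flag threshold, class names, and data shapes are all assumptions:

```python
from dataclasses import dataclass, field

FLAG_THRESHOLD = 3  # assumed: distinct user flags needed before human review

@dataclass
class Comment:
    comment_id: int
    text: str
    visible: bool = True                      # post-moderation: visible until removed
    flaggers: set = field(default_factory=set)

class CommunityModerationQueue:
    """Collects user flags and escalates heavily flagged comments to humans."""

    def __init__(self) -> None:
        self.review_queue: list[Comment] = []

    def flag(self, comment: Comment, user_id: int) -> None:
        # One flag per user, so a single account cannot mass-report a comment.
        comment.flaggers.add(user_id)
        if len(comment.flaggers) >= FLAG_THRESHOLD and comment not in self.review_queue:
            self.review_queue.append(comment)

    def human_review(self, comment: Comment, violates: bool) -> None:
        # A moderator decides; false positives from user reports are restored here.
        comment.visible = not violates
        self.review_queue.remove(comment)
```

Routing flagged comments to a human rather than removing them outright is the hybrid pattern described above: user flags supply scale, while the final call preserves contextual judgment.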

Automated and AI-Driven Approaches

Automated moderation systems for comments sections leverage algorithms to scan and classify user-generated text in real time, identifying potential violations such as toxicity, spam, or hate speech before human review. These systems typically process comments through pipelines that extract features like sentiment, keywords, and contextual patterns, applying probabilistic scores to flag content exceeding thresholds defined by platform policies. Early implementations relied on rule-based filters detecting explicit profanity or spam links, but modern approaches integrate models trained on large datasets of labeled comments to achieve higher accuracy in nuanced detection. A prominent example is Google's Perspective API, released in 2017 by Jigsaw, a technology incubator under Alphabet, which employs neural networks to evaluate comments on attributes including toxicity—defined as rude, disrespectful, or unreasonable language—and assigns scores from 0 to 1 based on training from diverse online conversation corpora. The API supports integration into platforms for proactive filtering, such as warning users before posting high-toxicity comments or prioritizing them for moderator queues, and has been adopted by outlets such as The New York Times to enhance discussion quality. Advanced variants use deep learning architectures, such as convolutional neural networks or transformers, for multi-label classification, distinguishing categories like insults, threats, or identity-based attacks in datasets exceeding millions of examples. AI-driven enhancements extend to hybrid models that incorporate multi-task learning, where toxicity prediction occurs alongside identity detection to mitigate demographic biases in scoring, as demonstrated in empirical evaluations showing reduced disparate error rates across groups. Large platforms deploy proprietary systems scaling to billions of daily comments, using ensemble methods that combine classical classifiers, neural networks, and large language models for context-aware decisions, often fine-tuned through moderator feedback loops.
These approaches enable automated actions like auto-deletion of spam or quarantining of severe violations, freeing human resources for edge cases while adapting to evolving patterns through periodic retraining.
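A common deployment pattern pairs a scoring model with policy thresholds: comments above a high cutoff are removed automatically, a middle band is queued for moderators, and the rest publish immediately. The sketch below is hypothetical (the cutoff values are assumptions, not any platform's real configuration); the request-body helper follows the publicly documented shape of a Perspective-style `comments:analyze` call, which in production would be sent with an API key to obtain the score:

```python
AUTO_REMOVE_THRESHOLD = 0.9   # assumed policy cutoff for automatic removal
HUMAN_REVIEW_THRESHOLD = 0.6  # assumed cutoff for the moderator queue

def route_comment(toxicity_score: float) -> str:
    """Route a scored comment per threshold policy: remove, review, or publish."""
    if toxicity_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"       # quarantine severe violations automatically
    if toxicity_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"      # ambiguous band goes to the moderator queue
    return "publish"               # low-risk comments appear immediately

def perspective_request_body(text: str) -> dict:
    """Build the JSON body for a Perspective-style comments:analyze request.

    The shape mirrors the public API documentation; actually sending it
    requires an API key and an HTTP POST, omitted here.
    """
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
```

Tuning the two thresholds is where the precision/recall trade-off discussed below surfaces: a lower review cutoff catches more violations but inflates the human queue.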

Effectiveness Metrics and Empirical Outcomes

Pre-moderation strategies, which involve screening comments prior to publication, have demonstrated significant reductions in toxicity levels within online comment sections of news sites. In a nine-month field experiment conducted with an Austrian newspaper, pre-moderation lowered toxic content by approximately 25% compared to post-moderation approaches, where toxic comments appear briefly before removal, without observable declines in overall user participation or engagement metrics such as comment volume. This suggests that proactive manual filtering preserves discourse quality while maintaining user incentives to contribute.

Community-based moderation, often relying on volunteer enforcers in forums like Reddit, yields mixed empirical outcomes on toxicity and participation. Interventions such as subreddit bans or user restrictions effectively diminish activity from high-toxicity accounts, with studies observing decreased mean toxicity scores among remaining users post-moderation in affected communities. However, such actions can inadvertently elevate toxicity in surviving threads by concentrating uncivil behavior or prompting migration to less moderated spaces, alongside reductions in overall posting activity that signal potential chilling effects on constructive engagement.

Automated moderation systems, including AI-driven deletions and demotions, enhance rule adherence and curb the propagation of rule-breaking in comment threads, particularly in shorter discussions. Analysis of over 412 million comments revealed that automated deletions reduced subsequent rule-breaking interventions by up to 0.946 per thread (95% CI: -1.59 to -0.299) in conversations with 20 or fewer comments, with persistent deterrence effects on affected users over 28 days, lowering interventions by 0.192 (95% CI: -0.291 to -0.092). Toxicity metrics, measured in standard deviations, declined by 0.037 SDs among non-deleted commenters, indicating spillover benefits in maintaining thread civility.
Yet these systems correlate with temporary drops in user commenting volume—for instance, 4.55 fewer comments from first-time offenders over seven days (95% CI: -6.00 to -3.11)—suggesting heightened perceived risks that may suppress broader participation, though activity often rebounds. Hybrid approaches combining AI triage with human oversight show promise in amplifying effectiveness, as evidenced by experiments where AI-assisted feedback improved crowd-sourced moderation accuracy and reduced low-quality content in user-generated discussions. Across strategies, key metrics such as toxicity classifier scores (e.g., Perspective API ratings for insults or threats) and engagement proxies (comment counts, retention rates) consistently highlight trade-offs: toxicity reductions of 20-30% are common, but over-moderation risks eroding informational value if not calibrated to context-specific norms. Empirical findings underscore that effectiveness hinges on platform scale and thread length, with manual methods excelling in nuanced enforcement and automated methods in scalable enforcement, though long-term studies remain limited by restricted data access.
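Effect sizes reported in standard-deviation units, like the 0.037 SD figure above, are standardized mean differences. As a hedged illustration with synthetic numbers (not any study's actual data; the equal-weight pooling is one common variant), the computation looks like:

```python
import statistics

def standardized_mean_difference(before: list[float], after: list[float]) -> float:
    """Cohen's-d-style effect: mean change divided by a pooled standard deviation."""
    mean_diff = statistics.mean(after) - statistics.mean(before)
    # Equal-weight pooled SD, chosen here for simplicity; studies differ in
    # the exact pooling formula they apply.
    pooled_sd = ((statistics.stdev(before) ** 2 + statistics.stdev(after) ** 2) / 2) ** 0.5
    return mean_diff / pooled_sd

# Synthetic per-user toxicity scores (0-1) before and after an intervention
before = [0.42, 0.35, 0.51, 0.48, 0.39]
after = [0.40, 0.33, 0.49, 0.47, 0.36]
```

A negative value indicates toxicity fell after the intervention; expressing the change in SD units is what makes effects comparable across platforms whose raw score scales differ.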

Controversies and Debates

Allegations of Ideological Bias in Moderation

Allegations of ideological bias in the moderation of comments sections center on claims that platforms and news outlets apply rules inconsistently, favoring left-leaning viewpoints while targeting conservative or dissenting ones for removal, shadowbanning, or reduced visibility. These accusations, prominent since the mid-2010s, often cite anecdotal instances of comments questioning mainstream narratives on topics such as election integrity or public-health measures being deleted, while similar rhetoric from opposing perspectives remains. Public surveys indicate widespread perception of such bias, with 90% of Republicans in 2020 believing social media sites censor political viewpoints, compared to 59% of Democrats.

Empirical research reveals bias primarily in human and community-driven moderation rather than algorithmic enforcement. An October 2024 University of Michigan study of over 100 Reddit subreddits documented that moderators remove comments opposing their inferred political orientation at higher rates, fostering ideological echo chambers; in left-leaning communities like r/politics, this pattern disadvantages right-leaning contributions. Experimental studies corroborate this, showing participants across ideologies preferentially censor opposing political arguments in simulated forum settings, with the effect amplified in polarized environments. On news websites, comment sections tend to mirror the outlet's ideological lean—conservative sites host more right-slanting discussions—but selective moderation of "uncivil" dissent can skew discourse, as uncivil comments influence perceptions of article bias regardless of factual merit.

Countervailing studies attribute enforcement disparities to behavioral differences, not deliberate favoritism. An analysis of neutral bots deployed on Twitter in 2019 found no platform-level political bias in content curation, with right-leaning accounts encountering more low-credibility content due to network effects and higher sharing volumes of violative material among conservative users.
Similarly, 2024 research concluded that conservatives face more removals because they post violative content at greater rates, independent of policy bias. These findings suggest that while user-driven moderation exhibits clear ideological filtering, systemic platform bias remains unsubstantiated beyond user composition and content patterns. Such allegations have prompted policy debates, including calls for transparency in moderation decisions and audits of moderator demographics, though evidence of overt institutional bias is anecdotal rather than data-driven. The prevalence of left-leaning ideologies among content-moderation workers and volunteer moderators may causally contribute to asymmetric outcomes, as homogeneous groups enforce norms favoring in-group views, but rigorous longitudinal data on moderator affiliations remains limited.

Free Speech Implications Versus Harm Prevention

In online comments sections, the core tension arises between preserving free expression, which facilitates open debate and the emergence of diverse viewpoints, and implementing moderation to mitigate harms such as harassment, hate speech, and misinformation that can undermine constructive discourse. Proponents of prioritizing free speech contend that unmoderated or lightly moderated environments function as modern equivalents of John Stuart Mill's marketplace of ideas, where erroneous claims are refuted through counter-speech rather than suppression, ultimately advancing truth-seeking by exposing weaknesses in prevailing narratives. Empirical evidence supports this by showing that heavy moderation can produce chilling effects, reducing user contributions; for instance, one study on platform-initiated comment deletions found that such actions decreased subsequent posting activity among affected users due to perceived risks of further removal.

Conversely, advocates for harm prevention argue that unchecked comments often devolve into environments dominated by toxicity, deterring participation and amplifying echo chambers, as harassment and targeted abuse overwhelm legitimate exchange. Surveys indicate broad public support for removal in cases of severe harm, with a 2022 conjoint experiment involving 2,564 U.S. respondents revealing that 58% to 71% favored removing posts containing harmful misinformation (e.g., Holocaust denial at 71%, climate-change denial at 58%), even at the expense of absolute free speech protections, particularly when consequences were grave or offenses repeated. This preference holds across topics but shows partisan divergence, with Democrats more inclined to endorse removals than Republicans, who often favor inaction to avoid overreach. Critics of aggressive harm-focused moderation highlight its potential for ideological bias, where enforcement disproportionately targets certain perspectives, stifling dissent under the guise of safety.
For example, trust-and-safety teams, often staffed by individuals with left-leaning political donation histories, face career incentives to err toward removal, leading to over-moderation that suppresses discussions on topics like COVID-19 origins or election integrity, thereby narrowing discourse and pushing affected users to fringe platforms. Analyses of pre-2022 Twitter reveal double standards, such as leniency toward white supremacist content linked to prominent figures while strictly enforcing rules against marginalized groups' advocacy, prioritizing business and regulatory interests over consistency. Such biases exacerbate chilling effects, particularly among conservatives and moderates, who report higher self-censorship in moderated spaces due to anticipated removal risks. Ultimately, empirical outcomes suggest no zero-sum resolution: while targeted moderation against spam and direct threats enhances platform usability and participation—evident in cases like Techdirt's filtering of thousands of daily spam comments to sustain viable discussion—expansive harm prevention efforts risk entrenching institutional biases and reducing overall viewpoint diversity, as seen in the migration of users to less moderated alternatives post-removals. This dynamic underscores the need for transparent, minimal interventions that prioritize verifiable harms over subjective offenses, preserving comments sections as arenas for empirical scrutiny rather than curated consensus.

Case Studies of Platform-Specific Disputes

One prominent case involved X (formerly Twitter), where the platform's comments, known as replies, experienced a documented surge in hate speech following Elon Musk's acquisition on October 27, 2022. Multiple analyses indicated a 50% increase in hate messages between the acquisition and June 2023, attributed to reduced moderation staff and policy shifts emphasizing free speech over content removal. An audit using machine-learning classifiers found hate-speech prevalence rose dramatically in the immediate aftermath, with no corresponding drop in bot activity to explain it away, fueling disputes over whether lax moderation enabled the surge or whether prior over-moderation had suppressed legitimate discourse. Critics, including advertisers who paused spending, argued this undermined user safety, while proponents cited algorithmic changes reducing the visibility of extreme content as a counterbalance, though empirical data showed persistent elevation in slurs targeting racial, religious, and sexual minorities.

YouTube faced backlash in early 2019 over predatory toxicity in comments on videos featuring minors, prompting the platform to disable comments platform-wide on such content starting February 28, 2019. This followed exposés revealing grooming attempts and explicit remarks in sections under family vlogs and kids' channels, which had amassed billions of views but fostered unchecked exploitation. Creators protested the blanket policy as discriminatory and economically damaging, arguing it penalized non-problematic channels without granular moderation tools, while YouTube defended it as necessary to prioritize child safety amid regulatory scrutiny from bodies like the Federal Trade Commission. The measure reduced reported incidents but sparked ongoing debates about overreach, with data later showing persistent toxicity migration to unmonitored areas like live chats, highlighting tensions between algorithmic detection limits and proactive restrictions.
On Facebook, disputes arose from public officials' use of comment filters and blocks, exemplified by a 2021 federal court ruling against a police department's automated profanity filters that censored words like "pig" in replies to official posts. The court deemed this viewpoint discrimination, violating First Amendment protections in public forums, as the filters suppressed criticism without human review. Similar cases, including a 2019 Fourth Circuit decision barring officials from blocking constituents on pages used for government business, underscored allegations of selective enforcement favoring institutional narratives over dissent. Human rights organizations documented systemic suppression of pro-Palestine comments on Meta platforms during the 2023 escalations, with internal data showing disproportionate removals despite policy-neutrality claims, raising questions about moderation models trained on Western-centric datasets. These incidents fueled broader litigation and policy tweaks, balancing harm prevention against free expression guarantees.
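The filters at issue in such cases operate like simple blocklists. A minimal sketch (with a hypothetical word list) shows why naive word matching sweeps in protected criticism along with profanity, which is exactly the viewpoint-discrimination problem the courts identified:

```python
# Hypothetical filter terms; real deployments use much longer lists.
BLOCKLIST = {"pig", "scum"}

def blocklist_filter(comment: str) -> bool:
    """Return True if a naive word filter would auto-hide this comment.

    The filter matches isolated words case-insensitively, stripping common
    trailing punctuation. It has no notion of context, so criticism of
    officials that happens to use a listed word is suppressed identically
    to abuse.
    """
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not BLOCKLIST.isdisjoint(words)
```

Because the filter cannot distinguish an insult from political criticism, removals fall disproportionately on critical speech, which is why courts required human review for comments on government-run pages.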

Societal Impact and Evolution

Influence on Media Narratives and Public Opinion

Online comments sections under news articles frequently shape readers' inferences about public opinion, as individuals rely on them as heuristics for broader sentiment despite their unrepresentativeness. A 2012 study analyzing user-generated comments on news sites found that exposure to predominantly one-sided comments led participants to overestimate the prevalence of that viewpoint in the general population, subsequently influencing their own perceptions of the story's slant. Similarly, a 2020 study of readers' reactions demonstrated that comments discordant with the article's tone diminished favorable attitudes toward the story, amplifying perceived bias in the reporting.

This dynamic extends to opinion formation, where comments exert persuasive effects comparable to the articles themselves. Experimental research revealed that online comments could shift readers' attributions of responsibility, with supportive comments reinforcing article-aligned views and oppositional ones prompting reevaluation. In scientific contexts, science-critical comments appended to online posts reduced the perceived credibility of factual claims, as shown in two 2020 exploratory studies where participants exposed to such commentary reported lower trust in the source material. These effects persist across platforms, with a 2020 investigation confirming that user comments on news significantly altered readers' overall perceptions, often via emotional or confirmatory cues rather than deliberative reasoning.

Comments sections also contribute to media narratives by highlighting dissonant voices, frequently countering the ideological leanings of articles. Empirical analyses indicate that comments on progressive-leaning sites often feature higher proportions of conservative-leaning responses, fostering perceptions of dissent against dominant framings.
This mismatch enables minority viewpoints to challenge prevailing narratives, as evidenced by studies where users employed comments strategically to contest perceived opinion climates, potentially broadening discourse beyond editorial control. However, such influences risk entrenching polarization, with uncivil or echo-chamber-like comments exacerbating divisions rather than fostering deliberation, per findings on deliberative quality in discussions. Over time, aggregated comment trends have prompted outlets to adjust coverage, as reader feedback signals unmet demands for alternative perspectives.

Empirical Studies on Overall Value

Empirical research on the overall value of online comments sections reveals a complex balance between potential benefits for public discourse and prevalent drawbacks such as incivility and misinformation propagation. Studies indicate that under moderated conditions, comments can enhance user engagement and deliberative quality; for instance, structured formats like three-column layouts (pro, con, neutral) have been shown to increase participation, while reporter involvement in responding to queries can improve comment civility by approximately 15%. However, unmoderated sections frequently devolve into spaces dominated by uncivil or extreme content, which erodes trust in news outlets; one analysis found that exposure to uncivil comments significantly lowers both general and outlet-specific trust, outweighing any neutral or positive contributions.

Positive effects emerge in targeted contexts, such as when positive comments counteract prejudice or foster tolerance. Experimental evidence demonstrates that viewing affirmative user comments on news articles can reduce readers' prejudice toward out-groups, suggesting a corrective role in broadening perspectives. Similarly, the mere presence of comment functionalities signals openness to dialogue, empowering users with a perceived voice and encouraging deeper engagement with content, though this benefit diminishes without active moderation. Organized citizen engagement, as in campaigns prompting structured commenting, correlates with higher deliberative quality, including more reasoned arguments and less hostility, indicating that comments hold participatory value when guided by external incentives.

Conversely, numerous studies highlight net harms to discourse quality. Incivility in comments influences perceptions of journalistic credibility, with aggressive tones leading to brand damage for news organizations and a chilling effect on broader participation. Research on science-related posts shows that critical user comments undermine perceived credibility, amplifying skepticism even among informed readers.
Echo-chamber dynamics in partisan media comments further polarize views, with ideological clustering reducing exposure to diverse opinions and reinforcing biases. Content analyses across journalistic comments consistently reveal low argumentative depth, with a prevalence of ad hominem attacks over substantive debate, suggesting limited additive value without intervention. Quantitative assessments of net impact remain sparse, but available evidence tilts toward conditional utility: comments sections contribute to societal value primarily through increased accessibility to counter-narratives and user-driven corrections when toxicity is mitigated, yet they often exacerbate division and incivility in raw form. Peer-reviewed syntheses emphasize that while forums akin to comments aid expression for marginalized voices via anonymity, news-specific sections more commonly distort opinion estimates toward vocal minorities. These findings underscore the need for empirical evaluation of moderation strategies to maximize benefits, as unfiltered comments risk amplifying harms over constructive exchange.

In 2024, platforms moderated over 118 million comments across hundreds of thousands of posts, with approximately one in six comments hidden due to violations of community guidelines, highlighting the scale of ongoing efforts to manage user-generated discourse. Following the 2024 U.S. presidential election, platforms such as Meta implemented policy rollbacks in early 2025, correlating with a documented increase in toxic language targeting specific demographics, such as women, as evidenced by sustained rises in gendered harassment reports. X (formerly Twitter) released its first transparency report in 2024, revealing patterns in content removal that underscored varying enforcement across ideological lines, prompting debates on selective moderation. Advancements in large language models (LLMs) for content moderation gained traction by mid-2025, with peer-reviewed analyses showing improved detection of nuanced toxicity but persistent challenges in contextual accuracy, as AI systems often misflag non-harmful speech or cultural references.
Despite these tools' scalability—enabling real-time filtering of comments at volume—professional moderators reported in 2025 that AI replacements frequently failed at high-stakes tasks, such as identifying child exploitation material, leading to hybrid human-AI models as a corrective measure. Looking ahead, projections indicate a shift toward accelerated automation in comment sections, with most decisions handled by machines rather than humans, driven by cost efficiencies and volume demands, though this raises concerns over the loss of human judgment in edge cases. By late 2025, moderation tools are expected to incorporate deeper contextual analysis and generative feedback loops, potentially reducing false positives by 20-30% through learning from user disputes, as outlined in emerging technical frameworks. This evolution may widen the divide between platforms investing in robust moderation—favoring user retention in civil spaces—and those deprioritizing it, fostering migration to niche, community-enforced forums. Brands are increasingly leveraging comment sections for direct engagement on viral content, treating them as organic polling mechanisms to gauge sentiment, a trend projected to integrate with AI-driven analytics for targeted responses.

References

  1. [1]
    Full article: Quality User-Generated Content? A Case Study of the ...
    Mar 20, 2024 · “Behind the Comments Section: The Ethics of Digital Native News Discussions.” Media and Communication 8 (2): 86–97. https://doi.org/10.17645 ...
  2. [2]
    Liking versus commenting on online news: effects of expression ...
    ... online media platforms. How do these acts of expression affect our feelings ... E. (. 2017. ). Do online media polarize? Evidence from the comments' section.
  3. [3]
    Comments, Shares, or Likes: What Makes News Posts Engaging in ...
    Discussions in the comments section: Factors influencing participation and interactivity in online newspapers' reader comments. New Media & Society, 16(6) ...
  4. [4]
    Despite Flaws, Comments Are Good for Public Discourse
    For example, rather than closing the comments sections altogether, many Korean news organizations allow readers to log in through their social ...Missing: impact | Show results with:impact
  5. [5]
    Safe spaces or toxic places? Content moderation and social ...
    Jul 25, 2025 · Toxicity detection (see Methods) serves as a valuable tool for identifying concerning trends, guiding content moderation efforts, and assessing ...
  6. [6]
    A critical reflection on the use of toxicity detection algorithms in ...
    What are the broader challenges of embedding toxicity-detection algorithms into socio-technical systems such as proactive moderation interventions? RQ2. How ...
  7. [7]
  8. [8]
    Not all comments are created equal: the case for ending online ...
    Sep 10, 2015 · Seriously: when tech news website Re/code shut down its comments section last year, editors cited the growth of social media as one reason for ...
  9. [9]
    A Guide to Content Moderation for Policymakers - Cato Institute
    May 21, 2024 · This is relevant for society because it means online platforms aren't ... “The Comments Section,” Help, New York Times; and “Detecting ...<|separator|>
  10. [10]
    Feb. 16, 1978: Bulletin Board Goes Electronic | WIRED
    Feb 16, 2010 · 1978: Ward Christensen and Randy Suess launch the first public dialup bulletin board system. The two unleash the kernel of what would eventually ...<|separator|>
  11. [11]
    Social Media's Dial-Up Ancestor: The Bulletin Board System
    The history of the BBS shows that pre-Internet social media was pretty great · For millions of people around the globe, the Internet is a simple fact of life.
  12. [12]
    First post: A history of online public messaging - Ars Technica
    Apr 29, 2024 · Usenet, which came alive in 1979, was a public message board divided into different “newsgroups” on various topics. The first was net.general, ...<|separator|>
  13. [13]
    The History of Usenet: The Oldest Online Community - UsenetServer
    Apr 8, 2025 · The advent of Usenet introduced the concept of threaded discussions, moderated groups, and open participation in online conversations. It ...
  14. [14]
    A trip down memory lane: FidoNet and Usenet - Nicola Iarocci
    Jul 9, 2020 · In 1987 I was the operator (sysop) behind Lorien, the first online bulletin board system (BBS) that went online in my area.
  15. [15]
    Online Messaging Systems of Yesteryear - TidBITS
    May 6, 2024 · At Ars Technica, Jeremy Reimer has penned a history of online public messaging: Today, many folks look back with fondness on the early days ...
  16. [16]
    Slashdot's 20th Anniversary: History of Slashdot
    Oct 19, 2017 · " The piece generated more than 5,600 comments, making it the most discussed submission in Slashdot history. That August, Slashdot's most ...
  17. [17]
    A Pre-History of Slashdot on its 20th Birthday - Rob Malda - Medium
    Oct 5, 2017 · Slashdot went from from something with a stupid name that I was building into something we were building… with the help of thousands of nerds ...
  18. [18]
    Life in the Quiet Period - WIRED
    Apr 1, 2000 · And told his diary the whole story. Rob Malda, aka CmdrTaco, launchedwww.slashdot.org in 1997 while he was a student at Hope College in Holland, ...
  19. [19]
    The complicated history and frightful future of the Internet comment ...
    Feb 22, 2019 · The first comment section, as we know it, appeared in 1998 on the website Open Diary. Open Diary serves as a public diary platform for ...
  20. [20]
    The History of Blogging: From 1997 Until Now (With Pictures)
    Mar 13, 2024 · 1998 also saw the creation of Open Diary, a blogging platform that allowed members of the community to comment on each other's writing. This ...
  21. [21]
    No Comments - The New York Times
    Sep 20, 2013 · The first comment there arrived on Oct. 5, 1998: “Too bad coders can't be like rock stars and get their money for nothing and their chicks for ...
  22. [22]
    1993: CGI Scripts and Early Server-Side Web Programming
    Mar 24, 2021 · CGI, invented in 1993, enabled server-side web interactivity, acting as a gateway for web servers to connect to information servers and ...
  23. [23]
    Where did Online Comments Come from Anyway? - LinkedIn
    Dec 6, 2018 · It is said that the first website to offer a comments section was Open Diary in 1998. That same year, The Rocky Mountain News was one of the ...
  24. [24]
    The Comments Section: A Brief History of Time and Trolling
    Sep 2, 2016 · Early internet platforms like Usenet and Telnet were comment-based. The web and social media expanded access, but lack regulation, making  ...
  25. [25]
    Web 2.0: A New Wave of Innovation for Teaching and Learning?
    Mar 17, 2006 · We can survey the ground traversed by Web 2.0 projects and discussions in order to reveal a diverse set of digital strategies with powerful implications for ...
  26. [26]
    Blogging in the 2000s | Research Starters - EBSCO
    Blogging in the 2000s marked a significant evolution from its origins as an online diary into a vital communication platform utilized across various sectors.
  27. [27]
    Study: Newspaper Websites Are Still Figuring Out This ... - TechCrunch
    Dec 18, 2008 · Newspaper sites that incorporate user-generated content is on the rise (58 percent in 2008, versus 24 percent in 2007), as are comments on ...
  28. [28]
    Disqus Officially Launches - Paul Stamatiou
    Oct 30, 2007 · Disqus improves the commenting experience for both publishers and regular commentors. Commentors can setup accounts and track their comments.
  29. [29]
    History of Social Media (It's Younger Than You Think)
    Sep 19, 2025 · Blogs and early networks laid the groundwork for what would become a revolution in how people communicate.
  30. [30]
    10 Things We Learned by Analyzing 9 Million Comments from The ...
    This report describes what we learned from analyzing 9,616,211 comments people posted to The New York Times website between October 30, 2007 – the date on ...Missing: growth 2008
  31. [31]
    Mental Health Effects of Reading Negative Comments Online
    Nov 23, 2022 · Let's take a look at why comment sections are toxic, how they affect your mental health, and what to do about it. The Psychology of ...
  32. [32]
    64% of Americans say social media have a mostly negative effect on ...
    Oct 15, 2020 · Those who have a negative view of the impact of social media mention, in particular, misinformation and the hate and harassment they see on ...Missing: sections | Show results with:sections<|control11|><|separator|>
  33. [33]
    Researchers to Study Connection Between Online Misinformation ...
    Oct 12, 2022 · What is not understood, however, is how online abuse and harassment like this spread via misinformation can lead to real, physical violence in ...
  34. [34]
    What happened after 7 news sites got rid of reader comments
    Sep 16, 2015 · Recode, Reuters, Popular Science, The Week, Mic, The Verge, and USA Today's FTW have all shut off reader comments in the past year.
  35. [35]
    A Brief History of the End of the Comments - WIRED
    Oct 8, 2015 · For years, comment boxes have been a staple of the online experience. Now many media companies are giving up on them.
  36. [36]
    NPR Website To Get Rid Of Comments
    Aug 17, 2016 · As of Aug. 23, online comments, a feature of the site since 2008, will be disabled. With the change, NPR joins a long list of other news organizations choosing ...<|separator|>
  37. [37]
    A Year After NPR Ends Commenting, No Plans To Revive It
    Aug 24, 2017 · A year ago, NPR announced its decision to end commenting at the end of stories on NPR.org, terminating a form of audience engagement that had been a fixture of ...
  38. [38]
    No Comment: Shutting down newspaper comment sections is a ...
    Feb 13, 2023 · On Feb. 1, a series of active comment sections across America were shut down as newspaper publisher Gannett closed online comments for most ...
  39. [39]
    News Comments: What Happens When They're Gone or When ...
    Oct 21, 2020 · The study revealed that turning off comments reduced the average time users spent on the site compared to sites that continued using Facebook commenting.
  40. [40]
    Comment sections are poison: handle with care or remove them
    Sep 12, 2014 · I asked the mods how they manage this incredible feat of creating a wonderful, safe space, while dealing with the toxicity that comes with women ...
  41. [41]
    Toxic Online Comments: How They Happen, and How To Stop Them
    Jun 10, 2021 · Identify and filter out toxicity: Our AI and ML-driven moderation software monitors millions of conversations to filter out toxic comments and ...
  42. [42]
    The role of AI in content moderation - AIContentfy
    Jul 28, 2025 · AI helps content moderators by screening and filtering user-generated content, identifying and removing harmful content, and processing large ...
  43. [43]
    Fact-checked out: Meta's strategic pivot and the future of content ...
    Feb 24, 2025 · Meta changed content moderation by ending fact-checking, overhauling hate speech policies, and introducing "community notes" where users add ...
  44. [44]
    The unappreciated role of intent in algorithmic moderation of ...
    Jul 29, 2025 · Throughout these processes, platforms must design moderation systems to be sensitive to evolving definitions of inappropriate content and ...
  45. [45]
    Threaded Conversation - an overview | ScienceDirect Topics
    A threaded conversation is defined as an online discussion structure where participants engage in conversations by posting messages and replies in a ...
  46. [46]
    Documentation:Comment Threading - MovableType.org
    Hierarchical Threads are the more traditional way of displaying comment threads. Here, “top level” comments (i.e. those without replies) are sorted by date.
  47. [47]
    Discussions: Flat or Threaded? - Coding Horror
    Nov 24, 2006 · Threaded discussions are disjointed and discombobulating, while flat discussions are simple and less problematic, despite some limitations.
  48. [48]
    Comment reply system design - Dilip Kumar
    Jul 16, 2024 · Schema design to store threaded reply. Approach 1: Use parent child relationship. Following can be schema for Posts table. PostId Text ...
  49. [49]
    Styling Comment Threads - CSS-Tricks
    Dec 7, 2020 · Comment threads can be deceptively simple to get right. In this article, we will learn how to design comment threads the right way.
  50. [50]
    Hierarchical comments usability issues - UX Stack Exchange
    Mar 3, 2011 · Let's look at a typical threaded commenting system, such as Reddit. It has the following problems: It's hard to tell what's the parent of a ...
  51. [51]
    Web Discussions: Flat by Design - Coding Horror
    Dec 13, 2012 · You should be wary of threading as a general purpose solution for human discussions. Always favor simple, flat discussions instead.
  52. [52]
    The readers' editor on… the switch to a 'nesting' system on comment ...
    Dec 23, 2012 · On 18 October 2012 the Guardian began rolling out a new system for presenting the threads. The new system shows responses to a comment directly ...
  53. [53]
    [PDF] How Threaded Conversations Promote Comment System User ...
    Threading was implemented on October 29, 2012, for the environment section and on November 22, 2012, for all other sections. This dataset includes 11,425 ...
  54. [54]
    How Threaded Conversations Promote Comment System User ...
    Jul 1, 2015 · In 2012 the news organization introduced single-level threading to its commenting system [33], providing a unique opportunity to examine the ...
  55. [55]
    25 Comment Thread Design Examples For Inspiration - Subframe
    Feb 21, 2025 · Discover 25 inspiring comment thread design examples to enhance user engagement and improve your website's interaction. Get inspired now!
  56. [56]
    Hierarchical/flat comment system - User Experience Stack Exchange
    Oct 8, 2012 · I'm trying to design comment system for my reddit-like site. I like hierarchical comments because you can sort them to quickly get the best ...
  57. [57]
    4 Most Popular Third Party Commenting Systems - Develare
    Aug 7, 2014 · Disqus is the most popular third party commenting system because it is considered the easiest to use by both bloggers and commenters alike.
  58. [58]
    What is Disqus? | Disqus
    Disqus is the world's most trusted comments plugin. It makes communities easier for publishers to manage, and readers love using it.
  59. [59]
    From healthy communities to toxic debates: Disqus' changing ideas ...
    Disqus, founded in 2007, became a popular commenting system as it enabled any webmaster to add a commenting system to their website by simply embedding a ...
  60. [60]
    6 Good Third-Party Commenting Systems - - Kevin Muldoon
    Dec 17, 2013 · 6 Good Third-Party Commenting Systems · 1. Livefyre · 2. DISQUS · 3. Intense Debate · 4. Vicomi · 5. Facebook · 6. Google+ · Which Commenting Solution ...
  61. [61]
    DISQUS Company Overview, Contact Details & Competitors | LeadIQ
    Disqus is the leading audience engagement and community growth platform. Since launching in 2007, Disqus has helped millions of publishers get closer to their ...
  62. [62]
    Disqus Pros and Cons - Learn Internet Grow
    To help you decide if integrating Disqus into your website or blog is the right move, here are the top Disqus Pros and Cons.
  63. [63]
    What is the advantage/disadvantage of depending solely on Disqus ...
    Feb 14, 2015 · One of the biggest advantages is its built-in anti-spam feature which seems to work very well. Not one bit of spam has gotten through on any ...
  64. [64]
    Six comments apps for personal blogs - Brian Liddell
    Apr 23, 2021 · Disqus is a commercial hosted service, connected to an iframe that's embedded in post pages, containing a readers' comment form and a list of ...
  65. [65]
    The Pros and Cons of Disqus vs. Native WordPress Comments
    Comments bring traffic to a blog and native WordPress and Disqus are two of the biggest commenting systems. Here are the pros and cons of both the systems:
  66. [66]
    Cusdis - Lightweight, privacy-first, open-source comment system
    Cusdis is an open-source, lightweight, privacy-first alternative to Disqus. It's super easy to use and integrate with your existing website.
  67. [67]
    Commento – Add comments to your website
    Generate a code snippet. You can embed Commento on any website with just a couple of lines of simple HTML. · Import comments · Customize the look.
  68. [68]
    DISQUS: Elevating to the Next Level Commenting System
    This poses a pretty big drawback. Another big disadvantage of using DISQUS or any other third party commenting system is that it lessens your control over it.
  69. [69]
    Hyvor Talk - Comments, Newsletters, Memberships & More
    Hyvor Talk is a privacy-first, all-in-one platform for comments, newsletters, memberships and more engaging features for your website.
  70. [70]
    Most Popular 3rd Party Comment Systems for Your Website - Medium
    Nov 16, 2018 · Disqus is the most widely used 3rd party commenting system with lots of features, so it rightfully deserves first place on my list.
  71. [71]
    The Anatomy of Reddit-Style Comments — A Weekend Engineering ...
    taking a flat list of comments from a database and transforming it into a visually nested conversation ...
  72. [72]
    Design Reddit | System Design - GeeksforGeeks
    Jul 23, 2025 · The comment services within the platform facilitate user engagement by allowing users to engage in discussions, provide feedback, and interact ...
  73. [73]
  74. [74]
    CommentThreads | YouTube Data API - Google for Developers
    Aug 28, 2025 · A commentThread resource contains information about a YouTube comment thread, which comprises a top-level comment and replies, if any exist, to that comment.
  75. [75]
    YouTube is experimenting with comment threading - BetaNews
    Jul 23, 2025 · Comment threading provides a more focused reading experience and helps users to easily understand conversations. There is even a little ...
  76. [76]
    YouTube tests threaded comments for Premium users
    Jul 31, 2025 · The purpose of threaded comments is to provide a clearer structure for discussions that occur under videos. With the new format, replies to ...
  77. [77]
    How to post X replies and mentions - Help Center
    On X, you can reply to posts or mention someone in your posts. Learn how to start X conversations.
  78. [78]
    Elon Musk's X lets users sort replies to find more relevant comments
    Aug 9, 2024 · X will now let users sort replies. Blue checks will no longer be prioritized in certain reply sorting options.
  79. [79]
    How to create a thread on X and how to view - Help Center
    A thread on X is a series of connected posts from one person. With a thread you can provide additional context, an update, or an extended point by connecting ...
  80. [80]
    A briefing on Facebook's new Nested Replies | Smart Insights
    Apr 4, 2013 · Now I can totally see the sense of nested replies in that people can easily respond directly to comments without the need for an @name before ...
  81. [81]
    Single-Level Nesting - Jeff Kaufman
    Aug 18, 2015 · When facebook first rolled out their feature where you could reply to comments I was disappointed. Only one level of nesting?
  82. [82]
    Best User Engagement Comments: Disqus vs Native Comparison
    Feb 24, 2025 · Disqus can slow down your website due to additional load times, while native comments are faster. Choosing the right comment system depends on ...
  83. [83]
    Journalist Involvement in Comment Sections
    Sep 10, 2014 · Optimistically, online comment sections offer a forum for gathering and sharing diverse opinions. Newsrooms have much to gain from these spaces ...
  84. [84]
    Survey of Commenters and Comment Readers
    Mar 14, 2016 · Online comment sections provide a space for the public to interact with news, to express their opinions, and to learn about others' views.
  85. [85]
    Online Comments Sections: Finding the Balance to... Michael Gioia ...
    Aug 23, 2016 · One researcher has referred to comments sections as a location for “public discursive processing of news issues by readers,” which ...
  86. [86]
    To comment or not? The role of brand-related content type on social ...
    Mar 12, 2024 · This research study aims to examine what types of incentives trigger customers' engagement in terms of commenting on different brand-related content types on ...
  87. [87]
    Understanding news-related user comments and their effects
    Several scholars showed that comments can also be used as a tool to counter public opinion. For example, multiple studies found that minority groups (e.g., ...
  88. [88]
    How do online users respond to crowdsourced fact-checking? - Nature
    Nov 25, 2023 · Crowd-sourced fact-checking consists in recruiting internet users to evaluate information circulating online (Wojcik et al. 2022). In principle, ...
  89. [89]
    [PDF] The Effects of User Comments on Science News Engagement
    In general, people show more interest in a task when they perceive it as having value or utility. [16, 61]. Furthermore, people have been shown to be more ...
  90. [90]
    Exploring characteristics of online news comments and commenters ...
    The findings of the present study suggest that online commenting systems should be improved in a way that can guarantee more diverse opinions from readers. An ...
  91. [91]
    [PDF] Wisdom of Two Crowds: Misinformation Moderation on Reddit and ...
    Specifically, we observe that almost all Reddit moderators are heavily reliant on crowdsourced flagging by ordinary users to come upon potential COVID-19 ...
  92. [92]
    Why publishers are pivoting to community as a new source of growth
    Apr 8, 2025 · According to data from Viafoura, community members generate 5.3x higher dwell time, are 45x more likely to subscribe and make 3x more site ...
  93. [93]
    Why user-generated content works well for SEO - Search Engine Land
    Jun 9, 2025 · Google wants helpful, trustworthy content – and UGC delivers. Find out how to harness reviews, forums, and social posts for SEO.
  94. [94]
    Everything You Need to Know About Social Media Algorithms
    Oct 30, 2023 · Key signals include: User engagement: Likes, shares and comments indicate that users find the content interesting and relevant.
  95. [95]
    User comments for news recommendation in social media
    In this work, we present a framework to recommend relevant information in the forum-based social media using user comments. When incorporating user comments, we ...
  96. [96]
    [PDF] Affective Signals in a Social Media Recommender System - arXiv
    Jun 24, 2022 · Note, because these patterns are applied to the comments rather than the post, training any models on the content of the post will not be biased.
  97. [97]
  98. [98]
    Persistent interaction patterns across social media platforms ... - NIH
    Mar 20, 2024 · Long conversations online consistently exhibit higher toxicity, yet toxic language does not invariably discourage people from participating in a ...
  99. [99]
    Analyzing Toxicity in Deep Conversations: A Reddit Case Study - arXiv
    Apr 11, 2024 · We find that toxic comments increase the likelihood of subsequent toxic comments being produced in online conversations. Our analysis also ...
  100. [100]
    Toxic comments are associated with reduced activity of volunteer ...
    Dec 5, 2023 · We find that toxic comments are consistently associated with reduced activity of editors, equivalent to 0.5–2 active days per user in the short term.
  101. [101]
    [PDF] Investigating Online Toxicity in Users Interactions with the ...
    Oct 19, 2025 · Our analysis shows that religion- and violence/crime- related news derive the highest rate of toxic comments constituting 24.8%, and 25.9% of ...
  102. [102]
    The impact of toxic trolling comments on anti-vaccine YouTube videos
    Mar 1, 2024 · We discovered that highly liked toxic comments were associated with a significant level of fear in subsequent comments. Moreover, we found ...
  103. [103]
    Exploring the impact of social network structures on toxicity in online ...
    Transparency reports from Meta estimate that 0.14–0.15% of all views on Facebook in 2021 were of toxic posts. Twitter reports that it removed roughly two ...
  104. [104]
    Tracking patterns in toxicity and antisocial behavior over user ...
    Jul 14, 2025 · An increasing amount of attention has been devoted to the problem of “toxic” or antisocial behavior on social media. In this paper we analyze ...
  105. [105]
    The State of Online Harassment | Pew Research Center
    Jan 13, 2021 · A Pew Research Center survey of US adults in September finds that 41% of Americans have personally experienced some form of online harassment.
  106. [106]
    About 1 in 5 victims of online harassment say it happened in a ...
    Nov 20, 2014 · A recent Pew Research Center study found that roughly one-in-five (22%) internet users that have been victims of online harassment reported that their last ...
  107. [107]
    (PDF) The Online Disinhibition Effect - ResearchGate
    Aug 5, 2025 · This article explores six factors that interact with each other in creating this online disinhibition effect: dissociative anonymity, invisibility, ...
  108. [108]
    Deindividuation: How the Presence of Others Affects Behavior
    Jan 8, 2024 · Deindividuation refers to when a person becomes part of a crowd or group and then begins to lose their individual identity, adopting a mob mentality.
  109. [109]
    Exploring the Role of User Personality and Motivations for Posting ...
    This study examines personality traits and motivations in association with individuals' online news comment behavior. A survey of 517 participants indicated ...
  110. [110]
    Social Identities, Group Formation, and the Analysis of Online ...
    The overall aim of this chapter is to explore how social identity affects the formation and development of online communities.
  111. [111]
    Why do people share (mis)information? Power motives in social media
    Our findings revealed that both chronic and context-specific power motives were significantly associated with increased dissemination of posts and news in daily ...
  112. [112]
    Journal of Media and Communication Studies - personality traits ...
    Further, motivations to comment seem to fall along two dimensions: Those who wish to discuss, and those who wish to provoke, with the discussion factor playing ...
  113. [113]
    Topic-driven toxicity: Exploring the relationship between online ...
    Feb 21, 2020 · We specifically investigate a concept that we refer to as online news toxicity, defined as toxic commenting taking place in relation to online ...
  114. [114]
    Content Moderation – Immersive Truth
    As detailed by Zeuthen (2024), there are five main forms of content moderation: manual pre-moderation, manual post-moderation, reactive moderation, distributed ...
  115. [115]
    Practices for Moderating Online Discussion at a News Website
    Oct 18, 2021 · We discuss how managing comments at the NYT is not merely a matter of content regulation, but can involve reporting from the "community beat" ...
  116. [116]
    [PDF] Proactive Moderation of Online Discussions: Existing Practices and ...
    To address the widespread problem of uncivil behavior, many online discussion platforms employ human moderators to take action against objectionable content ...
  117. [117]
    Flag and Flaggability in Automated Moderation - ACM Digital Library
    May 7, 2021 · Online platforms rely upon users or automated tools to flag toxic behaviors, the very first step in online moderation.
  118. [118]
    The case of spontaneous community-based moderation on Reddit
    This study examines this phenomenon on Reddit, which employs a platform-wide content ranking system based on user upvotes and downvotes.
  119. [119]
    “The Boundaries are Blurry…”: How Comment Moderators in ...
    Dec 22, 2021 · Based on 20 interviews, this paper explores what comment moderators in Germany consider to be hate comments, how they moderate them, and how differences in ...
  120. [120]
    [PDF] Examining How Community Rules Affect Discussion Structures on ...
    Aug 2, 2023 · Our study find several significant effects, including greater clustering among users when subreddits increase rules focused on structural ...
  121. [121]
    The Necessity of Content Moderation Manual Review of ... - Callnovo
    Jun 19, 2023 · In conclusion, manual review is essential for content moderation in the Southeast Asian market to ensure the accuracy & effectiveness of reviews ...
  122. [122]
    [PDF] Regulating Online Content Moderation - Georgetown Law
    Leaked internal training manuals from Facebook reveal content moderation practices that are rushed, ad hoc, and at times incoherent. The time has come to ...
  123. [123]
    Detection and moderation of detrimental content on social media ...
    Sep 5, 2022 · Moderation is about making a decision about the checking and verifying the adequacy of the detected content according to the rules and policies ...
  124. [124]
    How Automated Content Moderation Works (Even When It Doesn't)
    Mar 1, 2024 · To moderate billions of posts, many social media platforms first compress posts into bite-sized pieces of text that algorithms can process quickly.
  125. [125]
    Get started with Perspective API - Google for Developers
    Sep 18, 2024 · Perspective API is a free API that helps you host better conversations online. The API uses machine learning (ML) to analyze a string of text.
  126. [126]
    Perspective API
    Perspective API is a free developer tool that helps platforms host conversations that flourish - on their own terms.
  127. [127]
    Deep learning for religious and continent-based toxic content ...
    Oct 19, 2022 · This research analyzes and compares modern deep learning algorithms for multilabel toxic comments classification.
  128. [128]
    Investigating Bias In Automatic Toxic Comment Detection - arXiv
    Aug 14, 2021 · Results show that improvement in performance of automatic toxic comment detection models is positively correlated to mitigating biases in these models.
  129. [129]
  130. [130]
    Investigating the heterogeneous effects of a massive content ... - arXiv
    Jul 1, 2025 · The results in Table 4 indicate a consistent decrease in mean toxicity between abandoning and remaining users across all banned subreddits, with ...
  131. [131]
    Assessing Community Effects of Moderation Interventions on r ...
    We find that the interventions greatly reduced the activity of problematic users. However, the interventions also caused an increase in toxicity and led users ...
  132. [132]
    [PDF] Automated Content Moderation Increases Adherence to Community ...
    While previous work has documented the importance of manual content moderation, the effects of automated content moderation remain largely unknown.
  133. [133]
    Automated Content Moderation Increases Adherence to Community ...
    Apr 30, 2023 · Online social media platforms use automated moderation systems to remove or reduce the visibility of rule-breaking content.
  134. [134]
    AI Feedback Enhances Community-Based Content Moderation ...
    Oct 6, 2025 · This study offers insights into the evolving role of AI in crowd-based content moderation. We demonstrate that integrating a large language ...
  135. [135]
    Most Americans Think Social Media Sites Censor Political Viewpoints
    Aug 19, 2020 · Chart shows majorities across parties say social media sites likely censor political views, but conservative. Larger shares in both parties ...
  136. [136]
    It's true, social media moderators do go after conservatives
    Oct 3, 2024 · The research points out that, in the United States, critics claim conservatives and Republicans are purposefully targeted by social media ...
  137. [137]
    U-M study explores how political bias in content moderation on ...
    Oct 28, 2024 · Our research documents political bias in user-driven content moderation, namely comments whose political orientation is opposite to the moderators' political ...
  138. [138]
    Censoring political opposition online: Who does it and why - PMC
    In three studies of behavior on putative online forums, supporters of a political cause (e.g., abortion or gun rights) preferentially censored comments that ...
  139. [139]
    News comment sections and online echo chambers: The ideological ...
    Mar 5, 2022 · We found that the political slant of the average user comments to be in alignment with the political leaning of the conservative news outlets; ...
  140. [140]
    Neutral bots probe political bias on social media - PMC - NIH
    Social media platforms moderating misinformation have been accused of political bias. Here, the authors use neutral social bots to show that, while there is ...
  141. [141]
    Social media users' actions, rather than biased policies, could drive ...
    Oct 2, 2024 · MIT Sloan research has found that politically conservative users tend to share misinformation at a greater volume than politically liberal users.
  142. [142]
    [PDF] Reasoning about Political Bias in Content Moderation
    In this paper, we first introduce two formal criteria to measure bias (i.e., independence and separation) and their contextual meanings in content moderation, ...
  143. [143]
    University of Michigan study finds political bias by moderators in ...
    Nov 4, 2024 · "This bias creates echo chambers, online spaces characterized by homogeneity of opinion and insulation from opposing viewpoints," he said.
  144. [144]
    Resolving content moderation dilemmas between free speech and ...
    In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how ...
  145. [145]
    [PDF] A Blessing or a Curse? The Impact of Platform-initiated Comment ...
    We explain the results from the perspective of a chilling effect on contributors aroused from comment moderation.
  146. [146]
    Why Moderating Content Actually Does More To Support ... - Techdirt.
    Mar 30, 2022 · So moderating spam seems to quite clearly enable more free speech by making platforms for speech more usable. Without such moderation, the ...
  147. [147]
    Double Standards in Social Media Content Moderation
    Aug 4, 2021 · This report demonstrates the impact of content moderation by analyzing the policies and practices of three platforms: Facebook, YouTube, and Twitter.
  148. [148]
    Social media, expression, and online engagement: a psychological ...
    Apr 13, 2025 · This study explores how political orientation influences perceptions of online speech regulation and consequent self-censorship behaviors.
  149. [149]
    Hate speech soared on Twitter after Elon Musk's acquisition and its ...
    Feb 13, 2025 · The number of hate messages on Twitter (now X) rose by 50% between the time that Elon Musk bought the social media platform in October 2022 and June 2023.
  150. [150]
    Auditing Elon Musk's Impact on Hate Speech and Bots - arXiv
    Jan 28, 2024 · We find that hate speech rose dramatically upon Musk purchasing Twitter and the prevalence of most types of bots increased, while the prevalence of astroturf ...
  151. [151]
    X under Musk's leadership: Substantial hate and no reduction in ...
    Feb 12, 2025 · Numerous studies have reported an increase in hate speech on X (formerly Twitter) in the months immediately following Elon Musk's acquisition of the platform.
  152. [152]
    YouTube is disabling comments on almost all videos featuring children
    Feb 28, 2019 · YouTube will no longer allow the majority of channels featuring kids to include comment sections following a controversy over predatory comments.
  153. [153]
    Banning comments won't fix YouTube's paedophile problem. Its ...
    Mar 1, 2019 · Disabling comments on all videos of minors on YouTube is using a blunt instrument to tackle a precise problem.
  154. [154]
    The impact of toxic trolling comments on anti-vaccine YouTube videos
    Mar 26, 2024 · The average toxicity of highly liked comments has a high coefficient compared to the average toxicity of all comments (1.3 times higher in the average value in ...
  155. [155]
    A federal court held that a police department's use of Facebook ...
    Oct 1, 2021 · Judge rules a police department's use of Facebook profanity filters to block comments that include words like “pig" and “jerk” violates the ...
  156. [156]
    Court Rules Public Officials Can't Block Critics on Facebook | ACLU
    Jan 9, 2019 · The Fourth Circuit Court of Appeals ruled that the interactive portion of a public official's Facebook page is a “public forum,” so an official cannot block ...
  157. [157]
    Meta's Broken Promises: Systemic Censorship of Palestine Content ...
    Dec 21, 2023 · Meta's policies and practices have been silencing voices in support of Palestine and Palestinian human rights on Instagram and Facebook in a wave of heightened ...
  158. [158]
    That's Not the Way It Is: How User-Generated Comments on the ...
    Oct 1, 2012 · This study investigated if user-generated comments on Internet news sites affect readers' inferences about public opinion, and subsequently, their perceptions ...Abstract · Results · Discussion<|control11|><|separator|>
  159. [159]
    When and How User Comments Affect News Readers' Personal ...
    Nov 5, 2020 · Research has consistently found that people infer public opinion from online user comments, which obviously lack representativeness (see Lee ...
  160. [160]
    The effect of online news story comments on other readers' attitudes
    Individuals who read comments in conflict with the tone of the news story perceived the news story less positively than did those who read comments affirming ...
  161. [161]
    Will comments change your opinion? The persuasion effects of ...
    Aug 10, 2025 · For crisis communication in particular, online comments influence readers' opinions regarding crisis responsibility (Hong and Cameron 2018).
  162. [162]
    Attacking science on social media: How user comments affect ... - NIH
    Two exploratory studies were performed to investigate the effects of science-critical user comments attacking Facebook posts containing scientific claims.
  163. [163]
    Social media comments can impact perceptions - UGA Today
    Feb 26, 2020 · New research from the University of Georgia has found that comments can, indeed, have a big influence on readers.
  164. [164]
    The proportion of liberal and conservative news comments within the...
    This study examines how users across the liberal-conservative political spectrum respond to participatory propaganda, with a special focus on top-ranked comment ...
  165. [165]
    The influence of the deliberative quality of user comments on the ...
    May 25, 2023 · We analyze how deliberative characteristics of Facebook user comments, namely, reciprocity, respect, rationality, and constructiveness, can influence the ...
  166. [166]
  167. [167]
    (PDF) The Good, the Bad, and the Evil Media: Influence of Online ...
    ... harm perceived media trust. However, it is the presence of uncivil comments that harms both general- and outlet-level media trust. Second, we contribute to ...
  168. [168]
    [PDF] ATTACKS IN THE COMMENT SECTIONS:
    The results of our research showed that uncivil comments do taint perceptions of a news site. We also found that it doesn't matter if the first comments ...
  169. [169]
    Full article: Content Analyses of User Comments in Journalism
    Mar 31, 2021 · “Discussions in the Comments Section: Factors Influencing Participation and Interactivity in Online Newspapers' Reader Comments.” New Media ...
  170. [170]
    Individual and social benefits of online discussion forums
    Online discussion forums have benefits at individual and society level. They are positively linked to well-being for stigmatised group members.
  171. [171]
    [PDF] 10000 Social Media Users Can(not) Be Wrong
    Apr 1, 2022 · Research generally shows that user comments have more effect on perceived public opinion than popularity cues (e.g., Boot, Dijkstra, & Zwaan ...
  172. [172]
    2025 Social Media Comment Insights Report | Respondology
    In 2024 we moderated 118.4 million comments on 734,500 posts across the 450+ brands we work with. We hid 1 out of every 6 comments.
  173. [173]
    How Meta's new Content Moderation Policies affect Gender-based ...
    Jun 30, 2025 · Meta's rollback of content moderation in January 2025 coincided with a sustained rise in toxic language targeting women.
  174. [174]
    Readers will seek out well-moderated spaces - Nieman Lab
    2025 will amplify an existing trend: the growing divide between online spaces that invest in moderation and those that don't. X's first transparency report ...
  175. [175]
  176. [176]
    Tackling (misleading) incivility online: a user-centric evaluation of ...
    In sum, three key factors may influence users' acceptance of comment moderation: (1) the type of incivility, (2) the moderation strategy employed, and (3) ...
  177. [177]
    AI Is Replacing Online Moderators, But It's Bad at the Job - Bloomberg
    Aug 22, 2025 · But according to 13 professional moderators, the AI now relied upon to stop the spread of dangerous content, like child sex abuse material or ...
  178. [178]
    Content Moderation in a New Era for AI and Automation
    Researchers have posited that content moderation could potentially be improved through the use of new generative AI tools. However, this could mean that ...
  179. [179]
    AI Content Moderation Tools: 10 Advances (2025) - Yenra
    AI Content Moderation Tools: 10 Advances (2025) · 1. Automated Filtering · 2. Image and Video Analysis · 3. Real-time Moderation · 4. Scalability · 5. Contextual ...
  180. [180]
    AI Content Moderation in the Real World: 5 Uses You'll Actually See ...
    Oct 5, 2025 · By 2025, AI Content Moderation will be more sophisticated and pervasive. Trends point toward greater contextual understanding, reducing false ...
  181. [181]
    20 Social Media Trends To Guide Your Strategy In 2025
    Mar 24, 2025 · Brands are increasingly showing up in the comment section of viral posts and chiming in on conversations to boost their visibility and tap into ...