
Pretexting

Pretexting is a social engineering technique in which an attacker creates a fabricated scenario or identity to build trust and manipulate a victim into disclosing sensitive information, granting unauthorized access, or performing compromising actions. It differs from other cyber threats by exploiting human psychology rather than technical vulnerabilities, often through impersonation of authority figures such as executives, IT staff, or government officials. Pretexting is a core element in broader attacks like business email compromise (BEC), about 25% of which begin with pretexting; breaches carried average costs of $4.44 million as of 2024 (IBM 2025), and BEC caused over $2.9 billion in losses in 2023 and $2.77 billion in 2024 (FBI IC3).

Background

Definition

Pretexting is a form of social engineering in which an attacker creates a fabricated scenario, or pretext, and adopts a false identity to build trust and manipulate a victim into revealing sensitive information, such as account numbers, passwords, or access credentials. This tactic relies on the attacker's ability to craft a convincing narrative that aligns with the victim's expectations, encouraging compliance without immediate detection of the deceit. The core objectives of pretexting encompass acquiring confidential details for further exploitation, securing unauthorized physical or digital access to systems and facilities, or prompting the victim to perform specific actions that advance the attacker's agenda, all while maintaining the illusion of legitimacy. These goals are pursued through sustained engagement, distinguishing pretexting from more opportunistic deceptions. What sets pretexting apart from simpler social engineering methods, like generic phishing emails, is its emphasis on elaborate backstories, adopted personas, and preparatory research to personalize the deception and foster trust. Representative pretexts might involve an attacker impersonating a colleague requiring immediate help with a work-related issue, a government authority conducting a routine audit, or a vendor representative addressing an urgent billing concern. Pretexting operates within the wider umbrella of social engineering, which targets human vulnerabilities rather than technological weaknesses.

Historical Development

The concept of pretexting, involving the creation of fabricated scenarios to deceive and bypass defenses, traces its roots to ancient times. One of the earliest recorded examples is the Trojan Horse during the Trojan War around 1184 BCE, where Greek forces disguised soldiers inside a large wooden horse presented as a gift to the Trojans, exploiting trust to infiltrate the city and secure victory. Similarly, biblical accounts in Genesis describe the serpent in the Garden of Eden tempting Eve with a deceptive narrative about the forbidden fruit, promising knowledge and immortality to undermine divine prohibitions, illustrating early use of fabricated pretexts to manipulate behavior. In the 19th and early 20th centuries, pretexting evolved through con artistry and confidence tricks, which relied on elaborate deceptions to exploit human vulnerabilities in industrializing societies. During the 19th century, schemes like the "Spanish Prisoner" involved fraudsters posing as distressed nobles needing funds to free imprisoned relatives, preying on victims' sympathy and greed through mailed letters and staged scenarios. These tactics, often targeting emerging urban populations, marked a shift toward systematic social manipulation, as documented in period literature and legal records of fraud cases. Pretexting emerged in modern cybersecurity during the 1970s and 1980s amid phone phreaking (the manipulation of telephone systems) and early computer intrusions, where attackers used impersonation to gain unauthorized access. Kevin Mitnick, a prominent figure in this era, popularized pretexting through telecom fraud in the 1980s, such as posing as a company insider to extract passwords and network details from employees at firms like Pacific Bell. His techniques, blending technical exploits with social deception, highlighted pretexting's role in bypassing security and influenced its recognition as a core method of social engineering.
The 1989 "AIDS" diskette scam represented an early digital pretext, where biologist Joseph Popp mailed 20,000 infected floppy disks labeled as AIDS research materials to conference attendees, embedding that demanded payment after encrypting files. By the 1990s and 2000s, the term "pretexting" gained formal traction in cybersecurity literature, particularly through Mitnick's 2002 book , which detailed its mechanisms within social engineering and advocated awareness training. With the internet's proliferation post-2000, pretexting evolved into sophisticated cyber threats, integrating and personas to scale deceptions across global networks.

Relation to Social Engineering

Core Techniques

Pretexting forms a key component of social engineering attacks, where perpetrators employ fabricated scenarios to extract confidential information. In the research phase, attackers conduct thorough investigations using open-source intelligence (OSINT) from social media platforms, public databases, and company websites to compile personal details about targets, such as job roles, interests, or recent activities, enabling the construction of tailored deceptions. This process can yield sufficient data for a convincing pretext in as little as 100 minutes of online searching. Scenario creation involves developing detailed, plausible narratives that align with the target's context, often featuring a fabricated character, such as an IT technician or bank representative, and a specific situation, like an urgent system update or verification request, to foster immediate credibility. These stories incorporate authentic elements, including organizational logos or internal terminology, to mimic legitimate interactions. Execution methods typically occur through direct channels, including telephone calls (vishing) for verbal impersonation, emails (phishing variants) with pretextual content, or in-person approaches using physical disguises like forged badges. Attackers may also employ tailgating to gain physical access by posing as authorized personnel. Escalation tactics begin with innocuous inquiries to establish rapport and trust, progressively advancing to requests for sensitive data, such as passwords or financial details, while leveraging psychological principles like urgency (e.g., threats of service disruption) to compel action. Reciprocity is invoked by offering fabricated assistance in exchange for information, gradually eroding defenses. Supporting tools include prepared scripts for consistent delivery, voice modulation software for altering accents or tones during calls, and post-2020 advancements in AI-driven deepfake technology, such as voice cloning from brief audio samples, to enhance impersonation realism in vishing scenarios.

Reverse Social Engineering

Reverse social engineering represents a specialized variant of pretexting in which the attacker deliberately engineers a problematic situation to compel the victim to initiate contact, allowing the attacker, posing as a trusted expert or helper, to extract sensitive information during the subsequent exchange. In this approach, the attacker first creates a disruption, such as a temporary network outage or software fault, and then positions themselves as the apparent solution provider, exploiting the victim's urgency to resolve the problem. This method inverts the typical dynamic of social engineering by making the victim the proactive party, thereby enhancing the attacker's credibility from the outset. The process unfolds in distinct steps: initially, the attacker fabricates or induces a problem, for instance, by deploying malware to disrupt network access or deleting critical files to simulate a technical glitch. Next, the attacker establishes their role as a reliable intermediary, such as by leaving behind contact details disguised as official support or subtly advertising their "expertise" within the target's environment. Once the victim reaches out for assistance, the attacker leverages the interaction to request credentials, system details, or other confidential information under the guise of troubleshooting. This sequence ensures the exchange appears victim-driven, minimizing overt deception. Compared to conventional pretexting techniques, reverse social engineering offers notable advantages, particularly in reducing victim suspicion, since the approach stems from the target's own initiative rather than unsolicited outreach. It achieves higher success rates in structured settings like corporate workplaces, where controlled disruptions can reliably prompt internal support requests, allowing the attacker to infiltrate without raising alarms. These benefits stem from the inherent trust placed in responders to urgent problems, amplifying the method's efficacy.
The concept emerged in early 1990s hacking communities, with roots traceable to the Phrack magazine era (1984–1995), where it was described as a sophisticated intrusion technique. It gained prominence through Mitnick's documented exploits, as detailed in his 2002 book The Art of Deception, which illustrates instances where attackers staged problems in order to pose as insiders offering resolutions, drawing from real-world tactics employed during his activities in the late 1980s and early 1990s. Mitnick's accounts highlight its evolution from phone phreaking lore into a core element of non-technical intrusion strategies. In contemporary applications, reverse social engineering has adapted to digital landscapes, often incorporating scareware that simulates system failures (such as fake virus alerts directing victims to fraudulent hotlines) or phishing elements that embed contact prompts within seemingly benign communications. These evolutions maintain the core principle of problem induction followed by opportunistic assistance, targeting remote workers and cloud-based systems with increasing precision.

Psychological Aspects

Factors Enabling Trust

Pretexting exploits the psychological principle of authority, as outlined by Robert Cialdini, wherein individuals tend to defer automatically to perceived figures of power or expertise, such as impersonated police officers or corporate executives, thereby building rapid credibility and eliciting compliance without scrutiny. This deference stems from ingrained social norms that prioritize obedience to authority, making victims more likely to disclose sensitive information when the pretext aligns with hierarchical expectations. Reciprocity and social proof further enable trust by invoking a sense of obligation and normalization; attackers may offer minor favors, like assistance with a technical issue, to trigger the human impulse to reciprocate, while referencing "common" or peer-endorsed scenarios to suggest widespread acceptance and reduce skepticism. These Cialdini-derived tactics leverage social dynamics, where the perceived mutual benefit or collective behavior fosters an illusion of legitimacy, prompting victims to lower their defenses. Personalization enhances the effectiveness of pretexts by incorporating specific details, such as family names or recent activities gleaned from public sources, to simulate familiarity and diminish doubt, with studies showing response rates increasing dramatically from baseline levels when messages are tailored this way. This approach creates a narrative that feels authentic, exploiting the comfort derived from recognized personal context to bypass rational verification. Cultural factors amplify trust in pretexting, particularly in hierarchical societies with high power distance, where deference to authority figures is culturally reinforced, making individuals more susceptible to compliant behaviors under deceptive scenarios. During crises, this vulnerability heightens as emotional reliance on trusted roles surges, further enabling attackers to manipulate relational norms prevalent in collectivistic cultures.
Cognitive biases, notably confirmation bias, contribute by predisposing individuals to accept pretexts that align with their preexisting expectations or beliefs, overweighting supportive evidence while ignoring inconsistencies, thus sustaining the deception throughout the interaction. This bias reinforces pretexting techniques by lending them perceived validity, as victims selectively interpret ambiguous cues in favor of the attacker's narrative.

Susceptibility and Threat Perception

Individuals exhibit varying levels of susceptibility to pretexting based on demographic characteristics, with older adults and those less familiar with technology showing heightened vulnerability. Research indicates that age is positively correlated with susceptibility to fraud, including social engineering tactics like pretexting, as cognitive and experiential factors may reduce skepticism toward fabricated scenarios. For instance, studies on fraud victimization reveal that elderly individuals often comply at rates exceeding general populations due to lower digital literacy and social isolation, with simulated tests demonstrating higher compliance among less tech-savvy groups under controlled conditions. Time pressure further amplifies this risk, as individuals under stress are more likely to bypass verification protocols without thorough assessment. Low threat perception significantly contributes to pretexting's effectiveness, as victims frequently interpret deceptive scenarios, such as a routine tech support call, as benign rather than malicious. This stems from optimism bias, a cognitive tendency where people underestimate personal risk and overestimate their ability to detect deception, leading to reduced vigilance in everyday interactions. In unverified communications, this bias results in high compliance, with pretexting achieving notable success rates in exploiting routine assumptions. Emotional states play a critical role in overriding rational judgment during pretexting attempts, particularly when urgency or fear is invoked. Attackers often fabricate high-stakes situations, like an imminent account suspension, prompting impulsive responses that prioritize immediate action over verification. This emotional arousal increases compliance, as studies on vishing, a common pretexting vector, report success rates up to 75% when combined with pressure tactics that heighten anxiety. In organizational settings, role-based assumptions exacerbate vulnerabilities to pretexting, as employees may default to trusting external callers presumed to hold legitimate positions without independent confirmation.
This gap in verification protocols heightens risks, particularly in dynamic work environments where authority cues are quickly accepted. Vulnerability assessments underscore pretexting's potency, with reports indicating that the human element, including social engineering such as pretexting, is involved in approximately 60% of breaches through unverified interactions. Such findings highlight the need to address these perceptual and situational weaknesses to mitigate risks.

Examples

Historical Cases

In the 1970s and 1980s, phone phreaking scams exemplified early pretexting tactics within the telecommunications sector, where attackers impersonated phone company representatives to extract authorization or billing codes from employees and operators, enabling unauthorized long-distance calls and widespread fraud. These schemes, part of the broader phreaking subculture, often involved phone calls in which perpetrators posed as technicians or officials needing verification details to "resolve network issues," tricking victims into revealing sensitive access information. A notable example is the exploits of the community inspired by John Draper, known as Captain Crunch, whose technical discoveries with tone-generating devices were complemented by social engineering methods to bypass billing systems, resulting in millions of dollars in lost revenue for AT&T through fraudulent international calls. Kevin Mitnick's operations in the late 1980s and early 1990s represented a sophisticated escalation of pretexting against telecom infrastructure, particularly targeting Pacific Bell employees to obtain credentials and system access. Mitnick frequently called staff, fabricating scenarios such as being a fellow engineer troubleshooting urgent network problems or a vendor requiring configuration details, thereby convincing them to disclose passwords, dial-up access numbers, or even grant remote access to switching systems and databases. One documented instance involved Mitnick posing as an internal auditor to extract dialing instructions and software updates from engineers, which he used to infiltrate and monitor Pacific Bell's networks. These actions culminated in his high-profile arrest by the FBI on February 15, 1995, in Raleigh, North Carolina, after a two-and-a-half-year manhunt, exposing critical vulnerabilities in telecom employee training and verification protocols.
The 1989 AIDS Trojan marked an early fusion of pretexting with digital malware distribution, where evolutionary biologist Joseph Popp mailed approximately 20,000 floppy disks worldwide under the guise of providing free AIDS research and health information software, primarily targeting attendees of the World Health Organization's international AIDS conference. Recipients, lured by the pretext of valuable public health data, installed the disks on their PCs, unwittingly activating a Trojan horse that counted boot cycles and, after the 90th reboot, displayed a message encrypting directory names and demanding $189 via mail to a Panama post office box for a decryption key. This attack affected thousands of users across academic, medical, and government institutions, causing data disruption and highlighting the potential for pretextual physical media to deliver malicious payloads in an era of limited antivirus protections. In the United Kingdom during the 1980s, blagging, a term synonymous with pretexting, became a routine practice among tabloid journalists seeking exclusive celebrity information through deceptive phone calls to banks, hospitals, and phone companies. Reporters or hired private investigators would impersonate celebrities, their relatives, or officials to obtain personal details such as addresses, medical records, or financial data, often framing inquiries as routine verifications or family concerns. Tabloid publications employed these tactics to fuel sensational stories on figures such as royalty and actors, predating the more publicized phone-hacking scandals of the 2000s but establishing blagging as a staple of invasive tabloid journalism. These pre-2000 pretexting cases collectively drove nascent cybersecurity awareness in telecommunications and beyond, prompting carriers to implement stricter employee verification and access controls, though formal anti-fraud laws remained underdeveloped until later decades.
Telecom firms incurred millions in direct losses from fraudulent calls and system breaches, estimated at over $10 million from phreaking alone in the 1970s, while the AIDS Trojan underscored the risks of malware delivered under the guise of trusted sources, influencing early antivirus development and public awareness of cyber threats. Despite these incidents, the absence of comprehensive regulations allowed pretexting to proliferate, setting the stage for heightened vigilance in social engineering defenses.

Modern Incidents

In the 2006 Hewlett-Packard (HP) spying scandal, private investigators hired by the company employed pretexting to impersonate board members and journalists, thereby obtaining their phone records without authorization. This illicit surveillance, which also involved physical tailing and trash collection, targeted leaks about boardroom conflicts and extended to nine journalists from outlets like CNET and The Wall Street Journal. The scandal prompted congressional hearings by the House Energy and Commerce Committee, where HP Chairwoman Patricia Dunn testified, ultimately leading to her resignation along with that of General Counsel Ann Baskins. The 2011 News of the World phone hacking scandal in the UK highlighted pretexting's role in large-scale media intrusions, where reporters and private investigator Glenn Mulcaire used the technique to obtain voicemail PIN codes for high-profile targets, including members of the British royal family. Initially exposed in 2005 for hacking Prince William's voicemails, the scandal escalated in 2011 with revelations of widespread voicemail interceptions affecting celebrities, politicians, and even a murdered teenager's phone, involving over 5,000 unique targets. The fallout forced the closure of the 168-year-old tabloid after its final edition on July 10, 2011, and triggered the Leveson Inquiry into media ethics and police corruption. Between 2013 and 2015, attackers impersonated executives from Quanta Computer, a Taiwan-based manufacturer and Apple supplier, to send fake invoices via email, defrauding Facebook and Google of over $100 million in a business email compromise scheme. This social engineering breach resulted in massive financial losses, underscoring vulnerabilities in corporate payment verification processes.
In the 2020s, pretexting has increasingly incorporated deepfake technology, as seen in a 2024 incident where scammers in Hong Kong used AI-generated video to impersonate a multinational firm's chief financial officer (CFO) during a videoconference, tricking a finance employee into authorizing $25 million in fraudulent transfers. The victim, from engineering firm Arup, believed they were joining a legitimate meeting with the CFO and other executives, all portrayed via deepfakes sourced from public videos. By 2024, AI-enhanced vishing, amplified by synthetic audio clones, saw a 442% surge in the second half of the year compared to the first, enabling more convincing impersonations in pretext calls. Recent trends show pretexting integrating with phishing campaigns, where initial email lures are followed by pretext phone calls to extract credentials or approvals for payload deployment. According to the 2025 Verizon Data Breach Investigations Report, third-party involvement in breaches doubled to 30% year over year.

Relevant Laws and Regulations

In the United States, the Gramm-Leach-Bliley Act (GLBA) of 1999 explicitly prohibits pretexting by banning the use of false, fictitious, or fraudulent statements or documents to obtain nonpublic personal information from financial institutions or directly from consumers. This law applies to financial institutions and related entities, requiring safeguards for customer information and imposing civil penalties of up to $10,000 per violation (as adjusted for inflation), with criminal penalties including up to five years imprisonment and fines of up to $250,000 for willful violations. The Telephone Records and Privacy Protection Act of 2006, enacted in response to high-profile incidents like the Hewlett-Packard pretexting scandal, criminalizes the use of pretexting to obtain confidential telephone records from carriers or service providers without authorization. This federal statute empowers the Department of Justice and other agencies to enforce prohibitions against deceptive acquisition of phone records, with criminal penalties including fines under Title 18 of the U.S. Code and imprisonment of up to 10 years for offenses such as fraudulently obtaining or selling records, plus up to 5 additional years for aggravated offenses involving multiple victims or significant financial gain. At the state level, laws vary but often extend protections against pretexting through fraud and impersonation statutes; for instance, California's Penal Code § 502 criminalizes unauthorized access to computer systems or data, which courts have applied to deceptive tactics akin to pretexting, including digital impersonation, with penalties ranging from misdemeanors to felonies carrying up to three years in prison. Many states also regulate private investigators (PIs) through licensing boards that prohibit pretexting in surveillance or information-gathering activities, with violations leading to license revocation or civil fines.
Internationally, the United Kingdom's Data Protection Act 2018, which incorporates the EU's General Data Protection Regulation (GDPR), prohibits pretexting by mandating that personal data be processed lawfully, fairly, and transparently under Article 5 of the GDPR, rendering deceptive collection methods unlawful as violations of fairness principles. Violations can result in enforcement by the Information Commissioner's Office (ICO), with maximum fines of £17.5 million or 4% of an organization's global annual turnover, whichever is higher. Enforcement of these laws has included notable actions, such as the FTC's 2007 settlements with multiple data brokers accused of pretexting to obtain and sell consumer phone records, resulting in permanent bans on such practices and monetary redress exceeding $1 million. More recently, the U.S. Department of Justice (DOJ) has issued 2024 guidance warning of harsher penalties for the deliberate misuse of AI in criminal activities, including fraud, under existing statutes such as wire fraud and identity theft laws.

Ethical Implications

Pretexting in journalism raises significant ethical debates, particularly regarding the balance between the public interest and invasion of individual privacy. Undercover reporting that employs pretexting, such as posing as a participant to expose corruption, is often justified under the principle of serving the greater good, yet it risks violating personal autonomy and trust. The Leveson Inquiry, established in response to widespread phone-hacking scandals involving deceptive practices akin to pretexting, highlighted these tensions and recommended strict guidelines limiting such tactics to cases of substantial public interest, with proportionality and necessity as key criteria. Post-Leveson, journalistic codes, including those from the Independent Press Standards Organisation, impose restrictions on pretexting, emphasizing that privacy breaches must be outweighed by demonstrable public benefit to avoid ethical lapses. In private investigations, pretexting is ethically permissible when aimed at detecting fraud or protecting legitimate interests, but it becomes unethical if pursued for personal gain or without oversight, potentially undermining professional integrity. The American Bar Association (ABA) Model Rules of Professional Conduct, particularly Rule 4.1, prohibit lawyers from making false statements of material fact to third parties, cautioning against deceptive tactics like pretexting unless they align with client representation duties and do not involve dishonesty. Organizations such as the Electronic Privacy Information Center have argued that pretexting conflicts with multiple rules, including those on fairness (Rule 3.4) and misconduct (Rule 8.4), urging attorneys to avoid it in investigative work to prevent ethical violations. This stance reflects broader concerns that unchecked pretexting erodes the adversarial system's reliance on truthful advocacy.
Within cybersecurity testing, pretexting as part of penetration testing, such as in authorized red team exercises, demands explicit consent to maintain moral legitimacy, while unauthorized applications contravene professional codes and risk severe harm. The (ISC)² Code of Ethics mandates that certified professionals advance societal protection through honorable conduct and avoid any unlawful or unethical acts, implying that pretexting without permission violates canons on integrity and legal compliance. In red team scenarios, where pretexting simulates social engineering attacks, ethical frameworks require predefined scopes and debriefings to ensure participants are not deceived without justification, distinguishing authorized simulations from malicious exploitation. Pretexting contributes to broader societal harms by eroding trust in interpersonal and institutional communications, fostering a culture of suspicion that diminishes social cohesion. Frequent exposure to such deceptions can lead to widespread reluctance in sharing information, amplifying vulnerabilities in community interactions and online platforms. Vulnerable groups, including the elderly, low-income individuals, and those with limited digital literacy, face disproportionate impacts, as pretexting exploits their trust more readily, exacerbating inequities in access to secure information and services. Philosophically, pretexting in security research pits utilitarian perspectives, in which deceptive methods are defensible if they yield net benefits like enhanced protections, against deontological views that deem intentional deceit inherently wrong, regardless of outcomes. Utilitarians might endorse pretexting in controlled tests to safeguard society at large, prioritizing aggregate welfare over individual rights. In contrast, deontology emphasizes absolute duties to truthfulness, arguing that pretexting undermines moral imperatives like respect for persons, even in pursuit of security advancements. This tension underscores ongoing debates in research ethics, where the ends-justifying-means rationale must be rigorously scrutinized against principles of integrity and honesty.

Prevention and Education

Awareness Training Frameworks

Awareness training frameworks for pretexting focus on structured programs that educate individuals on recognizing manipulative social engineering tactics, where attackers create fabricated scenarios to extract sensitive information. These frameworks emphasize proactive learning to build resilience against deception-based threats. Key guidelines include the National Institute of Standards and Technology (NIST) Special Publication 800-50, which outlines a life-cycle approach to developing cybersecurity awareness programs, incorporating scenario-based simulations to mimic real-world pretexting attempts like impersonation or urgent requests for data. Complementing this, industry training providers offer specialized modules on social engineering, including interactive content that covers pretexting techniques such as pretext creation and rapport building, delivered through short, targeted videos and assessments. Training components within these frameworks prioritize hands-on engagement to reinforce conceptual understanding. Interactive workshops often incorporate role-play exercises where participants simulate pretexting interactions, such as responding to a fabricated executive request for credentials, to practice verification and refusal protocols. Phishing simulations, adapted for pretexting variants like vishing or smishing, are central; platforms like KnowBe4 deploy realistic scenarios that have demonstrated significant behavioral shifts, with organizations achieving an 86% reduction in global click rates after 12 months of consistent training. These elements address susceptibility by targeting trust-building factors through repeated exposure and immediate feedback. At the organizational level, frameworks recommend scalable models to ensure consistent implementation. The ISO/IEC 27001 standard mandates awareness training as part of its information security management system, typically requiring annual mandatory sessions for all employees to cover evolving threats like pretexting, with documentation of completion to maintain compliance.
In high-risk sectors such as financial services, the Financial Industry Regulatory Authority (FINRA) tailors these requirements, advocating periodic cybersecurity training focused on social engineering risks, including pretexting, to align with regulatory oversight and reduce sector-specific vulnerabilities like account takeovers. Effectiveness of these frameworks is measured through standardized metrics that quantify learning outcomes and risk reduction. Pre- and post-training quizzes assess knowledge retention, often showing marked improvements in threat identification scores following scenario-based modules. Broader impact is tracked via incident reduction rates; for instance, Verizon's 2025 Data Breach Investigations Report indicates that regular security training correlates with a fourfold increase in phishing reporting rates, contributing to overall declines in successful social engineering incidents, including pretexting-related breaches. Emerging trends in 2025 are enhancing these frameworks with immersive technologies to deepen engagement. Virtual reality (VR) simulations are gaining adoption for pretexting scenarios, allowing users to navigate interactive environments that replicate high-stakes deceptions, such as a simulated office intrusion, to build intuitive response skills without real-world exposure. Additionally, curricula are integrating AI-based detection tools, teaching participants to leverage machine learning systems for flagging anomalous requests, thereby combining human awareness with automated safeguards in a hybrid educational model.
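The effectiveness metrics described above reduce to simple arithmetic over simulation-campaign results. The sketch below illustrates the calculation; the campaign figures are hypothetical and chosen only to mirror the scale of reduction reported for sustained training programs.

```python
# Minimal sketch of phishing-simulation training metrics (hypothetical data).

def click_rate(clicks: int, targets: int) -> float:
    """Fraction of simulated pretext messages whose links were clicked."""
    return clicks / targets if targets else 0.0

def percent_reduction(baseline: float, current: float) -> float:
    """Relative drop in click rate after training, as a percentage."""
    return 100.0 * (baseline - current) / baseline if baseline else 0.0

# Illustrative campaigns: before training vs. after 12 months of training.
baseline = click_rate(clicks=172, targets=500)  # 0.344
trained = click_rate(clicks=24, targets=500)    # 0.048
print(f"Click-rate reduction: {percent_reduction(baseline, trained):.1f}%")
# prints "Click-rate reduction: 86.0%"
```

Tracking the same two functions across quarterly campaigns gives a trend line that can be reported alongside quiz scores and reporting rates.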

Best Practices for Mitigation

Organizations and individuals can mitigate pretexting risks by implementing robust protocols that emphasize independent verification of any unsolicited request. For instance, always call back on a known, verified phone number or use an independent channel to confirm the legitimacy of the requestor, rather than relying on contact details the requester provides. Adopting a "zero trust" posture for all interactions, particularly those involving sensitive information, ensures that no assumptions of trustworthiness are made based on apparent authority or familiarity. Technical controls play a critical role in detecting and blocking pretexting attempts, especially those leveraging digital deception. Deploy tools for caller ID spoofing detection, such as the STIR/SHAKEN call-authentication protocols or AI-based analyzers that examine call metadata for anomalies like mismatched signaling patterns. For pretexting via email or messaging, often termed pretext phishing, implement advanced filters that scan for suspicious indicators, including urgent language or spoofed domains. Additionally, enforce multi-factor authentication (MFA) for all sensitive access points to add layers of verification beyond initial contact. Establishing clear policy measures within organizations fosters a structured approach to handling potential pretexting incidents. Define explicit reporting chains that encourage immediate escalation of suspicious interactions to a dedicated security team, ensuring rapid assessment without fear of reprisal. Conduct regular audits of access logs to identify anomalies, such as unauthorized data queries or unusual patterns in employee behavior, which could indicate a successful pretext. On an individual level, cultivating defensive habits is essential for personal resilience against pretexting. Always pause to scrutinize claims of urgency, as attackers frequently use time pressure to short-circuit rational judgment. Be vigilant for red flags, including inconsistent details in the story, requests for confidential information without prior context, or high-pressure tactics that evoke fear or panic.
These practices build on foundational awareness training to empower proactive verification. Effective response plans are vital for containing pretexting breaches once detected. Develop incident playbooks that outline steps for isolating affected data, notifying stakeholders, and conducting forensic reviews to prevent recurrence. Since the rise of AI-driven deepfakes in social engineering around 2020, it has also become prudent to incorporate verification methods such as biometric checks, including voice analysis and liveness detection, to authenticate identities in video or audio interactions.
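The access-log audit recommended above can likewise be sketched as code. This is a hypothetical minimal example, assuming made-up users, resources, a per-user baseline of typical daily query volume, and a 9-to-18 definition of business hours; real deployments would use statistical baselines per role and resource:

```python
from collections import Counter
from datetime import datetime

def audit(log, baselines, threshold=3.0):
    """Flag users whose query volume far exceeds their baseline, or who
    touch sensitive records outside business hours (09:00-18:00).
    log: list of (user, timestamp, resource); baselines: user -> typical daily queries."""
    flagged = set()
    counts = Counter(user for user, _, _ in log)
    for user, count in counts.items():
        if count > threshold * baselines.get(user, 1):
            flagged.add(user)  # unusual query volume
    for user, ts, resource in log:
        if resource.startswith("sensitive/") and not (9 <= ts.hour < 18):
            flagged.add(user)  # off-hours sensitive access
    return flagged

log = [
    ("bob", datetime(2025, 3, 4, 23, 5), "sensitive/payroll"),   # off-hours
    ("eve", datetime(2025, 3, 4, 10, 0), "public/wiki"),
] + [("mallory", datetime(2025, 3, 4, 11, 0), "public/wiki")] * 40  # volume spike

print(audit(log, baselines={"bob": 5, "eve": 5, "mallory": 10}))
# flags bob (off-hours sensitive access) and mallory (40 queries > 3 * baseline of 10)
```

Either signal alone could be a benign anomaly; routing flagged users into the reporting chain described above, rather than auto-locking accounts, keeps the audit low-friction.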

References

  1. [1]
    What Is Pretexting? | IBM
    Pretexting is the use of a fabricated story, or pretext, to gain a victim's trust and trick or manipulate them into sharing sensitive information.
  2. [2]
    What Is Pretexting? - Definition & Attack Examples | Proofpoint US
    Pretexting is a social engineering attack where the attacker creates a fabricated identity or scenario to persuade a victim to divulge confidential information.
  3. [3]
    Social Engineering: Pretexting and Impersonation
    Feb 20, 2020 · Pretexting is a form of social engineering where a criminal creates a fictional backstory that is used to manipulate someone into providing private information.
  4. [4]
    What Is Pretexting? Definition, Examples and Attacks - Fortinet
    Pretexting is a form of social engineering tactic used by attackers to gain access to information, systems, or services by creating deceptive scenarios.
  5. [5]
    What is Pretexting? A Guide to This Social Engineering Attack
    Feb 1, 2024 · Pretexting usually involves more research and preparation by the attacker than other types of social engineering attacks. The attacker needs to ...
  6. [6]
    What Is Pretexting? - Palo Alto Networks
    Pretexting is a form of social engineering where attackers impersonate trusted entities to extract sensitive information or gain unauthorized access.
  7. [7]
    What Is Pretexting? Definition, Examples, Attacks & More - Zscaler
    Pretexting is a form of social engineering attack in which a scammer creates a plausible scenario to bait victims into divulging sensitive information.
  8. [8]
    What is Pretexting? Attacks, Examples & Techniques - SentinelOne
    Jul 22, 2025 · Pretexting is a form of social engineering wherein an attacker creates a fabricated scenario to scam the target into exposing secret information ...
  9. [9]
    6 Types of Social Engineering Attacks and How to Prevent Them
    Jul 11, 2024 · Pretexting is a type of social engineering technique where the attacker creates a scenario where the victim feels compelled to comply under ...
  10. [10]
    What is pretexting in cybersecurity? - Huntress
    Oct 3, 2025 · Pretexting is a sneaky social engineering attack where bad actors make up a believable story (a "pretext," if you will) to trick someone into ...
  11. [11]
    The History of Social Engineering - Mitnick Security
    Incredulously, the earliest accounts of social engineering-like strategic hoodwinking trace back to the Trojan War in 1184 B.C.
  12. [12]
    Social Engineering and how it affects your coverage | Colony West
    The earliest form of 'pretexting' comes from the story in the bible where the Devil tempts Adam and Eve with an apple. Early forms of 'baiting' can be seen with ...
  13. [13]
    Victorian Trolling: How Con Artists Spammed in a Time Before Email
    Oct 29, 2013 · ... 19th and early 20th centuries that were part of an international con known as the Spanish Prisoner Scheme. "In this confidence trick," he ...
  14. [14]
    How Scams Worked In The 1800s : NPR History Dept.
    Feb 12, 2015 · The 1800s were the Golden Age of schemes. The term "confidence man" or "con man" was probably coined midcentury and, according to the New York Times, the ...
  15. [15]
    Famous Social Engineering Attacks: 12 Crafty Cons - Meritas Group
    Aug 12, 2019 · Kevin Mitnick's wild run. Kevin Mitnick was one of the most notorious hackers of the '80s and '90s computer age. His exploits were driven by ...
  16. [16]
    AIDS Trojan | PC Cyborg | Original Ransomware - KnowBe4
    The AIDS trojan was created by a biologist Joseph Popp who handed out 20,000 infected disks to attendees of the World Health Organization's AIDS conference. The ...
  17. [17]
    Social Engineering 2.0 - Communications of the ACM
    Sep 4, 2025 · 2. Deepfake Audio and Video. Voice cloning used to work well only when fed hours of recording for training. Now, a 30-second clip is enough ...
  18. [18]
    (PDF) Defining Social Engineering in Cybersecurity - ResearchGate
    May 6, 2020 · of reverse social engineering, in which a social engineer, e.g.. first creates a network failure, and make victims believe that. the attacker is ...
  19. [19]
    What is Reverse Social Engineering? And How Does It Work? | Aware
    Reverse Social Engineering is a cyberattack, targeting individuals by making direct contact and compelling them into divulging sensitive data.
  20. [20]
    [PDF] MASTER'S THESIS - Social Engineering and Influence - DiVA portal
    Typical examples of reverse Social Engineering attacks in Baker et al.'s approach are: an attacker who creates a problem in the facility and then visits as a ...
  21. [21]
    What Is a Social Engineering Attack? - Mitnick Security Consulting
    Dec 4, 2022 · Social engineering is a hacking technique used by criminals to trick employees into taking action or providing information that leads to the ...
  24. [24]
    Empirical Analysis of Weapons of Influence, Life Domains, and ... - NIH
    Cialdini [25] described six such weapons of influence: reciprocation, liking, scarcity, social proof, authority, and commitment. A seventh weapon, perceptual ...
  25. [25]
    [PDF] HUMAN FACTORS IN CYBERSECURITY: A CROSS-CULTURAL ...
    Oct 27, 2025 · Role of Trust. Trust plays a vital role in cybersecurity, specifically in social engineering attacks. Social engineering is a method used by ...
  26. [26]
    [PDF] Human Factors in Web Authentication - UC Berkeley EECS
    Feb 6, 2009 · One threat to Web authentication is phishing, a social engineering ... confirmation bias [89], a human tendency to overweight confirming ...
  27. [27]
    [PDF] L18: Social Engineering - Abhi Shelat - Northeastern University
    Cognitive biases are leveraged in all three steps. Page 50. Mitnick on Pretexting. “When you use social engineering, or 'pretexting', you become an actor.
  28. [28]
    The psychology of the internet fraud victimization of older adults - NIH
    Sep 5, 2022 · James et al. (2014) found that age is positively correlated with the level of susceptibility to fraud, especially as reported by news and media ...
  29. [29]
    Vulnerability to Financial Scams Among Older Adults: Cognitive and ...
    Dec 26, 2020 · Older adults with smaller social networks and less social support were more likely to report both exposure and vulnerability to scams. Higher ...
  30. [30]
    A Comprehensive Review of Factors Influencing User Susceptibility ...
    The most frequent dependent variable used in the reviewed studies is susceptibility to social engineering attacks (N = 11). Susceptibility to social-engineering ...
  31. [31]
    The Role of Cognitive Biases Towards Social Engineering-Based ...
    Dec 12, 2023 · This paper focuses on the contributions of cognitive biases towards successful social engineering attacks.
  32. [32]
    9 Cognitive Biases Hackers Exploit During Social Engineering Attacks
    May 9, 2023 · Optimism Bias: Overestimating the probability of positive events while underestimating the probability of adverse events. Example: Phishing ...
  33. [33]
    What Is A Social Engineering Attack? (& How To Prevent Them)
    Recent research suggests that vishing alone has a success rate of 37%, but this increases to 75% when combined with email phishing.
  34. [34]
    It all adds up: Pretexting in executive compromise | IBM
    Pretexting is an inherently human attack vector that exploits the social nature of work. While it's impossible for C-suite members to eliminate their human ...
  35. [35]
    2025 Data Breach Investigations Report - Verizon
    Social Engineering. Phishing and pretexting are top causes of costly data breaches. Discover how to help prevent attacks by blocking connected devices from ...
  36. [36]
    Kevin Mitnick, hacker and FBI-wanted felon turned security guru ...
    Jul 20, 2023 · Mitnick had first been arrested for computer crimes at age 17 for brazenly walking into a Pacific Bell office and taking a handful of ...
  37. [37]
    Malicious Life Podcast: Kevin Mitnick, Part 2 - Cybereason
    Mitnick decided to get to the bottom of the matter: he hacked Pacific Bell and obtained Eric's phone number. The next step was finding out his address. "Posing ...
  38. [38]
    The bizarre story of the inventor of ransomware | CNN Business
    May 16, 2021 · ... Health Organization's AIDS conference in ... Eddy Willems with his original floppy disc with ransomware from 1989. Courtesy Eddy Willem.
  39. [39]
    Scandal on tap | Media | The Guardian
    Dec 4, 2006 · He is bringing a spate of prosecutions against the private eyes who some journalists regularly use as cut-outs to make pretext phone calls to ...
  40. [40]
    What is pretexting? Definition, examples, and attacks - CSO Online
    Sep 20, 2024 · Pretexting is a social engineering attack that employs a fabricated scenario and character impersonation to win trust and gain access to data and accounts ...
  41. [41]
    Hewlett-Packard under investigation in spying scandal - The Guardian
    Sep 7, 2006 · HP said its investigators had used a method known as "pretexting" - disguising their true identity - to obtain Mr Keyworth's phone records. The ...
  42. [42]
    HP Spying Scandal Offers Window into Board Battles - NPR
    Oct 12, 2006 · HP's investigators secretly gathered phone records and tailed the board members' relatives and even dug through people's trash in an effort to ...
  43. [43]
    HEWLETT-PACKARD'S PRETEXTING SCANDAL - GovInfo
    On September 6, 2006, H-P publicly disclosed to the Securities and ... pretexting was taking place based upon the information I got from HP. That ...
  44. [44]
    Phone hacking can extend beyond voice mail - CNN.com
    Jul 8, 2011 · News of the World appears to have exploited a mechanism in mobile-phone carriers' systems that allows people to access voice-mail messages ...
  45. [45]
    Phone hacking: how News of the World's story unravelled
    Jul 8, 2011 · At the trial it emerged that five others had their phones or messages hacked into – none of whom were members of the Royal family, subjects of ...
  46. [46]
    Timeline: News of the World phone-hacking row - BBC
    Jul 11, 2011 · The News of the World has ceased publication after 168 years, as anger mounts over allegations of phone hacking and corruption by the paper.
  47. [47]
    Arup revealed as victim of $25 million deepfake scam ... - CNN
    May 17, 2024 · Hong Kong police said in February that during the elaborate scam the employee, a finance worker, was duped into attending a video call with ...
  48. [48]
    Company worker in Hong Kong pays out £20m in deepfake video ...
    Feb 5, 2024 · An employee at an unnamed company claimed she was duped into paying HK$200m (£20m) of her firm's money to fraudsters in a deepfake video conference call.
  49. [49]
    Vishing Attacks Surge 442%: Here's How We're Simulating Them
    Jun 19, 2025 · Vishing attacks are spiking, and they're powered by AI voice clones and social engineering. Here's how to prevent vishing with real-world ...
  50. [50]
    FTC Kicks off "Operation Detect Pretext" | Federal Trade Commission
    Jan 31, 2001 · The Gramm-Leach-Bliley Act ("GLB") prohibits individuals from obtaining a customer's information from a financial institution or directly ...
  51. [51]
    [PDF] How To Comply with the Privacy of Consumer Financial Information ...
    Jul 1, 2002 · “Gramm-Leach-Bliley Act Financial Privacy and Pretexting.” Page 3. -i ... The GLB Act prohibits financial institutions from sharing account ...
  52. [52]
    Telephone Records and Privacy Protection Act of 2006 - Congress.gov
    ... telephone company employees selling data to unauthorized data brokers; (B) ``pretexting'', whereby a data broker or other person represents that they are an ...
  53. [53]
    FTC Testifies on the Sale of Consumers Phone Records
    Feb 1, 2006 · “While pretexting to acquire telephone records has recently become more prevalent, the practice of pretexting is not new,” the testimony states.
  54. [54]
    Penal Code § 502 PC – Unauthorized Computer Access and Fraud
    California Penal Code § 502 PC prohibits accessing another person's or company's computer, data, software, or a computer network without permission.
  55. [55]
    Unauthorized Computer Access and Fraud - California Penal Code ...
    PC 502 covers a wide range of ways someone can commit unauthorized computer access. For example, Penal Code 502(c), the most common form, makes it a crime to ...
  56. [56]
    Art. 5 GDPR – Principles relating to processing of personal data
    Personal data shall be: processed lawfully, fairly and in a transparent manner in relation to the data subject ('lawfulness, fairness and transparency'); ...
  57. [57]
    A guide to the data protection exemptions | ICO
    This part of the Guide focuses on the exemptions in Schedules 2-4 of the DPA 2018. We give guidance on the exceptions built in to the UK GDPR in the parts of ...
  58. [58]
    Telephone Records Seller Settles FTC Charges
    Dec 17, 2007 · The settlement bars the defendants from marketing or selling consumers' phone records and requires them to give up their ill-gotten gains.
  59. [59]
    DOJ Doubles Down on Warnings Against AI Misuse
    Mar 14, 2024 · ... AI policy, including regarding corporate compliance issues. The DOJ's message is clear: "Fraud using AI is still fraud. Price fixing using ...
  60. [60]
    [PDF] an inquiry into the culture, practices and ethics of the press report
    Nov 29, 2012 · Page 1. The. Leveson. Inquiry culture, practices and ethics of the press. AN INQUIRY INTO THE CULTURE,. PRACTICES AND ETHICS OF THE. PRESS.
  61. [61]
    Leveson inquiry: Guardian journalist justifies hacking if in the public ...
    Dec 6, 2011 · Guardian investigations editor David Leigh says a 'certain amount of guile' can be needed to find evidence of corruption.
  62. [62]
    Letter to Ethics Board Concerning Attorneys' Use of Pretexting - EPIC
    Feb 21, 2006 · We believe that pretexting is incompatible with ABA Model Rules 1.2, 3.4, 4.1, 4.4, and 8.4. We provide documentation below of the mounting ...
  63. [63]
    ISC2 Code of Ethics
    All information security professionals who are certified by ISC2 recognize that such certification is a privilege that must be both earned and maintained.
  64. [64]
    The ISC2 code of ethics - Infosec Institute
    The ISC2 code of ethics is a collection of requirements that apply to how you act, interact with others (including employers) and make decisions as an ...
  65. [65]
    What is Social Engineering? - Palo Alto Networks
    Loss of reputation and erosion of customer trust. All of these are difficult to recover from. The long-term effects of social engineering on security involve ...
  66. [66]
    Pretexting in Cybersecurity: What You Need to Know - SearchInform
    At its core, pretexting is an elaborate act of deception. It involves creating a fabricated scenario, or "pretext," to manipulate individuals into divulging ...
  67. [67]
    [PDF] Ethical Frameworks and Computer Security Trolley Problems
    and in particular utilitarianism and Kantian deontological ethics, respectively (Section ...
  68. [68]
    Computer Security Research, Moral Dilemmas, and Ethical ...
    Aug 2, 2023 · Deontological ethics centers questions about duties (deon) and rights. Under deontological ethics, one focuses on asking what one owes others ...
  69. [69]
    [PDF] Building a Cybersecurity and Privacy Learning Program
    Sep 1, 2024 · This long-awaited update to the 2003 NIST Special Publication (SP) 800-50, Building an Information Technology Security Awareness and Training.
  70. [70]
    Security Awareness Training - SANS Institute
    A library of over 50 training modules across 6 tracks, curated by SANS Subject Matter Experts to reduce risk, increase awareness and mature programs.
  71. [71]
    SANS Security Awareness Suite - American Bankers Association
    Social Engineering (4 minutes) This module explains and illustrates different types of social engineering attacks and how people can detect and defend against ...
  72. [72]
    KnowBe4 Report Reveals Security Training Reduces Global ...
    May 13, 2025 · KnowBe4's 2025 Phishing by Industry Benchmarking Report shows a drop in the global Phish-prone TM Percentage (PPP) to 4.1% after 12 months of security training.
  73. [73]
    A Guide to ISO 27001:2022 Security Awareness Training
    Aug 8, 2024 · ISO 27001, Clause 7.3, requires staff to be aware of the ISMS and information security. Learn how to build a security awareness program.
  74. [74]
    Information Security Awareness, Education, and Training | ISMS.online
    For example, the organisation may need to put in place security awareness training at least annually, or as required by the risk assessment, for all employees ...
  75. [75]
    Security Awareness Compliance Requirements
    NIST provides guidance, requiring security awareness training. FINRA requires annual training, and HIPAA requires training on policies and procedures.
  76. [76]
    2025 Verizon DBIR: Why Devaluing Data Stops Breaches
    May 27, 2025 · Encouragingly, the report notes that organizations with regular security training saw phishing reporting rates improve fourfold. The Fix: ...
  77. [77]
    [PDF] VirSec – Immersive Security Training within Virtual Reality
    OneBonsai's 'Cyber Security Awareness VR' is a cyber awareness training simulator [14] that aims to make players aware of cyber threats within a corporate ( ...
  78. [78]
    VR/AR-Based Cybersecurity Training: Enhancing Platform Security ...
    Mar 20, 2025 · Conduct platform-wide security awareness assessment. Identify priority threat vectors for VR/AR treatment. Develop initial concept prototypes.
  79. [79]
    Integrating Artificial Intelligence into Cybersecurity Curriculum
    The integration of Artificial Intelligence (AI) into cybersecurity tools offers significant advantages, enhancing threat detection, predictive analysis, and ...
  80. [80]
    Avoiding Social Engineering and Phishing Attacks | CISA
    Feb 1, 2021 · In a social engineering attack, an attacker uses human interaction (social skills) to obtain or compromise information about an organization or its computer ...
  81. [81]
    How to Protect Your Organization from a Pretexting Attack | Verizon
    Sep 26, 2025 · To protect against pretexting, train employees, use multi-factor authentication, patch software, install anti-malware, and use AI-powered email ...
  82. [82]
    7 Tips to Defend Against the Rising Threat of Pretexting - Verus
    Jul 18, 2023 · Zero Trust Security Model: Implement a zero trust security model, where every access request is verified, regardless of where it comes from.
  83. [83]
    Caller ID Spoofing | Federal Communications Commission
    Nov 13, 2024 · Caller ID spoofing is when a caller deliberately falsifies the information transmitted to your caller ID display to disguise their identity.
  84. [84]
    How Can AI Detect Caller ID Spoofing in VoIP and Telecom Networks
    AI detects caller ID spoofing by analyzing large volumes of call metadata and signaling information in real time. Machine learning models identify unusual ...
  85. [85]
    4 Ways to Defend Against Pretexting Scams | Proofpoint US
    May 17, 2018 · How to stop pretexting and phishing · Secure email · Train users to be on the lookout for pretexting and phishing scams · Establish a policy to ...
  86. [86]
    Don't Take the Bait! Phishing and Other Social Engineering Attacks
    Pretexting involves the attacker creating a believable scenario and sharing it to convince the target to provide sensitive data. Attackers that use pretexting ...
  87. [87]
    [PDF] Cyber Security Planning Guide
    They should be educated on computer security best practices and what to do if information is accidentally deleted or cannot easily be found in their files.
  88. [88]
    You're the weakest link: How to avoid revealing your government's ...
    How to avoid becoming a victim · Slow down. · Learn how to recognize common phishing email subject lines. · Verify the identity of the person making the request.
  89. [89]
    What Is Pretexting and How to Avoid It? - McAfee
    Here are some strategies to avoid falling victim to pretexting: Educate yourself and others: Awareness is the first line of defense against pretexting.
  90. [90]
    [PDF] Increasing Threat of DeepFake Identities - Homeland Security
    In this scenario we consider the use of deepfake technology to more convincingly execute social engineering attacks. First, a malign actor would conduct ...
  91. [91]
    Deepfake threats to companies - KPMG International
    Enhanced social engineering attacks. By using deepfakes, bad actors can penetrate organizations by, for example, impersonating a Chief Technology Officer ...