Computer security
Computer security encompasses the measures and controls designed to protect computer systems, networks, and data from unauthorized access, use, disclosure, disruption, modification, or destruction, thereby ensuring their availability, integrity, authenticity, confidentiality, and non-repudiation.[1] At its core lies the CIA triad: confidentiality, which prevents unauthorized disclosure of information; integrity, which safeguards against improper modification; and availability, which ensures timely and reliable access to data and resources.[2] These principles guide the development of technologies such as encryption algorithms, firewalls, and intrusion detection systems, which have evolved since the 1970s alongside the growth of networked computing.[3]

The field addresses a wide array of threats, including malware, phishing, and advanced persistent threats from state actors, which exploit vulnerabilities in software, hardware, and human behavior.[2] Empirical data underscores its economic importance: cybercrime inflicted annual global costs exceeding $8 trillion as of 2023, projected to reach $10.5 trillion by 2025, driven by data breaches, ransomware, and operational disruptions that erode trust and productivity.[4]

A defining characteristic of the field is the ongoing trade-off between robust protection and usability: overly restrictive measures can hinder legitimate operations, while insufficient safeguards invite exploitation, as evidenced by high-profile incidents revealing systemic weaknesses in patching and access controls.[5] Achievements such as the standardization of secure protocols like TLS for web communications have mitigated widespread risks, yet persistent controversies surround encryption backdoors proposed for law-enforcement access, which pit public safety against individual privacy and create incentives for abuse.[3][6]

Fundamentals
Definitions and Scope
Computer security encompasses the measures and controls designed to protect computer systems, including hardware, software, and associated data, from unauthorized access, use, disclosure, disruption, modification, or destruction. The National Institute of Standards and Technology (NIST) defines it as the safeguards ensuring confidentiality, integrity, and availability (the CIA triad) of information processed and stored by a computer.[1] Confidentiality prevents unauthorized disclosure of information, integrity ensures data accuracy and prevents improper modification, and availability guarantees timely and reliable access for authorized users.[7] This framework, originating from foundational risk management models in the 1970s and formalized in standards like NIST SP 800-12, prioritizes causal protections against threats stemming from both intentional attacks and accidental failures.[8]

The scope of computer security primarily focuses on endpoint devices such as desktops, laptops, servers, and virtual machines, distinguishing it from broader cybersecurity, which extends to networked environments and internet-scale threats.[9] While overlapping with information security, which protects data across all media, computer security emphasizes system-level defenses against malware, exploits, and physical tampering.[10] It includes technical controls like access restrictions and encryption, as well as procedural elements such as user authentication and auditing, but excludes non-digital information handling. NIST notes that the term has been largely superseded by "cybersecurity" in modern contexts due to the interconnected nature of computing, yet it remains relevant for isolated or legacy systems.[1]

Key boundaries delineate computer security from adjacent fields: it does not typically cover human-centric risks like social engineering (addressed under cybersecurity practices) or purely physical security unrelated to computational assets.
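The integrity goal in the definitions above maps directly onto standard cryptographic primitives. As a minimal sketch (the record contents and function names are illustrative, not drawn from any cited standard), a stored value's integrity can be checked by recomputing its SHA-256 digest and comparing it to a known-good value in constant time:

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_hex: str) -> bool:
    """Check data against a known-good digest.

    hmac.compare_digest runs in constant time, avoiding a timing
    side channel that would leak how many leading characters matched.
    """
    return hmac.compare_digest(sha256_digest(data), expected_hex)

record = b"balance=1000;owner=alice"
known_good = sha256_digest(record)  # computed and stored at write time

assert verify_integrity(record, known_good)                           # unmodified data passes
assert not verify_integrity(b"balance=9000;owner=alice", known_good)  # tampering detected
```

A bare hash detects accidental or opportunistic modification; detecting modification by an adversary who can also replace the stored digest additionally requires a keyed construction such as an HMAC or a digital signature.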
Empirical evidence from incident reports, such as the 1988 Morris Worm affecting roughly 10% of internet-connected computers, underscores the need for scoped definitions to enable targeted defenses without diluting focus on verifiable system vulnerabilities.[11]

Core Principles
The core principles of computer security revolve around the CIA triad of confidentiality, integrity, and availability, a foundational model that defines the objectives for safeguarding information systems and data.[12][7] This triad, originating from early information security frameworks, informs standards such as those from NIST, where it serves as the basis for evaluating risks and implementing controls.[13] Security measures must address threats to each element without unduly compromising the others, as overemphasis on one principle, such as stringent confidentiality protocols, can inadvertently reduce availability.[14]

Confidentiality ensures sensitive data is accessible only to authorized entities, preventing unauthorized disclosure through methods such as encryption, which transforms readable data into ciphertext requiring a key for decryption.[15][16] Access controls, including authentication mechanisms like passwords and biometrics, further enforce this by verifying user identities before granting access.[17] Breaches of confidentiality, such as data leaks, underscore its importance, as seen in incidents where unencrypted transmissions exposed personal information.[18]

Integrity protects data from unauthorized alteration or destruction, maintaining its accuracy and trustworthiness via techniques like checksums, hashing algorithms (e.g., SHA-256), and digital signatures that detect modifications.[19][20] These methods allow verification that data received matches the original, countering threats like malware injections or insider tampering.[21] Without integrity assurances, decisions based on compromised data could lead to erroneous outcomes in critical systems, such as financial transactions or medical records.[22]

Availability guarantees timely and reliable access to resources for authorized users, defended against disruptions like distributed denial-of-service (DDoS) attacks through redundancies such as backup systems, load balancers, and failover mechanisms.[7][12] In high-stakes environments, such as e-commerce or emergency services, unavailability can result in significant operational and economic losses; for example, a 2023 DDoS attack on a major cloud provider disrupted services for hours, affecting millions of users.[14]

Beyond the triad, principles like non-repudiation extend security by ensuring actions or transactions cannot be denied by the parties involved, often achieved through audit logs and cryptographic proofs.[23][21] Authenticity verifies the genuineness of data or users, complementing authentication processes.[21] These augment the CIA model in comprehensive strategies, particularly for accountability in distributed systems.[24]

Threats and Vulnerabilities
Malware and Exploits
Malware, short for malicious software, consists of programs or code designed to disrupt, damage, or gain unauthorized access to computer systems, often by exploiting software vulnerabilities or user errors. These programs propagate through vectors such as email attachments, infected websites, or removable media, with global infections reaching approximately 6.2 billion in 2024, driven by AI-generated variants and phishing.[25] Ransomware, comprising 28% of malware cases in 2024, encrypts data and demands payment for decryption, while other types like trojans masquerade as legitimate software to deliver payloads.[26]

Common malware categories include viruses, which attach to legitimate files and replicate upon execution; worms, self-replicating entities that spread across networks without host files; and spyware, which covertly monitors user activity to steal sensitive information. Adware, often bundled with free software, displays unwanted advertisements and can facilitate further infections. In the first quarter of 2025, malvertising emerged as a primary infection vector, driven largely by campaigns such as SocGholish.[27]

Exploits target specific software flaws to execute arbitrary code or escalate privileges; they are distinct from malware but frequently enable its delivery. A buffer overflow occurs when data exceeds allocated memory, allowing attackers to overwrite adjacent memory and inject malicious instructions, as seen in vulnerabilities affecting older Windows systems. Zero-day exploits leverage undisclosed vulnerabilities unknown to vendors, enabling attacks before patches exist; these remain highly dangerous due to the lack of defenses at the time of discovery.[28][29][30]

Notable incidents illustrate the synergy between exploits and malware. Stuxnet, discovered in 2010, exploited four zero-day vulnerabilities in Windows and Siemens software to sabotage Iranian nuclear centrifuges, marking the first known cyber weapon targeting physical infrastructure. WannaCry, propagating in May 2017 via the EternalBlue exploit in the unpatched Windows SMB protocol, infected over 200,000 systems across 150 countries, halting operations at entities like the UK's National Health Service and causing billions in damages. Such events underscore causal chains in which unpatched exploits serve as entry points for malware proliferation, amplifying impact through rapid, autonomous spread.[31][32]

| Malware Type | Description | Example Impact |
|---|---|---|
| Ransomware | Encrypts files for ransom | WannaCry (2017): $4 billion+ global losses[32] |
| Worm | Network-spreading replicator | Stuxnet (2010): Physical destruction of equipment[31] |
| Trojan | Deceptive payload delivery | Emotet (2018+): Banking credential theft[33] |
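The buffer-overflow exploits described above succeed when a write is never checked against the destination's capacity. Python's built-in types bounds-check automatically, but the defensive pattern required in memory-unsafe languages can be sketched as an explicit length check before copying into a fixed-size buffer (the function, buffer size, and sample inputs here are illustrative, not from any cited system):

```python
BUFFER_SIZE = 64  # fixed capacity, analogous to a C `char buf[64]`

def copy_into_buffer(buf: bytearray, data: bytes) -> None:
    """Copy data into a fixed-size buffer, refusing oversized input.

    Omitting this length check is the root cause of a classic stack
    or heap overflow: excess bytes would overwrite adjacent memory,
    potentially including return addresses or function pointers.
    """
    if len(data) > len(buf):
        raise ValueError("input exceeds buffer capacity")
    buf[: len(data)] = data

buf = bytearray(BUFFER_SIZE)
copy_into_buffer(buf, b"GET /index.html")  # fits: copied safely

try:
    copy_into_buffer(buf, b"A" * 1024)     # oversized: rejected, nothing written
except ValueError:
    pass
```

In C, the analogous fix is replacing unbounded calls such as `strcpy` with length-bounded variants and validating attacker-controlled lengths before any copy.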
Network and Physical Attacks
Network attacks exploit vulnerabilities in data transmission protocols and infrastructure to disrupt services, intercept communications, or gain unauthorized access. Common variants include denial-of-service (DoS) attacks, which overwhelm target systems with excessive traffic to exhaust bandwidth or processing resources, thereby denying legitimate users access.[35] Distributed denial-of-service (DDoS) attacks scale this threat by coordinating floods from botnets of compromised devices, often peaking at terabits per second.[36] For instance, the 2016 Dyn DDoS attack, leveraging the Mirai botnet, generated over 1.2 Tbps of traffic, disrupting services like Twitter and Netflix for much of the U.S. East Coast.[37] Similarly, the 2018 GitHub attack reached 1.35 Tbps using memcached amplification, though it was mitigated within 10 minutes via traffic scrubbing.[37]

Man-in-the-middle (MITM) attacks intercept and potentially alter data between communicating parties by positioning the attacker between endpoints, often exploiting unencrypted protocols or spoofed credentials.[36] Eavesdropping, or packet sniffing, passively captures unencrypted traffic on shared networks like Wi-Fi to extract sensitive information such as credentials or session tokens.[35] Routing attacks, such as Border Gateway Protocol (BGP) hijacking, involve falsifying route advertisements to divert traffic through attacker-controlled paths, enabling surveillance or redirection.[38] A prominent case occurred on February 24, 2008, when Pakistan Telecom hijacked YouTube's BGP prefixes, inadvertently blocking global access for about two hours while routing traffic to servers in Pakistan.[39]

Physical attacks require direct access to hardware or leverage observable physical phenomena to compromise systems, bypassing software defenses. These include theft of devices, where portable hardware such as laptops is stolen to extract stored data, which is often unencrypted at rest.
Tampering involves modifying hardware, such as inserting keyloggers or replacing components with malicious ones during supply-chain stages.[40] Environmental disruptions, such as cutting power supplies or using electromagnetic pulses, can cause data loss or denial of service by targeting physical infrastructure.[41]

Side-channel attacks exploit unintended information leakage from physical implementations, such as power consumption, timing variations, or electromagnetic emissions during computations. Timing attacks measure differences in execution time to infer secrets, such as cryptographic keys whose processing completes faster for certain inputs. Power analysis, including differential power analysis (DPA), statistically correlates power traces with operations to reconstruct keys in smart cards or embedded devices.[42] Electromagnetic attacks capture radiated signals to deduce internal states without direct contact.

Notable examples include Spectre and Meltdown, disclosed in January 2018, which abused CPU speculative execution and cache side channels to leak memory across protection boundaries, including kernel memory, on major processors from Intel, AMD, and ARM.[43] These vulnerabilities affected billions of devices, with mitigations requiring microcode updates and software patches that reduced performance by up to 30% in some workloads. Cold boot attacks, demonstrated in 2008, recover cryptographic keys from DRAM shortly after power-off by cooling the chips and reading residual charge, exploiting volatility assumptions in memory design.[44]

Social Engineering and Human Factors
Social engineering in computer security refers to the psychological manipulation of individuals to induce actions or disclosures that compromise information security, exploiting human vulnerabilities rather than technical weaknesses.[45] This approach relies on tactics such as deception, influence, or coercion to bypass defenses, often targeting trust, fear, or curiosity.[46] Unlike purely technical attacks, social engineering succeeds because humans remain the weakest link in security chains, with cognitive biases like authority compliance and reciprocity facilitating breaches.[47]

Common techniques include phishing, where attackers impersonate legitimate entities via email or messages to extract credentials or install malware; pretexting, involving fabricated scenarios to obtain information; and baiting, offering enticing items like infected USB drives to prompt unauthorized access.[48] Vishing (voice phishing) and smishing (SMS phishing) extend these to phone and text channels, while quid pro quo attacks promise favors in exchange for data.[49] In 2024, phishing accounted for 16% of confirmed data breaches analyzed in the Verizon 2025 Data Breach Investigations Report (DBIR), which examined 12,195 breaches from 22,052 incidents.[50] Social engineering incidents reached 4,009 that year, with 85% leading to data disclosure.[51]

Human factors amplify these threats through errors, negligence, or misuse, contributing to 68% of incidents and up to 95% of breaches according to multiple analyses.[52][53] Privilege misuse, such as sharing credentials under social pressure, and errors like clicking malicious links stem from inadequate training or overreliance on intuition over protocols.[54] Insider threats, often unintentional, arise from these factors; for instance, in the 2016 Bangladesh Bank heist, attackers used social engineering to obtain the access that enabled fraudulent SWIFT transfers totaling $81 million.[55] In the 2020 Twitter Bitcoin scam, attackers phone-phished employees for internal tool access and drained roughly $120,000 from high-profile accounts, highlighting persistent vulnerabilities despite technical safeguards.[56]

These elements underscore that technical defenses alone fail without addressing human behavior, as empirical data shows social engineering evades layered protections by design.[57] Reports consistently attribute breach escalation to human elements over initial vectors, emphasizing the causal primacy of psychological exploitation in security failures.[58]

Emerging Threats
Artificial intelligence enables adversaries to automate and refine attacks, including the creation of deepfake audio and video for social engineering, polymorphic malware that evades detection, and targeted phishing campaigns at scale. IBM predicts that AI-assisted threats, such as enhanced phishing and malware variants, will proliferate in 2025, while AI-powered attacks like deepfake scams introduce novel deception vectors.[59] CrowdStrike's 2025 Global Threat Report documents a 442% increase in vishing incidents in the second half of 2024, often leveraging AI-generated voices to impersonate trusted entities.[60]

Malware-free techniques now account for 79% of detections, reflecting adversaries' preference for living-off-the-land methods that exploit legitimate system tools to avoid signature-based defenses.[60] This shift reduces reliance on traditional payloads; average breakout time has fallen to 48 minutes, with the fastest observed intrusion at 51 seconds.[60] Nation-state actors, including North Korea's FAMOUS CHOLLIMA group, responsible for 304 incidents in 2024, integrate generative AI to fabricate profiles, emails, and websites, amplifying insider-threat operations that comprised 40% of such activities.[60] China-nexus adversaries saw a 150% rise in operations, underscoring state-sponsored escalation.[60]

Quantum computing threatens current public-key encryption standards, such as RSA and ECC, by enabling efficient factorization and discrete-logarithm solving through algorithms like Shor's.[59] Organizations face "harvest now, decrypt later" risks, where encrypted data is collected today for future quantum decryption; IBM advocates crypto-agility and migration to post-quantum algorithms standardized by NIST.[59]

Ransomware persists as a dominant vector, representing 28% of malware incidents in 2024 despite a slight decline, and is often combined with credential theft, which surged 71% year over year.[26][61] Emerging variants emphasize extortion without encryption, targeting cloud environments and critical infrastructure.[62] Shadow AI, meaning unsanctioned models deployed without oversight, exposes sensitive data to unvetted risks and complicates enterprise governance.[59]

Defensive Measures
Security Architecture and Design
Security architecture encompasses the foundational design principles and structural components that integrate security into computer systems from inception, rather than as an afterthought, to mitigate risks through proactive controls. This approach emphasizes building systems that enforce confidentiality, integrity, and availability while minimizing vulnerabilities inherent in software and hardware implementations. Key to this is the concept of a trusted computing base (TCB), defined as the set of hardware, software, and firmware components critical to security enforcement, which must be verifiably reliable to prevent compromise of the entire system.[63] The TCB operates as a reference monitor, mediating all access to objects and ensuring compliance with security policies, a requirement formalized in the U.S. Department of Defense's Trusted Computer System Evaluation Criteria (TCSEC) in 1985, which classified systems into divisions from minimal protection (D) to verified protection (A1) based on assurance levels.[63]

Foundational design principles, articulated by Jerome Saltzer and Michael Schroeder in their 1975 paper, guide secure architecture by prioritizing simplicity, verifiability, and resistance to errors. These include:
- Economy of mechanism: Keep protection mechanisms simple and small to facilitate analysis and reduce flaws.[64]
- Fail-safe defaults: Deny access by default unless explicitly permitted, basing decisions on permissions rather than exclusions.[65]
- Complete mediation: Verify every access to every object for authorization, without relying on cached or assumed trust.[65]
- Open design: Security should not depend on secrecy of mechanisms, allowing public scrutiny to identify weaknesses.[65]
- Separation of privilege: Require multiple keys or conditions for sensitive operations to prevent single-point failures.[65]
- Least privilege: Assign minimal permissions necessary for tasks, limiting damage from errors or compromises.[65]
- Least common mechanism: Minimize shared resources among users to avoid interference or collusion risks.[65]
- Psychological acceptability: Design interfaces that encourage compliance without excessive burden on users.[65]
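Several of these principles can be made concrete in a toy reference-monitor sketch (the subject names, resource names, and permission model are hypothetical): access is denied unless explicitly granted (fail-safe defaults), every request is checked at the time it is made (complete mediation), and each subject carries only the permissions its task requires (least privilege):

```python
# A minimal reference-monitor sketch. All names are illustrative.

ACL = {
    # subject -> set of (resource, action) pairs explicitly granted
    "backup_daemon": {("payroll.db", "read")},                         # least privilege: read only
    "payroll_app":   {("payroll.db", "read"), ("payroll.db", "write")},
}

def authorize(subject: str, resource: str, action: str) -> bool:
    """Mediate a single access request.

    Fail-safe default: any subject or permission not explicitly
    listed in the ACL is denied, rather than allowed.
    """
    return (resource, action) in ACL.get(subject, set())

def read_record(subject: str, store: dict, resource: str, key: str):
    # Complete mediation: the check runs on every access,
    # never cached or assumed from an earlier call.
    if not authorize(subject, resource, "read"):
        raise PermissionError(f"{subject} may not read {resource}")
    return store[key]

db = {"alice": 1000}
assert authorize("backup_daemon", "payroll.db", "read")
assert not authorize("backup_daemon", "payroll.db", "write")  # not granted, so denied
assert not authorize("intruder", "payroll.db", "read")        # unknown subject: denied by default
assert read_record("payroll_app", db, "payroll.db", "alice") == 1000
```

Keeping the monitor this small also reflects economy of mechanism: a few lines of policy logic are far easier to audit and verify than checks scattered across an application.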