
Cynthia Dwork


Cynthia Dwork is an American theoretical computer scientist recognized for establishing the mathematical foundations of differential privacy, a framework that quantifies and guarantees individual privacy in statistical outputs. She is Gordon McKay Professor of Computer Science at Harvard University's John A. Paulson School of Engineering and Applied Sciences, with affiliations in the Radcliffe Institute for Advanced Study and the Department of Statistics.
Dwork's career includes over three decades in industrial research laboratories at IBM and Microsoft before joining Harvard, during which she advanced cryptography through innovations such as non-malleable cryptography, the first lattice-based public-key cryptosystem, and proof-of-work mechanisms that underpin modern cryptocurrencies. Her work extends to fault-tolerant distributed systems and statistical validity in adaptive data analysis, addressing core challenges in ensuring reliable inferences from explored datasets. Differential privacy, co-developed by Dwork, has been deployed widely, including in Apple's iOS devices and the U.S. Census Bureau's 2020 disclosure avoidance system.
Among her honors, Dwork received the National Medal of Science, presented in January 2025, for contributions to privacy, cryptography, and distributed computing; the ACM-IEEE Donald E. Knuth Prize in 2020; the Gödel Prize in 2017; and the ACM Paris Kanellakis Theory and Practice Award in 2021. She is a member of the National Academy of Sciences and the National Academy of Engineering.

Personal Background

Early Life and Education

Cynthia Dwork was born on June 27, 1958, in the United States, the daughter of mathematician Bernard Dwork, who served as the Eugene Higgins Professor of Mathematics at Princeton University from 1964 until his retirement. Her sister, Debórah Dwork, is a historian specializing in the Holocaust. Dwork completed her undergraduate studies at Princeton University, earning a Bachelor of Science in Engineering (B.S.E.) in electrical engineering and computer science in 1979; she graduated cum laude and received the Charles Ira Young Award for Excellence in Independent Research, recognizing outstanding senior thesis work. She then pursued graduate studies at Cornell University, obtaining a Ph.D. in computer science in 1983 under the supervision of John Hopcroft; her dissertation, titled Bounds on Fundamental Problems in Parallel and Distributed Computation, addressed theoretical limits in concurrent systems.

Professional Career

Academic and Research Positions

Following her Ph.D. from Cornell University in 1983 and a postdoctoral position at MIT, Cynthia Dwork began her industrial research career at IBM, serving as a Research Staff Member at the IBM Almaden Research Center from August 1985 to June 2000. In this role, she contributed to foundational work in distributed computing and cryptography within IBM's industrial research environment. In 2001, Dwork moved to Microsoft Research, where she held the position of Distinguished Scientist, primarily at the Silicon Valley laboratory, continuing her research until 2017. This period marked over a decade of leadership in privacy research at one of the premier corporate research institutions. Dwork joined Harvard University in January 2017 as the Gordon McKay Professor of Computer Science at the John A. Paulson School of Engineering and Applied Sciences. She concurrently serves as Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study and maintains an affiliation with the Department of Statistics. In recent years, Dwork has engaged in visiting lectureships, including a series of Messenger Lectures at Cornell University from May 5 to 7, 2025, focused on data privacy.

Key Collaborations and Influences

Dwork's foundational contributions to distributed computing emerged from collaborations with Nancy Lynch at MIT and Larry Stockmeyer at IBM's Almaden Research Center, particularly their 1988 paper introducing partial synchrony as a model bridging asynchronous and synchronous systems for consensus protocols. This joint effort, which analyzed timing uncertainties in fault-tolerant systems, earned the 2007 Edsger W. Dijkstra Prize for its enduring impact on distributed agreement algorithms. These partnerships during her early industrial research at IBM highlighted how interactions with systems theorists like Lynch shaped Dwork's approach to resilience in unreliable networks. In cryptography, Dwork's long-term collaboration with Moni Naor of the Weizmann Institute produced influential works, including their 1992 proposal for resource pricing via computational puzzles to deter junk email, laying the groundwork for proof-of-work mechanisms. Their co-authorship continued into differential privacy, notably the 2006 exploration of disclosure risks in statistical databases, which argued for differential privacy as a robust alternative to weaker anonymization techniques. Naor's expertise in secure computation complemented Dwork's, fostering innovations like differential privacy under continual observation. At Microsoft Research, joint projects with researchers such as Omer Reingold and Guy Rothblum advanced differential privacy and succinct two-message arguments, integrating complexity-theoretic tools into privacy protocols. These institutional networks, spanning IBM and Microsoft, influenced her synthesis of distributed-systems reliability with cryptographic privacy, distinct from purely academic lineages. Upon joining Harvard in 2017, Dwork's collaborations extended to interdisciplinary efforts in algorithmic fairness, building on prior co-authorship patterns without supplanting her earlier distributed-computing and cryptographic foundations.

Research Contributions

Foundations in Distributed Computing

Cynthia Dwork's foundational contributions to distributed computing in the 1980s centered on achieving consensus and agreement in fault-prone environments, addressing challenges posed by processor failures, timing uncertainties, and adversarial behaviors. Collaborating with Nancy Lynch and Larry Stockmeyer, she developed models and protocols for consensus under partial synchrony, a framework that relaxes strict synchronous assumptions while avoiding the impossibilities of fully asynchronous systems. This work delineated conditions under which reliable agreement is achievable despite bounded delays and faults, providing essential theoretical bounds for system designers. In their 1988 Journal of the ACM paper, Dwork, Lynch, and Stockmeyer formalized partial synchrony by introducing parameters for upper bounds on message delays and relative processor speeds, enabling protocols that terminate correctly once the system stabilizes within these bounds. The protocols tolerate up to one-third faulty processors in Byzantine settings, using authenticated messages to prevent invalidation by malicious processors, and achieve both agreement (all non-faulty processors decide on the same value) and liveness (all non-faulty processors eventually decide). This bridged gaps between prior synchronous models, which assumed fixed rounds, and asynchronous ones, where deterministic consensus is impossible even with a single crash failure per the FLP result, offering practical resilience for networks with variable but bounded asynchrony. Dwork also advanced fault tolerance in constrained topologies, such as networks of bounded degree. In a SIAM Journal on Computing paper, she introduced "almost-everywhere agreement," a paradigm in which consensus is required only among nearly all correct processors, relaxing total agreement to enable efficiency in sparse graphs prone to partitions or failures. This approach tolerated a constant fraction of faults while minimizing communication overhead, influencing designs for scalable, resilient infrastructures such as early multiprocessor systems and precursors to modern consensus mechanisms.
Her protocols incorporated randomized elements for efficiency and drew on empirical observations of real network behavior, such as intermittent delays, to validate theoretical guarantees against practical unreliability.
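The one-third resilience bound admits a quick arithmetic illustration. The sketch below (not from the paper; function names are illustrative) checks that with n >= 3f + 1 processors, any two quorums of n - f processors overlap in at least one non-faulty processor, the property that prevents two conflicting values from both being decided:

```python
def max_byzantine_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 (the classic Byzantine bound)."""
    return (n - 1) // 3

def quorums_intersect_honestly(n: int, f: int) -> bool:
    """Two quorums of size n - f overlap in at least 2(n - f) - n
    processors; the overlap is guaranteed to contain a non-faulty
    processor exactly when that minimum overlap exceeds f."""
    min_overlap = 2 * (n - f) - n
    return min_overlap > f

# With n = 3f + 1 the property holds; with n = 3f it fails.
for f in range(1, 5):
    assert quorums_intersect_honestly(3 * f + 1, f)
    assert not quorums_intersect_honestly(3 * f, f)
```

The same arithmetic explains why four processors tolerate one Byzantine fault but three do not.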

Advances in Cryptography

In the early 1990s, while at the IBM Almaden Research Center, Dwork co-authored, with Danny Dolev and Moni Naor, the foundational paper introducing non-malleable cryptography, a strengthening of semantic security for encryption schemes. Non-malleability ensures that an adversary, given a ciphertext encrypting some plaintext, cannot produce a valid ciphertext for a related but distinct plaintext, even under chosen-ciphertext attacks; this addresses vulnerabilities in interactive protocols where ciphertexts may be adaptively generated. The constructions carry security proofs under standard cryptographic assumptions, providing guarantees against malleability exploits that could undermine protocols such as auctions or multiparty computation. Building on this line of work, Dwork and Moni Naor proposed computational proof-of-work mechanisms in 1992 to mitigate denial-of-service attacks and resource abuse, such as junk email. These require clients to solve moderately hard puzzles, demonstrating computational effort proportional to the requested service, before accessing resources, with server verification efficient and puzzle generation tunable for difficulty. The approach establishes an economic deterrent via computational costs, offering provably secure deterrence under standard cryptographic assumptions, and has influenced practical systems from anti-spam filters to cryptocurrencies. In the late 1990s and early 2000s, Dwork contributed to zero-knowledge proofs through the development of "zaps," two-round public-coin witness-indistinguishable protocols. Zaps enable two-message protocols in which the verifier speaks first and that first message can be fixed once and for all, facilitating applications in concurrent settings and reducing interaction rounds compared to traditional zero-knowledge proofs while maintaining soundness.
This work emphasizes reductions to standard hardness assumptions, yielding verifiable protocols resilient to resetting adversaries, a property important for distributed systems, and is distinct from her later privacy-preserving mechanisms, which bound the indistinguishability of outputs. Dwork's cryptographic innovations also intersected with lattice-based constructions: her work with Miklós Ajtai linked worst-case lattice problems to average-case hardness, yielding the first lattice-based public-key cryptosystem with a worst-case/average-case security guarantee, under assumptions now also valued for their conjectured quantum resistance. Subsequent derandomization techniques by others applied Nisan-Wigderson-type pseudorandom generators to zaps, removing interaction and supporting noninteractive witness-indistinguishable proofs without interactive oracles.
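The pricing-via-processing idea described above can be sketched with a hash-based puzzle. Dwork and Naor's original pricing functions were number-theoretic; the hashcash-style scheme below is a later simplification in the same spirit, with illustrative names and parameters:

```python
import hashlib
import itertools

def solve(challenge: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash with the challenge has
    `difficulty` leading zero bits; expected work ~2**difficulty hashes."""
    shift = 256 - difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> shift == 0:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Verification costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

# A client pays ~2**12 hashes of work; the server checks with one hash.
nonce = solve(b"request-id-123", difficulty=12)
assert verify(b"request-id-123", nonce, difficulty=12)
```

The asymmetry between solving and verifying is the core of the deterrent: raising `difficulty` by one bit doubles the client's expected work without changing the server's cost.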

Development of Differential Privacy

Differential privacy emerged as a mathematical framework for quantifying and achieving data-privacy guarantees against inference attacks, formalized by Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith in their 2006 paper "Calibrating Noise to Sensitivity in Private Data Analysis," presented at the Theory of Cryptography Conference. The core definition posits that a randomized mechanism satisfies ε-differential privacy if, for any two neighboring datasets differing in at most one record and any measurable subset of possible outputs, the probability of observing an output in that subset changes by at most a multiplicative factor of e^ε (with ε typically small, e.g., ε ≈ 0.1–1 for strong protection). This indistinguishability criterion ensures that no individual's data can be reliably inferred from query outputs, regardless of auxiliary information available to an adversary, shifting the field from ad-hoc anonymization to provable bounds on the causal influence of single records. A foundational primitive introduced in the work is the Laplace mechanism, which perturbs the true output of a numeric query function f by adding independent noise drawn from a Laplace distribution with scale parameter Δf / ε, where Δf is the global sensitivity (the maximum change in f from altering one record). For instance, to privately release a count over a bounded dataset, noise is calibrated such that the mechanism outputs f(D) + Lap(Δf / ε), preserving accuracy for aggregate statistics while bounding privacy loss. Complementing this, McSherry and Kunal Talwar developed the exponential mechanism in 2007, enabling private selection among discrete alternatives by sampling outputs with probability proportional to exp(ε u(o, D) / (2Δu)), where u is a utility function and Δu its sensitivity, facilitating tasks like private selection or optimization without relying solely on continuous noise. These primitives enabled practical deployments balancing privacy and utility, with early adoption in industry following Dwork and McSherry's affiliation with Microsoft Research.
Microsoft integrated differential privacy into internal data-analysis and telemetry pipelines, demonstrating that large-scale queries on enterprise datasets could be answered privately with modest utility loss. Google applied it in Chrome usage reports starting in 2014 with the RAPPOR system, which randomizes each user's contribution so that aggregate browser metrics can be reported while individual responses remain deniable. In government, the U.S. Census Bureau adopted differential privacy for the 2020 Decennial Census, adding calibrated noise to tabulations under a fixed global privacy-loss budget to protect against reconstruction attacks, though empirical evaluations showed noticeably higher errors in small geographies compared to non-private releases. These implementations underscored the framework's emphasis on quantifiable trade-offs, where privacy budgets (cumulative ε across queries via composition theorems) constrain sequential releases without assuming trusted data custodians.
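The Laplace mechanism described above can be sketched in a few lines; this is a minimal illustration using NumPy's Laplace sampler, with the dataset, seed, and ε chosen for the example:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: np.random.Generator) -> float:
    """Release true_value + Lap(sensitivity / epsilon), which satisfies
    epsilon-differential privacy for a query with the given global
    sensitivity."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query ("how many records satisfy P?") has sensitivity 1,
# since adding or removing one record changes the count by at most 1.
rng = np.random.default_rng(0)
ages = rng.integers(0, 100, size=10_000)        # toy dataset
true_count = int((ages >= 65).sum())
private_count = laplace_mechanism(true_count, sensitivity=1.0,
                                  epsilon=0.5, rng=rng)
# At epsilon = 0.5 the noise has standard deviation sqrt(2)*2 ~ 2.8,
# negligible relative to a count in the thousands -- but the same
# absolute noise would swamp a count over a tiny subgroup.
```

The final comment captures the utility trade-off discussed later in this article: the noise scale depends only on sensitivity and ε, so relative error grows as the true quantity shrinks.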

Work on Algorithmic Fairness

In her seminal 2012 paper "Fairness Through Awareness," co-authored with Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, Cynthia Dwork introduced a formal framework for algorithmic fairness centered on individual-level protections rather than aggregate group statistics. The approach posits that fairness requires treating similar individuals similarly, formalized through a Lipschitz condition: a randomized classifier D satisfies individual fairness if, for a task-specific metric δ on the input space (representing similarity between individuals), the distance between the output distributions D(x) and D(y) is at most L · δ(x, y) for some constant L, ensuring bounded differences in outcomes for bounded differences in inputs. The metric δ must be predefined based on domain knowledge to enforce consistency without relying on observed sensitive features during classification. Dwork's formulation explicitly contrasts individual fairness with group-based metrics, such as demographic parity (equal positive prediction rates across groups) or equalized odds (equal true/false positive rates conditional on outcomes). She argued that group metrics often fail to capture nuanced similarities, potentially enforcing artificial parity that ignores legitimate individual differences, while individual fairness aligns with intuitive notions of non-discrimination by preserving distance-based consistency. Subsequent theoretical work building on her ideas, including impossibility theorems, demonstrates that no non-trivial classifier can simultaneously satisfy individual fairness and certain group fairness criteria unless group distributions are identical, highlighting inherent trade-offs rooted in differing base rates or causal structures between groups. These results underscore that enforcing group parity may violate individual similarity when real-world data reflects disparities, such as varying qualification rates across demographics arising from non-discriminatory factors.
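The Lipschitz condition lends itself to a simple audit: given a task-specific metric δ, flag any pair of individuals whose score gap exceeds L times their distance. The sketch below applies this to deterministic scores with an illustrative Euclidean metric and toy data (the paper's condition is stated over output distributions; this is the deterministic special case):

```python
import itertools
import numpy as np

def lipschitz_violations(scores, features, metric, L=1.0):
    """Return pairs (i, j, gap, allowed) where the individual-fairness
    condition |D(x_i) - D(x_j)| <= L * delta(x_i, x_j) is violated.
    `metric` plays the role of the task-specific delta, which in
    Dwork et al.'s framework must come from domain knowledge."""
    violations = []
    for i, j in itertools.combinations(range(len(scores)), 2):
        gap = abs(scores[i] - scores[j])
        allowed = L * metric(features[i], features[j])
        if gap > allowed:
            violations.append((i, j, float(gap), float(allowed)))
    return violations

# Toy audit: individuals 0 and 1 have near-identical features but
# receive very different scores, which the audit flags.
features = np.array([[0.90, 0.10], [0.88, 0.12], [0.20, 0.70]])
scores = np.array([0.95, 0.40, 0.30])
delta = lambda x, y: float(np.linalg.norm(x - y))
flagged = lipschitz_violations(scores, features, delta)
# flagged contains exactly the (0, 1) pair.
```

The hard part in practice, as the paper emphasizes, is not the audit but justifying the metric δ itself.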
Empirical studies applying Dwork's individual-fairness lens reveal practical challenges: imposing fairness constraints, whether individual or group, typically reduces predictive accuracy by constraining the model's flexibility to fit data patterns. For instance, in credit-scoring benchmarks, enforcing Lipschitz-bounded classifiers or group parity increases error rates relative to unconstrained models, as the constraints prevent exploitation of predictive signals correlated with protected attributes (e.g., income proxies). Similarly, in recidivism prediction on the COMPAS dataset, fairness adjustments under individual metrics can measurably degrade AUC scores, reflecting that outcome disparities in the data often carry predictive signal. Dwork's emphasis on metric design acknowledges these utility costs, advocating transparency in the choice of δ to trade off fairness against performance, rather than presuming disparities indicate injustice.

Recognition and Awards

Major Honors and Elections

In 2008, Dwork was elected to the National Academy of Engineering for contributions to the theory of secure distributed computing. She was also elected a Fellow of the American Academy of Arts and Sciences that year. In 2014, she was elected to the National Academy of Sciences. Dwork received the Gödel Prize in 2017, jointly with Frank McSherry, Kobbi Nissim, and Adam Smith, for their 2006 paper introducing differential privacy. The Donald E. Knuth Prize was awarded to her in 2020 by the Association for Computing Machinery and the IEEE Computer Society. She was awarded the National Medal of Science for foundational contributions to computer science, including secure public-key cryptography and privacy-preserving data analysis; the medal was presented on January 13, 2025.

Debates and Critiques

Limitations of Differential Privacy

Differential privacy does not provide absolute guarantees, as demonstrated by impossibility results showing that achieving strong semantic-style privacy, in which outputs reveal no information about any individual, while maintaining non-trivial utility is fundamentally unattainable. In their work on the difficulties of disclosure prevention, Cynthia Dwork and Moni Naor proved that any statistical database offering positive utility admits some privacy breach for an adversary equipped with suitable auxiliary information, under quite general formalizations of privacy violation and utility. This holds even under broad assumptions, emphasizing that differential privacy represents a pragmatic compromise rather than a perfect shield: privacy leakage scales with the parameter ε but cannot be eliminated without rendering the data useless. A core trade-off arises in utility loss for rare events, outliers, and small subgroups, where noise addition obscures signals critical for analysis. For instance, in epidemiological studies of rare diseases, differential privacy's calibrated noise can suppress detections in low-prevalence populations, as the mechanism prioritizes bounding individual influence over preserving accuracy for sparse signals. Similarly, critiques highlight that differential privacy hides outliers by design; this protects precisely the records at greatest re-identification risk, yet the added noise reduces overall fidelity for downstream tasks such as machine learning. The effect is exacerbated in small subgroups, where privacy protections amplify relative error, limiting applicability to granular analyses unless queries are restricted to larger aggregates. Composability introduces further practical limitations: sequential queries consume the privacy budget, with the guarantees of individual queries adding up, complicating deployment in interactive settings. Advanced composition theorems mitigate the degradation, reducing the growth of the total privacy loss from linear in the number of queries to roughly its square root, but long query sequences remain costly without careful budget accounting.
Critics, including the "Fool's Gold" analysis, argue that over-reliance on differential privacy fosters a false sense of security, as real-world implementations often relax parameters for usability, undermining the mathematical rigor and potentially concealing auxiliary re-identification channels not captured by the definition. Empirical implementations reveal noise-induced inaccuracies, particularly in public data releases. The U.S. Census Bureau's adoption of differential privacy for 2020 decennial data resulted in systematic discrepancies, including undercounts in rural areas and for non-white populations, with the largest relative errors in small blocks compared to un-noised tabulations. These distortions affected redistricting analyses and funding allocation, illustrating how theoretical privacy gains manifest as practical utility deficits in high-stakes, low-margin datasets. Such cases underscore the need for careful parameter tuning and post-processing, yet highlight persistent challenges in balancing protection against verifiable accuracy losses.
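The budget erosion under composition can be made concrete. The sketch below compares the basic composition bound (epsilons add) with the advanced composition bound of Dwork, Rothblum, and Vadhan; the parameter values are illustrative:

```python
import math

def basic_composition(eps: float, k: int) -> float:
    """Total privacy loss after k eps-DP queries: epsilons simply add."""
    return k * eps

def advanced_composition(eps: float, k: int, delta: float) -> float:
    """Advanced composition bound: the k-fold composition of eps-DP
    mechanisms is (eps', k*delta + delta)-DP with the eps' below,
    growing roughly as sqrt(k) rather than linearly in k."""
    return (eps * math.sqrt(2 * k * math.log(1 / delta))
            + k * eps * (math.e ** eps - 1))

eps, k, delta = 0.1, 1000, 1e-6
loose = basic_composition(eps, k)            # 100: a vacuous guarantee
tight = advanced_composition(eps, k, delta)  # ~27: still large, but usable
```

Even the tighter bound grows without limit as queries accumulate, which is why interactive deployments must cap or carefully meter query sequences.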

Controversies in Algorithmic Fairness

Dwork's framework of individual fairness, which posits that similar individuals should receive similar decisions based on a task-specific similarity metric, has highlighted tensions with group-based fairness criteria such as demographic parity or equalized odds, where outcomes must be statistically balanced across protected groups regardless of individual traits. These definitions often conflict, as demonstrated by impossibility theorems showing that no non-trivial classifier can simultaneously achieve calibration (scores carrying the same meaning conditional on group) and balance (equal selection rates or error rates across groups) unless base rates of the outcome are identical across groups, a rare empirical condition. For instance, Kleinberg, Mullainathan, and Raghavan's 2016 analysis proves that satisfying equality of false positive rates and true positive rates alongside calibration requires equal prevalence of the positive outcome across groups, underscoring the absence of a universal fairness solution even in simple settings. Critics argue that group fairness metrics, by mandating outcome parity, overlook causal factors such as pre-existing group differences in qualifications, interests, or behaviors, which can produce disparate impacts without discriminatory treatment. In hiring contexts, for example, enforcing demographic parity may compel algorithms to downrank higher-scoring candidates from overrepresented groups to balance selections, effectively prioritizing equality of outcomes over merit-based equality of opportunity. This approach assumes interchangeability across groups, yet empirical data reveal persistent differences, such as gender differences in occupational preferences, producing natural disparities in applicant pools for certain roles without any discrimination by the algorithm itself.
Such interventions risk systemic inefficiency when they treat observed disparities as presumptive evidence of discrimination while disregarding causal explanations under which groups may differ in average suitability due to non-discriminatory factors. Studies confirm trade-offs where fairness constraints degrade overall utility and accuracy; for instance, causal analyses show that imposing group fairness reduces decision quality by distorting predictions away from the true underlying distributions, with simulations quantifying substantially lower true positive rates in imbalanced settings. In algorithmic lending and hiring, relaxing group-parity constraints in favor of predictive accuracy or equal treatment preserves higher profitability and hire quality in such models, as demographic compliance sacrifices alignment with outcomes. Proponents of alternative views, emphasizing accuracy and equality of opportunity, contend that over-reliance on group metrics, often driven by regulatory pressure, imposes costs at the expense of performance and can exacerbate inequalities by selecting less capable candidates, as critiqued in analyses of real-world systems such as resume screeners. These debates, influenced by Dwork's foundational distinctions, reveal that no single metric resolves all tensions without empirical trade-offs, favoring approaches grounded in verifiable merit over aggregate enforced equity.
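The arithmetic behind these impossibility results is short. The sketch below uses the closely related positive-predictive-value formulation: when two groups share identical error rates (so equalized odds holds) but differ in base rates, a positive prediction necessarily means different things in each group, so calibration fails:

```python
def ppv(tpr: float, fpr: float, base_rate: float) -> float:
    """Positive predictive value implied by a classifier's true/false
    positive rates and the group's prevalence of the outcome."""
    true_pos = tpr * base_rate
    false_pos = fpr * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Identical error rates in both groups (equalized odds satisfied) ...
tpr, fpr = 0.8, 0.2
# ... but different base rates of the true outcome:
ppv_a = ppv(tpr, fpr, base_rate=0.5)   # 0.80
ppv_b = ppv(tpr, fpr, base_rate=0.2)   # 0.50
# A positive prediction is right 80% of the time in group A but only
# 50% in group B: calibration and equalized odds cannot both hold
# unless the base rates coincide.
```

Plugging in equal base rates makes the two PPVs agree, which is exactly the degenerate condition identified by the impossibility theorems.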

Legacy and Impact

Influence on Technology and Policy

Apple incorporated differential privacy into iOS 10, released on September 13, 2016, applying it to features such as QuickType typing suggestions and emoji usage analytics to enable aggregate insights while bounding the risk of individual data inference. This adoption marked an early large-scale deployment of the framework Dwork co-developed in 2006, influencing subsequent privacy-engineering practice by demonstrating feasible trade-offs between data utility and protection in consumer devices. Google similarly integrated differentially private mechanisms into elements of its Privacy Sandbox initiative, launched in 2020 as a cookie-deprecation alternative for web advertising, aiming to aggregate user signals without exposing personal identifiers; though the broader initiative faced adoption challenges and partial phase-out by October 2025, its components underscored efforts to operationalize privacy guarantees in browser ecosystems. In the public sector, the U.S. Census Bureau implemented differential privacy as the core of its 2020 Disclosure Avoidance System, releasing redistricting data on August 12, 2021, with controlled noise addition to safeguard respondent confidentiality amid rising re-identification threats from data linkage. This shift, informed by Dwork's foundational work, prioritized formal privacy over traditional suppression methods but drew empirical critiques for introducing systematic errors, particularly undercounting in small geographic units and for minority populations, with studies reporting substantial distortions in population counts for certain locales that affected analyses under the Voting Rights Act. Such utility losses highlighted the tension between stringent privacy enforcement and reliable public statistics, prompting ongoing refinements such as adjusted privacy-loss parameters in post-2020 evaluations.
Dwork's contributions to algorithmic fairness definitions have shaped auditing protocols in sectors like lending and hiring, influencing frameworks such as those in the EU AI Act of 2024, which mandate bias assessments but have sparked debates over compliance costs for high-risk systems. These tools promote proactive fairness checks yet risk overreach by imposing vague metrics that may deter innovation, as evidenced by reduced AI deployments in regulated domains driven by litigation fears rather than inherent technical flaws. Overall, differential privacy has driven a paradigm shift toward privacy-by-design in data systems, fostering verifiable protections in both technology products and policy mandates, balanced against documented reductions in analytical precision that can undermine downstream applications like epidemiological modeling or equitable resource allocation.

Selected Publications

Dwork's early work on distributed systems includes the 1988 paper "Consensus in the Presence of Partial Synchrony," co-authored with Nancy Lynch and Larry Stockmeyer and published in the Journal of the ACM; it earned the Edsger W. Dijkstra Prize in 2007 for its enduring impact on fault-tolerant computing. A pivotal contribution to differential privacy is the 2006 paper "Calibrating Noise to Sensitivity in Private Data Analysis," co-authored with Frank McSherry, Kobbi Nissim, and Adam Smith and presented at the Theory of Cryptography Conference; it received the TCC Test-of-Time Award in 2016. In the domain of algorithmic fairness, Dwork co-authored "Fairness Through Awareness" in 2012 with Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, published in the proceedings of the 3rd Innovations in Theoretical Computer Science Conference. All three papers rank among her most cited works.

References

  1. [1]
    A Brief Intellectual Biography - Cynthia Dwork - Harvard University
    Dwork joined Harvard after more than thirty years in industrial research at IBM and Microsoft. Some of her earliest work established the pillars on which every ...
  2. [2]
    Cynthia Dwork - ACM Awards
    The theory of differential privacy gives a conceptually new, simple, and mathematically rigorous definition of privacy.
  3. [3]
    Cynthia Dwork | Cynthia Dwork
    Cynthia Dwork is Gordon McKay Professor of Computer Science at the Harvard University John A. Paulson School of Engineering and Applied Sciences.
  4. [4]
    Cynthia Dwork - National Science and Technology Medals Foundation
    Differential Privacy is widely deployed in industry and formed the backbone of the Disclosure Avoidance System for the 2020 US Decennial Census. Dwork joined ...
  5. [5]
    Pioneer of modern data privacy Cynthia Dwork wins National Medal ...
    Jan 8, 2025 · Dwork is known for placing privacy-preserving data analysis on a mathematically rigorous foundation. Differential privacy, a strong privacy ...
  6. [6]
    [PDF] 2020 Knuth Prize is awarded to Cynthia Dwork
    She is widely known for the introduction and development of differential privacy, and for her work on nonmalleability, lattice-based encryption, concurrent ...
  7. [7]
    Princeton Engineering - Cynthia Dwork '79 will join faculty at Harvard
    Apr 29, 2016 · Cynthia Dwork, currently a distinguished scientist at Microsoft Research in Silicon Valley, will join the faculty at Harvard in January 2017.
  8. [8]
    Who is Cynthia Dwork? - Bit2Me Academy
    Ethe name of Cynthia dwork is another of the big names in the world of computing and cryptography. Dwork was born in the year 1958 in the United States and ...
  9. [9]
    Undergraduate alumna honored with National Medal of Science
    Jan 14, 2025 · Undergraduate alumna Cynthia Dwork received the National Medal of Science for visionary contributions to the field of computer science and secure public key ...
  10. [10]
    Cynthia Dwork - CCC - Computing Research Association
    Dwork received her B.S.E. from Princeton University in 1979, graduating Cum Laude, and receiving the Charles Ira Young Award for Excellence in Independent ...
  11. [11]
    Visiting lecturer to explore data privacy protection - Cornell Chronicle
    Apr 28, 2025 · A cornerstone of Dwork's work has been the development of differential privacy – a mathematical technique that has influenced data privacy ...
  12. [12]
    [PDF] Cynthia Dwork | Harvard University
    Cynthia Dwork. Gordon McKay Professor of Computer Science, Harvard University ... August, 1985 – June, 2000: IBM Almaden Research Center, Research Staff Member.
  13. [13]
    Leading Silicon Valley computer scientist to join Harvard faculty
    Feb 19, 2016 · Dwork received her undergraduate degree from Princeton University and Ph.D. from Cornell University. After a two-year post-doctoral ...
  14. [14]
    Cynthia Dwork | Radcliffe Institute for Advanced Study at Harvard ...
    Cynthia Dwork is a Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences.Missing: biography | Show results with:biography
  15. [15]
    Consensus in the presence of partial synchrony - ACM Digital Library
    Cynthia Dwork. Cynthia Dwork. IBM Almaden Research Center, San Jose, CA; and Massachusetts Institute of Technology, Cambridge. View Profile. , Nancy Lynch.
  16. [16]
    [PDF] Consensus in the Presence of Partial Synchrony - Research
    CYNTHIA. DWORK AND NANCY LYNCH .Massachusetts Institute of Technology, Cambridge, Massachusetts. AND. LARRY STOCKMEYER. IBM Almaden Research Center, San Jose ...
  17. [17]
    [PDF] Pricing via Processing or Combatting Junk Mail
    Cynthia Dwork. Moni Naor y. Abstract. We present a computational technique for combatting junk mail, in particular, and controlling access to a shared resource ...Missing: collaboration | Show results with:collaboration
  18. [18]
    On the Difficulties of Disclosure Prevention in Statistical Databases ...
    Sep 1, 2010 · Dwork, Cynthia, and Moni Naor. 2010. “On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential ...Missing: collaboration | Show results with:collaboration
  19. [19]
    Differential privacy under continual observation - ACM Digital Library
    Differential privacy under continual observation ; Cynthia Dwork · Microsoft Research, Mountain View, CA, USA ; Moni Naor · Weizmann Institute of Science, Rehovot, ...Missing: collaboration | Show results with:collaboration
  20. [20]
    Spooky Interaction and its Discontents: Compilers for Succinct Two ...
    Mar 17, 2016 · Cynthia Dwork, Moni Naor, and Guy N. Rothblum. Abstract. We are interested in constructing short two-message arguments for various languages ...Missing: collaboration | Show results with:collaboration
  21. [21]
    Fault Tolerance in Networks of Bounded Degree
    We define a new paradigm for distributed computing, almost-everywhere agreement, in which we require only that almost all correct processors reach consensus.
  22. [22]
    "Fault Tolerance in Networks of Bounded Degree" by Cynthia Dwork ...
    We define a new paradigm for distributed computing, almost-everywhere agreement, in which we require only that almost all correct processors reach consensus.Missing: 1980s | Show results with:1980s
  23. [23]
    Non-malleable cryptography | Proceedings of the twenty-third ...
    Non-malleable cryptography. Authors: Danny Dolev, Danny Dolev, IBM, Almaden Research Center, View Profile, Cynthia Dwork, Cynthia Dwork, IBM, Almaden Research ...
  24. [24]
    Non-Malleable Cryptography - Abstract
    Danny Dolev, Cynthia Dwork and Moni Naor. Abstract: The notion of non-malleable cryptography, an extension of semantically secure cryptography, is defined.
  25. [25]
    Nonmalleable Cryptography | SIAM Journal on Computing
    Universally composable security: a new paradigm for cryptographic protocols ... Recommended Content. Nonmalleable Cryptography · Danny Dolev; ,; Cynthia Dwork ...<|control11|><|separator|>
  26. [26]
    Proof of Work (PoW) - Applied Mathematics Consulting
    Nov 11, 2018 · The idea of proof of work (PoW) was first explained in a paper by Cynthia Dwork and Moni Naor [1], though the term “proof of work” came later ...
  27. [27]
    [PDF] Zaps and Their Applications
    Abstract. A zap is a two-round, public coin witness-indistinguishable protocol in which the first round, consisting of a message from the verifier to the ...
  28. [28]
    [PDF] Derandomization in Cryptography - Cryptology ePrint Archive
    Oct 5, 2005 · Abstract. We give two applications of Nisan–Wigderson-type (“non-cryptographic”) pseudorandom generators in cryptography.
  29. [29]
    [PDF] Derandomization in Cryptography
    Nov 13, 2006 · A BMY-type generator is the standard kind of pseudorandom generator used in cryptography. ... [DN00]. Cynthia Dwork and Moni Naor. Zaps and ...
  30. [30]
    Calibrating Noise to Sensitivity in Private Data Analysis - Microsoft
    Mar 1, 2006 · Calibrating Noise to Sensitivity in Private Data Analysis. Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith. Third Theory of ...
  31. [31]
    [PDF] Calibrating Noise to Sensitivity in Private Data Analysis
    We show that for any non-interactive mechanism San satisfying our definition of privacy, there exist low-sensitivity functions f(x) which cannot be approximated ...
  32. [32]
    [PDF] The Algorithmic Foundations of Differential Privacy - UPenn CIS
    The definition of differential privacy is due to Dwork et al. [23]; the ... In Cynthia Dwork, editor, Symposium on Theory of Computing, pages 609–618 ...
  33. [33]
    A list of real-world uses of differential privacy - Ted is writing things
    Oct 1, 2021 · Google Trends uses differential privacy to select which queries to proactively show on the website, e.g. as trending or related queries. It uses ...
  34. [34]
    Understanding Differential Privacy - U.S. Census Bureau
    Differential privacy, first developed in 2006, is a framework for measuring the precise disclosure risk associated with each release of confidential data. ...
  35. [35]
    [1104.3913] Fairness Through Awareness - arXiv
    Apr 20, 2011 · View a PDF of the paper titled Fairness Through Awareness, by Cynthia Dwork and 4 other authors. View PDF. Abstract:We study fairness in ...
  36. [36]
    Fairness through awareness | Proceedings of the 3rd Innovations in ...
    Fairness through awareness. Authors: Cynthia Dwork, Microsoft ... This paper applies two argumentation schemes, argument from fairness and ...
  37. [37]
    Cynthia Dwork | American Academy of Arts and Sciences
    Sep 23, 2025 · Principal Researcher. Made fundamental contributions to several areas of computer science since the early eighties. First a leader in distributed computing.
  38. [38]
    ACM SIGACT Announces 2017 Awards
    Jun 19, 2017 · The 2017 Gödel Prize is awarded to Cynthia Dwork, Frank McSherry, Kobbi Nissim and Adam Smith for their work on differential privacy in their ...
  39. [39]
    Cynthia Dwork | NSF - National Science Foundation
    Cynthia Dwork's innovative research, analysis, and discoveries on differential privacy, fairness in algorithms, and statistical validity in adaptive data ...
  40. [40]
    On the Difficulties of Disclosure Prevention in Statistical Databases ...
    Jan 1, 2010 · Our results hold under very general conditions regarding the database, the notion of privacy violation, and the notion of utility. Contrary ...
  41. [41]
    [PDF] On the Difficulties of Disclosure Prevention in Statistical Databases ...
    Aug 31, 2008 · However, in this work privacy is paramount: we will first define our privacy goals and then explore what utility can be achieved given that the ...
  42. [42]
    [PDF] Strengths and Limitations of Differential Privacy
    Jan 30, 2023 · Differential privacy achieves moderate privacy, but cannot solely meet social demands, especially for self-control of personal information.
  43. [43]
    [1507.06763] Differentially Private Analysis of Outliers - arXiv
    Jul 24, 2015 · This paper investigates differentially private analysis of distance-based outliers. The problem of outlier detection is to find a small number of instances ...
  44. [44]
    [PDF] The Strengths, Weaknesses and Promise of Differential Privacy as a ...
    A common suggestion is to restrict queries which ask about individuals or small groups and limit only to larger subgroups.
  45. [45]
    [PDF] The Composition Theorem for Differential Privacy
    It is known that the privacy degrades under composition by at most the 'sum' of the differential privacy parameters of each access.
  46. [46]
    [PDF] Fool's Gold: An Illustrated Critique of Differential Privacy
    Fool's Gold: An Illustrated Critique of Differential Privacy. Jane Bambauer ... In other words, a useful version of differential privacy is not differential ...
  47. [47]
    The 2020 US Census Differential Privacy Method Introduces ...
    The differential privacy method likely introduced significant discrepancies for rural and non-white populations into 2020 census tabulations.
  48. [48]
    The use of differential privacy for census data and its impact on ...
    Oct 6, 2021 · We study the impact of the US Census Bureau's latest disclosure avoidance system (DAS) on a major application of census statistics, the redrawing of electoral ...
  49. [49]
    Algorithmic Fairness - Stanford Encyclopedia of Philosophy
    Jul 30, 2025 · Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, 2012, “Fairness through Awareness”, in Proceedings of the ...
  50. [50]
    [PDF] "Un"Fair Machine Learning Algorithms
    However, in many cases, ensuring equal treatment leads to disparate impact particularly when there are differences among groups based on demographic classes.
  51. [51]
    [PDF] Differential Privacy - Apple
    The data-gathering features that use differential privacy are linked to the user setting for Device Analytics. Users are presented with the option of ...
  52. [52]
    Learning with Privacy at Scale - Apple Machine Learning Research
    Dec 6, 2017 · Differential privacy [2] provides a mathematically rigorous definition of privacy and is one of the strongest guarantees of privacy available.
  53. [53]
    Differential privacy semantics for On-Device Personalization
    Mar 12, 2025 · This document summarizes the privacy approach for On-Device Personalization (ODP) specifically in the context of differential privacy.
  54. [54]
    [PDF] Evaluating the Impact of Differential Privacy Using the Census ...
    This new approach, while deemed to be the best way to protect respondents' identity and privacy, will result in a significant trade-off in data accuracy, ...
  55. [55]
    How to Force Our Machines to Play Fair | Quanta Magazine
    Nov 23, 2016 · The computer scientist Cynthia Dwork takes abstract concepts like privacy and fairness and adapts them into machine code for the algorithmic age.
  56. [56]
    Algorithmic fairness: challenges to building an effective regulatory ...
    Aug 28, 2025 · Several influential definitions of algorithmic fairness have been developed focusing on the error-rate aspect of a predictive algorithm's output ...
  57. [57]
    Advancing Differential Privacy: Where We Are Now and Future ...
    Feb 1, 2024 · In this article, we present a detailed review of current practices and state-of-the-art methodologies in the field of differential privacy (DP),
  58. [58]
    Calibrating Noise to Sensitivity in Private Data Analysis - SpringerLink
    Cynthia Dwork & Frank McSherry. Ben-Gurion University, Israel. Kobbi Nissim ... Calibrating Noise to Sensitivity in Private Data Analysis. In: Halevi, S ...
  59. [59]
    TCC Test of Time Award - IACR
    Calibrating Noise to Sensitivity in Private Data Analysis by Cynthia Dwork (Microsoft Research), Frank McSherry, Kobbi Nissim (Ben-Gurion University), and Adam ...