
Deborah Raji


Inioluwa Deborah Raji is a Nigerian-born computer scientist pursuing a PhD in electrical engineering and computer sciences at the University of California, Berkeley, where her research centers on enhancing accountability in artificial intelligence (AI) systems through algorithmic auditing and evaluation practices.
Raji's empirical investigations into commercial facial recognition technologies have revealed systematic performance disparities across demographic groups, particularly higher error rates for individuals with darker skin tones and for women, prompting responses from industry leaders, including IBM's decision to stop offering general-purpose facial recognition software, Microsoft's moratorium on sales to U.S. police departments, and Amazon's temporary halt on Rekognition sales to police.
In collaboration with researchers including Joy Buolamwini and Timnit Gebru, she co-authored studies such as "Actionable Auditing," which analyzed the effects of public disclosure on biased AI performance, and "Saving Face," which outlined ethical considerations in auditing facial recognition systems, contributing to frameworks for internal audits and third-party oversight in AI development.

Early life and education

Upbringing and influences

Inioluwa Deborah Raji was born in Port Harcourt, Nigeria. At the age of four, her family immigrated to Mississauga, Ontario, Canada, fleeing economic instability in their home country. The family later relocated to Ottawa, where Raji spent much of her childhood. Raji grew up in a large family with many siblings close in age. Family time often involved gathering around the shared computer on lazy afternoons to play games and browse the internet, fostering her early curiosity about technology. Specific mentors or intellectual influences from this period remain undocumented in available accounts.

Academic background

Inioluwa Deborah Raji earned a Bachelor of Applied Science in Engineering Science from the University of Toronto, graduating in 2019. Her undergraduate studies emphasized robotics engineering principles, during which she initiated independent research on machine learning model robustness, including early work auditing commercial facial recognition systems. Subsequently, Raji enrolled in the PhD program in computer science at the University of California, Berkeley, focusing on algorithmic auditing and accountability mechanisms for AI systems. As of 2025, she remains a doctoral candidate, with her dissertation work building on empirical evaluations of AI governance frameworks. No master's degree is documented in her educational record, indicating a direct progression from bachelor's to doctoral studies.

Professional career

Initial industry roles

Raji began her industry experience as a machine learning engineering intern at Clarifai, a New York-based computer vision startup, in 2017. There, she contributed to building models designed to help clients detect and flag inappropriate image content, drawing on her robotics engineering training from the University of Toronto. This role exposed her to real-world applications of machine learning systems, including early encounters with performance disparities in facial analysis technologies across demographic groups, which later informed her research on algorithmic bias. Following her Clarifai internship, Raji joined Google AI's research mentorship cohort, a program supporting students from underrepresented backgrounds in computing research. As a mentee with Google's Ethical AI team around 2018–2019, she collaborated on initiatives to operationalize ethical AI practices, including the development of frameworks to assess model behaviors and mitigate risks like bias amplification. Her contributions included co-authoring work on accountability mechanisms, such as proposals for systematic internal audits complementary to external evaluations, emphasizing measurable metrics for AI system transparency. These early roles at Clarifai and Google marked her transition from academic projects to practical industry applications of machine learning, where she began advocating for rigorous testing protocols in deployed systems.

Research and academic positions

Raji earned a bachelor's degree in engineering science from the University of Toronto in 2019 before advancing to graduate research roles. She subsequently served as a research fellow at the Partnership on AI, where she contributed to studies on AI accountability mechanisms. Following this, she held a fellowship at the AI Now Institute, focusing on empirical audits of commercial AI systems. In 2021, Raji began her PhD at the University of California, Berkeley's Department of Electrical Engineering and Computer Sciences, with research centered on algorithmic auditing, evaluation methodologies, and institutional frameworks for AI accountability. During her doctoral studies, she was affiliated with the Computational Healthcare for Equity and iNclusion (CHEN) Lab at UC Berkeley and UC San Francisco, contributing to projects on equitable applications of machine learning in healthcare. Concurrently, she served as a Mozilla Fellow and later Senior Trustworthy AI Fellow at the Mozilla Foundation from approximately 2021 to 2023, developing tools and protocols for external oversight of AI system performance in real-world deployments. As of 2025, Raji continues as a doctoral candidate at UC Berkeley while holding an Academic Fellowship at the Leadership Conference on Civil and Human Rights, where her work emphasizes civil rights implications of automated decision systems, including submissions on biometric data policies and AI governance. This role builds on her prior fellowships by integrating empirical research with advocacy for regulatory standards in AI deployment.

Research contributions

Studies on facial recognition bias

Raji co-authored the 2018 study "Gender Shades," which audited three commercial facial analysis systems—IBM's, Microsoft's, and Face++'s—for gender classification accuracy using the Pilot Parliaments Benchmark, comprising 1,270 images of African and European parliamentarians balanced by perceived gender and skin type. The analysis revealed intersectional accuracy disparities, with error rates for darker-skinned females reaching 34.7% in IBM's system compared to 0.8% for lighter-skinned males, and similar patterns in the others, attributing differences to underrepresented demographics in training data. In a 2019 follow-up, "Actionable Auditing," Raji and Buolamwini re-evaluated the same vendors post-disclosure of the Gender Shades results, documenting targeted improvements: the audited vendors substantially reduced their error rates on darker-skinned females (IBM's had been 34.7% and Microsoft's roughly 21%), though the degree of improvement varied across companies. The study outlined a structured auditing procedure emphasizing coordinated disclosure and public naming to incentivize vendor accountability, demonstrating causal links between exposure of biases and algorithmic refinements. Raji extended this auditing to Amazon's Rekognition service in another 2019 analysis, finding misclassification rates for the gender of darker-skinned women as high as 31% versus under 1% for lighter-skinned men on the same benchmark, prompting Amazon to impose a moratorium on police use of the tool in 2020 amid scrutiny. These findings aligned with NIST's Face Recognition Vendor Test (FRVT) Part 3 report, which quantified demographic differentials across 189 algorithms, showing false positive rates up to 100 times higher for Asian and African American faces in some one-to-one matching scenarios due to dataset skews. In "Saving Face" (2020), co-authored with Gebru, Mitchell, and Buolamwini, Raji examined ethical hurdles in facial recognition auditing, including access restrictions that risk terms-of-service violations and potential dual use for harmful applications, while advocating for independent, reproducible evaluations to mitigate risks from opaque vendor practices. A 2021 survey, "About Face," co-authored with Genevieve Fried, reviewed over 100 facial datasets totaling 145 million images, exposing chronic underrepresentation—e.g., fewer than 5% of images from non-Western sources in many cases—and inconsistent demographic labeling, which perpetuate evaluation biases in downstream facial recognition systems. Raji's work underscores that facial recognition biases arise primarily from imbalanced training data lacking diverse representations, rather than algorithmic intent, with evidence from repeated audits showing the feasibility of mitigation through dataset augmentation and vendor responsiveness, though persistent gaps remain in high-stakes deployments.
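The disaggregated evaluation at the heart of these audits can be illustrated with a short sketch. The snippet below uses entirely synthetic records—not the Pilot Parliaments Benchmark or any vendor's actual outputs—and shows how error rates are computed per intersectional subgroup (perceived gender × skin type) rather than in aggregate, the methodological move that exposed disparities hidden by overall accuracy figures.

```python
from collections import defaultdict

# Illustrative records: (perceived_gender, skin_type, predicted_gender).
# Synthetic data for demonstration only -- not drawn from the Pilot
# Parliaments Benchmark or any vendor's real predictions.
records = [
    ("female", "darker", "male"),
    ("female", "darker", "female"),
    ("female", "lighter", "female"),
    ("male", "darker", "male"),
    ("male", "lighter", "male"),
    ("female", "darker", "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for gender, skin, predicted in records:
    group = (gender, skin)          # intersectional subgroup key
    totals[group] += 1
    if predicted != gender:
        errors[group] += 1

# Report per-subgroup error rates instead of a single aggregate number.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group[0]:>6} / {group[1]:<7} error rate: {rate:.1%}")
```

On real benchmark data, the same disaggregation surfaces gaps (e.g., darker-skinned females vs. lighter-skinned males) that a single aggregate accuracy score would average away.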

Developments in AI auditing and accountability

Raji co-authored the seminal 2020 paper "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing," which proposes a structured process for organizations to audit AI systems throughout their lifecycle, from design and development to deployment and monitoring. The framework identifies key auditing stages—including pre-release model validation, post-deployment performance tracking, and documentation of decision points—to address gaps where AI systems can evade scrutiny due to opaque engineering practices. This work argues that internal audits must be proactive and integrated into development workflows rather than reactive, emphasizing measurable criteria for fairness, robustness, and transparency to hold developers accountable without relying solely on external oversight. Building on internal mechanisms, Raji has advanced concepts for external and third-party auditing ecosystems. In a 2022 publication, she contributed to outlining designs for "outsider oversight," drawing lessons from financial and environmental auditing to propose independent verification bodies that could certify AI systems' compliance with ethical standards. This approach critiques self-reported audits as insufficient for genuine accountability, advocating for standardized protocols, access to proprietary data under controlled conditions, and incentives for deployers to engage auditors, while cautioning against superficial "audit-washing" where nominal checks substitute for substantive reform. Her analysis highlights institutional barriers, such as limited auditor expertise and conflicts of interest, as persistent hurdles in scaling effective oversight. More recently, in 2024, Raji examined practical limitations in audit tooling through an empirical study of practitioner experiences, identifying gaps in tools' ability to handle real-world complexities like dynamic model updates and interdisciplinary requirements. The work recommends enhancements in tool usability, support for practitioners, and integration of socio-technical factors beyond purely technical metrics, underscoring that current tooling often falls short in enabling comprehensive audits. These contributions collectively position Raji as a proponent of auditing as a bridge to accountability, though she notes in presentations that technical audits alone cannot resolve deeper structural and regulatory deficiencies without complementary enforcement mechanisms.
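As a rough illustration of the audit-trail bookkeeping such a framework calls for, the sketch below models an audit record with named lifecycle stages and flags any stage lacking documented findings. The stage names follow the paper's SMACTR process (scoping, mapping, artifact collection, testing, reflection), but the data structure, method names, and example system name are hypothetical and far simpler than the paper's actual document templates.

```python
from dataclasses import dataclass, field

# Lifecycle stages from the SMACTR internal-audit process; the helper
# class around them is a hypothetical simplification for illustration.
STAGES = ["scoping", "mapping", "artifact_collection", "testing", "reflection"]

@dataclass
class AuditRecord:
    system_name: str
    findings: dict = field(default_factory=dict)

    def log(self, stage: str, note: str) -> None:
        # Every finding must attach to a recognized lifecycle stage.
        if stage not in STAGES:
            raise ValueError(f"unknown audit stage: {stage}")
        self.findings.setdefault(stage, []).append(note)

    def unresolved_stages(self) -> list:
        # Stages with no documented findings are gaps in the audit trail.
        return [s for s in STAGES if s not in self.findings]

audit = AuditRecord("gender-classifier-v2")  # hypothetical system
audit.log("scoping", "intended use: attribute tagging, not identification")
audit.log("testing", "disaggregated error rates by gender x skin type")
print(audit.unresolved_stages())  # -> stages still needing documentation
```

The design point this mirrors is that the audit artifact, not engineers' informal recollection, becomes the unit of accountability: an empty stage is itself a reportable finding.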

Recognition and influence

Awards and accolades

In 2019, Raji co-authored the paper "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products," which received the Best Student Paper Award at the Association for the Advancement of Artificial Intelligence (AAAI) Conference on AI, Ethics, and Society (AIES). In 2020, she was selected as one of MIT Technology Review's 35 Innovators Under 35 in the Visionaries category, recognized for developing auditing techniques to expose racial and gender biases in facial recognition algorithms used by law enforcement and commercial vendors. That same year, Raji, along with Joy Buolamwini and Timnit Gebru, received the Electronic Frontier Foundation (EFF) Pioneer Award for pioneering empirical methods to demonstrate demographic disparities in automated facial analysis technologies, prompting industry-wide reevaluations of deployment practices. Raji was named to Forbes' 30 Under 30 list in the Enterprise Technology category in 2021, highlighting her contributions to algorithmic accountability through scalable auditing frameworks that enable external verification of AI system performance across protected demographic groups. In 2023, she was included in TIME magazine's inaugural list of the 100 Most Influential People in AI, in the Thinkers category, for advancing methods to audit proprietary models and advocate for transparency in AI evaluations. In 2024, Raji received the Tech for Humanity Prize, awarded by New America as part of its Tech for Humanity initiative, for her research and advocacy addressing racial and gender biases in data systems and AI deployment. She was also honored with Mozilla's Rise 25 award, which recognizes emerging leaders driving responsible AI innovation.

Policy impact and public advocacy

Raji has advocated for mandatory third-party auditing mechanisms to enforce accountability in AI deployment, proposing frameworks that enable external oversight of systems to detect biases and failures before widespread harm. In a 2021 policy proposal at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) fall conference, she emphasized granting auditors legal access to data and models, arguing this is essential for verifying claims of fairness and equity amid industry opacity. Her work as a Mozilla Fellow since 2020 has focused on developing tools for algorithmic audits to hold vendors responsible for system impacts, influencing discussions on regulatory standards for AI transparency. Her research on facial recognition inaccuracies contributed to corporate policy shifts, including IBM's 2020 decision to cease general-purpose facial recognition sales, Microsoft's restrictions on police use of its technology, and Amazon's one-year moratorium on Rekognition for law enforcement, announced on June 10, 2020. These actions followed heightened scrutiny from her audits revealing demographic disparities, which amplified calls for bans amid the 2020 protests over racial injustice. Raji noted that such corporate moves shape public and legislative debates, though she critiqued them as insufficient without broader federal guardrails. In federal forums, Raji participated in Senate Majority Leader Chuck Schumer's 2023 AI Insight Forum, countering tech executives' optimistic projections by stressing empirical evaluation of current systems' risks over speculative futures to inform regulation. Her 2023 Atlantic essay reinforced this, urging policymakers to prioritize auditing present harms like bias in deployed systems rather than existential threats. At the state level, she contributed to California's AI policy reports, highlighting corporate resistance to oversight measures during 2025 legislative deliberations. These efforts underscore her push for evidence-based reforms, including legal mandates for auditing, to address systemic inaccuracies in commercial AI.

Criticisms and debates

Methodological challenges in bias research

Research identifying bias in facial recognition systems, including studies co-authored by Deborah Raji such as the Gender Shades project, has encountered methodological critiques related to dataset selection and image quality control. Critics argue that evaluations often rely on curated benchmarks featuring high-resolution celebrity or athlete images, which minimize real-world variables like poor lighting, occlusion, or low resolution that could confound bias attributions; for instance, the Gender Shades audit used such polished photos, potentially understating algorithmic robustness under adverse conditions and overstating demographic disparities as inherent flaws rather than quality artifacts. Similar issues arise in failing to disentangle demographic effects from correlated factors, as NIST's Face Recognition Vendor Test (FRVT) reports from 2019 onward demonstrate that when image quality is standardized, false positive rates across demographics converge, with some algorithms exhibiting higher accuracy for darker-skinned faces at operational thresholds, challenging claims of systemic racial bias made without such controls. Another challenge involves metric choice and operational context, where studies emphasize raw error rates or demographic differentials in classification tasks without specifying thresholds or distinguishing verification (1:1 matching) from identification (1:N search) scenarios prevalent in deployment. Amazon's 2019 response to a Raji co-authored audit of Rekognition highlighted that the tested facial analysis feature—intended for low-stakes applications—was misapplied to high-accuracy matching benchmarks, yielding misleading estimates; their internal testing showed error rates below 1% overall when used as designed, contrasting with the audit's reported error rates of up to 31% for darker-skinned females. This underscores a broader issue: fairness metrics like equalized odds or demographic parity often trade off against overall utility, as enforcing them can degrade system performance without addressing root causes like training data imbalances, per analyses in security industry reviews citing FBI and NIST validations where controlled conditions yield low false alarm rates across groups. Reproducibility and standardization further complicate bias research, as varying definitions of "bias" (e.g., statistical disparity vs. causal discrimination) and lack of uniform protocols lead to divergent results; for example, while early audits like Gender Shades reported stark intersectional gaps, subsequent vendor improvements reduced them by over 90% in re-audits, yet persistent methodological variances hinder causal attribution to algorithms versus evolving datasets or preprocessing. These challenges highlight the need for causal realism in isolating algorithmic decisions from proxy variables, as emphasized in critiques noting that unadjusted evaluations risk conflating correlation with causation, potentially inflating policy-driven moratoriums on technologies that, under rigorous testing, exhibit minimal deployment risks.
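The threshold-dependence point is easy to demonstrate numerically. The sketch below draws synthetic impostor-pair similarity scores for two hypothetical demographic groups and shows how the measured false positive rate, and hence the apparent disparity ratio, shifts with the operating threshold; the score distributions are invented for illustration and do not reflect any real system's calibration.

```python
import random

random.seed(0)

def impostor_scores(mu, n=10_000):
    # Similarity scores for non-matching ("impostor") face pairs.
    return [random.gauss(mu, 0.1) for _ in range(n)]

# Two hypothetical groups with slightly different score distributions --
# invented parameters, not calibrated to any deployed system.
groups = {"group_a": impostor_scores(0.30), "group_b": impostor_scores(0.35)}

def false_positive_rate(scores, threshold):
    # An impostor pair scoring at or above the threshold is a false match.
    return sum(s >= threshold for s in scores) / len(scores)

for threshold in (0.45, 0.50, 0.55):
    rates = {g: false_positive_rate(s, threshold) for g, s in groups.items()}
    ratio = rates["group_b"] / max(rates["group_a"], 1e-9)
    print(f"threshold={threshold:.2f}: "
          + ", ".join(f"{g} FPR={r:.4f}" for g, r in rates.items())
          + f", disparity ratio={ratio:.1f}x")
```

Because the disparity ratio grows as the threshold tightens, audits that omit the operating threshold can report very different disparities for the same underlying system, which is precisely the reporting gap these critiques target.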

Broader implications for AI development and regulation

Raji's research emphasizes the integration of algorithmic auditing into AI development pipelines as a core practice to address systemic risks like performance disparities across demographics, necessitating end-to-end frameworks that span problem scoping, data curation, model training, evaluation, and post-deployment monitoring. This approach implies a shift from ad-hoc fairness interventions to institutionalized processes, such as mandatory documentation via model reporting cards and iterative evaluation, to prevent the deployment of unreliable systems in critical domains. Her empirical demonstrations of commercial AI failures, including error rates as high as 34.7% for darker-skinned females in facial analysis benchmarks, illustrate the causal link between inadequate auditing and real-world harms, prompting developers to prioritize diverse testing datasets and robustness against adversarial inputs from the outset. In terms of regulation, Raji's proposals for third-party audit ecosystems advocate for policies enabling external verifiers to access models and data under controlled conditions, addressing the limitations of self-regulation where companies may underreport flaws to avoid liability. This has implications for establishing standardized auditing protocols, potentially through agencies like NIST, to enforce transparency in high-risk applications such as facial recognition, while highlighting institutional barriers like data access restrictions and the risk of "audit-washing"—superficial compliance without substantive change. Her participation in U.S. congressional forums in 2023 provided evidence-based critiques of industry overpromises, influencing debates toward feasible guardrails that balance innovation with verifiable accountability, though she has noted corporate lobbying's role in diluting ambitious state-level bills, as seen in California's repeated delays on AI oversight legislation. Overall, these contributions imply that effective AI governance requires hybrid mechanisms—internal audits for efficiency and external ones for credibility—calibrated to empirical risk assessments rather than blanket prohibitions, fostering causal accountability where developers bear responsibility for foreseeable downstream effects without stifling technical progress. Challenges persist in scaling audits globally, particularly for resource-constrained entities, and in defining enforceable metrics for abstract harms like societal bias amplification.

    Jun 9, 2022 · Before 2019 and the intervention of the Algorith- mic Justice League's Gender Shades audit [20], the U.S. National ... of facial recognition ...<|separator|>