
AI for Good

AI for Good is a multi-stakeholder platform established in 2017 by the International Telecommunication Union (ITU), a United Nations specialized agency for information and communication technologies, to identify and promote practical applications of artificial intelligence (AI) that advance the UN Sustainable Development Goals (SDGs), such as improving healthcare access, expanding connectivity, and reducing inequalities. The initiative organizes annual global summits in Geneva, Switzerland, starting with the inaugural event in 2017, which convene governments, tech companies, academia, and civil society to showcase prototypes and foster collaborations on societal challenges. Its Innovation Factory accelerates startups by providing pitching opportunities and scaling support for solutions such as AI-driven early warning systems for disasters and connectivity programs for underserved schools. UN agencies, including over 40 partners, have deployed such technologies to address issues like social and economic disparities, with impact reports highlighting applications across health, education, and environmental domains. Despite these efforts, evidence of widespread, scalable impact remains limited, as many projects prioritize demonstration over rigorous, long-term evaluation, raising questions about actual versus promised outcomes. Critics argue that "AI for Good" initiatives often overlook AI's inherent risks, including algorithmic biases that perpetuate inequalities, privacy violations, and deployment failures due to poor data quality or overreliance on unproven models, potentially undermining the very social goods they aim to serve. These concerns underscore a tension between optimistic promotion, frequently driven by corporate and institutional partnerships, and the causal realities of AI systems, which can amplify harms if deployment prioritizes speed over verifiable efficacy.

Definition and Origins

Conceptual Foundations

"AI for Good" denotes the strategic deployment of techniques, including algorithms and , to address verifiable global challenges such as outbreaks, resource , and inefficient in developing regions. This framework prioritizes empirical validation through data-driven , enabling optimizations like enhanced forecasting of trajectories or targeted interventions in , which demonstrably improve human outcomes via measurable metrics such as reduced mortality rates or increased yields. Unlike broader discourses that frequently advocate precautionary constraints to avert hypothetical risks, AI for Good emphasizes utilitarian deployment for direct societal utility, evaluating success by concrete impacts rather than adherence to normative principles. Precursors to formalized AI for Good efforts trace to pre-2017 applications where computational models simulated real-world dynamics to inform policy. For instance, during the 2014-2016 West African Ebola outbreak, agent-based modeling integrated epidemiological data to project transmission patterns and assess intervention efficacy, aiding containment strategies in and by December 2014. Similarly, U.S. Centers for Disease models in September 2014 quantified growth under varying control scenarios, providing early of AI's capacity for causal prediction in humanitarian contexts without reliance on later institutional branding. These instances underscore a foundational shift from theoretical AI toward pragmatic tools grounded in observable mechanisms, predating structured initiatives and highlighting potential for scalable benefits when unencumbered by profit-obscuring narratives in philanthropic tech deployments.

ITU Initiative Launch (2017)

The International Telecommunication Union (ITU), a specialized agency of the United Nations, established the AI for Good platform in 2017 as a multi-stakeholder initiative to identify and promote artificial intelligence applications aligned with the UN Sustainable Development Goals (SDGs). The platform was announced on March 23, 2017, positioning ITU as a convener for global dialogue on harnessing AI to address challenges in areas such as healthcare and sustainable development, while emphasizing ethical deployment within an international regulatory framework. This launch occurred amid heightened global attention to AI capabilities following breakthroughs like DeepMind's AlphaGo victory over world Go champion Lee Sedol in March 2016, which demonstrated AI's potential for complex decision-making but also raised concerns about uncontrolled technological acceleration without oversight. ITU's motivations centered on directing AI innovation toward "global good" outcomes, framed through the lens of UN priorities, to mitigate risks of uneven benefits or misuse in a landscape dominated by private-sector advancements. Critics have noted that such bureaucratic efforts, while aiming for inclusivity, often reflect institutional incentives to centralize influence over emerging technologies rather than purely market-driven solutions. The inaugural AI for Good Global Summit, held June 7-9, 2017, in Geneva, Switzerland, and co-organized with the XPRIZE Foundation, marked the platform's operational debut, convening over 500 onsite participants including AI researchers, policymakers, and industry representatives to explore practical applications. Sessions focused on breakthrough strategies for accelerating progress on the SDGs, establishing a model of collaborative forums hosted at ITU headquarters to foster standards and partnerships under UN auspices. By the following year's summit in May 2018, participation had expanded significantly to over 5,000 attendees, underscoring early momentum in scaling the initiative's reach.

Organizational Structure

ITU Platform and Governance

The AI for Good platform is governed by the International Telecommunication Union (ITU), a specialized agency responsible for coordinating its multi-stakeholder activities, including partnerships with over 50 UN sister agencies and co-convening with the Government of Switzerland. This structure emphasizes collaborative decision-making through forums such as annual global summits, which serve as primary venues for agenda-setting, policy discussions on standards, and identification of practical applications. The ITU's Telecommunication Standardization Bureau oversees technical aspects, integrating inputs from diverse stakeholders to advance technical standards and regulatory frameworks. Key operational resources include the AI for Good Innovation Factory, a UN-backed acceleration program launched to support prototyping and scaling of AI-driven solutions aligned with the SDGs. This initiative facilitates startup pitching events, mentorship, funding access, and networking opportunities, with regional editions such as those in India held as recently as 2025. While promoting open-source tools and inclusive AI development, the platform's reliance on collaborative data-sharing among participants introduces dependencies on standardized protocols rather than fully autonomous innovation cycles. Critics of the platform's centralized ITU-led model argue that UN multilateral processes, as evidenced in broader governance challenges, often prioritize high-level principles over rapid, rigorous experimentation, potentially diluting empirical testing in favor of broad agreement. Observers from civil society organizations have expressed skepticism about the "AI for Good" framing, noting it may adopt a promotional tone that underplays risks like algorithmic biases or unequal access, thereby contrasting with more agile, decentralized private-sector approaches that enable faster causal validation through market feedback. This structure, while fostering global coordination, risks inefficiencies typical of bureaucracies, where stakeholder diversity can slow adaptation to AI's iterative nature.

Partnerships with Private Sector and Governments

The AI for Good platform fosters public-private partnerships to harness innovation for sustainable development, with ITU emphasizing collaborative models that integrate corporate resources into UN-led frameworks. A foundational alliance emerged from ITU's collaboration with the XPRIZE Foundation on the IBM Watson AI XPRIZE, initiated in 2016 to spur AI applications for humanitarian challenges, marking an early integration of private R&D into global AI governance. Subsequent engagements include Huawei's participation in summits since at least 2018, where the company showcased AI models for industrial applications, contributing to event outcomes like awards for generative AI innovations. In January 2025, ITU launched the AI Skills Coalition with AWS and Microsoft to deliver free AI training, targeting underserved regions and policymakers to address skills disparities. Government partnerships center on logistical and policy support, notably the ongoing co-convening of annual Global Summits by the Swiss government since the initiative's 2017 launch, with events hosted in Geneva to facilitate international dialogue. These arrangements enable access to public data and regulatory alignment, though broader public-private models reveal tensions over intellectual property rights and value sharing, where private entities retain control over proprietary algorithms developed in joint projects. Private sector alliances underscore market-driven innovation, as corporate investments in AI research and infrastructure often outpace public efforts constrained by bureaucratic funding cycles; for instance, tech firms' provision of cloud computing resources and grants has enabled quicker deployment of AI tools compared to government-led programs reliant on international aid. This dynamic highlights a reduced reliance on public subsidies, with private contributions driving empirical progress through incentivized innovation rather than top-down directives.

Key Application Domains

Healthcare and Disease Management

Artificial intelligence applications in healthcare diagnostics have centered on enhancing accuracy in medical image interpretation and disease detection, with empirical evidence from controlled studies showing incremental improvements when integrated with clinician oversight. Deep learning models, particularly convolutional neural networks, analyze medical images such as chest X-rays and CT scans to identify abnormalities associated with diseases like pneumonia, tuberculosis, and cancers, often outperforming unaided human readers in specificity for certain tasks. These tools process vast datasets to flag potential pathologies, reducing perceptual errors where radiologists might overlook subtle features due to fatigue or volume overload. In disease outbreak prediction, AI models employed during the 2020 COVID-19 response utilized time-series data from reported cases, mobility patterns, and environmental factors to forecast trajectories. For example, artificial neural network-based predictors estimated daily case and death counts in regions like the UAE, achieving mean absolute percentage errors below 10% in short-term horizons under stable conditions. However, model performance degraded with sparse or noisy inputs, such as incomplete testing or evolving variants, underscoring that predictive efficacy derives from data granularity and causal epidemiological priors rather than algorithmic sophistication alone. AI-assisted diagnostics in radiology have demonstrated error mitigation in peer-reviewed evaluations, with algorithms aiding in the detection of fractures, tumors, and infections by quantifying image features humans might miss. A scoping review of studies from 2018 to 2024 found AI integration lowered false negatives in chest X-ray interpretations for conditions like pneumonia, with some systems boosting radiologist accuracy by up to 11% in task-specific benchmarks. Large-scale analyses across 140 radiologists and 15 diagnostic tasks revealed heterogeneous gains, where assistance improved performance most for lower-experience practitioners, though overall error reductions averaged 5-15% depending on task and reader experience. These advancements rely on curated, annotated datasets from diverse populations to avoid overfitting to biased samples prevalent in academic archives. For drug discovery and disease-specific targeting, initiatives like IBM Watson Health aimed to expedite hypothesis generation by mining literature and molecular databases for candidate compounds. Launched in the mid-2010s, Watson for Drug Discovery analyzed biomedical literature to suggest drug repurposing pathways, but by 2019, IBM halted sales of the tool due to sluggish adoption and failure to deliver accelerated pipelines beyond incremental insights. Critics attributed underwhelming results to challenges in handling proprietary biotech data and overreliance on correlative patterns without robust validation against biological causality, highlighting that AI's utility in this domain demands integration with wet-lab experimentation rather than standalone computation. Despite such setbacks, narrower applications, like AI-driven protein structure predictions via models such as AlphaFold since 2020, have enabled structure-based drug design for a range of targets, with verified hits entering clinical trials by 2022. Overall, diagnostic successes correlate with verifiable improvements in accuracy from high-fidelity inputs, while broader discovery tools falter without addressing data silos and regulatory hurdles in clinical deployment.
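The short-horizon forecasting workflow described above can be sketched in a few lines: lagged daily counts feed a small neural network, and accuracy is scored by mean absolute percentage error (MAPE). The data below are synthetic and the model (scikit-learn's MLPRegressor) is an illustrative stand-in, not the architecture used in the cited UAE study.

```python
# Minimal sketch: short-horizon epidemic forecasting scored by MAPE.
# Synthetic data and a generic MLP stand in for the published models.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic daily case counts: a logistic epidemic curve plus noise.
days = np.arange(200)
cumulative = 5000 / (1 + np.exp(-(days - 160) / 12))
cases = np.diff(cumulative, prepend=0) * 30 + rng.normal(0, 20, size=days.size)
cases = np.clip(cases, 1, None)  # keep counts positive so MAPE is defined

# Build lagged features: the previous 7 days predict the next day.
LAGS = 7
X = np.column_stack([cases[i:len(cases) - LAGS + i] for i in range(LAGS)])
y = cases[LAGS:]

# Chronological split: train on the first 80%, evaluate on the remainder.
split = int(0.8 * len(y))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0))
model.fit(X_train, y_train)
pred = model.predict(X_test)

mape = np.mean(np.abs((y_test - pred) / y_test)) * 100
print(f"Test MAPE: {mape:.1f}%")
```

As the paragraph notes, error figures of this kind hold only under stable conditions; shifts in testing policy or new variants break the lagged-feature assumption and degrade accuracy quickly.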

Environmental Monitoring and Sustainability

Artificial intelligence applications in environmental monitoring leverage satellite imagery and sensor data to track deforestation, with platforms like Global Forest Watch integrating AI algorithms to analyze Landsat and Sentinel satellite feeds for near-real-time detection of tree cover loss exceeding 30% canopy density. These systems process vast datasets to identify change events, such as commodity-driven clearing in tropical regions, enabling alerts within days of occurrence and supporting enforcement in areas like the Amazon, where annual losses reached 1.1 million hectares in 2022. Collaborations, including with Orbital Insight, have employed deep learning to differentiate oil palm plantations from natural forests, improving classification accuracy over traditional methods by distinguishing subtle spectral signatures. In renewable energy management, AI optimizes grid operations by forecasting variable solar and wind outputs, integrating them into power systems to minimize curtailment and balance supply-demand fluctuations. For instance, machine learning models predict photovoltaic generation with errors reduced to under 5% in operational pilots, allowing dynamic adjustments in dispatch to prevent overloads during renewable influxes. Google's DeepMind applications, from data center cooling optimization beginning in 2016 to wind farm forecasting that raised the value of wind energy by roughly 20%, extend to broader grid stability by simulating scenarios for storage dispatch. Such optimizations rely on historical and real-time inputs, enhancing efficiency without altering underlying infrastructure constraints like transmission capacities. AI-driven biodiversity assessments employ convolutional neural networks on camera trap and acoustic data to forecast species declines, though predictive accuracies vary; a 2023 genomic analysis across 240 mammal species achieved moderate success in classifying extinction risks via ensemble models, limited by incomplete training datasets. Ground-truth validation remains a bottleneck, as sparse field observations, often covering less than 10% of ecosystems, introduce biases in remote sensing extrapolations, with models overestimating habitat suitability in data-poor regions like Southeast Asia. While AI amplifies sensor capabilities for pattern detection, its efficacy in averting losses hinges on integration with verifiable on-site data, underscoring that technological predictions cannot compensate for governance shortfalls in habitat protection or illegal extraction enforcement.
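As a simplified illustration of the change-detection step behind such deforestation alerts, the sketch below flags pixels whose vegetation index drops sharply between two acquisition dates. The rasters are synthetic and the fixed NDVI threshold is a hypothetical stand-in for the trained classifiers that platforms like Global Forest Watch actually use.

```python
# Minimal sketch: flagging potential tree cover loss between two satellite passes.
# Synthetic reflectance tiles stand in for Landsat/Sentinel data; real pipelines
# use trained classifiers rather than a fixed NDVI-drop threshold.
import numpy as np

rng = np.random.default_rng(1)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index; higher means denser vegetation."""
    return (nir - red) / (nir + red + 1e-6)

# Two synthetic 100x100 tiles (red and near-infrared bands), before and after.
shape = (100, 100)
red_t0, nir_t0 = rng.uniform(0.05, 0.15, shape), rng.uniform(0.4, 0.6, shape)
red_t1, nir_t1 = red_t0.copy(), nir_t0.copy()

# Simulate clearing in a 20x20 patch: the vegetation signal collapses.
red_t1[40:60, 40:60] = rng.uniform(0.2, 0.3, (20, 20))
nir_t1[40:60, 40:60] = rng.uniform(0.2, 0.3, (20, 20))

drop = ndvi(nir_t0, red_t0) - ndvi(nir_t1, red_t1)
loss_mask = drop > 0.3  # flag pixels with a large vegetation-index drop

pixel_area_ha = 0.09  # assume 30 m pixels (~0.09 ha each)
print(f"Flagged {loss_mask.sum()} pixels (~{loss_mask.sum() * pixel_area_ha:.1f} ha)")
```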

Poverty Alleviation and Agriculture

AI applications in precision agriculture, such as drone imagery analysis and machine learning models for yield prediction, enable smallholder farmers in developing countries to optimize inputs like fertilizers and irrigation water, leading to documented yield gains of 15-30%. These tools process satellite and sensor data to forecast yields and detect issues like pest infestations early, as evidenced in pilots across Africa and Asia where output increases aligned with reduced resource waste. For instance, AI-driven platforms have supported smallholder farmers by providing actionable insights via mobile apps, enhancing decision-making without requiring extensive infrastructure. In parallel, AI enhances poverty alleviation through financial inclusion by deploying alternative credit scoring algorithms that evaluate non-traditional data, such as mobile usage patterns and transaction histories, for unbanked individuals in regions like Latin America and sub-Saharan Africa. These models have expanded loan approvals for small-scale agricultural enterprises while lowering default rates by 15-30%, thereby facilitating investments in seeds, equipment, and expansion. Empirical assessments in emerging markets indicate that such AI integration improves credit access without proportionally increasing risk, as algorithms refine risk profiles over time. Sustainable poverty reduction via these AI tools hinges on complementary institutional factors, including secure property rights that encourage technology adoption and market access that converts efficiency gains into income growth. In contexts lacking robust land tenure, farmers face disincentives to invest in AI-enhanced practices, potentially limiting impacts to short-term yields rather than enduring economic uplift. Overreliance on subsidized AI deployments without market-oriented reforms risks entrenching dependency, as productivity boosts fail to translate into self-sustaining livelihoods absent causal enablers like competitive pricing and ownership security.
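A minimal sketch of the alternative-data credit scoring idea is shown below, assuming synthetic applicants and hypothetical features such as mobile top-up frequency; commercial lenders rely on far richer proprietary signals and calibration.

```python
# Minimal sketch: alternative-data credit scoring with logistic regression.
# Feature names (mobile top-ups, transaction counts) are illustrative stand-ins
# for proprietary lender signals; all data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 5000

# Hypothetical non-traditional features for unbanked applicants.
topups_per_month = rng.poisson(6, n)          # mobile airtime purchases
txn_count = rng.poisson(20, n)                # mobile-money transactions
balance_volatility = rng.gamma(2.0, 1.0, n)   # swings in wallet balance

# Synthetic repayment outcome: steadier activity implies lower default risk.
logit = -1.0 + 0.25 * balance_volatility - 0.05 * txn_count - 0.08 * topups_per_month
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([topups_per_month, txn_count, balance_volatility])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print(f"Holdout AUC: {roc_auc_score(y_te, scores):.2f}")

# A lender might approve applicants whose predicted default risk is below 20%.
approved = scores < 0.20
print(f"Approval rate at 20% risk cutoff: {approved.mean():.0%}")
```

Approval rates and realized default rates trade off directly against the chosen risk cutoff, which is the lever lenders tune in the deployments described above.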

Humanitarian Aid and Disaster Response

Artificial intelligence has been deployed in humanitarian aid and disaster response primarily through predictive modeling to forecast risks and optimize resource allocation during crises. For instance, machine learning algorithms analyze seismic data, satellite imagery, and historical patterns to predict earthquake impacts and prioritize aid delivery. In the February 2023 Kahramanmaraş earthquakes in Turkey and Syria, which caused over 50,000 deaths, AI models were used for post-event damage assessment and relief demand forecasting, enabling targeted distribution of supplies like medical kits and shelter materials by estimating needs based on building vulnerability and population density. Similar approaches have modeled refugee flows and camp dynamics during emergencies, such as floods in Mozambique in 2019, integrating real-time data to simulate displacement patterns and preposition resources. UN agencies have partnered with AI developers to implement chatbots and automated systems for aid coordination, providing refugees and affected populations with immediate information on distribution points, eligibility, and safety protocols. The World Food Programme (WFP) employs AI-driven chatbots within complaint feedback mechanisms to deliver 24/7 access to aid details, facilitating faster query resolution in remote or overwhelmed areas. UNHCR has similarly tested AI chatbots to communicate with refugees, offering personalized guidance on assistance amid conflicts or natural disasters, which streamlines communication and reduces manual burdens. These tools have demonstrated potential in simulations to accelerate response times, though real-world gains depend on connectivity; for example, ReliefWeb's AI-enhanced reporting cut situation analysis from weeks to hours, indirectly supporting aid logistics. Despite these applications, AI's effectiveness in humanitarian response is constrained by dependence on high-quality, timely data, which is often scarce in low-infrastructure regions. In areas with disrupted power grids, limited connectivity, or underrepresented datasets, common in developing countries or conflict zones, models suffer from incomplete inputs, leading to inaccurate predictions and delayed interventions. Algorithmic reliance on historical data can exacerbate gaps in marginalized locales, where risk information is underreported, resulting in suboptimal resource allocation during acute phases. Interoperability challenges with legacy systems further hinder rapid deployment, underscoring that AI augments rather than replaces human-led efforts in data-poor environments.
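The needs-estimation logic described above can be illustrated with a toy prioritization score that combines model-estimated damage probability, population, and a vulnerability index; the formula and figures are hypothetical and not drawn from any specific response operation.

```python
# Minimal sketch: ranking districts for relief delivery by expected need.
# The scoring rule (damage probability x population x vulnerability) is a toy
# illustration, not the operational model used in any particular response.
from dataclasses import dataclass

@dataclass
class District:
    name: str
    population: int
    damage_prob: float       # model-estimated probability of severe damage
    vulnerability: float     # 0-1 index from building stock and access surveys

def expected_need(d: District) -> float:
    """Expected number of people requiring assistance in the district."""
    return d.population * d.damage_prob * d.vulnerability

districts = [
    District("District A", 120_000, 0.35, 0.8),
    District("District B", 45_000, 0.70, 0.9),
    District("District C", 300_000, 0.10, 0.5),
]

# Dispatch supplies to the highest expected-need districts first.
for d in sorted(districts, key=expected_need, reverse=True):
    print(f"{d.name}: expected need ~ {expected_need(d):,.0f} people")
```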

Documented Achievements

Empirical Case Studies

In sub-Saharan Africa, AI-powered tools have enabled smallholder farmers to achieve yield increases through optimized soil analysis and predictive modeling. For instance, pilots utilizing machine learning for crop management reported average yield improvements of 20-25% in targeted regions by integrating weather data and AI-driven recommendations to reduce waste and enhance irrigation efficiency. Similarly, initiatives like those highlighted by U.S. agricultural aid programs have shown farmers experiencing significant yield gains via AI applications for pest detection and resource allocation, supported by longitudinal field trials demonstrating sustained productivity rises over multiple seasons. In healthcare, AI diagnostics have demonstrated measurable reductions in errors in resource-limited settings. Deployments of AI-based digital health platforms in neonatal and general care have accelerated diagnostic processes by 63% while lowering misdiagnosis rates by 28%, based on comparative analyses of AI-assisted versus traditional methods across implementations from 2023-2025. These outcomes stem from models trained on local datasets for high-burden conditions and risks, with validation through pre- and post-intervention audits showing improved accuracy in high-volume clinics. Targeted AI interventions in these domains have yielded benefit-cost ratios of 2-5x in select pilots, derived from cost savings in inputs and healthcare delivery offset against implementation expenses, as tracked in FAO-aligned agricultural projects and WHO-supported benchmarks. Such metrics underscore the value of randomized controlled trials and longitudinal monitoring in verifying causal impacts, with data from 2017-2025 initiatives confirming scalability in developing contexts when grounded in empirical validation.

Quantifiable Impacts from 2017-2025

The ITU AI for Good platform, launched in 2017, has aggregated impacts across partner-led initiatives, with documented outcomes primarily from pilot-scale deployments and award-winning projects rather than broad-scale verified metrics. Self-reported data from UN-linked efforts indicate support for nearly 400 AI projects spanning the Sustainable Development Goals (SDGs), enabling advancements in targeted areas like diagnostics and resource optimization. Independent analyses, such as a 2020 study on AI-SDG interactions, attribute causal enabler effects to AI in 69% of health-related targets while noting inhibitory risks in 8%, based on systematic mapping of 169 targets. However, comprehensive independent audits isolating AI for Good's contributions remain limited, with outcomes often confined to narrow, verifiable subsets amid challenges in data standardization and attribution. In healthcare, AI tools facilitated through the platform have supported over 800,000 weekly computational experiments in drug discovery by pharmaceutical partners, accelerating candidate identification via predictive modeling. A related diagnostic application at an eye hospital, highlighted in AI for Good case compilations, detected cases of preventable eye disease, addressing rural specialist shortages, though specific beneficiary counts for this deployment are not quantified beyond pilot efficacy. Across UN AI activities, approximately 85 use cases target health outcomes, contributing to faster development pipelines, but causal attribution relies on platform-enabled collaborations rather than randomized controls. Environmental and sustainability efforts under AI for Good have yielded efficiency gains in logistics, where AI-optimized routing in freight operations reduced fuel consumption by up to 10% in documented applications, lowering associated CO2 emissions from transport, which constitutes about 20% of global totals. Broader platform-supported initiatives, including smart grid optimizations, demonstrate potential emissions cuts, with AI enabling 80% of climate-related SDG targets per the same SDG mapping study, though AI systems themselves contribute minimally (0.01%) to global greenhouse gases. In agriculture and poverty alleviation, an award-winning AI advisory service reached over 460,000 farmers across regions, delivering actionable advice to boost yields, representing one of the larger verified subsets. Humanitarian aid and disaster response metrics include early adopter reductions in logistics costs by 15% and inventory mismatches by 35% via predictive analytics, as applied in resilient supply chain pilots promoted by the platform. These gains offset some scaling barriers, such as infrastructure deficits in low-resource settings, but empirical evidence from small pilots, like a Rwandan education tool enhancing math proficiency for 90 students, highlights persistent challenges in generalizing to millions without robust verification. Overall, net positives appear in domain-specific efficiencies (e.g., 10-15% emissions savings in freight), but aggregate beneficiary claims exceeding pilots lack third-party causal validation, underscoring the need for standardized metrics to confirm broader attribution.

Criticisms and Practical Challenges

High Failure Rates in AI Projects

Numerous studies document high failure rates in AI projects, with estimates ranging from 70% to over 80% of initiatives failing to deliver intended outcomes. The RAND Corporation's report on root causes of AI failures specifies that such projects fail at more than twice the rate of non-AI efforts, attributing this to systemic issues like inadequate integration with organizational needs. Similarly, governance analyses reveal a 42% shortfall between anticipated and actual deployments, highlighting persistent gaps in scaling from prototypes to operational use. Key contributors to these failures include poor data quality, which compromises model accuracy and reliability in real-world applications. Talent shortages exacerbate the problem, as organizations struggle to assemble teams with expertise in AI deployment amid limited availability of qualified professionals. Unclear return on investment (ROI) further hinders progress, particularly in non-commercial settings where quantifiable benefits are harder to define and measure. In AI for good efforts, such as those targeting health or poverty reduction, these challenges intensify due to overambitious scopes that introduce complexity, often resulting in stalled or abandoned pilots. Data scarcity in low-resource environments amplifies quality issues, while the diffuse impact metrics in social applications obscure ROI assessments, leading to higher-than-average discontinuation rates. For instance, many 2022-2023 initiatives aligned with global development agendas failed to transition beyond experimentation, mirroring broader patterns where vague objectives prioritize novelty over feasibility.

Bias, Transparency, and Ethical Shortcomings

In AI systems applied to social domains such as healthcare and humanitarian identification, algorithmic biases often emerge from training data that underrepresents certain demographics, leading to elevated error rates that exacerbate inequalities. Facial recognition technologies, utilized for tasks like victim identification in disaster response or aid recipient verification, demonstrate false positive rates 10 to 100 times higher for subgroups including women of East African or American Indian descent compared to middle-aged white males, per NIST demographic analyses of vendor algorithms. In healthcare management, algorithms proxying patient need via historical spending have systematically undervalued Black patients' severity, identifying them for intensive care at rates less than half those of white patients with equivalent clinical needs, thereby entrenching resource disparities. The opacity of many models deployed in these contexts, termed black boxes for their resistance to human interpretation, undermines accountability and empirical scrutiny. ITU AI for Good case studies, including forecasting applications for sustainable development, highlight traditional predictive methods yielding outputs that are uninterpretable to operators, restricting post-hoc audits and causal tracing of errors to data artifacts rather than systemic flaws. Similarly, large language models in medical diagnostics suffer reliability deficits from phenomena like hallucinations, confining them to supportive roles pending verifiable explanations. Efforts to mitigate bias via imposed fairness constraints frequently compromise aggregate accuracy, as peer-reviewed examinations reveal inherent trade-offs where equalizing group metrics elevates overall error rates without guaranteed causal improvements in outcomes. Prioritizing outcome-oriented validation, through diverse datasets and longitudinal testing, over declarative adjustments aligns with causal reasoning, countering tendencies in academia and policy toward untested interventions that prioritize perceptual balance. Such critiques underscore the need for skepticism toward sources that assume bias mitigation succeeds absent performance benchmarks, given institutional incentives favoring narrative conformity.
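Disaggregated error metrics of the kind behind the NIST findings can be computed with a short audit script; the sketch below uses synthetic scores and two hypothetical groups to show how group-wise false positive rates and their disparity ratio are derived.

```python
# Minimal sketch: auditing a classifier's false positive rate by demographic group.
# Synthetic labels and scores illustrate the disaggregated-metric approach used in
# demographic audits; real evaluations use actual match results per algorithm.
import numpy as np

rng = np.random.default_rng(3)

def false_positive_rate(y_true, y_pred):
    """Share of true negatives incorrectly flagged as positives."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

groups = {"group_a": 4000, "group_b": 4000}
results = {}
for name, n in groups.items():
    y_true = rng.binomial(1, 0.05, n)            # 5% genuine matches
    # Simulate a noisier score distribution for group_b (worse data coverage).
    noise = 0.18 if name == "group_a" else 0.28
    scores = y_true * 0.8 + rng.normal(0, noise, n)
    y_pred = (scores > 0.5).astype(int)
    results[name] = false_positive_rate(y_true, y_pred)

for name, fpr in results.items():
    print(f"{name}: FPR = {fpr:.3%}")
print(f"Disparity ratio: {results['group_b'] / max(results['group_a'], 1e-9):.1f}x")
```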

Job Displacement and Economic Realities

AI initiatives aimed at improving agriculture and healthcare in developing nations have accelerated automation of routine tasks, leading to measurable job displacement among low-skill workers. For instance, precision farming tools powered by machine learning, such as autonomous machinery for planting and harvesting, have reduced manual labor requirements by enabling fewer workers to manage larger operations, with evidence from sector analyses indicating up to 15-25% reductions in labor needs for specific crop management tasks in some regions. Similarly, in healthcare, AI-driven diagnostic systems and administrative automation have displaced routine support roles, with a 2025 study documenting task-level displacement in low-skill healthcare occupations, contributing to employment declines of approximately 5-10% in AI-exposed subsectors across emerging economies. This displacement highlights an irony in AI "for good" applications: programs designed to boost productivity and alleviate poverty often overlook the immediate economic disruptions to vulnerable populations reliant on those jobs. In agriculture-focused poverty alleviation efforts, AI optimization of irrigation and pest detection has streamlined operations but eroded employment in manual fieldwork, where low-skill labor constitutes over 60% of jobs in many developing countries, per 2025 assessments. Such outcomes reflect a failure to fully account for labor-market transition dynamics, where short-term losses in obsolete roles precede broader gains, yet initiatives frequently prioritize technological deployment over integrated workforce transition strategies. Empirical data from World Economic Forum projections indicate that while global AI-driven displacement could affect 92 million roles by 2030, developing nations face heightened risks due to limited complementary infrastructure for reskilling. From a causal standpoint, these disruptions stem from automation's substitution of predictable, low-variability tasks, but historical patterns of technological adoption suggest offsetting effects through productivity-induced growth. Studies show automation can elevate output per worker by 20-40% in automated sectors, fostering demand for new roles in equipment maintenance, oversight, and higher-value services, as observed in analogs applicable to agrotech. Long-term net employment may rise, with IMF analyses estimating that AI-enhanced productivity could expand economic activity enough to create more opportunities than are lost, provided labor markets adapt freely; however, short-term pain necessitates targeted retraining programs emphasizing transferable skills over regulatory halts on deployment. Interventionist measures like automation taxes or deployment moratoriums risk stifling innovation, whereas evidence favors decentralized adaptation, as past technological surges have historically generated unforeseen job categories without moratoriums.

Major Controversies

Hype Versus Verifiable Outcomes

Promotional narratives in AI for Good initiatives frequently assert transformative potential, as exemplified by the International Telecommunication Union's (ITU) AI for Good platform, which positions AI as a key enabler for achieving the Sustainable Development Goals through innovations in sectors like healthcare and sustainability. These claims emphasize scalable solutions to global challenges, with summits and reports projecting widespread societal benefits from AI deployment. In contrast, internal assessments reveal more modest outcomes. The ITU's 2024 AI for Good Innovate for Impact report documents various pilot projects but concedes ongoing issues, including the perpetuation of biases in AI models and exacerbation of digital divides, which hinder equitable scaling beyond controlled demonstrations. Similarly, a 2025 review of UN-affiliated initiatives identifies only two out of 40 highlighted use cases as backed by substantive empirical validation, with most relying on preliminary or unscaled applications. Rigorous evaluation remains sparse, with randomized controlled trials (RCTs) comprising a negligible portion of evaluations; for instance, examinations of AI health interventions in resource-poor settings document few instances of such gold-standard testing, favoring instead correlational analyses prone to confounding factors. Recent 2025 conference proceedings further highlight overstated scalability, attributing failures to translate pilots into population-level impacts to data scarcity and infrastructural barriers, rather than inherent technological prowess. Verifiable progress demands outcome measurement over surrogate metrics, such as event participation or project counts, which proliferate in promotional materials but evade falsification against counterfactual baselines.
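The gold-standard evaluation the paragraph calls for reduces to a simple comparison once assignment is randomized: estimate the treatment-control difference and report its uncertainty. The sketch below does this on synthetic outcome data with a bootstrap confidence interval; a real trial would also pre-register endpoints and handle attrition and clustering.

```python
# Minimal sketch: evaluating an intervention the RCT way, with a randomized
# control group and a confidence interval on the estimated effect. Data are
# synthetic and the "true" effect is assumed for the simulation.
import numpy as np

rng = np.random.default_rng(4)
n = 400

# Randomly assign units (e.g., clinics or villages) to the AI-assisted intervention.
treated = rng.binomial(1, 0.5, n).astype(bool)
baseline = rng.normal(50, 10, n)                       # pre-intervention outcome level
true_effect = 3.0                                      # assumed for this simulation
outcome = baseline + true_effect * treated + rng.normal(0, 8, n)

effect = outcome[treated].mean() - outcome[~treated].mean()

# Bootstrap a 95% confidence interval for the difference in means.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    t, y = treated[idx], outcome[idx]
    boot.append(y[t].mean() - y[~t].mean())
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Estimated effect: {effect:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```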

Geopolitical and Corporate Influences

The United Nations and ITU's AI for Good platform, established as a multi-stakeholder initiative to advance AI standards, governance, and capacity building aligned with the SDGs, has drawn scrutiny for embedding multilateral coordination that potentially erodes national sovereignty. Critics contend that its emphasis on global governance and data-sharing best practices, as outlined in ITU's 2025 reports, prioritizes supranational frameworks over sovereign control of data flows, fostering dependencies in the Global South through technology transfers that reinforce extractive dynamics. For instance, analyses highlight how such initiatives enable digital colonialism, where data extraction from vulnerable populations in regions like Africa and South Asia supports foreign AI systems under the guise of humanitarian applications, as seen in critiques of Microsoft's PTIS project and its influence on national digital policy. Corporate involvement in AI for Good amplifies these power imbalances, with tech giants like IBM and Microsoft partnering on projects that align with UN agendas but primarily serve profit motives through data monetization and market expansion. Examples include IBM's Project Lucy, launched in March 2014 to apply Watson AI to African development challenges, and Microsoft's Project Ellora in rural India, which collects language data from low-income workers lacking digital access, ostensibly for social good but enabling corporate data accumulation without equitable local returns. Such engagements, critiqued as corporate capture, legitimize Big Tech's role in reshaping development policy, for example via the World Economic Forum's Centre for the Fourth Industrial Revolution, established in 2017 and expanded through affiliate centres by 2019, while obscuring harms like biased outcomes for marginalized groups and geopolitical favoritism toward U.S.-based firms. In contrast, proponents of private-sector-led development argue that decentralized, market-driven innovation outperforms globalist coordination by enhancing efficiency and accountability, avoiding the inconsistencies and compliance burdens of extraterritorial multilateral standards. Private entities, leveraging competitive experimentation, can iterate rapidly, such as through scalable testing in startups, without the accountability deficits of multilateral bodies, which risk stifling breakthroughs via fragmented regulations. This approach aligns with causal incentives where firms prioritize verifiable outcomes over bureaucratic consensus, potentially yielding more targeted "good" applications unencumbered by elite-driven agendas.

Alignment with UN Agendas and Potential Overreach

The AI for Good initiative, spearheaded by the International Telecommunication Union (ITU) in partnership with over 50 UN agencies, positions artificial intelligence as an accelerator for the UN's 17 Sustainable Development Goals (SDGs), targeting applications in areas such as health, education, climate action, and poverty reduction. The 2024 AI for Good Global Summit in Geneva explicitly framed AI solutions to advance these goals, with endorsements from UN entities emphasizing AI's role in bridging implementation gaps. Critics contend that this alignment overlooks the SDGs' inherent flaws, including vague targets and unrealistic timelines that undermine empirical progress. For instance, SDG 1 aims to eradicate extreme poverty (defined as living below $2.15 per day) by 2030, yet as of 2023, approximately 700 million people remained in extreme poverty, with the goal showing regression or stagnation amid global setbacks like the COVID-19 pandemic. Overall SDG advancement lags severely, with only 18% of targets on track per the 2025 UN progress report, while nearly half show insufficient speed and 30% exhibit regression or no movement since 2015 baselines. Such outcomes fuel skepticism toward top-down SDG frameworks, which some analysts describe as utopian and imprecise, complicating measurable AI contributions. Integration with UN AI ethics mandates introduces risks of overreach, as global guidelines prioritize collective equity over individual prerogatives. UNESCO's 2021 Recommendation on the Ethics of AI, endorsed by over 190 countries, mandates that AI systems promote "fairness" through equitable distribution of benefits, risks, and costs, potentially compelling resource reallocation that encroaches on property rights and innovation incentives. UN system principles further require AI to avoid impairing human rights while enforcing just outcomes, which critics argue embeds collectivist imperatives that could justify surveillance or benefit mandates at the expense of personal autonomy and market freedoms. These frameworks, while aiming for human rights alignment, have drawn scrutiny for fostering fragmented governance that favors supranational control over decentralized decision-making. In contrast, bottom-up, market-driven AI deployments in economies with robust property rights demonstrate superior efficacy, unencumbered by globalist prescriptions. Generative AI applications, predominantly developed in free-market settings like the United States, are projected to add up to $15.7 trillion to the global economy by 2030 through private-sector innovations in coding, content creation, and operations. Studies indicate these gains stem from incentive-aligned R&D, with AI contributing 1.5% to GDP growth by 2035 in open economies, far outpacing top-down models that impose equity constraints. Empirical evidence from planned or heavily regulated systems highlights diminished returns, as centralized planning shifts AI development toward basic outputs rather than dynamic value creation. This suggests that SDG-aligned AI efforts may dilute effectiveness by subordinating technological advancement to unproven collective goals.

Empirical Effectiveness and Impact

Metrics from Independent Studies

Independent studies, including randomized controlled trials (RCTs) and comprehensive indices, reveal modest quantifiable benefits from AI applications aimed at social good, primarily in narrow domains like healthcare diagnostics, where AI-assisted interventions outperformed standard care in 77% of evaluated RCTs, yielding efficiency gains of approximately 10-20% in diagnostic accuracy and time savings. However, these gains are context-specific and do not generalize to broader societal deployments, such as poverty reduction or education, where independent evaluations show limited causal impacts due to challenges in scalability and data quality. The Stanford Human-Centered AI (HAI) AI Index Report 2025 documents productivity enhancements from AI integration, estimating 5-15% improvements in targeted technical and administrative tasks, but highlights investment-impact gaps where high spending (e.g., $200 billion in global private AI funding in 2024) has not translated to proportional outcomes, with only marginal advancements in equity-related metrics. Methodological rigor in these analyses incorporates RCTs and longitudinal comparisons to establish causality, avoiding self-reported biases prevalent in industry assessments. Public skepticism underscores these limitations, as evidenced by surveys indicating that 62% of U.S. adults doubt effective government regulation of AI, reflecting broader reservations about overhyped societal benefits amid verifiable underperformance in non-specialized applications. Health AI evaluations in low-resource contexts similarly report targeted detection improvements but systemic deployment failures, with tools failing to achieve sustained equity gains due to bias amplification in diverse populations. Overall, these third-party metrics emphasize the need for causal validation via RCTs over correlational claims, revealing AI for Good initiatives' constrained effectiveness beyond pilot scales.

Long-Term Causal Analysis

In AI applications aimed at social benefits, causal chains typically begin with data inputs and model training to generate predictions, but sustained outcomes hinge on intermediary human and institutional factors that convert forecasts into actionable interventions. For instance, AI-driven disaster forecasting systems process hydrological and meteorological data to anticipate events like floods, yet reductions in mortality or economic losses arise not directly from the models but from downstream decisions such as pre-positioning supplies or ordering evacuations, which demand reliable governance and funding allocation. Empirical assessments reveal that predictive accuracy alone correlates with potential benefits but fails to causally guarantee them without these mediators; barriers including data biases and forecast interpretation challenges often limit translation to prevention, as historical datasets may embed systemic errors that propagate inequities. Between 2017 and 2020, AI for good initiatives surged amid widespread optimism for rapid scalability in areas like humanitarian forecasting, fueled by demonstrations of correlative pattern recognition in vast datasets. By 2024-2025, analyses shifted to tempered realism, highlighting how initial hype overlooked causation pitfalls, such as mistaking spending disparities for health needs in algorithmic outputs, necessitating causal inference methods to validate long-term efficacy beyond surface-level correlations. This evolution underscores that while AI excels at identifying statistical associations, distinguishing true causal pathways requires rigorous testing of interventions, avoiding overreliance on black-box predictions that conflate correlation with effect. Projections grounded in economic modeling estimate AI could yield approximately 17.5% cumulative productivity gains across sectors by enhancing decision-making in prediction-dependent fields like logistics and healthcare, potentially adding $7 trillion to GDP over baseline forecasts. However, these gains presuppose institutional adaptations to mitigate risks of technological dependency, particularly in emerging economies where AI exposure affects 40% of jobs and inadequate preparedness could amplify inequalities through uneven adoption and skills gaps. Without reforms bolstering local capacities and data infrastructure, causal chains may invert benefits into harms, fostering reliance on external providers rather than endogenous resilience.
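The correlation-versus-causation pitfall described above can be made concrete with a small simulation, assuming a single confounder (baseline wealth) that drives both adoption of an AI tool and the outcome: the naive adopter-versus-non-adopter gap then overstates the true simulated effect, while a regression adjustment recovers it. Everything here is synthetic and illustrative.

```python
# Minimal sketch: why correlation-based impact claims can mislead. A confounder
# (baseline wealth) drives both who adopts an AI tool and the outcome, so the
# naive comparison overstates the tool's effect; adjusting for the confounder
# recovers something close to the true (simulated) effect.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

wealth = rng.normal(0, 1, n)                               # confounder
adopt = rng.binomial(1, 1 / (1 + np.exp(-2 * wealth)))     # richer farmers adopt more
true_effect = 1.0
yield_t = 5 + 2.0 * wealth + true_effect * adopt + rng.normal(0, 1, n)

naive = yield_t[adopt == 1].mean() - yield_t[adopt == 0].mean()

# Regression adjustment: fit yield ~ adoption + wealth by least squares.
X = np.column_stack([np.ones(n), adopt, wealth])
coef, *_ = np.linalg.lstsq(X, yield_t, rcond=None)
adjusted = coef[1]

print(f"Naive adopter-vs-non-adopter gap: {naive:.2f}")
print(f"Confounder-adjusted estimate:     {adjusted:.2f}  (true effect = {true_effect})")
```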

Future Directions and Debates

Emerging Technological Integrations

Multimodal AI systems, which integrate processing of text, images, audio, and video, have advanced rapidly since large multimodal models emerged post-2023, enabling real-time coordination in humanitarian operations, such as analyzing satellite imagery alongside textual reports for damage assessment. These capabilities support early warning systems and automated triage in crises, processing diverse data streams to predict displacement patterns or optimize logistics without human delays. However, deployment remains experimental, with humanitarian organizations reporting inconsistent performance due to data silos and model inaccuracies in low-resource contexts. Edge computing paired with AI facilitates on-device inference in low-connectivity regions, reducing latency and bandwidth needs for applications like remote health monitoring or environmental sensing in development aid. By processing data locally on devices, edge AI enables predictive maintenance for infrastructure in rural areas, bypassing cloud dependency that exacerbates digital divides. Real-world tests show up to 90% bandwidth savings in such setups, though hardware constraints limit model complexity to simpler tasks. In agriculture, 2025 integrations of AI with IoT sensors for precision farming, such as soil moisture prediction and automated irrigation, promise yield increases of 15-20% in resource-scarce regions, as outlined in ITU standards. These systems leverage data streams from connected devices to optimize inputs, with ITU reports documenting pilot deployments achieving reduced water usage by 30%. Yet, scalability hinges on addressing data quality issues, as heterogeneous sensor feeds often introduce noise affecting reliability. Continued reliance on empirical scaling laws for these integrations faces potential plateaus, with analyses indicating diminishing returns without novel high-quality data sources, as models underperform scaling predictions in 2025 benchmarks. Data scarcity could cap advancements, particularly for domain-specific applications in aid, where synthetic data generation yields mixed results on generalization. This underscores the need for targeted data curation over brute-force compute escalation to sustain progress.
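A minimal sketch of the edge-inference pattern is shown below, assuming a hypothetical soil-moisture forecaster with made-up coefficients and thresholds; the point is that the decision loop runs entirely on the device, with no cloud round-trip.

```python
# Minimal sketch: on-device irrigation logic for a low-connectivity deployment.
# The tiny linear forecaster and thresholds are hypothetical; a real system would
# calibrate against local soil and crop data and update models opportunistically.
from collections import deque

# Pretend these coefficients were trained offline and flashed to the device.
COEFFS = {"bias": 2.0, "moisture": 0.85, "temp_c": -0.06, "rain_forecast_mm": 0.4}
IRRIGATION_THRESHOLD = 18.0  # % volumetric soil moisture

recent_readings = deque(maxlen=24)  # keep one day of hourly readings locally

def predict_next_moisture(moisture: float, temp_c: float, rain_forecast_mm: float) -> float:
    """Forecast soil moisture a few hours ahead using a tiny on-device model."""
    return (COEFFS["bias"]
            + COEFFS["moisture"] * moisture
            + COEFFS["temp_c"] * temp_c
            + COEFFS["rain_forecast_mm"] * rain_forecast_mm)

def irrigation_decision(moisture: float, temp_c: float, rain_forecast_mm: float) -> bool:
    """Return True if the valve should open; no cloud round-trip required."""
    recent_readings.append(moisture)
    forecast = predict_next_moisture(moisture, temp_c, rain_forecast_mm)
    return forecast < IRRIGATION_THRESHOLD

# Example: dry soil, hot afternoon, no rain expected, so irrigate.
print(irrigation_decision(moisture=16.0, temp_c=34.0, rain_forecast_mm=0.0))
```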

Policy Recommendations for Balanced Deployment

Policies promoting balanced AI deployment in initiatives aimed at societal benefits should emphasize market-driven incentives and rigorous empirical evaluation over expansive regulatory frameworks. The United States has resisted international oversight mechanisms, such as those proposed at the United Nations, arguing that global accords could hinder technological progress and competitiveness by imposing uniform standards ill-suited to diverse national contexts. Instead, frameworks prioritizing incentives, such as targeted tax credits for AI applications addressing verifiable challenges, enable rapid iteration and accountability through consumer and investor feedback, aligning with causal mechanisms where market feedback corrects inefficiencies more effectively than centralized directives. Mandatory impact audits for AI systems deployed in public or subsidized "AI for Good" projects provide a targeted safeguard, requiring pre- and post-deployment assessments of outcomes against baseline metrics, such as cost savings or efficacy rates from independent pilots. Federal agencies in the United States more than doubled their AI use cases from 2023 to 2024, demonstrating operational efficiencies when guided by evidence-based guidelines rather than prohibitive rules. These audits, informed by evidence from standardized benchmarks, mitigate risks without broad mandates, as voluntary adherence to standards has shown flexibility in adapting to technological change. Overregulation risks uneven enforcement and innovation suppression, as evidenced by the doubling of proposed AI bills in 2024 compared to 2023, potentially fragmenting development across sectors. Expert viewpoints diverge on scaling versus caution: optimists advocate accelerating deployment with responsible policies to harness growth potential, projecting substantial economic gains from AI integration, while skeptics urge restraint citing unproven existential risks, though empirical data underscores mitigability through iterative testing. Evidence supports voluntary standards over mandatory regimes for effectiveness, as the former foster best practices without the lag inherent in legislative processes, preserving individual and enterprise liberty to innovate. Market mechanisms, rather than supranational bodies like the UN, better facilitate corrections via liability and competition, as demonstrated by preferences for domestic guidelines amid global pushes for oversight.

References

  1. [1]
    About us - AI for Good - ITU
    AI for Good was established in 2017 by the International Telecommunication Union (ITU), the United Nations (UN) leading agency for digital technologies.
  2. [2]
    AI for Good - ITU
    As a multi-stakeholder platform, AI for Good advances capacity building, AI skills development, standards, and governance.
  3. [3]
    Summit 25 - Identifying innovative AI applications to solve global ...
    AI for Good is organized by ITU in partnership with over 40 UN Sister Agencies and co-convened with the Government of Switzerland. Join the #AIforGood ...
  4. [4]
    Innovation Factory - AI for Good - ITU
    The AI for Good Innovation Factory is the leading UN-based startup pitching and acceleration platform, helping startups grow and scale their AI-powered ...
  5. [5]
    [PDF] AI for Good Impact Report
    ITU and more than 40 other UN agencies are using AI to connect schools, improve early warning systems, address social and economic inequalities, and much more.
  6. [6]
    A Definition, Benchmark and Database of AI for Social Good Initiatives
    Feb 17, 2021 · ... empirical evidence. In this Perspective, we address these ... AI for Good Global Summit (28−31 May 2019, Geneva, Switzerland). (AI ...
  7. [7]
    'AI for Social Good': Whose Good and Who's Good? Introduction to ...
    Aug 13, 2021 · This introduction sets out the aims and scope of the Special Issue and provides an overview of each of the research articles and commentaries that follow.
  8. [8]
    Stepping back from Data and AI for Good – current trends and ways ...
    May 9, 2023 · Various 'Data for Good' and 'AI for Good' initiatives have emerged in recent years to promote and organise efforts to use new computational ...
  9. [9]
    The Reputational Risks of AI | California Management Review
    Jan 24, 2022 · The common theme that runs across these failures is the integrity of the data used by the AI system. AI systems work best when they have access ...
  10. [10]
    AI for social good: unlocking the opportunity for positive impact
    May 18, 2020 · A definition, benchmark and database of AI for social good initiatives ... AI for Good Grants, Microsoft AI for Humanity, Mastercard Center ...
  11. [11]
    How to Design AI for Social Good: Seven Essential Factors - PMC
    ... AI for Good Global Summit” 2019) indicates the presence of a phenomenon, but ... Conceptual disalignment means that the receiver may find the ...
  12. [12]
    Modeling the 2014 Ebola Virus Epidemic – Agent-Based ...
    Mar 9, 2015 · We developed an agent-based model to investigate the epidemic dynamics of Ebola virus disease (EVD) in Liberia and Sierra Leone from May 27 to December 21, ...
  13. [13]
    Modeling in Real Time During the Ebola Response | MMWR - CDC
    Jul 8, 2016 · Models were used at the outset of the Ebola response in early September 2014 to estimate the impact of the epidemic with and without ...
  14. [14]
    Machine-learning Prognostic Models from the 2014–16 Ebola ...
    Jun 22, 2019 · This study harmonizes diverse datasets from the 2014–16 EVD epidemic and generates several prognostic models incorporated into the novel Ebola ...
  15. [15]
    ITU launches global dialogue on Artificial Intelligence for good...
    Mar 23, 2017 · Geneva, 23 March 2017​​ Organized by ITU and the XPRIZE Foundation – in partnership with UN agencies, including OHCHR, UNESCO, UNICEF, UNICRI, ...
  16. [16]
    The Inaugural AI for Good Global Summit Is a Milestone but Must ...
    Jun 7, 2017 · The summit comes at a critical time and should help increase policymakers' awareness of the possibilities and challenges associated with AI.
  17. [17]
    AI for Good Global Summit 2017 - ITU
    Jun 12, 2017 · The Summit aimed to accelerate and advance the development and democratization of AI solutions that can address specific global challenges.
  18. [18]
    [PDF] AI for GOOD GLOBAL SUMMIT - ITU
    Nov 8, 2017 · The “AI for Good Global Summit” took place at ITU in Geneva, Switzerland, on 7-9 June 2017, organized by ITU and the XPRIZE Foundation, in.
  19. [19]
    Media Advisory: AI for Good Global Summit – 7-9 June 2017 - ITU
    May 4, 2017 · Geneva, 04 May 2017 ... Breakthrough sessions will invite participants to collaborate in proposing strategies for the development of AI ...
  20. [20]
    AI for Good Global Summit to ensure AI benefits humanity
    The 2nd AI for Good Global Summit at ITU Headquarters in Geneva, 15-17 May 2018, will take action to ensure that Artificial Intelligence (AI) accelerates ...
  21. [21]
    Featured speakers and demos at the AI for Good Global Summit 2025
    Jun 16, 2025 · The four-day summit will host talks from prominent AI figures, including: Doreen Bogdan-Martin, Secretary-General, ITU; H.E. Alar Karis, ...
  22. [22]
    AI for Good Innovation Factory India 2025
    The AI for Good Innovation Factory is the leading UN-based startup pitching and acceleration platform, helping startups grow and scale their AI-powered ...
  23. [23]
    [PDF] AI Governance Day - From Principles to Implementation
    ... AI is central to all these pillars. She pointed out the AI for Good platform, which brings together over 40 UN agencies and 27,000 experts from 180 ...
  24. [24]
    What to expect from the ITU 'AI for Good' summit - SWI swissinfo.ch
    Jul 6, 2023 · Looking at the line-up of participants, executives from Amazon, Google DeepMind and other leaders from the fields of industry, diplomacy and ...
  25. [25]
    We Need Effective Governance to Shape AI for Good
    May 29, 2024 · To successfully govern AI for the benefit of all, we need our approach to be as dynamic, innovative and creative as the pursuit of AI itself.
  26. [26]
    XPRIZE and ITU: A Partnership with a Unique Mission
    May 21, 2019 · Just over three years ago, XPRIZE launched the USD 5 million IBM Watson AI XPRIZE with the mission of accelerating the AI for Good movement.
  27. [27]
    AI for Good Global Summit 2018 - ITU
    The 2018 summit, organized by ITU, focused on practical AI applications for sustainability, long-term benefits, and achieving Sustainable Development Goals.
  28. [28]
    Huawei and partners win prestigious international awards for ...
    Jun 5, 2024 · About 6,000 participants joined in person the ITU-organized AI for Good summit, which showcased innovations in generative AI, robotics, and ...
  29. [29]
    ITU Launches AI Skills Coalition with AWS, Microsoft, and Partners ...
    Jan 21, 2025 · ITU's AI Skills Coalition empowers marginalized communities and policymakers with free AI education, bridging global skills gaps for equitable development.
  30. [30]
    AI For Good Global Summit 2025 - SDG Knowledge Hub
    The 2025 AI For Good Global Summit aims to identify practical applications of artificial intelligence (AI) to accelerate progress towards the SDGs.
  31. [31]
    Conflicts and complexities around intellectual property and value ...
    This article aims to shed light on stakeholders' perspectives on the intellectual property (IP) and value sharing of AI technologies developed by PPPs.
  32. [32]
    AI for social good: Improving lives and protecting the planet - McKinsey
    May 10, 2024 · Challenges and risks of scaling AI for social good​​ Aside from funding, the biggest barriers to scaling AI continue to be data availability, ...
  33. [33]
    Redefining Radiology: A Review of Artificial Intelligence Integration ...
    AI, particularly its subset machine learning, is radically improving radiology, strengthening image analysis, and mitigating diagnostic errors. AI algorithms ...
  34. [34]
    Role of AI in diagnostic imaging error reduction - PMC - NIH
    Aug 30, 2024 · The authors believe that AI could play a prominent role in reducing these error types. In fact, AI systems are unaware of the results of prior ...
  35. [35]
    A predictive analytics model for COVID-19 pandemic using artificial ...
    The aim of this study is to design a predictive model based on artificial neural network (ANN) model to predict the future number of daily cases and deaths ...
  36. [36]
    AI-powered COVID-19 forecasting: a comprehensive comparison of ...
    Mar 28, 2024 · The objective of this study was to assess the efficiency and accuracy of various deep-learning models in forecasting COVID-19 cases within the UAE.
  37. [37]
    Artificial intelligence for forecasting and diagnosing COVID-19 ...
    In this survey, authors investigated the main scope and contributions of AI in combating COVID-19 from the aspects of disease detection and diagnosis to ...
  38. [38]
    Role of Artificial Intelligence in Reducing Error Rates in Radiology
    Sep 10, 2025 · This scoping review examines how artificial intelligence (AI) can help reduce errors in radiology, an area where accuracy is critical to ...
  39. [39]
    Heterogeneity and predictors of the effects of AI assistance ... - Nature
    Mar 19, 2024 · This large-scale study examined the heterogeneous effects of AI assistance on 140 radiologists across 15 chest X-ray diagnostic tasks and identified predictors ...
  40. [40]
    AI in diagnostic imaging: Revolutionising accuracy and efficiency
    The review discusses how AI-enhanced image analysis significantly reduces errors and accelerates diagnostic processes, leading to quicker patient diagnosis and ...
  41. [41]
    IBM halting sales of Watson AI tool for drug discovery - STAT News
    Apr 18, 2019 · Citing lackluster financial performance, IBM is halting development and sales of a product that uses its Watson artificial intelligence ...
  42. [42]
    Farewell to "Watson For Drug Discovery" | Science | AAAS
    Apr 18, 2019 · STAT is reporting that IBM has stopped trying to sell their "Watson for Drug Discovery" machine learning/AI tool, according to sources within the company.
  43. [43]
    The Impact of Artificial Intelligence on Healthcare - NIH
    It examines the uses and effects of AI on healthcare by synthesizing recent literature and real‐world case studies, such as Google Health and IBM Watson Health, ...
  44. [44]
    Artificial intelligence in healthcare and medicine: clinical ...
    Sep 23, 2025 · Artificial intelligence (AI) has emerged as a transformative tool capable of addressing these issues by enhancing diagnostics, treatment ...
  45. [45]
    Global Forest Watch: Forest Monitoring, Land Use & Deforestation ...
    Global Forest Watch offers free, real-time data, technology and tools for monitoring the world's forests, enabling better protection against illegal ...
  46. [46]
  47. [47]
    Artificial Intelligence Helps Distinguish the Forests From the Trees
    Dec 17, 2018 · Orbital Insight and Global Forest Watch (GFW) are working together to leverage these cutting-edge technologies to create preliminary oil palm ...
  48. [48]
    The role of artificial intelligence in accelerating renewable energy ...
    AI optimizes renewable energy by enhancing forecasting, efficiency, and grid integration, driving sustainable transitions. •AI-driven tools analyze data to ...
  49. [49]
    AI in Renewable Energy: [Use Cases, Benefits & Solutions for 2025]
    Feb 12, 2025 · AI has boosted solar energy efficiency by 20% by optimizing panel orientations and tracking sunlight, as seen in Google's collaboration with ...
  50. [50]
    AI-informed conservation genomics | Heredity - Nature
    Dec 27, 2023 · In a recent study, Wilder et al. (2023) analysed genomic data from 240 mammal species to predict their extinction risk categories in the Red List.
  51. [51]
    What Are the Limits of AI in Environmental Monitoring? → Question
    Apr 28, 2025 · AI limits in environmental monitoring include data quality, model complexity, human expertise need, and cost.
  52. [52]
    Artificial intelligence for geoscience: Progress, challenges, and ...
    Sep 9, 2024 · AI accelerates understanding of Earth systems, but faces challenges like reliability, interpretability, ethical issues, data security, and high ...
  53. [53]
    Implementing artificial intelligence and machine learning algorithms ...
    Integrated, enterprise-scale platforms favor large farms, while mobile AI applications yield 15-30 percent gains for smallholders. Converging technologies ...
  54. [54]
    Enhancing precision agriculture: A comprehensive review of ...
    This review comprehensively analyses the potential of machine learning and artificial intelligence in transforming farming operations.
  55. [55]
    AI For Sustainable Agriculture: 7 Shocking Yield Boosts - Farmonaut
    AI-driven precision farming can increase crop yields by up to 30% through optimized resource allocation and real-time monitoring.
  56. [56]
    Can AI Technologies Help Expand Credit Access? - Ideas Matter
    Nov 7, 2024 · In Latin America, AI-driven credit scoring has been instrumental in reaching micro, small, and medium enterprises (MSMEs), as well as ...
  57. [57]
    Small Loans, Big Impact: AI Credit Scoring for the Unbanked
    May 30, 2025 · Reduced Default Rates: MFBS using AI scoring report 15-30% decreases in non-performing loans, directly improving bottom-line results. Expanded ...
  58. [58]
    The Effect of AI-Enabled Credit Scoring on Financial Inclusion
    To summarize, the overall effect of the AI model is to improve profitability with little impact on loan approval. It is worth noticing that the “AI model update ...
  59. [59]
    Exploring the influence of property rights on rural revitalization efforts
    The reform of property rights systems has been shown to significantly promote the non-agricultural transfer of rural labor and improve rural household welfare ...
  60. [60]
    Digitization and Development: Property Rights Security, and Land ...
    Jul 21, 2021 · I test the land and labor market effects of a property rights reform that computerized rural land records in Pakistan.
  61. [61]
    Agricultural Innovation & Technology Hold Key to Poverty Reduction ...
    Sep 16, 2019 · Agricultural innovation & technology hold key to poverty reduction in developing countries, says World Bank Report.
  62. [62]
    A Machine Learning Framework for Regional Damage Assessment ...
    The catastrophic Kahramanmaraş earthquakes (Mw 7.7 and 7.6), which occurred on 6 February 2023, in southeast Turkey, provide an opportunity to advance damage ...
  63. [63]
    An AI-based framework for earthquake relief demand forecasting
    Feb 15, 2024 · This paper presents an AI-based framework for earthquake relief demand forecasting, aiming to optimize the distribution of emergency resources.
  64. [64]
    World Food Programme (WFP) - AI for Good
    For example, humanitarian emergencies such as floods in Mozambique in 2019 can be included in the model (from M1). Data on refugee camps and their dynamic ...
  65. [65]
    Artificial Intelligence (AI) | United Nations
    AI includes a diverse range of technologies that can be defined as 'self-learning, adaptive systems'.
  66. [66]
    ReliefWeb and Amazon distribute life-saving insights faster with ...
    Apr 14, 2025 · The original launch of ReliefWeb reduced the time it takes to issue situation reports from 2 weeks to 24 hours. Generative AI can cut the time ...
  67. [67]
    Challenges and Limitations of AI in Disaster Response Systems
    Apr 13, 2025 · These challenges include data quality and availability, algorithmic biases, integration with existing infrastructure, privacy concerns, and the ...
  68. [68]
    Kamal Kishore: How can AI help us tackle complex disaster risks?
    Jul 22, 2025 · AI can only work with the data it's given, and risk is often under-represented or misrepresented in marginalized areas. This is both a technical ...
  69. [69]
  70. [70]
    AI is improving Africa's harvests - U.S. Embassy in Uganda
    Mar 26, 2024 · Tiendrebeogo and other farmers in Africa are seeing significant increases in yields, thanks to new artificial intelligence tools.
  71. [71]
    AI-Driven Digital Health Ecosystems: Empowering India's Economy ...
    Mar 12, 2025 · AI based solutions proved to be 63% faster than traditional healthcare models in terms of diagnostic speed, and reduced misdiagnosis rate by 28% ...
  72. [72]
    The Role of AI in Medical Diagnostics in India - TechSci Research
    Jun 27, 2025 · Startups and hospitals in India are using AI to assist radiologists in identifying diseases such as tuberculosis, cancer, and fractures, thereby ...
  73. [73]
    AI and the future of agriculture - IBM
    Discover how artificial intelligence is transforming agriculture to tackle climate change and meet growing food demand.
  74. [74]
    AI can be a game-changing solution for farmers: FAO Innovation Chief
    Apr 2, 2025 · The goal is to harness the power of science and innovation to transform agrifood systems and deliver solutions directly to farmers and those who ...
  75. [75]
  76. [76]
    The Impact of AI on Efficiency and Operations in Logistics
    May 23, 2024 · For instance, implementing AI in route planning has been shown to reduce fuel usage by up to 10%, reflecting substantial cost savings and ...
  77. [77]
    [PDF] AI as a Catalyst to Decarbonize Global Logistics
    With support from AI, the freight logistics sector could potentially reduce its emissions by 10-15%, while increasing both efficiency and service levels.
  78. [78]
    Meet the winners for the 2025 AI for Good Impact Awards - ITU
    Jul 9, 2025 · ... Chat has already reached over 460,000 farmers ...
  79. [79]
    The Role of AI in Developing Resilient Supply Chains | GJIA
    Feb 5, 2024 · Early adopters of AI-enabled supply chain management have reduced logistics costs by 15 percent, improved inventory levels by 35 percent, and ...
  80. [80]
    Between 70-85% of GenAI deployment efforts are failing to meet ...
    70-85% of current AI initiatives fail to meet their expected outcomes. In 2019, MIT cited that 70% of AI efforts saw little to no impact after deployment.
  81. [81]
    The Root Causes of Failure for Artificial Intelligence Projects ... - RAND
    twice the rate of failure for information technology projects that do not involve ...
  82. [82]
    [PDF] The Root Causes of Failure for Artificial Intelligence Projects ... - RAND
    Aug 13, 2024 · By some estimates, more than 80 percent of AI projects fail—twice the rate of failure for information technology projects that do not involve ...
  83. [83]
    AI Governance Unwrapped: Insights from 2024 and Goals for 2025
    Dec 18, 2024 · 2024 exposed a 42% shortfall between anticipated and actual AI deployments, alongside challenges like ungoverned third-party models ...
  84. [84]
    AI Fail: 4 Root Causes & Real-life Examples - Research AIMultiple
    Jul 24, 2025 · 1. Unclear business objectives · 2. Poor data quality · 3. Lack of collaboration between teams · 4. Lack of talent.
  85. [85]
    Why AI Projects Fail — The Data Quality Crisis
    70–80% of AI projects fail due to poor data quality. Discover why—and how to secure your initiatives with effective data governance.
  86. [86]
    Why do AI projects fail? - Autentika
    Nov 3, 2024 · Why do AI projects fail? · 1) Unrealistic expectations · 2) Failure to understand users' needs · 3) Poor data quality · 4) Talent shortages and lack ...
  87. [87]
    95% of AI Projects Fail. Here's Why That's a Good Thing - Medium
    Aug 25, 2025 · The study found that projects frequently fail due to a lack of high-quality data, bad data, or messy, unstructured data ... Talent Shortage: ...
  88. [88]
    Why Companies Fail in Their AI Projects: Lessons from the Frontlines
    Aug 24, 2025 · The first and perhaps most fundamental reason for failure is the disconnect between AI initiatives and business goals. Many companies embark on ...
  89. [89]
    The Surprising Reason Most AI Projects Fail – And How to Avoid It at ...
    twice the rate of failure for information technology projects that do not involve AI. ...
  90. [90]
    An MIT report that 95% of AI pilots fail spooked investors ... - Fortune
    Aug 21, 2025 · An MIT report that 95% of AI pilots fail spooked investors. But it's the reason why those pilots failed that should make the C-suite anxious.
  91. [91]
    [PDF] Face Recognition Vendor Test (FRVT), Part 3: Demographic Effects
    Dec 19, 2019 · NIST intends this report to inform discussion and decisions about the accuracy, utility, and limitations of face recognition technologies.
  92. [92]
    Dissecting racial bias in an algorithm used to manage the health of ...
    Oct 25, 2019 · Bias occurs because the algorithm uses health costs as a proxy for health needs.
  93. [93]
    [PDF] AI for Good-Innovate for Impact - ITU
    In an age of rapid technological progress, artificial intelligence (AI) is a crucial tool for meaningful global progress in areas such as climate action, ...
  94. [94]
    Exploring Fairness-Accuracy Trade-Offs in Binary Classification
    Apr 27, 2024 · In this paper, we explore the trade-off between fairness and accuracy when data is biased and unbiased. We introduce two versions of a ...
  95. [95]
    Inherent Limitations of AI Fairness - Communications of the ACM
    Jan 18, 2024 · Hence, AI fairness suffers from inherent limitations that prevent the field from accomplishing its goal on its own.
  96. [96]
    [PDF] Fairness-Accuracy Trade-Offs: A Causal Perspective
    At the same time, enforcing fairness constraints may reduce the overall disparity between groups. Based on this, we developed an algorithm for attributing ...
  97. [97]
    Understanding the potential applications of Artificial Intelligence in ...
    These robots can help farmers save money on manual labour and reduce worker workload. The main aim of this paper is to study the different ...
  98. [98]
    How AI is Solving the Biggest Operational Challenges in Agriculture
    Oct 7, 2025 · Autonomous Machinery, Reduces manual labor dependency, improves efficiency, 3–5 years (higher upfront cost). AI-Powered Supply Chain & ...
  99. [99]
    (PDF) Assessing the Impact of Artificial Intelligence on Job and Task ...
    Sep 22, 2025 · A widely held assertion is that artificial intelligence replaces specific tasks within jobs rather than entire occupations. This paper ...
  100. [100]
    Artificial intelligence and the wellbeing of workers - PMC
    Jun 23, 2025 · (2025) find evidence that AI exposure led to job losses across US commuting zones, particularly for low-skill and production workers, while ...
  101. [101]
    [PDF] AI in Agriculture: Opportunities, Challenges, and Recommendations
    Mar 2, 2025 · Another concern among agricultural workers is that AI and automation may lead to job displacement. However, AI should ideally be viewed as a ...
  102. [102]
    [PDF] Future of Jobs Report 2025 - World Economic Forum: Publications
    Advancements in technologies, particularly AI and information processing (86%); robotics and automation (58%); and energy generation, storage and distribution ...
  103. [103]
    The Fearless Future: 2025 Global AI Jobs Barometer - PwC
    PwC analysed close to a billion job ads from six continents to uncover AI's global impact on jobs, skills, wages, and productivity.
  104. [104]
    AI Will Transform the Global Economy. Let's Make Sure It Benefits ...
    Jan 14, 2024 · In advanced economies, about 60 percent of jobs may be impacted by AI. Roughly half the exposed jobs may benefit from AI integration, enhancing ...
  105. [105]
    Is AI Contributing to Rising Unemployment? | St. Louis Fed
    Aug 26, 2025 · If generative AI drives sustained productivity growth, it could ultimately create new jobs and industries, potentially offsetting displacement ...
  106. [106]
    [PDF] AI Standards for Global Impact: From Governance to Action - ITU
    The mission of AI for Good is to unlock AI's potential to serve humanity through building skills, AI standards, and advancing partnerships. As the leading UN ...
  107. [107]
    AI for Good – Innovate for Impact 2024 - ITU
    AI for Good – Innovate for Impact 2024. The AI industry is driving a ... However, experts caution about the widening digital divide and the perpetuation of biases ...
  108. [108]
    [PDF] 490 International Journal of Advance and Applied Research ISSN ...
    This is evidenced by two UN reports: the AI for Good: Innovate for Impact report lists only 2 out of 40 use cases, while the UN Activities on AI report ...
  109. [109]
    (PDF) Artificial intelligence (AI) and global health: How can AI ...
    Aug 29, 2018 · ... randomised controlled trials in settings such as Kenya ...
  110. [110]
    [PDF] AAAI 2025 Presidential Panel on the Future of AI Research
    AI ethics and safety, AI for social good, and sustainable AI have become central themes in all major AI conferences. Moreover, research on AI algorithms and ...
  111. [111]
    The Annual AI Governance Report 2025: Steering the Future of AI
    ITU AI for Good Summit 2025: AI Governance Day (July 2025), International AI Standards Exchange (with ISO/IEC) (New Delhi, Oct 2024, and Geneva, July 2025).
  112. [112]
    The 'AI for Good' Agenda: For Whose Benefit? | TechPolicy.Press
    Jun 24, 2025 · AI for good discourse can reinforce inequality, enable digital colonialism, and fuel harmful nationalism, write María Jurado and Suvradip ...
  113. [113]
    AI for social good and the corporate capture of global development
    This article focuses on the AI for Social Good (AI4SG) movement, which aims to leverage Artificial Intelligence (AI) and Machine Learning (ML) to achieve ...Missing: globalist | Show results with:globalist
  114. [114]
    The Case for Private AI Governance | The Regulatory Review
    Aug 26, 2025 · Private governance and regulatory sandboxes are the key to democracy, efficiency, and innovation in AI regulation.
  115. [115]
    AI for Good Global Summit 2024 - ITU
    Join the AI for Good Summit 2024 in Geneva to accelerate UN SDGs with AI solutions for health, climate, gender equality, and more.
  116. [116]
    AI for good: Why AI has the potential to achieve the SDGs | EY - Global
    Sep 18, 2024 · AI for good: Why AI has the potential to achieve the SDGs ... UN agencies, host country governments and development consulting firms. The ...
  117. [117]
    The U.N. Sustainable Development Goals Are Beyond Saving
    Sep 26, 2023 · The SDGs are replete with imprecise goals and targets such as “substantially reduc[ing] corruption and bribery in all their forms.”
  118. [118]
    No poverty | SDG 1 - World Bank
    SDG 1 End poverty in all its forms everywhere calls for ending poverty by 2030. On the eve of the pandemic, 659 million people struggled on less than $2.15 a ...
  119. [119]
    Sustainable Development Goals: Are we on track for 2030?
    Sep 12, 2025 · Only 18% of the Sustainable Development Goals (SDGs) are on track, with nearly half progressing too slowly and close to a fifth even regressing.
  120. [120]
    World risks big misses across the Sustainable Development Goals ...
    Jul 10, 2023 · Furthermore, more than 30 per cent of these targets have experienced no progress or, even worse, regression below the 2015 baseline. According ...
  121. [121]
    The UN must suspend the SDGs to tackle more urgent crises
    Jul 28, 2023 · The Sustainable Development Goals are vague, utopian and unlikely to be met by 2030. Instead, the world faces four immediate challenges.
  122. [122]
    (PDF) A Critical Analysis of the Sustainable Development Goals
    Abstract. The ambitious UN-adopted sustainable development goals (SDGs) have been criticized for being inconsistent, difficult to quantify, implement and ...
  123. [123]
    Ethics of Artificial Intelligence | UNESCO
    UNESCO produced the first-ever global standard on AI ethics – the 'Recommendation on the Ethics of Artificial Intelligence' in November 2021.
  124. [124]
    [PDF] Principles for the Ethical Use of Artificial Intelligence in the United ...
    Sep 20, 2022 · United Nations system organizations should aim to promote fairness to ensure the equal and just distribution of the benefits, risks and costs, ...
  125. [125]
    10 Ethical Use Principles for Artificial Intelligence in the UN System
    Oct 26, 2022 · AI systems should not lead to individuals being deceived or unjustifiably impaired in their human rights and fundamental freedoms. ...
  126. [126]
    Legal and human rights issues of AI: Gaps, challenges and ...
    This article focusses on legal and human rights issues of artificial intelligence (AI) being discussed and debated, how they are being addressed, gaps and ...
  127. [127]
    Economic potential of generative AI - McKinsey
    Jun 14, 2023 · Generative AI's impact on productivity could add trillions of dollars in value to the global economy—and the era is just beginning.
  128. [128]
    The Projected Impact of Generative AI on Future Productivity Growth
    Sep 8, 2025 · We estimate that AI will increase productivity and GDP by 1.5% by 2035, nearly 3% by 2055, and 3.7% by 2075. AI's boost to annual ...
  129. [129]
    The Economics of Generative AI | NBER
    Apr 24, 2024 · A growing body of work has explored how new AI tools might impact productivity in applications as diverse as coding, writing, and management consulting.
  130. [130]
    Artificial intelligence and modern planned economies: a discussion ...
    Jan 5, 2024 · We conclude that a CCEP economy would need to have a very different outlook from current market practices, with a focus on producing basic “interlinking” ...
  131. [131]
    Randomized Controlled Trials of Artificial Intelligence in Clinical ...
    In 77% (30/39) of the RCTs, AI-assisted interventions outperformed usual clinical care, and clinically relevant outcomes improved with AI-assisted intervention ...
  132. [132]
    Clinical impact and quality of randomized controlled trials involving ...
    Oct 28, 2021 · This study was a cross-sectional survey on RCTs involving traditional statistical or artificial intelligence (TS/AI) tool interventions in peer- ...
  133. [133]
    Economy | The 2025 AI Index Report | Stanford HAI
    1. Global private AI investment hits record high with 26% growth. · 2. Generative AI funding soars. · 3. The U.S. widens its lead in global AI private investment.
  134. [134]
    [PDF] Artificial Intelligence Index Report 2025 | Stanford HAI
    Feb 2, 2025 · Recognized globally as one of the most authoritative resources on artificial intelligence, the AI Index has been cited in major media outlets ...
  135. [135]
    How the US Public and AI Experts View Artificial Intelligence
    Apr 3, 2025 · Experts and the public aren't confident that the government will regulate AI effectively: 62% of U.S. adults and 53% of the experts we surveyed ...
  136. [136]
    AI for good: Research insights from financial services | Brookings
    Aug 4, 2022 · Melissa Koide explains the fairness opportunities and deployment complexities of using AI and machine learning in financial services.
  137. [137]
    Harnessing AI for humanitarian action: Moving from response to ...
    Dec 13, 2024 · This column discusses the potential for AI-based forecasting systems to inform anticipatory action in the humanitarian sector as well as the barriers to ...
  138. [138]
    The Case for Causal AI - Stanford Social Innovation Review
    A closer look at causal AI will show how it can open up the black box within which purely predictive models of AI operate. Causal AI can move beyond correlation ...
  139. [139]
    AI in 2025: From Hype to Reality – What's Next? - Hyperight
    Dec 25, 2024 · In 2025, it feels like we're at a turning point. AI is finally starting to show its true potential in real-world applications. But there's still a lot to ...
  140. [140]
    Beyond the AI Hype - Centre for Future Generations
    Apr 15, 2025 · This report provides a structured, evidence-based examination of AI's current state, its potential for impact, and the key uncertainties that shape its future.
  141. [141]
    How AI can boost productivity and jump start growth
    Jul 16, 2024 · The cumulative productivity gain would be about 17.5% or $7 trillion beyond the current Congressional Budget Office projection for GDP.
  142. [142]
  143. [143]
    [PDF] The Global Impact of AI: Mind the Gap, WP/25/76, April 2025
    AI's global impact depends on countries' sectoral exposure, preparedness, and data access, exacerbating income inequality, disproportionately benefiting ...
  144. [144]
    How are humanitarians using artificial intelligence in 2025 ...
    Aug 7, 2025 · The research reveals that humanitarian workers are not waiting for permission to engage with AI; they are already experimenting and adapting ...
  145. [145]
    AI in humanitarian healthcare: a game changer for crisis response
    Jul 1, 2025 · Prominent applications include AI-powered early warning systems, chatbots for displaced populations, telemedicine platforms, and automated ...
  146. [146]
    Report | Artificial intelligence in the humanitarian sector: mapping ...
    Aug 5, 2025 · As the first comprehensive baseline study of AI in humanitarian work, published in August 2025, this research provides essential insights for ...
  147. [147]
    What Is Edge AI? | IBM
    Edge AI refers to the deployment of AI models directly on local edge devices to enable real-time data processing and analysis without reliance on cloud ...
  148. [148]
    How AI is widening the global/human development gap
    Jun 24, 2025 · The gap between nations with strong digital infrastructures and those without is widening, raising concerns about the equitable distribution of AI's advantages.
  149. [149]
    Edge Computing and Artificial Intelligence - Scaleout Systems
    Edge AI: Uses less bandwidth by processing data locally, which is beneficial in areas with limited connectivity. Cloud AI: Requires more bandwidth since raw ...
  150. [150]
    Supplement ITU-T Y Suppl. 83 (07/2024)
    The integration of AI and IoT technologies in digital agriculture presents a transformative approach to enhancing productivity, sustainability and ...
  151. [151]
    [PDF] Use Cases for AI and IoT for Digital Agriculture - ITU
    This report analyzes how AI and IoT are revolutionizing digital agriculture, including use cases, objectives, innovations, and data collection methods.
  152. [152]
    [PDF] ITU-T Technical Report YSTR.DataModelling-Agri (07/2024)
    It illuminates how digital agriculture is revolutionizing crop management, resource utilization, and sustainability practices, ultimately paving the way for a ...
  153. [153]
    Five Key Issues to Watch in AI in 2025
    Dec 13, 2024 · Recent developments provide some evidence that the leading models may be underperforming the expectations of scaling laws, though others point ...
  154. [154]
    Can AI scaling continue through 2030? - Epoch AI
    Aug 20, 2024 · We investigate four key factors that might constrain scaling: power availability, chip manufacturing capacity, data scarcity, and the "latency ...
  155. [155]
    AI Plateau? Expert Analysis on Large Language Model Performance ...
    Jul 10, 2025 · Explore the evidence of a potential performance plateau in 2025, expert analysis, and implications for AI's future. Are Large Language Models ...
  156. [156]
    U.S. rejects international AI oversight at U.N. General Assembly
    Sep 27, 2025 · The United States clashed with world leaders over artificial intelligence at the United Nations General Assembly this week.
  157. [157]
    Countries Consider A.I.'s Dangers and Benefits at U.N.
    Sep 25, 2025 · The United States, under the Trump administration, has resisted global efforts to regulate A.I., fearing they might hamper American tech ...
  158. [158]
    Balancing market innovation incentives and regulation in AI
    Sep 24, 2024 · Some AI experts argue that regulations might be premature given the technology's early state, while others believe they must be implemented immediately.
  159. [159]
    AI in Action: 5 Essential Findings from the 2024 Federal AI Use Case ...
    Jan 15, 2025 · 1. Compared to 2023, Federal agencies have more than doubled their AI use in the last year, citing improvements to operational efficiency and the execution of ...
  160. [160]
    Fairness in machine learning: Regulation or standards? | Brookings
    Feb 15, 2024 · Regulations can establish a baseline of mandatory ML security and ethics requirements, while standards can provide guidance on best practices ...
  161. [161]
    Mapping the Rise in State-Level AI Regulation in the US - MBHB
    Sep 25, 2024 · So far in 2024, Congress has doubled the number of proposed bills seeking to study and regulate AI compared with 2023.
  162. [162]
    At What Point Do We Decide AI's Risks Outweigh Its Promise?
    Jones calculates that if AI spurs a 10% annual growth rate, global incomes will increase more than 50-fold over 40 years.
  163. [163]
    Full article: Evidence-based AI risk assessment for public policy
    Aug 5, 2025 · The article evaluates 10 AI risks, finding them mitigatable. Recommendations include investing in AI education, regulating AI business models, ...
  164. [164]
    AI Regulation is Coming- What is the Likely Outcome? - CSIS
    Oct 10, 2023 · Voluntary rules are seen by many as a stop-gap measure, but a divided Congress is unlikely to pass a major law with new mandatory rules.
  165. [165]
    U.S. breaks with UN on global AI oversight - Yahoo
    Sep 27, 2025 · “The ability to fabricate and manipulate audio and video threatens information integrity, fuels polarisation and can trigger diplomatic crises .