
Volunteer computing

Volunteer computing is a type of distributed computing in which individuals voluntarily donate the idle processing power, storage, and network bandwidth of their personal devices—such as desktops, laptops, tablets, and smartphones—to support large-scale scientific projects, thereby aggregating these resources into a virtual, global supercomputer capable of performing exascale computations at minimal cost to researchers. The concept emerged in the mid-1990s with early initiatives like the Great Internet Mersenne Prime Search (GIMPS) in 1996 and distributed.net in 1997, which demonstrated the feasibility of harnessing public computing resources for cryptographic and mathematical challenges. It gained widespread popularity in 1999 through landmark projects such as SETI@home, developed by the University of California, Berkeley, which searched for signs of extraterrestrial intelligence using over 1 million volunteers. Folding@home from Stanford University, launched in 2000 and focused on protein folding simulations, further exemplified the approach and notably scaled to over 2 exaFLOPS during the COVID-19 pandemic.

In 2002, with funding from the U.S. National Science Foundation, the Berkeley Open Infrastructure for Network Computing (BOINC) was established as open-source middleware to enable general-purpose volunteer computing, allowing scientists to easily deploy and manage projects across diverse applications. By 2020, volunteer computing had scaled to involve approximately 700,000 active devices worldwide, delivering over 93 petaFLOPS of computational throughput—comparable to the top supercomputers of the era—while supporting around 30 ongoing projects and contributing to more than 400 peer-reviewed publications in journals such as Nature and Science. Notable projects include Einstein@Home, which searches for gravitational waves; Rosetta@home, advancing protein structure prediction; World Community Grid, tackling global health and sustainability issues; and LHC@home, aiding CERN's Large Hadron Collider data analysis by simulating particle interactions.

This model not only provides cost-effective high-throughput computing—estimated at $200,000 annually for 100 teraFLOPS, far below commercial alternatives—but also fosters public engagement with science by allowing volunteers to contribute to diverse fields like astrophysics, medicine, and climate modeling. Despite challenges such as declining participation due to competition from cloud services and device energy concerns, initiatives like Science United, launched in 2017, have emerged to coordinate resources more efficiently across projects.

Definition and Fundamentals

Core Concept

Volunteer computing is a form of distributed computing in which individuals voluntarily donate idle computational resources, including processing power (such as CPU and GPU cycles), storage capacity, and network bandwidth, from their personal devices—including desktops, laptops, tablets, and mobile phones—to support large-scale scientific research projects. This model harnesses the untapped potential of everyday consumer hardware, enabling computationally intensive tasks without requiring dedicated infrastructure or expense for participants. By aggregating these contributions, volunteer computing democratizes access to high-performance processing power, primarily for academic and nonprofit endeavors in fields like astronomy, medicine, and climate modeling.

The basic mechanics involve volunteers installing lightweight client software on their devices, which connects to a central project server over the Internet. The client periodically downloads small, independent "work units"—discrete computational tasks with input data—and executes them locally during periods of device idleness, such as when the user is away or the system is not under heavy load. Once processing is complete, the client uploads the results back to the server for validation and integration into larger datasets, with the entire cycle repeating seamlessly to minimize disruption to the volunteer's normal usage. This opportunistic approach ensures computations run at low priority, preserving system responsiveness and user experience.

Unlike grid computing, which relies on dedicated resources shared among organizations, such as supercomputers and institutional clusters, volunteer computing draws from publicly owned personal devices with no formal agreements or administrative oversight. In contrast to cloud computing, where users pay for on-demand access to provider-owned infrastructure, volunteer computing operates on a no-cost basis, emphasizing voluntary participation and the use of existing idle hardware rather than rented services. This public-driven model fosters broader accessibility but introduces variability due to the heterogeneous and intermittent nature of volunteer contributions.

The scale potential of volunteer computing is immense, as the aggregation of idle resources from millions of devices can achieve processing power rivaling or exceeding that of traditional supercomputers. For instance, personal computers are typically idle for 80-90% of the time, allowing volunteers to contribute a substantial portion of this unused capacity—often around 80% of their on-time availability—without impacting daily use. Projects have demonstrated sustained rates of 10 petaFLOPS or more from hundreds of thousands of participants, surpassing many institutional supercomputers at a fraction of the cost.
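The fetch-compute-report cycle described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the API of any real client: the in-memory "server" queue, the idle check, and the toy computation are all stand-ins for the corresponding pieces of middleware such as BOINC.

```python
# Minimal sketch of a volunteer-client work cycle (hypothetical names; real
# clients add scheduling, checkpointing, throttling, and validation).
import time

def fetch_work_unit(server):
    """Ask the project 'server' (here, a list) for a self-contained task."""
    return server.pop() if server else None

def device_is_idle():
    """Stand-in for a real idle check (user activity, CPU load, battery)."""
    return True

def compute(work_unit):
    """Run the task locally; a toy summation stands in for science code."""
    return sum(x * x for x in range(work_unit["n"]))

def upload_result(results, work_unit, value):
    """Report the output back for server-side validation."""
    results.append({"id": work_unit["id"], "value": value})

server_queue = [{"id": i, "n": 10_000} for i in range(3)]  # simulated work units
returned = []

while True:
    unit = fetch_work_unit(server_queue)
    if unit is None:
        break                 # no work available; a real client would sleep and retry
    if device_is_idle():      # run only when the volunteer's device is idle
        upload_result(returned, unit, compute(unit))
    time.sleep(0)             # yield; real clients run at low OS priority

print(f"completed {len(returned)} work units")
```

A real client would additionally checkpoint long tasks, limit CPU usage according to user preferences, and retry failed uploads with backoff.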

Operational Principles

Volunteer computing operates through a structured task workflow that begins with the partitioning of complex scientific problems into small, independent work units suitable for execution on volunteer devices. These units are designed to be self-contained, allowing parallel execution without interdependencies, and are often sized to ensure roughly uniform completion times across varying hardware to optimize resource utilization. Once partitioned, the central server distributes tasks to volunteers via periodic communication, where clients request new work, download input files, perform computations locally, and upload results upon completion. This pull-based model accommodates the intermittent availability of volunteer nodes. Volunteer selection in the workflow integrates credit systems to prioritize reliable participants and incentivize quality contributions. Credits, typically measured in floating-point operations (FLOPs), are awarded post-validation to reflect the computational effort expended, with normalization across diverse devices to ensure fairness.

Result validation is a critical step to mitigate errors or sabotage, primarily achieved through replication: multiple instances of the same work unit are assigned to different volunteers, and consensus is established by comparing outputs, often using majority voting or project-specific equivalence functions. Adaptive replication techniques further enhance efficiency by dynamically increasing redundancy only for suspicious results, reducing overall overhead while maintaining accuracy.

To sustain participation, volunteer computing relies on non-monetary incentives that appeal to intrinsic motivations such as altruism, scientific curiosity, and community recognition. Computational credits serve as a primary motivator, quantifying contributions without financial value and enabling volunteers to track their impact on research progress. Leaderboards rank individuals and teams by total credits or recent average credit (computed with a short half-life to emphasize ongoing activity), promoting healthy competition and often leading to recruitment through social networks. Badges and achievements, awarded for milestones like sustained participation or high performance, add elements of gamification, while team-based structures foster collaboration and a sense of belonging, encouraging long-term engagement without coercive measures.

Handling heterogeneity is foundational to volunteer computing, as volunteer pools encompass diverse hardware configurations—including varying CPU architectures, memory capacities, and processing units—as well as software environments like multiple operating systems. Core principles involve adaptive task assignment, where applications are compiled for supported platforms and scheduled based on device-reported capabilities, ensuring compatibility and preventing failures. Network variability, characterized by fluctuating bandwidth, firewalls, and churn (frequent joining/leaving of nodes), is managed through robust communication protocols, such as exponential backoff for retries and estimation models for completion times, allowing the system to gracefully handle disruptions while maximizing throughput.

Ethical principles underpin the model's integrity, emphasizing voluntary participation where individuals freely donate idle resources without obligation or penalty for withdrawal. Anonymity is preserved, as volunteers typically register with minimal identifying information like an email address, unlinked to real-world identities, protecting privacy and reducing exposure. Open-source mandates for core software and applications ensure transparency, enabling auditing, customization, and trust that computations serve legitimate scientific goals without hidden agendas.
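A minimal sketch of the majority-voting step follows, assuming results are directly comparable; real projects often use fuzzy, project-specific equivalence checks to tolerate floating-point differences across platforms.

```python
# Replication-based validation: each work unit is sent to several volunteers,
# and a result is accepted once a quorum of matching outputs exists.
from collections import Counter

def validate(replica_results, quorum=2):
    """Return the consensus result if `quorum` replicas agree, else None
    (a real server would then issue additional replicas)."""
    counts = Counter(replica_results)
    result, votes = counts.most_common(1)[0]
    return result if votes >= quorum else None

print(validate([42, 42, 41]))   # -> 42 (two of three volunteers agree)
print(validate([42, 41]))       # -> None (no quorum; replicate further)
```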

Historical Development

Origins and Early Projects

The concept of volunteer computing emerged from earlier academic experiments in distributed computing during the late 1980s and early 1990s, when researchers began exploring ways to aggregate computational resources across networked machines for large-scale problems. One notable precursor was a 1988 project by the DEC System Research Center, which distributed factoring tasks via email to volunteers, marking an initial foray into harnessing idle Internet-connected computers for collaborative computation. These efforts laid the groundwork for the model's formalization in the mid-1990s, as the proliferation of consumer PCs and Internet connectivity made widespread volunteer participation feasible, shifting from institutional grids to public involvement.

The first major public volunteer computing initiative was the Great Internet Mersenne Prime Search (GIMPS), launched in January 1996 by George Woltman, which invited users to download software for primality testing of Mersenne numbers in the quest for new prime discoveries. This project pioneered the model by relying on volunteers' spare CPU cycles without centralized funding, quickly attracting participants worldwide and discovering several record-breaking primes, such as the 35th known Mersenne prime later that year. Following closely, distributed.net began in 1997 as a nonprofit effort to tackle cryptographic challenges, notably the RSA RC5-56 contest, by coordinating brute-force key searches across thousands of volunteer machines, demonstrating the approach's scalability for compute-intensive tasks.

By 1999, the paradigm expanded into scientific domains with SETI@home, conceived in 1995 by David Gedye and launched on May 17, 1999 by the University of California, Berkeley, to analyze radio telescope data for signs of extraterrestrial intelligence, amassing over 200,000 downloads in its first week. Soon after, Folding@home debuted in October 2000 under Vijay Pande at Stanford University, focusing on simulating protein folding dynamics to advance biomedical research, addressing the high computational demands of molecular modeling that traditional supercomputers struggled to meet affordably. These projects were primarily motivated by the need for vast, low-cost processing power in underfunded fields like astronomy and biology, where grant limitations hindered progress on data-heavy simulations and analyses.

Early implementations faced significant hurdles, including managing unreliable volunteer connections due to varying network stability and hardware heterogeneity, which often led to incomplete tasks or lost results. Basic result verification was another key challenge, addressed through simple redundancy checks where multiple volunteers computed the same work unit to cross-validate outputs, ensuring accuracy despite the decentralized nature of participation. These obstacles were overcome via custom client software designs that tolerated intermittency and prioritized fault-tolerant protocols, setting precedents for future scalability.

Evolution of Platforms

The evolution of volunteer computing platforms in the early 2000s marked a transition from bespoke, project-specific software to more versatile and scalable middleware, enabling broader adoption and resource sharing across multiple initiatives. One of the pioneering efforts was XtremWeb, introduced in 2001 as an experimental global computing platform designed to emulate desktop grids for large-scale distributed processing. XtremWeb emphasized modularity, supporting multiple applications and users in a volunteer environment while addressing challenges like resource discovery and fault tolerance in heterogeneous networks.

Building on this foundation, the Berkeley Open Infrastructure for Network Computing (BOINC) emerged in 2002 as an open-source system developed at the University of California, Berkeley, specifically tailored for volunteer computing. BOINC facilitated the creation and management of multiple scientific projects on a single platform, allowing volunteers to participate in cross-project computations and track their contributions through integrated statistics and credit systems. This design reduced the need for custom client software per project, promoting efficiency and scalability. In parallel, commercial platforms like United Devices' Grid MP, launched in the early 2000s, offered enterprise-grade integration for volunteer resources, focusing on high-performance computing for both public and private applications. Grid MP provided tools for job scheduling and security in volunteer settings, bridging the gap between academic prototypes and practical deployments.

By the mid-2000s, these developments spurred a key standardization milestone: a shift from isolated, project-specific clients to modular architectures that supported multiple concurrent projects, cross-platform compatibility, and easier volunteer onboarding. This evolution expanded the volunteer base dramatically, growing from tens of thousands in early projects to over a million active participants across BOINC-supported initiatives by 2010. The lowered barriers to setup—such as simplified installation and unified interfaces—significantly boosted adoption, exemplified by the launch of World Community Grid in 2004, which leveraged these platforms to advance humanitarian research in areas like disease modeling and climate studies through donated computing power.

Modern Developments

Since 2010, volunteer computing has experienced significant growth, particularly highlighted by the Folding@home project's surge during the COVID-19 pandemic. In 2020, the platform achieved a peak performance of approximately 1.5 exaFLOPS, driven by over 400,000 new volunteers contributing computational resources for protein simulations aimed at combating the virus. This expansion was facilitated by the integration of graphics processing units (GPUs), which accelerated simulations; for instance, GPUs contributed substantially to the exaFLOP-scale performance, marking a shift from CPU-dominant computing to heterogeneous resource utilization in volunteer platforms.

Recent innovations have focused on enhancing reliability and decentralization. In 2024, researchers introduced peer-to-peer (P2P) frameworks for volunteer computing to eliminate single points of failure inherent in centralized systems like BOINC, enabling decentralized task distribution across volunteer nodes. Hybrid models have also emerged, combining volunteer resources with cloud bursting techniques to handle variable workloads; these approaches dynamically scale by offloading excess tasks to public clouds when volunteer capacity is insufficient, as demonstrated in platforms for distributed deep learning.

From 2023 to 2025, volunteer computing expanded into climate science applications, with the International Telecommunication Union (ITU) hosting sessions on leveraging volunteer resources for sustainable computing and environmental modeling. Pilots for drug discovery and machine learning have integrated volunteer frameworks for model training, such as the Smart Distributed Data Factory platform, which uses active learning on distributed volunteer nodes to accelerate molecular data acquisition. Post-pandemic, volunteer participation has rebounded, incorporating mobile and edge devices to broaden resource pools. BOINC, a core platform, reported approximately 5,600 active volunteers and 88,500 computers as of November 2025, reflecting stabilized engagement after the 2020 surge, with growing support for Android-based contributions.

Technical Framework

Middleware and Software

Volunteer computing relies heavily on middleware to orchestrate the distribution of computational tasks across heterogeneous volunteer resources. The Berkeley Open Infrastructure for Network Computing (BOINC) serves as the dominant open-source platform, enabling projects to harness idle processing power from personal devices worldwide. BOINC employs a client-server architecture where the server side manages job scheduling and validation, while the client software runs on volunteer machines to execute tasks. Key components include work unit generators, which create independent computational tasks from scientific applications; validators, which compare multiple results from different volunteers to ensure accuracy and detect errors; and assimilators, which process validated outcomes for scientific analysis. BOINC clients support a wide range of platforms, including Windows, macOS, Linux, and Android, allowing seamless operation across desktops, laptops, and mobile devices. This cross-platform compatibility extends to hardware diversity, accommodating multi-core CPUs; GPUs from NVIDIA, AMD, and Intel; and ARM-based architectures like those in mobile devices.

Alternative middleware systems have emerged to address specific needs in volunteer and desktop grid environments. XtremWeb, a Java-based platform, facilitates global computing by enabling the deployment of distributed applications on volunteer hosts, with features for fault-tolerant execution and data management in heterogeneous grids. Apple's Xgrid, though discontinued in 2012 with the release of OS X Mountain Lion, was influential in demonstrating easy clustering of Macintosh computers for distributed tasks, influencing later setups by simplifying resource pooling without dedicated infrastructure.

Core functionalities of these middleware systems focus on reliable task execution amid volunteer uncertainties. In BOINC, task distribution occurs via periodic client-server communications, where clients request work units based on their capabilities, downloading applications and input data as needed; progress is reported through scheduler messages that update task status and enable credit allocation for completed work. Cross-platform compatibility is achieved through the anonymous platform mechanism and wrapper applications, which let existing executables run under BOINC without modification. Handling churn—where volunteers frequently go offline—is managed by redundancy, such as sending the same work unit to multiple clients and using deadline-based scheduling to reassign unfinished tasks promptly.

As of 2025, BOINC has seen enhancements to broaden hardware support and optimize resource use. Updates in the version 8.x series include improved architecture detection and execution, enabling efficient runs on ARM devices and low-power embedded systems, as well as support for Docker-based applications to facilitate containerized deployments. These improvements maintain BOINC's scalability, supporting large volunteer populations while minimizing overhead in heterogeneous environments.
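The generator-validator-assimilator pipeline can be illustrated end to end with a small simulation. The function names below are descriptive stand-ins for the roles described above, not BOINC's actual interfaces.

```python
# Illustrative sketch of a BOINC-style server pipeline: a work generator
# emits replicated units, a validator compares replica outputs, and an
# assimilator consumes validated results.

def work_generator(n_units, replication=2):
    """Emit `replication` copies of each independent work unit."""
    for i in range(n_units):
        for copy in range(replication):
            yield {"unit": i, "replica": copy, "input": i}

def validator(results_by_unit):
    """Accept a unit's result only if all replicas agree."""
    return {unit: outputs[0]
            for unit, outputs in results_by_unit.items()
            if len(set(outputs)) == 1}

def assimilator(validated):
    """Hand validated results to the science side (here, just print)."""
    for unit, value in sorted(validated.items()):
        print(f"unit {unit}: result {value} stored for scientific analysis")

# Simulate volunteers computing each replica (here, squaring the input).
results = {}
for task in work_generator(3):
    results.setdefault(task["unit"], []).append(task["input"] ** 2)
assimilator(validator(results))
```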

Resource Allocation and Management

In volunteer computing systems, scheduling algorithms are essential for efficiently assigning computational work units to participating devices while accounting for variability in volunteer reliability and hardware capabilities. These algorithms typically evaluate volunteer reliability scores, derived from historical success rates of completed tasks, alongside hardware profiles such as CPU speed, memory capacity, and GPU presence to match tasks appropriately. For instance, in BOINC-based platforms, the server estimates task runtime using the formula est_flop_count(J) / proj_flops(H, V), where est_flop_count(J) is the estimated floating-point operations for job J, and proj_flops(H, V) represents the projected performance of host H for application version V, enabling precise distribution. Priority queuing mechanisms further support time-sensitive tasks by implementing high-priority modes that preempt lower-priority jobs when deadlines are at risk, ensuring critical computations complete within bounds.

Validation techniques in volunteer computing rely heavily on replicated computing to ensure result accuracy amid untrusted and heterogeneous environments. Tasks are typically assigned to 2-3 volunteers, with outcomes compared using majority voting to identify correct results; discrepancies trigger additional replications until a quorum is reached. Adaptive replication refines this by maintaining per-host reliability metrics, such as the number of consecutive validated jobs (N), to dynamically adjust the replication factor—skipping replication for high-reliability hosts while increasing it for others to bound error rates. Reliability analysis handles errors through probabilistic estimation, where the overall reliability can be modeled as $1 - (1 - p)^r$, with $p$ as the probability that a single computation returns a correct result and $r$ as the replication factor; this quantifies the probability of at least one correct result assuming independent errors, guiding minimal $r$ values for desired confidence levels.

Churn management addresses the transient nature of volunteer participation, where devices frequently join or leave the pool, using predictive models to forecast availability and mitigate disruptions. These models analyze historical traces, such as session durations and uptime patterns from projects like SETI@home, to predict aggregate availability for groups of volunteers over extended periods, enabling proactive task reassignment. Checkpointing complements this by allowing applications to save intermediate states every few minutes, facilitating resumption on the same or alternative devices without full recomputation, thus minimizing lost progress from interruptions.

Scalability in volunteer computing is achieved through load balancing across distributed servers and adaptive task sizing to accommodate heterogeneous devices. Global servers employ weighted round-robin policies to distribute incoming requests evenly, partitioning job identifiers among multiple processes to handle millions of daily tasks without bottlenecks. Adaptive task sizing involves generating work units in multiple sizes based on host performance quantiles—such as small tasks for low-power mobiles and larger ones for high-end desktops—ensuring efficient utilization across diverse hardware while preventing overload.
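The two quantitative rules above lend themselves to a short worked example; the figures below are assumed for illustration only.

```python
# Worked example of the formulas above: runtime is estimated as
# est_flop_count(J) / proj_flops(H, V), and the chance of at least one
# correct replica is 1 - (1 - p)**r.

def estimated_runtime_s(est_flop_count, proj_flops):
    return est_flop_count / proj_flops

def replicas_needed(p_correct, target_reliability):
    """Smallest replication factor r with 1 - (1 - p)**r >= target."""
    r, reliability = 0, 0.0
    while reliability < target_reliability:
        r += 1
        reliability = 1 - (1 - p_correct) ** r
    return r

# A 3.6e13-FLOP job on a host projected at 20 GFLOPS -> 1800 s (30 minutes).
print(estimated_runtime_s(3.6e13, 2.0e10))
# If a single computation is correct 95% of the time, two replicas already
# push the probability of at least one correct result above 99.7%.
print(replicas_needed(0.95, 0.997))
```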

Applications

Scientific Fields

Volunteer computing has found prominent applications in several scientific domains, particularly those involving computationally intensive tasks that can be decomposed into independent units. In astrophysics, it supports efforts to detect faint cosmic phenomena, such as periodic signals from rotating neutron stars, by analyzing vast datasets from radio telescopes. In biochemistry, the paradigm aids protein folding simulations and drug discovery processes, where algorithms predict molecular structures and interactions to accelerate therapeutic development. Climate modeling leverages volunteer resources for atmospheric simulations, running ensemble predictions to quantify uncertainties in weather patterns and long-term environmental changes. Mathematics benefits through searches for large prime numbers and optimization problems, employing probabilistic tests and sieving methods across distributed nodes to explore conjectures.

The suitability of volunteer computing stems from its alignment with embarrassingly parallel workloads, where tasks consist of independent work units that require minimal intercommunication, allowing seamless distribution across heterogeneous, volunteered devices without synchronization overhead. This makes it ideal for high-throughput scenarios, such as parameter sweeps of simulations or batch data analyses, but less appropriate for tightly coupled simulations that demand low-latency data exchange between nodes, as volunteer environments exhibit variable availability and network unreliability.

Cross-disciplinary applications extend volunteer computing into integrations with machine learning, particularly for pattern recognition in massive datasets, where distributed models train on volunteered resources to identify anomalies or structures in scientific data. Humanitarian uses include computational modeling for disease risk mapping, simulating epidemic spreads and identifying at-risk populations through parallel geospatial analyses to inform public health responses. By enabling petascale computations—on the order of quadrillions of floating-point operations per second—volunteer computing has facilitated breakthroughs in resource-limited fields, such as enabling large-scale simulations that would otherwise require dedicated supercomputers, thus democratizing access to high-performance resources for scientific discoveries.
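As a concrete illustration of an embarrassingly parallel decomposition, the sketch below turns a hypothetical Monte Carlo parameter sweep into independent work units; the parameter names are invented for the example.

```python
# Why embarrassingly parallel workloads suit volunteer computing: a parameter
# sweep decomposes into self-contained work units with no inter-node
# communication, so any volunteer can take any unit in any order.

def make_work_units(temperatures, seeds):
    """Each (temperature, seed) pair is an independent simulation run."""
    return [{"temperature": t, "seed": s} for t in temperatures for s in seeds]

units = make_work_units(temperatures=[280, 290, 300], seeds=range(4))
print(len(units), "independent work units")  # 12 units, distributable freely
```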

Notable Projects

One of the pioneering volunteer computing projects is SETI@home, launched in 1999 by the University of California, Berkeley, to analyze data from the Arecibo Observatory in search of extraterrestrial radio signals. The project distributed computational tasks to volunteers worldwide, enabling the processing of vast datasets that would otherwise require supercomputing resources. Although task distribution paused in 2020, data analysis continued into 2025, resulting in peer-reviewed publications on signal detection algorithms and findings from historical observations. Over its active period, SETI@home attracted millions of volunteers and delivered sustained computing power equivalent to tens of teraFLOPS on average, contributing to advances in radio frequency interference removal and candidate signal identification.

Folding@home, initiated in 2000 at Stanford University and now led by a global consortium, simulates protein folding dynamics to advance understanding of diseases such as Alzheimer's and cancer. During the 2020 COVID-19 pandemic, the project peaked at over 1 exaFLOPS of aggregate computing power from more than 400,000 new volunteers, enabling rapid simulations of protein structures that informed therapeutic development. By 2025, its performance stabilized at approximately 17 petaFLOPS (as of July 2025), supporting ongoing research with contributions to over 200 peer-reviewed publications on biomolecular mechanisms. This scale has established Folding@home as a benchmark for volunteer-driven biophysical modeling.

Einstein@Home, started in 2005 as a collaboration between the University of Wisconsin-Milwaukee and the Max Planck Institute for Gravitational Physics, harnesses volunteer resources to search for gravitational waves from spinning neutron stars using data from the LIGO detectors as well as radio telescopes and the Fermi gamma-ray satellite. Active through 2025, the project has engaged nearly 500,000 volunteers historically, providing computing power on the order of hundreds of teraFLOPS to conduct all-sky surveys and targeted analyses. In September 2025, the project announced the discovery of four additional pulsars. Key outputs include the discovery of over 20 new pulsars and multiple publications on continuous gravitational-wave upper limits, enhancing astrophysical models of neutron star populations.

World Community Grid (WCG), founded in 2004 by IBM, focuses on humanitarian applications by aggregating volunteer compute for health and sustainability challenges. It supported the Outsmart Ebola Together project (2014–2016), which screened millions of compounds for antiviral therapies, and multiple cancer initiatives like Help Conquer Cancer (2007–2010) and the ongoing Mapping Cancer Markers (2013–present), yielding insights into tumor markers and drug candidates. In 2024–2025, WCG shifted emphasis to environmental research through the Africa Rainfall Project, improving climate-resilient rainfall forecasts using volunteer simulations integrated with weather data. With over 818,000 members contributing more than 2.6 million years of runtime and 7.6 billion results returned as of 2025, WCG has facilitated dozens of publications and real-world applications in disease control and environmental modeling.

Recent volunteer computing efforts have expanded into climate science via BOINC platforms, exemplified by climateprediction.net, which since 2003 has run ensemble climate models on volunteer machines to project regional impacts up to 2080. Highlighted in the 2024 ITU WSIS+20 Forum session on volunteer computing for climate science, these initiatives leverage distributed resources for high-resolution simulations addressing climate change.

Advantages

For Researchers

Volunteer computing offers researchers unprecedented access to massive, scalable computational resources that often exceed the capabilities of traditional institutional hardware. For instance, during the 2020 COVID-19 pandemic, the Folding@home project achieved peak performance of approximately 1.5 exaFLOPS, surpassing the world's fastest supercomputer at the time, IBM's Summit, by more than sevenfold. This scale enables complex simulations, such as protein dynamics or large-scale molecular modeling, that would be infeasible on dedicated clusters due to hardware limitations.

A key advantage is the near-zero cost for hardware acquisition and maintenance, as volunteers donate idle cycles from their devices, shifting expenses primarily to server operations. BOINC-based projects, for example, can sustain medium-scale operations—delivering around 100 teraFLOPS with 10,000 participants—on an annual budget of approximately $200,000 for staffing and infrastructure, in stark contrast to the multimillion-dollar costs of building and running equivalent dedicated supercomputing facilities. This cost efficiency is further highlighted by comparisons showing that achieving 0.1 petaFLOPS via volunteer computing costs about $125,000, versus up to $175 million on commercial cloud platforms.

Researchers benefit from accelerated timelines to scientific results, particularly during urgent scenarios, as volunteer networks can rapidly scale without procurement delays. In the 2020 response, Folding@home's participant base expanded from 30,000 devices pre-pandemic to over one million by May, delivering a surge in compute power equivalent to an orders-of-magnitude increase within weeks and enabling swift analyses for COVID-19 research.

Finally, volunteer computing lowers barriers for underfunded research groups, including those in developing countries, by providing high-performance resources without substantial upfront investment. Projects like those on BOINC allow scientists in resource-constrained environments, such as universities in developing regions, to tackle compute-intensive tasks by tapping global volunteer contributions, bypassing the need for local infrastructure. Additionally, the open-source nature of platforms like BOINC promotes transparency and interdisciplinary collaborations, fostering equitable participation in global scientific efforts.
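A back-of-envelope restatement of these cost figures, using only the numbers quoted above (not authoritative pricing), makes the gap explicit.

```python
# Cost-per-throughput comparison using the figures cited in the text.
volunteer_usd, volunteer_tflops = 200_000, 100   # annual budget for ~100 TFLOPS
cloud_usd_for_01_pflops = 175_000_000            # quoted commercial-cloud figure

per_tflops_volunteer = volunteer_usd / volunteer_tflops       # ~$2,000/TFLOPS-yr
per_tflops_cloud = cloud_usd_for_01_pflops / 100              # ~$1.75M/TFLOPS-yr
print(f"volunteer: ${per_tflops_volunteer:,.0f} per TFLOPS-year; "
      f"cloud: ${per_tflops_cloud:,.0f} per TFLOPS-year")
```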

Broader Impacts

Volunteer computing has significantly democratized access to scientific research by enabling public participation, thereby educating volunteers on complex topics in fields like astronomy, biology, and climate science. Platforms such as BOINC facilitate this engagement by allowing individuals to contribute idle computing resources while providing educational materials, message boards, and progress updates that enhance understanding of ongoing research. For instance, the Einstein@Home project, built on BOINC, directly involves the public in the search for gravitational waves, increasing awareness of scientific methods and goals through volunteer contributions. Historically, BOINC has engaged millions of unique users across its projects, fostering a sense of citizen science and collective impact.

In terms of scientific advancements, volunteer computing has accelerated breakthroughs with real-world applications, particularly in drug discovery and environmental modeling. The Folding@home project has simulated protein dynamics to identify potential drug targets, contributing to the development of therapeutics for diseases such as COVID-19 by revealing cryptic binding pockets and molecular interactions that inform antiviral design. Similarly, climateprediction.net has produced extensive ensembles of climate simulations using donated resources, aiding in the refinement of global models that underpin policy decisions on emissions reduction and adaptation strategies. These efforts have enabled more accurate predictions that inform international agreements like the Paris Accord.

Economically, volunteer computing bridges resource disparities, particularly for researchers in the Global South, where access to high-performance computing is limited by infrastructure costs. By pooling donated cycles from volunteers worldwide, projects like BOINC provide computational power equivalent to supercomputers valued at hundreds of millions of dollars annually, allowing under-resourced scientists to conduct large-scale simulations without institutional funding barriers. For example, initiatives tailored for researchers in developing regions leverage volunteer grids to address local challenges in health and agriculture, promoting equitable participation through accessible Internet connections. This model has inspired estimates that global donated compute equates to billions in avoided hardware and energy costs over two decades.

Culturally, volunteer computing has influenced participatory models in other domains, including artificial intelligence development, by demonstrating the viability of crowdsourced contributions. The success of distributed compute networks has paralleled the rise of volunteer-driven data annotation efforts, such as platforms where individuals label datasets to train machine learning models. This has encouraged hybrid approaches, blending computational volunteering with human input to advance open-source tools, thereby extending the ethos of collective problem-solving beyond science into technology ethics and community-driven innovation.

Challenges

Participant Concerns

Participants in volunteer computing often experience reduced device performance due to the allocation of CPU resources to scientific tasks, which can slow down everyday applications if usage is not properly managed. For instance, the BOINC platform, a common middleware for such projects, allows users to configure CPU utilization limits—typically set to 20-50% of available processing power—to prevent noticeable slowdowns during interactive use. This throttling ensures tasks run primarily when the device is idle, minimizing interference with user activities.

Energy consumption rises when devices execute volunteer tasks, as active computing draws more power than idle states; for example, a typical desktop may increase its draw from 100 watts idle to 150 watts under load, resulting in about 110 kWh of monthly usage versus 73 kWh for idle 24/7 operation. This can elevate electricity bills by approximately $3 per month (at 8 cents per kWh), or more in regions where rates exceed 20 cents per kWh. Prolonged high-load operation generates additional heat, which, if not mitigated by adequate cooling, may accelerate hardware wear such as thermal degradation or component failure. Users can address these concerns through configurable preferences to limit runtime or schedule tasks during off-peak hours.

Initial setup requires downloading and installing the client software, a process that generally takes a few minutes, followed by account creation and project selection via a simple interface. Once installed, background processes operate unobtrusively but can occasionally delay daily tasks if CPU limits are exceeded, prompting users to adjust settings for better balance. Minor annoyances may include occasional notification pop-ups from the client software about task progress or updates, which some find intrusive, alongside perceptions of privacy risks from uploading computation results—though no personal data is transmitted, only anonymized scientific outputs. Middleware like BOINC provides controls to manage these aspects, such as delaying uploads.
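A worked version of the energy arithmetic above, assuming a 730-hour month and the example wattages and electricity rate from the text:

```python
# Marginal energy cost of volunteering, per the figures cited above.
hours_per_month = 730
idle_kwh   = 100 / 1000 * hours_per_month    # ~73 kWh at 100 W idle
active_kwh = 150 / 1000 * hours_per_month    # ~110 kWh at 150 W under load
extra_cost = (active_kwh - idle_kwh) * 0.08  # at 8 cents per kWh
print(f"{idle_kwh:.0f} kWh idle, {active_kwh:.0f} kWh active, "
      f"~${extra_cost:.2f}/month extra")     # ~ $2.92/month
```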

Systemic Issues

Volunteer computing platforms face significant security threats due to the distributed nature of untrusted volunteer devices and potential vulnerabilities in task distribution. One primary risk is the spread of malware through compromised project servers, where attackers could exploit BOINC to propagate malicious executables to volunteers' machines, as seen in the SocGholish campaign that infected thousands of computers via BOINC projects. To mitigate this, platforms like BOINC employ code signing to verify application integrity and prevent unauthorized executables from running. Another threat involves result falsification, where volunteers or attackers submit incorrect computations to sabotage projects, addressed through replication where tasks are run on multiple devices and results validated by consensus. Host security is further protected via sandboxing mechanisms, such as BOINC's account-based sandboxes on Windows, macOS, and Linux, which restrict applications to isolated directories, and optional virtual machine support for stronger isolation of untrusted tasks. Volunteer privacy is enhanced by minimizing tracking during task communication, though servers may log IP addresses for operational needs, reducing exposure risks.

Privacy concerns in volunteer computing arise from the collection and processing of volunteer metadata, which platforms require for efficient resource management but must handle carefully to avoid misuse. Metadata such as hardware specifications (e.g., CPU type, RAM) and uptime patterns are gathered to optimize task allocation, potentially revealing usage habits or device fingerprints if not anonymized. For instance, BOINC clients periodically report host characteristics and availability metrics to servers, raising risks of re-identification if aggregated improperly. To address this, projects process volunteer data in compliance with privacy regulations, limiting collection to essentials and providing opt-out options for public statistics. As of 2025, compliance with GDPR-like regulations has become standard; for example, the BOINC@TACC project anonymizes volunteer data by default on public websites and restricts access to identifiable information, ensuring lawful processing and user consent. These measures help mitigate threats from metadata exposure while enabling platform functionality.

The environmental impact of volunteer computing stems from its aggregate energy consumption across millions of devices, though it often leverages existing idle power more efficiently than dedicated infrastructure. Platforms like BOINC collectively utilize energy comparable to small data centers, with global ICT projections estimating up to 21% of total electricity use by 2030, partly driven by distributed computing. Energy per task in volunteer systems can be higher due to lower peak efficiency compared to data centers (a ratio of approximately 1.5-10x depending on availability), but offsets occur when devices replace heating loads in cold climates or run on diverse renewable mixes. To promote sustainability, green scheduling techniques prioritize low-power devices and time-of-day submissions to minimize waste from evictions, achieving 30-53% energy reductions via reinforcement learning-based optimization in multi-use systems.

Reliability in volunteer computing is challenged by high churn rates, where devices frequently join and leave the pool, disrupting task completion. In BOINC, host availability averages 60% for desktops and 40% for mobiles, implying significant monthly turnover as volunteers disconnect due to hardware changes or technical issues, with daily net losses of hundreds of hosts observed across projects. This churn leads to deadline misses and requires proactive redundancy, such as sending extra task instances if projected completion lags. Solutions include volunteer reputation systems, like BOINC's adaptive replication, which tracks host reliability per application version and reduces redundant computations on proven reliable devices, maintaining error rates below 1% while cutting overhead (sketched below). Cross-project credit aggregation further incentivizes sustained participation by building volunteer reputation across initiatives.
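A minimal sketch of such a reputation-driven adaptive replication policy follows; the trust threshold and field names are illustrative, and BOINC's actual policy keys reliability to host and application version and still spot-checks trusted hosts.

```python
# Adaptive replication: hosts with a long run of validated results earn
# single-instance trust; new or unreliable hosts keep full replication.

class HostReputation:
    def __init__(self, trust_threshold=10):
        self.consecutive_valid = 0
        self.trust_threshold = trust_threshold

    def record(self, validated: bool):
        """Extend the validated streak, or reset it on any failure."""
        self.consecutive_valid = self.consecutive_valid + 1 if validated else 0

    def replication_factor(self):
        """Trusted hosts may run unreplicated; others get redundant copies."""
        return 1 if self.consecutive_valid >= self.trust_threshold else 2

host = HostReputation()
for _ in range(10):
    host.record(validated=True)
print(host.replication_factor())  # -> 1 after ten consecutive validated jobs
```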

Future Directions

Emerging Technologies

The integration of artificial intelligence (AI) and machine learning (ML) into volunteer computing is advancing through frameworks that enable distributed model training on heterogeneous volunteer resources, particularly via federated learning paradigms on edge devices. In 2024, DistML.js emerged as a JavaScript-based framework tailored for volunteer computing environments, allowing browser-based participants to contribute to ML training tasks without dedicated hardware, thus democratizing access to AI workloads. Similarly, the Smart Distributed Data Factory (SDDF) platform leverages AI-driven orchestration to distribute computations across volunteer nodes, achieving scalable processing of complex simulations while maintaining data privacy through edge-based federated updates. These developments address the computational demands of large-scale AI by partitioning models across volunteer devices, with some frameworks optimizing training on heterogeneous mobile setups to achieve up to 3x speedup compared to baseline approaches.

Hybrid systems combining volunteer grids with commercial cloud services are gaining traction to handle variable workloads, incorporating auto-scaling mechanisms that seamlessly shift tasks to providers like AWS during peak demands. The GPUnion platform, introduced in 2025, exemplifies this by enabling campus-scale GPU sharing with hybrid cloud integration, allowing volunteer resources to preemptively offload intensive jobs while preserving autonomy for contributors through containerized deployments. Blockchain technology further enhances these hybrids by providing verifiable credit systems for resource contributions; for instance, incentive mechanisms in blockchain-enabled volunteer platforms ensure tamper-proof tracking of donated compute cycles, mitigating disputes in decentralized task allocation. Such integrations not only boost reliability but also reduce search latency twofold in data-intensive workflows, as demonstrated in Volunteer Edge-Cloud (VEC) scheduling models that dynamically balance volunteer and cloud resources using reinforcement learning.

Expansion into mobile and Internet of Things (IoT) ecosystems is broadening volunteer computing's reach, with dedicated Android and iOS applications facilitating contributions from smartphones during idle periods. DreamLab, a Vodafone-backed app available on both platforms from 2020 until its discontinuation in April 2025, enabled users to donate device processing power for health research simulations and previously aggregated over 1 million volunteers globally for distributed computations. World Community Grid's mobile client similarly supports Android devices in tackling humanitarian projects, processing tasks in the background to harness collective smartphone idle time. For IoT, volunteer frameworks are incorporating sensor data processing, as seen in the IoT-EMS system, which deploys low-power devices in volunteer networks to monitor environmental parameters collaboratively, fusing inputs for applications like climate modeling without central aggregation. This approach scales contributions by treating edge sensors as volunteer nodes, enhancing granularity in distributed monitoring.

Advancements in peer-to-peer (P2P) architectures are reducing reliance on central servers, fostering resilient volunteer computing through 2024-2025 frameworks that emphasize decentralization. A novel P2P volunteer framework developed in 2024 eliminates single points of failure by distributing task coordination across nodes, improving fault tolerance in high-performance environments via gossip protocols for workload propagation (illustrated in the sketch below). Swarmchestrate, proposed in 2025, builds on swarm-intelligence principles for self-organizing P2P orchestration in cloud-to-edge continua, with potential applications to adaptive resource discovery and execution in volunteer settings without hierarchical control. GenTorrent extends P2P concepts to large language model serving, allowing volunteers to share inference resources in a torrent-like manner, achieving scalable deployment across global peers while rewarding contributions through token-based incentives. These innovations enhance system robustness, with P2P models demonstrating up to 50% lower downtime in simulated volunteer networks compared to traditional client-server designs.
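As a toy illustration of gossip-style dissemination in a serverless volunteer network, the simulation below spreads awareness of a task from one node to all peers without any coordinator. It is purely illustrative and omits real concerns such as membership management, NAT traversal, and failure handling.

```python
# Gossip-style task propagation: each round, every informed node forwards the
# task description to one randomly chosen peer, so awareness spreads without
# a central server.
import random

random.seed(1)
peers = list(range(20))
informed = {0}           # node 0 initially holds the task description
rounds = 0
while len(informed) < len(peers):
    for node in list(informed):
        informed.add(random.choice(peers))  # forward to a random peer
    rounds += 1
print(f"all {len(peers)} peers informed after {rounds} gossip rounds")
```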

Sustainability and Growth

Volunteer computing initiatives increasingly incorporate green computing practices to minimize environmental impact, particularly by optimizing energy use and reducing carbon footprints. For instance, reinforcement learning-based scheduling in volunteer systems can achieve 30% to 53% reductions in wasted energy by adaptively allocating tasks to more efficient volunteer devices and minimizing evictions, based on analysis of real-world workload traces. Additionally, comparisons of volunteer computing's carbon footprint against data center operations highlight its potential efficiency, as distributed idle resources often leverage existing power sources more sustainably than centralized facilities, though precise metrics depend on volunteer hardware and local electricity grids. Strategies like off-peak task timing, where computations align with periods of surplus renewable generation or lower grid demand, further support eco-impact reduction, drawing from broader carbon-aware scheduling principles adaptable to volunteer platforms.

Community building efforts focus on volunteer retention through targeted engagement strategies, recognizing the diverse motivations of participants. Gamification elements, such as leaderboards, badges, and points systems tailored to user types—like competitive "super-crunchers" who respond to rankings or collaborative "lay public" who value narratives—have been proposed to boost participation and reduce dropout rates in volunteer cloud computing projects. Intergenerational outreach complements this by fostering inclusive environments, such as programs that pair younger tech-savvy volunteers with older participants, enhancing long-term involvement across demographics in volunteer computing efforts. Effective retention also relies on clear communication, regular feedback on computational contributions, and flexible participation options, which help sustain active volunteer bases over time.

Policy advocacy and institutional funding play crucial roles in ensuring the long-term viability of volunteer computing, with projections indicating substantial growth potential through expanded global participation. The U.S. National Science Foundation (NSF) has provided foundational support, funding the development of platforms like BOINC in 2002 to enable scalable public-resource computing across scientific domains. Ongoing NSF programs, such as those in the Computer and Information Science and Engineering Directorate, continue to back infrastructure that integrates volunteer resources, promoting equitable access and innovation. With broader institutional adoption and policy incentives, volunteer networks have the potential to scale significantly by the 2030s, harnessing idle global devices for high-impact simulations in fields such as medicine and climate science.

Barriers to scaling volunteer computing include the digital divide, which limits participation among underserved populations lacking reliable devices or Internet connectivity, thereby constraining resource aggregation. Addressing this involves integrating corporate social responsibility (CSR) initiatives, where companies donate refurbished devices to nonprofits that redistribute them to low-income communities, enabling broader involvement in volunteer platforms. Examples include programs from organizations like Digitunity and Compudopt, which refurbish and provide computers to bridge access gaps, potentially increasing volunteer computing's reach by empowering marginalized users with necessary hardware.

    The number of devices running Folding@home grew from 30,000 devices pre-pandemic to over one million by May 2020, crossing one exaflop in ...
  62. [62]
    [PDF] Volunteer Computing: Application for African Scientist - ICVolunteers
    volunteer computing can access a high percentage of com- puting resources from the rest of the world and other- wise difficult computing intensive research ...
  63. [63]
    [PDF] BOINC and Volunteer Computing Fact Sheet - Einstein@Home
    In addition, by directly involving the public in science, volunteer computing increases public awareness of scientific goals, methods, and progress. Most ...<|separator|>
  64. [64]
    BOINC in Retrospect - David P. Anderson
    Hence I think that volunteer computing - and BOINC in particular - is an important chapter in the history of science. This essay tells the story of BOINC from ...
  65. [65]
    Folding@home: achievements from over twenty years of citizen ...
    Folding@home has also been making important contributions to our ability to identify and drug cryptic pockets. These pockets are absent in known crystal ...
  66. [66]
    The climateprediction.net BBC climate change experiment - Journals
    Dec 16, 2008 · Predictions made with climate models are widely and increasingly used in policy making (Schellnhuber et al. 2006). Since forecasts are of ...
  67. [67]
    The Computational and Storage Potential of Volunteer Computing
    This paper studies the potential capacity of volunteer computing. We analyzed measurements of over 330,000 hosts participating in a volunteer computing project.
  68. [68]
    BOINC-Based Volunteer Computing Projects: Dynamics and Statistics
    Dec 16, 2022 · In this paper we analyze changes in the number and structure of volunteer computing projects, share of fundamental and applied science, number of volunteers ...
  69. [69]
    Understanding Confusion: A Case Study of Training a Machine ...
    Dec 9, 2024 · Work to date has shown that deep neural network–based mechanisms have vastly reduced volunteer efforts by quickly labeling “easy” data, while ...
  70. [70]
    Turning crowds into communities: The collectives of online citizen ...
    BOINC projects routinely deliver badges to individual users and teams ... (2012) SETI@home, BOINC, and volunteer distributed computing. Annual Review ...
  71. [71]
    How BOINC works - BOINC
    ### Summary of How BOINC Works
  72. [72]
    Heat and energy considerations - BOINC
    ### Summary of BOINC Heat and Energy Considerations
  73. [73]
    SocGholish Malware Exploits BOINC Project for Covert Cyberattacks
    Jul 22, 2024 · SocGholish malware campaign exploits BOINC project, infecting thousands of computers. AsyncRAT and V8 JavaScript used to evade detection in ...
  74. [74]
    SecurityIssues · BOINC/boinc Wiki - GitHub
    Many types of attacks are possible in volunteer computing. BOINC provides mechanisms to reduce the likelihood of some of these attacks.Missing: threats malware
  75. [75]
    Security and privacy threats to volunteer computing - Academia.edu
    The vision of volunteer computing is to provide large-scale computational infrastructure by using dynamic collections of donated desktop computers.Missing: perceptions | Show results with:perceptions
  76. [76]
    Privacy Policy | Einstein@Home
    We process personal data gathered when visiting our websites or while participating in the “Einstein@Home” volunteer computing project in compliance with ...
  77. [77]
    [PDF] Scalable Software Infrastructure for Integrating Supercomputing with ...
    Abstract. Volunteer Computing (VC) is a computing model that uses donated com- puting cycles on the devices such as laptops, desktops, and tablets to do.
  78. [78]
    [PDF] VOLUNTEER COMPUTING: - CERN Indico
    Data-center to Volunteer computing energy consumption ratio. Per task energy ... • Volunteer computing can be a feasible alternative for sustainability in ...
  79. [79]
    Reduction of wasted energy in a volunteer computing system ...
    It makes use of the idle time on computers in order to progress the computation, relinquishing control back to the normal user when they require it.Missing: personal | Show results with:personal
  80. [80]
    An Incentive-Based Mechanism for Volunteer Computing using ...
    Sep 24, 2020 · This article introduces a blockchain-enabled resource sharing and service composition solution through volunteer computing. Device resource, ...
  81. [81]
    The next big thing in science is already in your pocket | Digital Trends
    when personal computers had made their way ...<|separator|>
  82. [82]
    IoT-EMS: An Internet of Things Based Environment Monitoring ...
    In this paper, a novel approach is proposed to build a cost-effective standardized environment monitoring system (IoT-EMS) in volunteer computing environment.
  83. [83]
  84. [84]
    [PDF] BOINC Workshop 2024 Impact Report - CERN
    Sep 9, 2024 · Background. The Berkeley Open Infrastructure for Network Computing (BOINC) is an open-source middleware system for volunteer computing.Missing: 2025 | Show results with:2025<|control11|><|separator|>
  85. [85]
    EcoLife: Carbon-Aware Serverless Function Scheduling for ... - arXiv
    Sep 3, 2024 · EcoLife aims to make serverless computing sustainable and high-performant by performing carbon footprint-aware scheduling of serverless ...Missing: volunteer | Show results with:volunteer
  86. [86]
    (PDF) Gamification for Volunteer Cloud Computing - ResearchGate
    Gamification is a method for increasing people motivation and changing their behaviour towards certain tasks in a nongame context. This position paper advocates ...
  87. [87]
    Strategies for Inclusive Volunteerism: Engaging Across Generations
    Jun 4, 2023 · In this practical guide to multigenerational volunteer involvement, we'll explore the nuances of generational volunteer engagement and tips for engaging each ...Missing: computing | Show results with:computing
  88. [88]
    Retaining volunteers in volunteer computing projects - Academia.edu
    Aug 2, 2010 · This paper develops recommendations for scientists and software engineers setting up or running VCPs regarding which strategies to pursue in ...
  89. [89]
    Directorate for Computer and Information Science and Engineering ...
    CISE supports a wide range of academic institutions, research centers and community-based organizations across the United States and its territories. These ...Funding Rates · CISE/IIS · About CISE · CISE/CCF
  90. [90]
    Bridging or Deepening the Digital Divide: Influence of Household ...
    We find home Internet access has an independent influence on volunteering even after controlling for socioeconomic status. Those with access are more likely to ...
  91. [91]
    Shaping Systems for Computer Ownership & Digital Inclusion
    Anyone, in any community, will have the ability to obtain a computer free from barriers to ownership, made possible by integrated, sustainable systems. Explore ...
  92. [92]
    Donate Your Devices! - Compudopt
    We take your devices, give them a new life, and distribute them at no cost to families and individuals who do not have a device in their homes!Missing: responsibility | Show results with:responsibility