
Online algorithm

In computer science, an online algorithm is a computational method that processes a sequence of inputs incrementally, making irrevocable decisions based solely on the input observed up to the current point, without knowledge of future inputs or the ability to revise prior choices. This contrasts with offline algorithms, which have complete access to all input in advance and can optimize decisions globally. Online algorithms are essential in systems where data arrives dynamically, such as networking, operating systems, and data stream processing. Their performance is typically evaluated using competitive analysis, a framework introduced by Sleator and Tarjan in their seminal work on list update and paging problems, which measures an algorithm's efficiency by comparing its cost on any input sequence to that of an optimal offline algorithm that knows the entire sequence beforehand. Under this analysis, an algorithm is c-competitive if its cost is at most c times the optimal offline cost for every possible input, where c is the competitive ratio—a key metric that quantifies robustness against worst-case scenarios. This approach, formalized in Borodin and El-Yaniv's comprehensive text, extends beyond deterministic strategies to randomized algorithms, whose expected performance is assessed against adversarial inputs. Prominent examples of online algorithms include paging (or caching), where the goal is to evict items from a fixed-size memory to minimize future faults, with replacement policies like LRU and FIFO achieving the best possible deterministic competitive ratio of k for a cache of size k; the k-server problem, involving movement costs in a metric space to service requests; and load balancing, which distributes tasks across machines as jobs arrive. These problems, rooted in practical applications like virtual memory management and network routing, highlight the trade-offs inherent in online decision-making, often yielding higher costs than offline optima but enabling feasible real-world deployment.
Recent advances incorporate machine-learned predictions to augment traditional online algorithms, improving competitive ratios when forecasts are accurate while maintaining robustness otherwise, though classical competitive analysis remains foundational.

Fundamentals

Definition

Online algorithms are computational methods designed to handle input sequences that arrive incrementally over time, processing each item as it becomes available and making irrevocable decisions based solely on the data observed up to that point, without any foresight into future inputs. This sequential nature requires the algorithm to commit to actions immediately upon receiving each input element, ensuring real-time responsiveness in dynamic environments. In contrast, offline algorithms receive the complete input sequence in advance, enabling them to analyze all data holistically before producing an output or sequence of decisions, which often results in optimal solutions for the given problem instance. Online algorithms, however, operate under uncertainty: they cannot revisit or alter prior commitments, which can lead to higher costs than their offline counterparts in worst-case scenarios. Key properties of online algorithms include their inherent adaptivity to streaming or real-time data flows, where the system state evolves incrementally with each new input, and the absence of lookahead, meaning no peeking ahead to inform current choices. These characteristics make them suitable for applications with unpredictable input sequences, such as network routing or request handling in interactive systems. A simple illustrative example is online median finding, in which numerical elements arrive sequentially and, after each arrival, the algorithm must output the median of the elements accumulated so far; the first element serves as the initial median, and each subsequent insertion requires updating the median without access to future values, often using balanced data structures or a pair of heaps to maintain order statistics efficiently. The performance of online algorithms is typically evaluated using measures like the competitive ratio, which quantifies how much worse their solution is compared to the optimal offline solution in the worst case, a topic explored further in subsequent sections.
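The running-median example above can be sketched with the standard two-heap technique. This is an illustrative implementation (the class name `RunningMedian` and the sample stream are our own): a max-heap holds the smaller half of the elements and a min-heap the larger half, so each arrival is placed irrevocably and the median is always available at the heap tops.

```python
import heapq

class RunningMedian:
    """Online median: `lo` is a max-heap (values negated) with the smaller
    half of the stream, `hi` a min-heap with the larger half; their sizes
    differ by at most one, with `lo` allowed to be the larger."""
    def __init__(self):
        self.lo, self.hi = [], []

    def add(self, x):
        # Decide which half x belongs to, then rebalance sizes.
        if self.lo and x > -self.lo[0]:
            heapq.heappush(self.hi, x)
        else:
            heapq.heappush(self.lo, -x)
        if len(self.lo) > len(self.hi) + 1:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        elif len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2

rm = RunningMedian()
medians = []
for x in [5, 2, 8, 1]:       # elements arrive one at a time
    rm.add(x)
    medians.append(rm.median())
print(medians)  # [5, 3.5, 5, 3.5]
```

Each update costs O(log n), and the median is reported after every arrival without ever inspecting future values.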

Historical Context

The field of online algorithms emerged in the 1970s and 1980s, primarily driven by challenges in scheduling and resource management within computer systems, such as multiprocessor task scheduling and memory paging. Early work addressed anomalies in list scheduling, where greedy online strategies could perform worse than optimal offline ones, as analyzed by Ronald L. Graham in 1966, laying groundwork for later competitive measures. By the late 1960s, paging problems in operating systems, inspired by Peter Denning's working set model from 1968, highlighted the need for algorithms that process requests sequentially without future knowledge. Key figures including Richard M. Karp, Robert E. Tarjan, and Andrew C.-C. Yao advanced early models of online computation during this period. Yao's 1980 analysis of online bin packing established asymptotic performance bounds, showing that no online algorithm could achieve a ratio better than 1.5 times the offline optimum, and influenced subsequent studies. A pivotal milestone came in 1985 with Sleator and Tarjan's introduction of competitive analysis, applied to list update problems; they proved the move-to-front heuristic is 2-competitive and extended bounds to paging algorithms like LRU, formalizing worst-case guarantees relative to offline optima. In the 1990s, online algorithms evolved into a distinct subfield, spurred by the rise of the Internet and real-time systems amid rapid network growth and massive data volumes. Seminal work by Noga Alon, Yossi Matias, and Mario Szegedy in 1996 formalized streaming models for computing frequency moments with sublinear space, bridging online processing to massive datasets in networking and databases. This era also saw integrations with randomized techniques for real-time applications like web caching, solidifying the field's relevance to dynamic environments.

Formal Model

Input and Decision Process

In the formal model of online algorithms, the input is presented as a request sequence \sigma = (\sigma_1, \sigma_2, \dots, \sigma_n), where each request \sigma_i arrives at time step i and must be processed without knowledge of future requests. The algorithm A responds to each \sigma_i by producing a decision a_i immediately upon its arrival, relying solely on the prefix of the sequence seen so far, namely \sigma_1, \sigma_2, \dots, \sigma_i, along with any prior decisions made by the algorithm itself. This decision-making process is captured by the behavior of the algorithm on the full input sequence, denoted A(\sigma) = (a_1, a_2, \dots, a_n), where each a_i = f(A, \sigma_1 \dots \sigma_i) for some function f that depends only on the algorithm's internal state and the requests observed up to time i. The irrevocability of decisions is a core feature: once a_i is output in response to \sigma_i, it cannot be altered or revoked, even if subsequent requests reveal information that might have motivated a different choice. This constraint models real-world scenarios where actions, such as resource allocations or scheduling commitments, must be finalized on the spot without revision. The input \sigma is typically generated under an adversarial model, in which an adversary selects the requests to maximize the cost incurred by the algorithm in the worst case. In the standard oblivious adversarial setting, the entire sequence is fixed in advance, independent of the algorithm's random choices (if any), ensuring a rigorous worst-case analysis. Adaptive variants exist, where the adversary may adjust future requests based on observed decisions, but the foundational model emphasizes the oblivious case to highlight the challenges of absent foresight. This framework underscores the sequential and irrevocable nature of online computation, distinguishing it from offline algorithms that process the entire input at once.
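The request–decision loop above can be sketched in a few lines. This is a minimal illustration with a toy decision rule of our own choosing (accept a request only when it is the best value seen so far); the point is that `decide` receives only the observed prefix and prior decisions, never future requests, and each decision is appended irrevocably.

```python
def run_online(requests, decide):
    """Process sigma one request at a time. `decide` plays the role of f,
    seeing only the prefix sigma_1..sigma_i and earlier decisions."""
    decisions = []
    for i in range(len(requests)):
        a = decide(requests[:i + 1], decisions)
        decisions.append(a)   # irrevocable: never revised later
    return decisions

# Toy rule f: accept a request iff it is the minimum observed so far.
def decide(prefix, past_decisions):
    return "accept" if prefix[-1] == min(prefix) else "reject"

out = run_online([3, 5, 2, 4], decide)
print(out)  # ['accept', 'reject', 'accept', 'reject']
```

An offline algorithm, by contrast, would receive the whole list up front and could accept only the global minimum.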

Performance Metrics

In online algorithms, performance is primarily evaluated through a cost function that quantifies the resources consumed by the algorithm as it processes an input sequence. For an input sequence \sigma, the cost C_A(\sigma) represents the total cost incurred by algorithm A, which may include time, movement distance, or other problem-specific resources accumulated across the sequence of irrevocable decisions. This cost is computed under the input model of sequential requests, where each decision contributes additively to the overall measure. A key aspect of optimality in this framework involves comparing C_A(\sigma) to the cost of an offline optimum, denoted OPT(\sigma), which is the minimum cost achievable by an algorithm with complete foreknowledge of \sigma. This comparison highlights the inherent limitations of online processing, as OPT(\sigma) serves as a lower bound on achievable cost, revealing the penalty for lacking future information. Absolute performance measures extend beyond relative comparisons to direct assessments, such as the total cost C_A(\sigma) itself or the running time of decision-making per request, ensuring the algorithm remains computationally feasible in real-time settings. To address robustness against varying inputs, metrics like amortized cost provide a smoothed measure over sequences of operations, averaging high-cost operations with lower ones to assess long-term efficiency. This approach gauges sensitivity to input variations by considering the total cost divided by the sequence length, or by using potential functions to bound per-step expenses, offering insight into behavior across diverse scenarios.

Analysis Methods

Competitive Analysis

Competitive analysis provides a framework for evaluating deterministic online algorithms by their performance relative to an optimal offline algorithm that has full knowledge of the input in advance. This approach focuses on worst-case guarantees, ensuring that the online algorithm's cost remains bounded by a constant multiple of the offline optimum's cost, plus possibly an additive constant. Formally, an online algorithm A is c-competitive if, for every input sequence \sigma, its cost satisfies C_A(\sigma) \leq c \cdot \mathrm{OPT}(\sigma) + b, where \mathrm{OPT}(\sigma) denotes the cost of the optimal offline algorithm on \sigma, c \geq 1 is the competitive ratio, and b is a constant independent of \sigma. This definition, introduced by Sleator and Tarjan, quantifies how closely an online algorithm approximates the hindsight-optimal solution, emphasizing robustness against adversarial inputs. The additive term b accounts for fixed overheads that do not scale with the input size, making the measure practical for problems where exact matching is impossible online. Lower bounds on the competitive ratio for deterministic algorithms are typically established by constructing specific adversarial input sequences that force any online algorithm to incur high cost relative to the offline optimum. Yao's principle relates the worst-case performance of randomized algorithms to the average-case performance of deterministic ones over fixed input distributions and is primarily used to derive lower bounds for randomized algorithms. Upper bounds on the competitive ratio are often proven using potential function methods, which assign a non-negative potential \Phi to each configuration of the online algorithm, transforming the analysis into an amortized accounting of costs.
For instance, in the k-server problem—where k servers must move to serve requests in a metric space—a potential function based on distances between server positions and an optimal configuration can show that the Work Function Algorithm achieves a competitive ratio of 2k - 1, as demonstrated by Koutsoupias and Papadimitriou (1995). Such potential functions ensure that increases in potential offset excess costs, yielding a telescoping sum that bounds the total cost. Recent work has disproven the randomized k-server conjecture, showing that no randomized algorithm can achieve an O(\log k) competitive ratio on general metric spaces (Bubeck et al., 2022). Despite its strengths, competitive analysis has limitations, particularly in yielding tight bounds only for certain problems; for example, in the paging problem with cache size k, the best deterministic algorithms achieve a competitive ratio of k, which is tight even for small k such as 2, where no deterministic algorithm can do better than ratio 2 in the worst case. Randomized variants can sometimes achieve better ratios in expectation, but deterministic competitive analysis remains foundational for worst-case guarantees.
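The adversarial lower-bound construction for deterministic paging can be made concrete. With a cache of size k and k+1 distinct pages, an adversary that always requests the one page currently missing from LRU's cache forces a fault on every single request, while the offline optimum faults only about once per k requests. A sketch (the helper names are ours):

```python
def lru_faults(requests, k):
    """Count page faults for LRU with cache size k (least recent at index 0)."""
    cache, faults = [], 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)          # evict least recently used
        cache.append(p)
    return faults

def adversarial_sequence(k, n):
    """Request, at each step, the unique page among {0..k} that is not in
    LRU's cache, by simulating LRU's state (every request is a miss)."""
    cache, seq = [], []
    for _ in range(n):
        missing = next(p for p in range(k + 1) if p not in cache)
        seq.append(missing)
        if len(cache) == k:
            cache.pop(0)
        cache.append(missing)
    return seq

k, n = 3, 20
seq = adversarial_sequence(k, n)
print(lru_faults(seq, k))  # 20: a fault on every one of the n requests
```

The same construction defeats any deterministic demand-paging algorithm, since the adversary can always name the page the algorithm just evicted.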

Randomized Approaches

A randomized online algorithm employs random bits to influence its decisions at each step, effectively forming a probability distribution over deterministic online algorithms. The performance of such an algorithm A is evaluated through the expected cost E[C_A(σ)] incurred on any input sequence σ, where the expectation is taken over the algorithm's internal randomness. This approach contrasts with deterministic algorithms by providing probabilistic guarantees rather than fixed worst-case bounds for every possible input. To establish lower bounds on the competitive ratio of randomized online algorithms, Yao's principle applies a minimax argument over distributions on inputs and deterministic algorithms. Formally, for a minimization problem, the competitive ratio of the optimal randomized algorithm is at least the maximum, over all distributions D on input sequences, of the minimum, over all deterministic algorithms, of the expected cost ratio under D; that is, \min_R \max_\sigma \mathbb{E}[C_R(\sigma)/C_{OPT}(\sigma)] \geq \max_D \min_A \mathbb{E}_{\sigma \sim D}[C_A(\sigma)/C_{OPT}(\sigma)], where R ranges over randomized algorithms and A over deterministic ones. This principle facilitates proving impossibility results by analyzing deterministic performance on carefully chosen input distributions. One key benefit of randomization in online algorithms is the ability to achieve logarithmic competitive ratios in problems where naive strategies perform poorly, often linearly or worse in the worst case. For instance, in the online load balancing problem under the restricted assignment model—where each job can only be assigned to a subset of the machines—online algorithms attain an O(\log n) competitive ratio against the optimal offline solution, matching known lower bounds. More generally, randomization allows an algorithm to hedge against adversarial inputs by probabilistically distributing its decisions.
Techniques for designing and analyzing randomized online algorithms often involve mixing multiple deterministic strategies with appropriate probabilities, or adapting potential functions to bound expected cost. In mixing strategies, the algorithm selects among a set of base algorithms at random, so the combined strategy blends their strengths. For analysis, the potential function is defined to telescope in expectation, accounting for probabilistic choices; this has been used, for example, in randomized variants of online bin packing, where the expected cost is bounded relative to the optimum by maintaining an invariant over random item placements.
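The distribution-over-deterministic-algorithms view can be illustrated by the simplest possible mixture: flip one coin before the sequence starts and run either FIFO or LRU eviction for the whole sequence. The expected cost against an oblivious adversary is then exactly the average of the two deterministic costs. This is an illustrative sketch only, not a mixture with a good competitive ratio:

```python
def fifo_faults(requests, k):
    """Page faults for FIFO: evict in arrival order, ignore hits."""
    cache, faults = [], 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.pop(0)
            cache.append(p)
    return faults

def lru_faults(requests, k):
    """Page faults for LRU: a hit moves the page to the most-recent end."""
    cache, faults = [], 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)
        cache.append(p)
    return faults

def mixed_expected_cost(requests, k):
    """Uniform mixture over the two deterministic algorithms: since the coin
    is flipped once, E[cost] = (cost_FIFO + cost_LRU) / 2."""
    return 0.5 * fifo_faults(requests, k) + 0.5 * lru_faults(requests, k)

seq = [1, 2, 3, 1, 2, 4, 1, 2]
print(fifo_faults(seq, 3), lru_faults(seq, 3), mixed_expected_cost(seq, 3))
# 6 4 5.0
```

Stronger randomized algorithms, such as the marking algorithm for paging, re-randomize during the sequence rather than once up front, which is what enables logarithmic expected competitive ratios.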

Key Examples

Paging Algorithms

In the paging problem, a cache of fixed size k stores pages from a larger universe, and page requests arrive online one at a time. If the requested page is not in the cache (a page fault), the algorithm must evict one resident page to load the new one, with the objective of minimizing the total number of page faults over the request sequence. Among deterministic online algorithms for paging, First-In First-Out (FIFO) evicts the page that has been in the cache the longest, while Least Recently Used (LRU) evicts the page whose most recent access occurred furthest in the past. Both algorithms achieve a competitive ratio of k, meaning their number of page faults is at most k times that of the optimal offline algorithm plus a constant. No deterministic online paging algorithm can achieve a better competitive ratio than k, establishing this as a tight bound. For randomized paging algorithms, the expected competitive ratio improves significantly. The marking algorithm, which marks pages as they are accessed and on a fault evicts a uniformly random unmarked page, achieves an expected competitive ratio of O(\log k) times the optimal offline performance. This harmonic bound reflects the fact that randomized strategies can balance eviction probabilities to approximate the offline optimum more closely than deterministic ones. However, no randomized algorithm can achieve an expected competitive ratio better than H_k \approx \ln k, where H_k = \sum_{i=1}^k \frac{1}{i} is the k-th harmonic number, providing a tight lower bound.
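The gap that competitive analysis measures can be seen directly by comparing LRU with Belady's offline optimum, which evicts the page whose next use lies furthest in the future. On a round-robin sequence over k+1 pages, LRU faults on every request while the optimum faults roughly once per k requests (an illustrative sketch; tie-breaking by smallest page id is our own choice):

```python
def lru_faults(requests, k):
    """Page faults for LRU with cache size k (least recent at index 0)."""
    cache, faults = [], 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)
        cache.append(p)
    return faults

def opt_faults(requests, k):
    """Belady's offline optimum: on a fault, evict the cached page whose
    next use is furthest in the future (never used again counts as infinity)."""
    cache, faults = set(), 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(q):
                for j in range(i + 1, len(requests)):
                    if requests[j] == q:
                        return j
                return float("inf")
            victim = max(sorted(cache), key=next_use)
            cache.remove(victim)
        cache.add(p)
    return faults

k = 3
seq = [0, 1, 2, 3] * 3   # round-robin over k+1 pages defeats LRU
print(lru_faults(seq, k), opt_faults(seq, k))  # 12 6
```

On longer round-robin sequences the ratio between the two counts approaches k, matching the tight deterministic bound.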

Scheduling Problems

Online scheduling problems involve assigning jobs to a set of identical machines as they arrive sequentially, without knowledge of future jobs, to minimize the makespan, defined as the maximum completion time across all machines. Each job reveals its processing time upon arrival, and the algorithm must irrevocably assign it to one machine, where it runs to completion without interruption in the non-preemptive case. This problem models load distribution in systems like cloud computing environments, where jobs represent computational tasks. A fundamental algorithm is list scheduling, which greedily assigns each arriving job to the machine with the smallest current total assigned processing time. Introduced by Graham, this approach guarantees a competitive ratio of 2 - \frac{1}{m} against the optimal offline makespan, where m is the number of machines; for large m, the ratio approaches 2, reflecting the challenge of online decisions without future information. A related heuristic, longest processing time first (LPT), processes jobs in decreasing order of processing time when possible within the online constraint, though it relies on the arrival order approximating this sorting; when combined with list scheduling, it can yield better empirical performance but maintains the same worst-case bound in fully online settings. Randomized variants improve upon deterministic guarantees for identical machines. For instance, on two machines, a randomized algorithm achieves a competitive ratio of \frac{4}{3} in expectation by probabilistically assigning jobs based on current loads and machine states, outperforming the deterministic bound of \frac{3}{2}. These methods introduce randomness into assignment decisions to hedge against adversarial inputs, with extensions to more machines yielding ratios approaching \frac{e}{e-1} \approx 1.582 for general m. In preemptive variants, jobs can be interrupted and migrated between machines during execution, allowing greater flexibility to balance loads dynamically.
Algorithms for preemptive online scheduling achieve competitive ratios such as \frac{e}{e-1} \approx 1.582 for identical machines as m grows, with bounds of at most e \approx 2.718 and at least 2.112 in related settings with non-identical machines; the analysis frequently leverages techniques like linear programming to derive optimal ratios depending on machine speeds.
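Graham's list scheduling rule is short enough to state in full. The sketch below (job list and machine count are our own example) keeps machine loads in a min-heap and assigns each arriving job to the currently least-loaded machine; the resulting makespan is guaranteed to be within a factor 2 - 1/m of the offline optimum:

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling: each arriving job goes to the machine with
    the smallest current load; returns the resulting makespan."""
    loads = [0] * m                     # machine loads, kept as a min-heap
    for p in jobs:
        least = heapq.heappop(loads)    # least-loaded machine
        heapq.heappush(loads, least + p)
    return max(loads)

jobs = [2, 3, 4, 6, 2, 2]   # processing times, arriving online in this order
m = 3
makespan = list_schedule(jobs, m)
# OPT is at least max(sum(jobs)/m, max(jobs)) = 19/3, so the online
# makespan of 8 is well within the guaranteed factor of 2 - 1/3.
print(makespan, max(sum(jobs) / m, max(jobs)))
```

Each assignment costs O(log m), so the rule is practical even for very long job streams.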

Applications and Extensions

Real-World Systems

In operating systems, online algorithms are pivotal for managing virtual memory through page replacement policies, where decisions must be made in real time as pages are requested, without knowledge of future accesses. Linux, for instance, employs an approximation of the Least Recently Used (LRU) algorithm for page replacement, utilizing a two-list structure with active and inactive lists to approximate recency and evict pages efficiently under memory pressure. This approach, refined over time, includes the Multi-Generational LRU (MGLRU) framework introduced in kernel version 5.15, which groups pages into generations based on access patterns to better handle diverse workloads like those in modern servers and desktops. MGLRU enhances performance by reducing scan times during reclamation compared to prior variants. In networking, online scheduling algorithms underpin packet routing and load balancing in data center networks, where traffic arrives unpredictably and must be routed to minimize congestion without lookahead. For example, per-packet load-balanced routing in Clos-based topologies uses randomized online decisions to distribute flows across paths, ensuring near-full utilization while bounding latency. A congestion-aware algorithm assigns incoming flows to paths with the minimum marginal cost, approaching a competitive ratio of 1 against offline optima and achieving up to 70% performance gains over traditional methods like ECMP in simulations on real traces. These methods draw inspiration from classic paging for eviction-like decisions in buffer management, but adapt to multi-path environments. Content delivery networks (CDNs) leverage variants of paging algorithms for web caching to store popular content at edge servers, deciding evictions online based on request sequences to maximize hit rates.
Major CDNs, including Akamai and Cloudflare, predominantly use LRU or its adaptations like Segmented LRU (SLRU), which partitions caches into protected and probationary segments to balance recency and frequency, achieving cache hit ratios of up to 87% for web content and around 40% for video in production deployments. Recent enhancements as of 2022 incorporate latency-aware policies, such as those evicting items based on both access time and delivery delay, improving user-perceived performance in global traces. These systems process billions of requests daily, with algorithms tuned to handle skewed popularity distributions without offline preprocessing. Post-2010 adaptations of online algorithms in cloud computing focus on dynamic resource allocation for virtual machines (VMs) and containers, enabling providers like AWS and Google Cloud to scale resources elastically in response to varying demands. In AWS EC2 spot markets, online posted-price mechanisms allocate underutilized capacity by accepting bids in real-time, using competitive algorithms that achieve near-optimal revenue while minimizing fragmentation, as demonstrated in analyses of production workloads. Google Cloud's Borg cluster manager employs online bin-packing variants for task placement, achieving high utilization through efficient packing that is 3-5% better than best-fit heuristics in large-scale deployments. These systems handle heterogeneous resources like CPU and memory, with algorithms updated iteratively to incorporate machine learning for better approximation under uncertainty.
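The Segmented LRU policy described above can be sketched in a few dozen lines. This is illustrative only; production CDN caches add object sizing, TTLs, and admission policies. New keys enter a probationary segment, a hit promotes a key to the protected segment, and protected overflow demotes back to probation, so one-hit wonders never displace repeatedly accessed items:

```python
from collections import OrderedDict

class SLRU:
    """Minimal Segmented LRU sketch: two LRU segments, probationary and
    protected, each an OrderedDict used as an ordered set."""
    def __init__(self, prob_size, prot_size):
        self.prob = OrderedDict()
        self.prot = OrderedDict()
        self.prob_size, self.prot_size = prob_size, prot_size

    def access(self, key):
        if key in self.prot:                 # hit in protected: refresh recency
            self.prot.move_to_end(key)
            return True
        if key in self.prob:                 # hit in probationary: promote
            del self.prob[key]
            self.prot[key] = True
            if len(self.prot) > self.prot_size:
                demoted, _ = self.prot.popitem(last=False)
                self._insert_prob(demoted)   # demote, don't discard outright
            return True
        self._insert_prob(key)               # miss: admit on probation only
        return False

    def _insert_prob(self, key):
        self.prob[key] = True
        if len(self.prob) > self.prob_size:
            self.prob.popitem(last=False)    # evict probationary LRU

cache = SLRU(prob_size=2, prot_size=2)
hits = [cache.access(k) for k in ["a", "b", "a", "c", "d", "a"]]
print(hits)  # [False, False, True, False, False, True]
```

Note how "a" survives the burst of new keys "c" and "d" because its second access promoted it to the protected segment, which is exactly the recency/frequency balance the policy is designed for.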

Variants and Generalizations

Advice-augmented models extend the online algorithm framework by allowing algorithms to receive partial hints about future inputs, often in the form of advice bits, to improve performance while maintaining robustness against faulty or adversarial advice. This approach quantifies the minimal future knowledge needed to achieve better competitive ratios, bridging the gap between purely online and fully offline solutions. For instance, in the advice complexity model, an algorithm reads a string of advice bits alongside the input stream, and the complexity measures the worst-case number of bits required for a given performance guarantee. Seminal work formalized this by showing that advice complexity relates closely to randomization, enabling complexity classes analogous to those in computational complexity theory. Recent integrations with machine learning provide predictions as advice, such as predicted request sequences in caching, yielding algorithms that are consistent (matching offline optima when advice is perfect) and robust (no worse than classical online algorithms when advice errs). In paging, machine-learned advice has achieved competitive ratios approaching 1 with high-probability robustness bounds. Streaming algorithms represent a specialized variant of online algorithms tailored for processing massive data streams in a single pass with bounded memory, focusing on approximate answers to queries like frequency estimation or distinct-elements counting. Unlike general online algorithms that may allow multiple decisions per input, streaming emphasizes sublinear-space sketches that summarize data on the fly for downstream computations. The Count-Min sketch exemplifies this, using a two-dimensional array of counters updated via hash functions to estimate item frequencies with guaranteed error bounds, achieving O((1/\varepsilon) \log(1/\delta)) space for \varepsilon-accuracy with probability 1 - \delta. This structure supports turnstile streams (where updates can be positive or negative) and has been foundational for applications in network monitoring and databases.
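A compact Count-Min sketch can be written under the standard parameterization w = ⌈e/ε⌉ columns and d = ⌈ln(1/δ)⌉ rows. This is an illustrative sketch: the seeded `hash((seed, item))` scheme below is a simple stand-in for the pairwise-independent hash families the formal guarantee assumes. With non-negative counts, the estimate never underestimates, and it overestimates by more than ε times the stream length with probability at most δ:

```python
import math
import random

class CountMinSketch:
    """Count-Min sketch: d rows of w counters; an item hashes to one counter
    per row, and the frequency estimate is the minimum over its d counters."""
    def __init__(self, eps, delta, seed=0):
        self.w = math.ceil(math.e / eps)
        self.d = math.ceil(math.log(1 / delta))
        rng = random.Random(seed)
        self.seeds = [rng.getrandbits(32) for _ in range(self.d)]
        self.table = [[0] * self.w for _ in range(self.d)]

    def _cols(self, item):
        # One column index per row, derived from a per-row seed.
        return (hash((s, item)) % self.w for s in self.seeds)

    def add(self, item, count=1):
        for row, col in zip(self.table, self._cols(item)):
            row[col] += count

    def estimate(self, item):
        return min(row[col] for row, col in zip(self.table, self._cols(item)))

cms = CountMinSketch(eps=0.01, delta=0.01)
for _ in range(100):
    cms.add("x")
cms.add("y", 5)
print(cms.estimate("x"), cms.estimate("y"))  # each >= its true count
```

The table occupies O((1/ε) log(1/δ)) counters regardless of stream length, which is what makes the structure suitable for single-pass processing of massive streams.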
Multi-objective online algorithms generalize the framework to optimize several conflicting criteria simultaneously, such as minimizing both energy consumption and delay, rather than a single objective. Competitive analysis extends to Pareto fronts or scalarized objectives, where performance is measured by a bi-criteria ratio or the size of a Pareto set approximation. In mobile edge computing, these algorithms balance speed and energy by deciding task offloading to edge servers, modeling the problem as minimizing latency while constraining power usage, with solutions achieving near-optimal trade-offs via online optimization techniques. For example, in task offloading for mobile edge computing, multi-objective frameworks use weighted sums or evolutionary methods to Pareto-optimize energy consumption and execution time, outperforming single-objective baselines by 20-30% in aggregate utility. In the 2020s, quantum online algorithms have emerged as a promising generalization, leveraging superposition and entanglement to achieve speedups in decision-making under uncertainty, particularly for problems with space-bounded memory. These algorithms process inputs quantumly, often improving competitive ratios or space usage over classical counterparts in models like the k-server problem or request-answer games. For instance, quantum variants of online learning in Markov decision processes have demonstrated quadratic speedups for finite-horizon average-reward settings, using quantum linear algebra subroutines to accelerate value iteration, as shown in research up to 2025. Research has also shown that with sublogarithmic space, quantum online algorithms can outperform classical ones in advice complexity and competitiveness for minimization problems, opening avenues for quantum advantages in streaming-like online scenarios.
