Concurrent user

A concurrent user is an individual who accesses a software application or system at the same time as others, with licensing models restricting the total number of such simultaneous users to a predefined maximum, an approach often referred to as concurrent licensing or floating licensing. This model measures usage based on simultaneous access rather than assigning licenses to specific named individuals, allowing multiple users to share a pool of licenses as long as the concurrency limit is not exceeded. For instance, a license for 10 concurrent users permits up to 10 people to use the software simultaneously, regardless of whether they are the same or different individuals over time, enabling efficient resource sharing in environments such as shift-based operations or distributed teams. Concurrent user licensing differs from named user or per-seat models, where each license is tied to a unique individual or device, by focusing solely on peak simultaneous usage, which can reduce costs for organizations with variable usage patterns. Vendors typically enforce these limits through software metering, license servers, or cloud-based monitoring that tracks active sessions and releases licenses when users log out. Key benefits include cost efficiency for enterprises spanning multiple time zones, greater flexibility for remote or mobile access, and scalability without capping the total number of potential users, though the model requires accurate forecasting of peak loads to avoid access disruptions. Common applications appear in enterprise resource planning (ERP) systems, computer-aided design (CAD) tools, and content management software, where usage fluctuates but simultaneous access must be controlled.

Core Concepts

Definition

A concurrent user refers to an individual or process actively accessing a software application, system, or resource simultaneously with others, where the count is based on the number of active sessions at a given moment. This measurement focuses on actual usage rather than total possible access, capturing instances where multiple entities interact with the system without necessarily requiring dedicated resources for each. Key characteristics of concurrent users include the emphasis on simultaneity, often tied to session-based mechanisms such as logged-in states or ongoing requests that indicate active use. Unlike the total number of registered or authorized users, which may be unlimited, the concurrent user count highlights the system's capacity to handle overlapping activities, ensuring it supports multiple interactions without degradation. The concept of concurrent users originated in the early 1960s with the advent of time-sharing systems, such as MIT's Compatible Time-Sharing System (CTSS) in 1961, which enabled multiple users to share mainframe resources through multiprogramming and interactive access. UNIX, developed at Bell Labs starting in 1969 and evolving into a full multi-user environment by the early 1970s, further exemplified this by allowing several terminals to connect to a single PDP-11 computer, facilitating collaborative computing on shared hardware. For example, a system might support 100 concurrent users by permitting that many simultaneous logins or active sessions, even if the total user base exceeds thousands.
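The definition above reduces to a counting problem: given session intervals, find the maximum number that overlap at any instant. A minimal sketch (the `peak_concurrency` helper is hypothetical, not from any vendor's tooling) uses the classic sweep-line technique:

```python
from typing import List, Tuple

def peak_concurrency(sessions: List[Tuple[float, float]]) -> int:
    """Return the maximum number of sessions active at the same moment.

    Each session is a (login_time, logout_time) pair: sort the start/end
    events and track the running count of open sessions.
    """
    events = []
    for start, end in sessions:
        events.append((start, 1))   # login opens a session
        events.append((end, -1))    # logout closes it
    # Process logouts before logins at the same instant, so a session
    # ending exactly when another begins does not count as overlap.
    events.sort(key=lambda e: (e[0], e[1]))
    active = peak = 0
    for _, delta in events:
        active += delta
        peak = max(peak, active)
    return peak

# Three registered users, but at most two are logged in simultaneously:
print(peak_concurrency([(0, 10), (5, 15), (10, 20)]))  # → 2
```

This is exactly the sense in which a system with thousands of registered accounts may report only a few hundred concurrent users.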

Distinctions from Other User Types

Concurrent users differ from named user licensing, where licenses are assigned to specific individuals and remain tied to them irrespective of simultaneous usage. In named user models, each designated user can access the software at any time without impacting availability for others, as the license is not pooled but allocated per person. This contrasts with concurrent licensing, which limits access based on the maximum number of users active at the same time, allowing organizations to purchase fewer licenses than they have employees if peak usage is low. The floating license model is essentially synonymous with concurrent licensing, emphasizing a shared pool of licenses that any authorized user can draw from as needed, provided the number of simultaneous users does not exceed the purchased limit. License servers play a key role here, dynamically checking availability and granting or denying access in real time to enforce the concurrency cap; for instance, in environments like CAD software, a client requesting a session queries the server, which releases the license upon logout for reuse by others. This resource-pooling approach promotes efficiency in multi-user settings but requires robust network infrastructure to manage check-ins and check-outs seamlessly. Unlike total users, which encompass all registered or entitled individuals regardless of activity, concurrent users specifically count only those engaged in active sessions at a given moment, excluding idle or historical accounts. This active-session focus enables precise scaling for systems handling variable demand, where monitoring tools track live interactions rather than cumulative enrollment. Hybrid models, such as named-user floating licenses, combine elements of concurrent and named licensing by assigning licenses to specific users while allowing them to share a pool subject to concurrency limits, offering both flexibility and individual accountability.
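The license-server behavior described above, granting or denying sessions against a fixed pool, maps naturally onto a counting semaphore. A minimal sketch (the `FloatingLicensePool` class is illustrative, not a real license server):

```python
import threading

class FloatingLicensePool:
    """Sketch of a floating-license pool: a fixed number of seats that
    any user may check out, with the concurrency cap enforced by a
    counting semaphore."""

    def __init__(self, seats: int):
        self.seats = seats
        self._available = threading.Semaphore(seats)

    def check_out(self) -> bool:
        # Non-blocking: deny the session if the concurrency cap is reached.
        return self._available.acquire(blocking=False)

    def check_in(self) -> None:
        # Releasing a license makes it available to the next user.
        self._available.release()

pool = FloatingLicensePool(seats=2)
print(pool.check_out())  # → True   (1 of 2 seats in use)
print(pool.check_out())  # → True   (2 of 2 seats in use)
print(pool.check_out())  # → False  (cap reached, request denied)
pool.check_in()          # a user logs out
print(pool.check_out())  # → True   (released seat is reused)
```

A production license server adds network protocols, heartbeats, and stale-session reclamation, but the core invariant is the same: checked-out seats never exceed the purchased count.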

Licensing and Business Models

Concurrent User Licensing

Concurrent user licensing refers to a model where software vendors grant rights to a fixed number of simultaneous users, rather than to individual installations or named individuals, allowing licenses to be dynamically allocated based on demand. This approach is commonly enforced through dedicated license management systems, such as FlexNet Publisher (formerly known as FLEXlm), which operate on a central license server that allocates and tracks licenses. When a user launches the software, the system performs a "check-out" to reserve a license if one is available within the purchased limit; when the session ends, the license is "checked in," making it available for another user. This ensures that the number of concurrent users (typically set according to the organization's expected peak usage) never exceeds the licensed capacity, preventing unauthorized use while optimizing resource utilization. Concurrent licenses come in two primary forms: perpetual and subscription-based. Perpetual concurrent licenses involve a one-time upfront payment for indefinite access to a specific software version, subject to the concurrent user limit enforced by the license manager, though they often exclude ongoing support or updates beyond an initial period. In contrast, subscription-based concurrent licenses require recurring payments, typically monthly or annually, providing continuous access, updates, and support while maintaining the same dynamic allocation of seats. Exceeding the concurrent limit in either model may result in denied access for additional users or, in some agreements, overage fees calculated at a premium rate (such as 25% above the standard subscription price) to cover excess usage during peak periods. Major vendors implement concurrent user licensing through tailored metrics and tools. For instance, Oracle E-Business Suite has historically supported concurrent user licensing, allowing shared access limited by simultaneous sessions.
Oracle's Named User Plus metric licenses a minimum of 25 distinct users per processor for non-simultaneous access, but it is a named user model rather than a strictly concurrent one; alternatively, Oracle's Processor metric licenses the underlying hardware, permitting unlimited concurrent users on fully licensed processors without individual user tracking. Autodesk, in its former multi-user licenses, provided floating access in which the number of simultaneous users was restricted to the purchased seats, with the software installable on unlimited devices but limited to the concurrent count enforced via a license server. These examples illustrate how concurrent licensing differs from named user models by focusing on simultaneous usage rather than total unique users. Legally, concurrent user licensing agreements include detailed clauses defining how users are counted (such as simultaneous sessions recorded in server logs) and require organizations to maintain compliance through accurate tracking. Contracts often grant vendors audit rights to verify adherence, specifying parameters such as audit frequency (e.g., no more than once every 12 to 36 months), notice periods (30 to 60 days), and cost responsibilities (the vendor bears expenses unless significant under-licensing is found, typically over 5% of fees). The shift toward concurrent models gained prominence in the 1990s, as client-server computing replaced standalone installations, moving away from rigid per-seat licensing toward usage-based limits that aligned costs with actual simultaneous demand.
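The overage clause mentioned above is simple arithmetic. A sketch with hypothetical seat counts and prices (the 25% premium matches the figure in the text; everything else is made up for illustration):

```python
def monthly_bill(subscribed_seats: int, peak_concurrent: int,
                 seat_price: float, overage_premium: float = 0.25) -> float:
    """Bill a subscription where usage above the subscribed concurrency
    level is charged at a premium rate, mirroring the overage clauses
    described above."""
    base = subscribed_seats * seat_price
    excess = max(0, peak_concurrent - subscribed_seats)
    overage = excess * seat_price * (1 + overage_premium)
    return base + overage

# 100 subscribed seats at $50/seat; a peak of 110 concurrent users adds
# 10 overage seats billed at $62.50 each.
print(monthly_bill(100, 110, 50.0))  # → 5625.0
```

Real contracts vary in how the peak is measured (instantaneous high-water mark versus sustained usage), which is exactly why the counting clauses discussed above matter.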

Economic Implications

Concurrent user licensing models offer significant cost benefits to organizations by allowing them to purchase licenses based on peak simultaneous usage rather than the total number of potential users, reducing upfront and ongoing expenses. For instance, an organization with 1,000 employees might need only 24 concurrent licenses if peak usage rarely exceeds that level, compared with acquiring 80 or more named-user licenses, potentially lowering licensing costs by up to 70% in scenarios with intermittent access needs. This approach is particularly advantageous for businesses with shift-based workforces, seasonal demands, or hybrid environments where not all staff require simultaneous access, enabling better resource allocation without overprovisioning. From the vendor perspective, concurrent licensing facilitates scalable revenue streams by tying income to actual usage patterns, allowing for tiered pricing and upsell opportunities as organizations expand without proportional license increases. It also helps mitigate software piracy through real-time monitoring and enforcement of concurrency limits, which centralizes control and reduces unauthorized sharing, thereby protecting revenue and ensuring compliance. However, the model introduces administrative overhead for both parties, including the need for accurate usage forecasting and the risk of access denials during unexpected peaks, which can complicate budgeting and operations. Case studies illustrate these economic impacts. A non-profit organization managing 1,500 devices across distributed sites adopted concurrent licensing for its operating system, requiring only 800 licenses for peak usage instead of 1,500 named ones, resulting in a 46% reduction in costs while supporting flexible staff mobility. Similarly, in document and content management software, concurrent models have enabled firms to serve 5 to 50+ employees per license, achieving 50-90% savings over named-user licensing by minimizing idle licenses and accommodating growth without additional upfront investments.
These examples reflect a broader industry shift toward concurrent approaches in enterprise tools, including SAP's BusinessObjects platform, where switching to concurrent session-based licensing has reduced costs for mixed-user environments by optimizing access for internal and external users. Market trends underscore the growing adoption of concurrent and usage-based models in software-as-a-service (SaaS) post-2010, driven by the need for cost efficiency in scalable environments. Gartner forecast in 2019 that SaaS end-user spending would reach $116 billion in 2020, with public cloud services growing 17% year over year, and by 2020 more than 80% of software vendors had shifted to subscription models that often incorporate concurrency limits. This rise, fueled by cloud platforms with built-in session concurrency controls, has led to widespread enterprise adoption, with over 95% of organizations using SaaS solutions by 2023, enabling pay-for-peak economics that align expenses with variable workloads. By 2025, hybrid models integrating concurrency with consumption-based metrics have become prominent in SaaS and cloud-native applications, further enhancing flexibility.
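The break-even reasoning behind these savings can be made explicit with a small calculation. The seat counts mirror the 1,500-device case above; the per-seat prices are hypothetical, including a premium for concurrent seats since vendors typically price them above named seats:

```python
def license_cost(total_users: int, peak_concurrent: int,
                 named_price: float, concurrent_price: float):
    """Compare named-user licensing (one license per user) against
    concurrent licensing (licenses sized to peak simultaneous usage)."""
    named_total = total_users * named_price
    concurrent_total = peak_concurrent * concurrent_price
    savings = 1 - concurrent_total / named_total
    return named_total, concurrent_total, savings

# 1,500 users whose peak simultaneous usage is 800 seats; even with a
# 20% per-seat premium on concurrent licenses, the pooled model wins.
named, pooled, saved = license_cost(1500, 800, 100.0, 120.0)
print(named, pooled, f"{saved:.0%}")  # → 150000.0 96000.0 36%
```

The general rule falls out of the formula: concurrent licensing saves money whenever the peak-to-total ratio is lower than the named-to-concurrent price ratio.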

Technical Applications

In Software Systems

In software systems, concurrent users are multiple individuals accessing and interacting with an application simultaneously, requiring architectures designed to manage shared resources without degradation in performance or data integrity. Session management is a foundational technique for tracking these users, typically employing mechanisms such as cookies, server-side session stores, or JSON Web Tokens (JWT) to maintain state across requests while enabling scalability. For instance, in web applications, load balancers distribute incoming requests from concurrent users across multiple servers, ensuring that each user's session is preserved through unique identifiers without centralizing all state on a single node. To support high concurrency, developers often adopt stateless designs, where application servers do not retain user-specific data between requests, instead relying on external stores such as databases or caches for session information. This approach facilitates horizontal scaling by allowing any server to handle any user's request, reducing bottlenecks in multi-threaded environments. In languages like Scala and Java, concurrency is further supported by the actor model implemented in frameworks such as Akka, which isolates state within independent actors to prevent interference and enables efficient message passing among concurrent processes. Practical examples illustrate these concepts. Applications like Microsoft 365 employ real-time collaboration features, allowing concurrent edits to shared documents by multiple users, with merge algorithms reconciling changes to avoid conflicts. In contrast, single-user tools like traditional desktop word processors lack such mechanisms, limiting them to one active session at a time. This distinction highlights how concurrent user support transforms software from isolated utilities into collaborative platforms. A key challenge in handling concurrent users is managing race conditions, where simultaneous access to shared data can lead to inconsistencies, such as overwritten updates in a database.
Solutions such as optimistic concurrency control mitigate this by allowing concurrent reads but validating changes before commits, using version numbers or timestamps to detect and resolve conflicts without locking resources for the duration of the operation. These techniques ensure reliability in high-traffic environments, though they require careful implementation to balance performance and correctness.
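A minimal in-memory sketch of optimistic concurrency control with version numbers (the `OptimisticStore` class is illustrative; real databases implement the same check inside a transactional `UPDATE ... WHERE version = ?`):

```python
class OptimisticStore:
    """Optimistic concurrency control: reads return a version number,
    and writes are rejected unless the caller still holds the current
    version, detecting lost updates without long-held locks."""

    def __init__(self):
        self._data = {}  # key -> (value, version)

    def read(self, key):
        value, version = self._data.get(key, (None, 0))
        return value, version

    def write(self, key, value, expected_version) -> bool:
        _, current = self._data.get(key, (None, 0))
        if current != expected_version:
            return False          # another session committed first; retry
        self._data[key] = (value, current + 1)
        return True

store = OptimisticStore()
_, v = store.read("doc")           # two sessions both read version 0
print(store.write("doc", "A", v))  # → True  (first commit wins)
print(store.write("doc", "B", v))  # → False (stale version, rejected)
```

The losing writer typically re-reads the record, reapplies its change on the new version, and retries, which keeps readers fully concurrent at the cost of occasional retry work.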

In Hardware and Infrastructure

In hardware and infrastructure, the capacity to support concurrent users is fundamentally constrained by physical resources such as CPU cores, memory, and storage I/O, which determine how many simultaneous sessions or connections a system can maintain without degradation. For instance, server hardware limits arise from the allocation of processing threads or processes to handle incoming requests; exceeding these limits leads to queuing, timeouts, or failures. In web servers such as Apache HTTP Server, the worker Multi-Processing Module (MPM) employs a hybrid multi-process, multi-threaded architecture in which multiple child processes each manage a pool of threads, allowing the server to handle a large number of concurrent connections efficiently on multi-core systems. This configuration can support thousands of simultaneous connections, depending on available memory and CPU; for example, tuning the MaxRequestWorkers directive to 1000 or more enables handling 1,000 concurrent sessions across worker processes, though actual performance varies with hardware such as 8-16 GB of RAM and multi-core CPUs. Databases face similar concurrency limits tied to connection management and resource contention. In systems like MySQL, the max_connections system variable caps the number of simultaneous client connections, defaulting to 151 but configurable up to tens of thousands on high-end hardware, beyond which new connections are rejected with "Too many connections" errors. MySQL employs a one-thread-per-connection model by default, but MySQL Enterprise Edition offers a thread pool that queues and dispatches queries across a fixed number of worker threads, reducing overhead for high-concurrency workloads. Query concurrency is further managed through locking mechanisms in storage engines such as InnoDB, where row-level locks prevent conflicts during simultaneous reads and writes, ensuring consistency but potentially causing contention under heavy loads from multiple users.
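The thread-pool idea described above, a fixed set of workers draining a shared queue instead of one thread per connection, can be sketched with Python's standard library (here `handle_query` is a stand-in for real query execution):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_query(query: str) -> str:
    # Stand-in for real query execution against a storage engine.
    return f"result of {query!r}"

# 100 "connections" submit queries, but only 4 worker threads run at a
# time; the rest wait in the executor's queue, bounding resource use
# under high concurrency.
with ThreadPoolExecutor(max_workers=4) as pool:
    queries = [f"SELECT {i}" for i in range(100)]
    results = list(pool.map(handle_query, queries))

print(len(results))  # → 100
print(results[0])    # → result of 'SELECT 0'
```

The design trade-off is the same one the text notes for MySQL: a pool caps scheduling overhead and memory at high connection counts, at the cost of queuing delay when all workers are busy.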
Network infrastructure imposes bandwidth constraints on concurrent user support, as simultaneous sessions share the available throughput, leading to congestion or packet loss if demand exceeds capacity. For example, on a 1 Gbps link, hundreds of concurrent users streaming video might saturate the pipe, requiring load balancers or higher-capacity connections such as 10 Gbps Ethernet to maintain performance. In cloud environments, Amazon EC2 Auto Scaling addresses this by dynamically adjusting the number of instances based on metrics like CPU utilization or request counts, ensuring capacity scales to handle spikes in concurrent load; for instance, an Auto Scaling group might launch additional EC2 instances to distribute traffic across more network endpoints during peak user activity. The evolution of concurrent user support in hardware traces back to the 1960s mainframe era, when IBM introduced time-sharing systems such as the System/360 Model 67, enabling dozens to hundreds of users to interact concurrently via virtual terminals without interfering with one another, a breakthrough over batch processing. By the 1970s, IBM's VM operating system advanced this further through full virtualization, partitioning mainframes into multiple virtual machines that each supported concurrent sessions and allowing over 100 users per physical system. Post-2000, the rise of x86-based virtualization with hypervisors such as VMware ESX (introduced in 2001) extended these capabilities to commodity servers, enabling resource pooling and overcommitment where a single physical host could run dozens of virtual machines, each serving concurrent users and scaling to thousands overall through clusters. This shift to virtualized and cloud infrastructures has dramatically raised concurrency limits, from mainframe-era hundreds to modern data centers supporting millions via distributed scaling.

Measurement and Optimization

Monitoring Methods

Monitoring concurrent user activity in software systems involves collecting and analyzing key performance indicators to assess system behavior under simultaneous access. Essential metrics include the number of active sessions, which tracks ongoing user interactions at any given time; throughput, measured as requests per second (RPS), indicating the volume of transactions handled concurrently; and response times under load, which measure latency from request initiation to completion during peak usage. Open-source tools like Prometheus enable real-time monitoring of concurrent users by scraping metrics from application endpoints, such as active connections and session counts, using a time-series database and the PromQL query language. Commercial solutions like New Relic provide application performance monitoring (APM) capabilities to track users and sessions through instrumentation of applications, offering dashboards for analyzing concurrent load. For log-based monitoring, the ELK stack (Elasticsearch for storage, Logstash for processing, and Kibana for visualization) allows aggregation and analysis of user activity logs to derive concurrent user patterns, such as peak session overlaps reconstructed from timestamped events. Load testing tools simulate concurrent users to proactively measure system capacity. Apache JMeter, a Java-based load testing tool, configures thread groups to mimic multiple users, with ramp-up periods defining the gradual introduction of load; for instance, starting 20 users per second over 50 seconds reaches 1,000 concurrent users without the abrupt spikes that could skew results. Similarly, Locust, a Python-scriptable tool, spawns users incrementally to simulate realistic concurrency, adjusting spawn rates (e.g., 2 users per second) to ramp to target loads such as 20 concurrent users while monitoring response times for saturation points. These tools facilitate peak simulation by maintaining steady-state loads after ramp-up to observe sustained performance.
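When session counts are not instrumented directly, average concurrency can be estimated from quantities that logs usually do contain, via Little's Law (L = λ × W). A minimal sketch, with hypothetical traffic figures:

```python
def avg_concurrent_users(arrival_rate_per_sec: float,
                         avg_session_secs: float) -> float:
    """Little's Law: average concurrency L equals the session arrival
    rate λ times the average session duration W, assuming the system is
    in steady state."""
    return arrival_rate_per_sec * avg_session_secs

# 5 new sessions per second, each lasting 3 minutes on average:
print(avg_concurrent_users(5, 180))  # → 900.0
```

This gives an average, not a peak, so capacity planning typically multiplies the result by an observed peak-to-average ratio before sizing load tests.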
Standards like ISO/IEC 25010 define performance efficiency as the degree to which a system delivers its functions within specified time and throughput constraints under varying concurrent loads, encompassing sub-characteristics such as time behavior (e.g., response times) and resource utilization during multi-user scenarios. This framework provides a benchmark for evaluating system behavior under concurrency, ensuring efficient task completion for multiple users without excessive resource demands.

Strategies for Management

Managing concurrent users in software systems involves a range of scaling strategies to ensure performance and reliability under varying loads. Horizontal scaling, also known as scaling out, distributes workload across multiple machines or instances, enhancing fault tolerance and accommodating unpredictable spikes in concurrent users by adding nodes dynamically. In contrast, vertical scaling, or scaling up, increases capacity on a single machine by upgrading hardware resources such as CPU or memory, which is simpler for stable workloads but limited by physical constraints and potential downtime during upgrades. A practical implementation of horizontal scaling is Kubernetes' Horizontal Pod Autoscaler (HPA), which automatically adjusts the number of pods based on metrics such as requests per second, a proxy for concurrent user activity, using the formula desiredReplicas = ceil[currentReplicas × (currentMetricValue / desiredMetricValue)] to maintain target utilization. To prevent system overload from excessive concurrent requests, throttling and queuing mechanisms limit access rates while allowing controlled bursts. Rate limiting via the token bucket algorithm enforces this by maintaining a bucket of tokens that replenishes at a fixed rate; each request consumes a token, and requests arriving when no tokens remain are queued or rejected, enabling APIs to handle bursts up to the bucket size while sustaining a steady rate, as implemented in frameworks such as ASP.NET Core's rate-limiting middleware with configurable parameters like token limit and replenishment period. This approach ensures equitable resource distribution among concurrent users, mitigating denial-of-service risks without fully blocking legitimate traffic. Best practices for managing concurrent users emphasize proactive capacity planning and resilience testing.
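The token bucket just described fits in a few lines. A minimal single-threaded sketch (the `TokenBucket` class is illustrative; production limiters add locking and per-client buckets):

```python
import time

class TokenBucket:
    """Token bucket rate limiter: tokens refill at a fixed rate up to a
    burst capacity; each request consumes one token, and requests that
    find the bucket empty are rejected (or queued by the caller)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Lazily refill based on elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=3)   # sustain 10 req/s, burst of 3
print([bucket.allow() for _ in range(4)])   # burst absorbed, 4th rejected
```

The two parameters map directly onto the behavior in the text: `capacity` bounds the burst, `rate` bounds the sustained request rate.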
Capacity planning relies on analyzing historical data, such as peak queries per second (QPS), to forecast needs; for instance, in a feed system with 500 million daily active users averaging 10 pageviews each, historical patterns reveal a peak QPS of approximately 138,000 during high-traffic hours, guiding provisioning at 2-3 times that load for safety margins. Netflix employs chaos engineering, through tools like Chaos Monkey, to simulate stress by randomly terminating production instances, ensuring systems remain resilient under failure conditions equivalent to sudden load surges from millions of simultaneous streams. Emerging trends leverage AI-driven prediction to anticipate concurrent user peaks, enabling preemptive scaling. Post-2020 advances in machine learning models applied to autoscaling forecast resource demands from historical and real-time patterns, achieving precise adjustments that improve resource efficiency while maintaining low latency during load spikes. As of 2025, AI algorithms are increasingly used for predictive autoscaling in cloud infrastructure, analyzing usage patterns to scale clusters proactively before demand spikes in AI workloads. These models integrate behavioral insights to predict user behavior, supporting adaptive strategies in dynamic environments such as web applications.
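The capacity-planning arithmetic above can be made explicit. A back-of-envelope sketch using the figures from the text (the peak-to-average multiplier of 2.4 and headroom factor of 2.5 are assumed values chosen to reproduce the ~138,000 peak QPS and 2-3x provisioning guidance):

```python
def capacity_plan(daily_active_users: int, pageviews_per_user: float,
                  peak_to_avg: float = 2.4, headroom: float = 2.5):
    """Derive average QPS from daily traffic, apply a historical
    peak-to-average multiplier, then add provisioning headroom."""
    avg_qps = daily_active_users * pageviews_per_user / 86_400  # sec/day
    peak_qps = avg_qps * peak_to_avg
    provision_for = peak_qps * headroom
    return round(avg_qps), round(peak_qps), round(provision_for)

# 500 million DAU at 10 pageviews each, as in the example above:
print(capacity_plan(500_000_000, 10))  # → (57870, 138889, 347222)
```

The useful output is the last number: the load the fleet should be sized to survive, which feeds directly into autoscaler targets and load-test scenarios.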

References

  1. [1]
    Definition of Concurrent Use - Gartner Glossary
    A way to measure the usage of software licenses. Software can be licensed in one of the following ways: individual (it cannot be shared with other users); site.
  2. [2]
    What Is Concurrent Licensing? - Revenera
    Concurrent licenses are a type of software license that revolve around the maximum number of users who will use the software at the same time.
  3. [3]
    IBM Learn about Software licensing
    A Concurrent User is a person who is accessing the Program at any particular point in time. Regardless of whether the person is simultaneously accessing the ...
  4. [4]
    Glossary of Terms - Office of Software Licensing
    Oct 2, 2024 · Concurrent User. An individual who is accessing software that is installed on a server via a network. Concurrent User License. A license that ...Missing: origin | Show results with:origin
  5. [5]
    What are concurrent users? | Liquid Web
    Concurrent users is a measurement of how many simultaneous requests your website can handle at any one single time.
  6. [6]
    Performance Testing Concepts - Concurrent Users - Test Guild
    Jul 9, 2014 · Concurrent users are connected to your application and are all requesting work at some regular interval –but not all at once, and not for the same thing.<|separator|>
  7. [7]
    None
    Nothing is retrieved...<|separator|>
  8. [8]
    The Strange Birth and Long Life of Unix - IEEE Spectrum
    By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were ...
  9. [9]
    35 Licenses
    The key difference is that users with a Named license are not applied against concurrency counts. This means that a Named user can log in to the system at any ...
  10. [10]
    The difference between Floating and Named User Licenses - IBM
    Named User licenses are similar to Floating licenses and may be used with the Rational License Server. Named User licenses are associate to either the host name ...
  11. [11]
    License types and order used - IBM
    Floating licenses are for a single software product that can be shared among multiple team members. However, the total number of concurrent users cannot exceed ...
  12. [12]
    Choosing a License Server Configuration
    The concurrent user license model makes software available to any user on any computer on a network because licenses are floating and not tied to a specific ...
  13. [13]
    [PDF] APPENDIX A - RESPONSES TO SPECIFICATIONS
    Licensing Model: The most common approach for the OnBase licensing is a hybrid of named and concurrent licensing models. Named or concurrent client licenses ...
  14. [14]
    Background Information about Concurrent Licensing
    The FlexNet Licensing Server checks whether a license is available. If a license is available, the FlexNet Licensing Server checks it out. In addition, the ...Missing: mechanism FlexLM
  15. [15]
    [PDF] Using the Intel® Software License Manager
    License Manager “checks out” a license to the first 20 users. Whenever the license count is less than 20, other licensed users may check out a license from the ...
  16. [16]
    Perpetual vs. Subscription Licenses: Long-Term Effects on Revenue
    Perpetual licenses are a one-time fee, while subscription licenses are monthly/annual. Subscriptions can lead to recurring revenue, but may have lower initial ...Missing: concurrent overage
  17. [17]
    Licensing, subscription, and overages - IBM
    When you flex above your subscription level, you pay for active user use, above the subscription level as overage. This price is typically 25% more than the ...
  18. [18]
    A Brief Guide to SaaS Licensing Models - Revenera
    Feb 29, 2024 · Perpetual licensing involves a one-time purchase for lifetime access to specific software versions, whereas SaaS is commonly tied to recurring ...
  19. [19]
    [PDF] Oracle Software Licensing Basics
    Oracle licenses provide a non-exclusive, limited right to use software. License types outline usage restrictions, and metrics measure usage. Full Use licenses ...
  20. [20]
    Maximum number of users which can be added to a network license
    Oct 8, 2023 · A subscription with multi-user license supports the use of Autodesk products up to a maximum number of users, or seats, connected to a server ...Missing: concurrent | Show results with:concurrent
  21. [21]
    Software License Non-Compliance and Audits - ACC Docket
    Dec 1, 2017 · Elements of an audit clause. Prudent practitioners should carefully review and negotiate audit clauses in every software license, both to limit ...
  22. [22]
    Oracle Concurrent Licensing Metrics: Legacy and Current Usage
    Jun 17, 2024 · Oracle discontinued the old Named User metric in the late 1990s, then briefly reintroduced a variant around 2001 with minimums (e.g., 10 named ...
  23. [23]
    Named vs. Concurrent Licenses: How to Choose for Your Business
    Sep 25, 2024 · Advantages of concurrent licenses​​ Cost effectiveness: This model can be more economical, especially if all users don't need the software at the ...
  24. [24]
    ScerIS, Inc. | The ETCETERA Platform - Concurrent‑User Licensing
    Sep 3, 2025 · With ETCETERA's concurrent-user model, one license serves multiple users—saving 50–90% on costs, supporting team access, shifts, growth, ...
  25. [25]
    Concurrent Licensing | Definition & Benefits - 10Duke
    Concurrent Licensing is a license model that enables a certain number of licenses to be shared between a group of users simultaneously.
  26. [26]
    How One Client Saved 46% with ZeeOS Concurrent Licensing
    Jun 16, 2025 · One ZeeTim customer found a smarter way. They took benefit of the concurrent licensing model of ZeeOS, and were able to save money while giving their employees ...
  27. [27]
    BusinessObjects BI Licensing Explained: Named Users, Concurrent ...
    Aug 22, 2025 · That data drives swift conversations with SAP to reduce license counts or switch models, resulting in tangible cost reduction on the next ...
  28. [28]
    Worldwide Public Cloud Revenue to Grow 17% in 2020 | Gartner
    Nov 13, 2019 · The worldwide public cloud services market is forecast to grow 17% in 2020 to total $266.4 billion, up from $227.8 billion in 2019, according to Gartner, Inc.Missing: concurrent | Show results with:concurrent
  29. [29]
    111 Unmissable SaaS Statistics for 2025 - Zylo
    Adoption Rates: SaaS adoption is now at an all-time high, with 95% of organizations having implemented SaaS solutions in their operations as of 2023​. Business ...Missing: 2020s | Show results with:2020s<|separator|>
  30. [30]
    worker - Apache HTTP Server Version 2.4
    The Apache MPM worker is a hybrid multi-process, multi-threaded server. It uses threads to serve requests, and maintains a pool of idle threads.Missing: concurrent | Show results with:concurrent
  31. [31]
    mpm_common - Apache HTTP Server Version 2.4
    The MaxRequestWorkers directive sets the limit on the number of simultaneous requests that will be served. Any connection attempts over the MaxRequestWorkers ...
  32. [32]
  33. [33]
  34. [34]
    The New MySQL Thread Pool
    Jun 7, 2023 · The MySQL Thread Pool is an alternative and optional thread handling mechanism that is available in the MySQL Enterprise Edition.
  35. [35]
    How bandwidth is allocated to concurrent users? - Server Fault
    Aug 28, 2010 · Given two clients of combined bandwidth exceeding the bandwidth of the server, both clients will probably download at speeds of roughly half the server's ...
  36. [36]
    Amazon EC2 Auto Scaling - AWS Documentation
    Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.Quotas for Auto Scaling... · Auto Scaling benefits · Instance lifecycleMissing: concurrent | Show results with:concurrent
  37. [37]
    Time-sharing | IBM
    Time-sharing, as it's known, is a design technique that enables multiple users to operate a computer system concurrently without interfering with each other.
  38. [38]
    z/OS operating - IBM
    z/OS operating system: Providing virtual environments since the 1960s z/OS concepts. z/OS® is known for its ability to serve thousands of users concurrently ...
  39. [39]
    The history of virtualization and its mark on data center management
    Oct 24, 2019 · You can trace the history of virtualization back to the 1960s when the idea of a time-sharing computer first emerged.Missing: concurrent | Show results with:concurrent
  40. [40]
    Essential Software Performance Testing Metrics: A Comprehensive ...
    Dec 20, 2024 · Discover the key metrics you need to track in performance testing. Learn how to measure, analyze, and optimize your software's performance ...
  41. [41]
    [PDF] A Framework for Monitoring and Measuring a Large-Scale ... - Events
    Aug 16, 2013 · We explain how our system keeps track (in real time) of key performance numbers of interest to the administrators, such as the number of concurrent ...
  42. [42]
    How to Test Concurrent Users using JMeter? - GeeksforGeeks
    Jul 23, 2025 · In JMeter, testing concurrent users involves simulating multiple users accessing a web application simultaneously to evaluate its performance under load.
  43. [43]
    Monitoring with Prometheus | New Relic
    Apr 4, 2024 · Prometheus is a powerful, open-source tool for real-time monitoring of applications, microservices, and networks, including service meshes and proxies.
  44. [44]
    Beginner's Guide to Prometheus Metrics - Logz.io
    Aug 13, 2025 · Prometheus is an open-source tool for collecting and querying time-series metrics, especially in cloud-native and Kubernetes environments.
  45. [45]
    Active users - New Relic Documentation
    The mobile monitoring capability includes a report tracking the number of devices, sessions or users running your app for each day, week, or month trended ...
  46. [46]
    How to find active users and active sessions in Newrelic APM for ...
    We want to identify the number of concurrent users logged into New Relic. Currently we are using the below query and have set up an alert policy.
  47. [47]
    Attempting to Visualize Max Concurrent Users - Elastic Discuss
    Mar 9, 2018 · I'm currently trying to visualize max concurrent users from my application. There exists a separate log file for this purpose with the following nomenclature.
  48. [48]
    Elastic Stack: (ELK) Elasticsearch, Kibana & Logstash
    Built on an open source foundation, Elasticsearch and Kibana pave the way for diverse use cases that start with logging and span as far as your imagination ...
  49. [49]
    User's Manual: Elements of a Test Plan - Apache JMeter
    If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds. Ramp-up needs to be long enough to avoid ...
  50. [50]
    JMeter Ramp Up Period: The Ultimate Guide | Perforce BlazeMeter
    As an example, for 1000 target threads with a 50 second ramp-up, JMeter will add 20 users per second. And with a 100-second ramp-up, JMeter will add 10 users ...
  51. [51]
    Your first test — Locust 2.42.2 documentation
    When we get to around 20 users, response times start increasing so fast that even though Locust is still spawning more users, the number of requests per second ...
  52. [52]
    Increasing the request rate — Locust 2.42.2 documentation
    If response times are high and/or increasing as the number of users go up, then you have probably saturated the system you are testing. This is not a Locust ...
  53. [53]
    ISO/IEC 25010
    Performance Efficiency. This characteristic represents the degree to which a product performs its functions within specified time and throughput parameters ...
  54. [54]
    ISO/IEC 25010:2011(en), Systems and software engineering
    NOTE Usability (4.2.4) is defined as a subset of quality in use consisting of effectiveness, efficiency and satisfaction, for consistency with its established ...
  55. [55]
    8 items that make up a non-functional requirement under ISO 25010
    Dec 20, 2022 · 8 items that make up a non-functional requirement under ISO 25010 · 1. Functional Suitability · 2. Performance Efficiency · 3. Compatibility ...
  56. [56]
    Vertical vs. horizontal scaling: What's the difference and which is ...
    Jan 23, 2025 · Horizontal scaling refers to increasing the capacity of a system by adding additional machines (nodes), as opposed to increasing the capability ...
  57. [57]
    Horizontal scaling vs vertical scaling: Choosing your strategy
    Feb 1, 2024 · Horizontal scaling offers greater long-term scalability and fault tolerance (since if one node fails, others still serve the system), and is a ...
  58. [58]
    Horizontal Pod Autoscaling - Kubernetes
    May 26, 2025 · In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of ...
  59. [59]
    Rate limiting middleware in ASP.NET Core
    Summary of the Token Bucket algorithm for rate limiting in ASP.NET Core.
  60. [60]
    How API Traffic Throttling with Token Bucket algorithm works
    Oct 19, 2023 · The Token Bucket algorithm helps you to allow or deny requests depending on the levels of traffic you are having.
  61. [61]
    Capacity Planning - by ByteByteGo and Diego Ballona
    Jun 29, 2023 · One common method to determine peak QPS is through historical data analysis. This involves tracking the number of queries that the ...
  62. [62]