Work measurement
Work measurement is the systematic application of industrial engineering techniques designed to establish the time required for a qualified and motivated worker to complete a specified task at a defined rate of performance, typically under standard conditions.[1] This process quantifies the effective physical and mental effort involved in work units, enabling the development of reliable time standards that account for normal working speeds and necessary allowances for fatigue or delays.[2] Originating in the late 19th century, work measurement traces its roots to Frederick Winslow Taylor's pioneering efforts at the Midvale Steel Company, where he conducted stopwatch time studies in the 1880s to analyze and optimize individual worker tasks, thereby reducing inefficiencies and boosting productivity.[3] Taylor's principles of scientific management, detailed in his 1911 publication The Principles of Scientific Management, emphasized replacing rule-of-thumb methods with precise measurement and standardization to determine the "one best way" to perform work.[4] Building on this, Frank and Lillian Gilbreth advanced the field in the early 20th century through motion studies that decomposed tasks into basic motions known as therbligs, complementing time measurement with qualitative analysis to further eliminate waste.[5] In practice, work measurement employs several key techniques to achieve its objectives. 
Time study, the most traditional method, involves direct observation and timing of tasks using stopwatches, followed by rating worker performance and adding allowances to derive standard times.[1] Work sampling uses random or periodic observations to statistically estimate the proportion of time spent on various activities, making it suitable for non-repetitive or indirect work.[6] Predetermined motion time systems (PMTS), such as Methods-Time Measurement (MTM) and MODAPTS, pre-assign fixed time values to fundamental human motions (e.g., reach, grasp, move) based on empirical data, allowing standards to be set without on-site observation and ensuring consistency across operations.[2] The primary goals of work measurement include setting performance benchmarks for labor costing, capacity planning, and incentive systems; balancing workloads in assembly lines; and identifying opportunities for process improvement to minimize idle time and resource waste.[1] By providing an objective basis for evaluating efficiency, it supports broader industrial engineering initiatives like lean production and ergonomics, ultimately enhancing organizational competitiveness in manufacturing, services, and public sectors.[7]
Overview
Definition and Scope
Work measurement is the systematic application of techniques designed to establish the time required for a qualified worker to carry out a specified task at a defined rate of working.[8] This approach quantifies the duration of tasks under normal working conditions, ensuring that the time reflects the effort of an average, trained individual performing effectively without undue haste or delay.[1] The scope of work measurement encompasses both manual tasks, such as assembly or material handling, and cognitive tasks, including planning, inspection, or decision-making, where human performance directly influences completion time.[1] It applies across various industries, from manufacturing to services and offices, but excludes operations controlled primarily by machines, where worker time is secondary to automated cycles.[8] Key principles underlying work measurement include standardization, which ensures consistent methods and conditions for task performance to enable reliable comparisons; repeatability, which emphasizes measurable elements with clear start and end points for accurate timing across cycles; and the establishment of standard times as benchmarks representing the expected duration for qualified workers.[1] These principles facilitate the creation of objective performance standards that account for normal variations in effort and environment.[8] Work measurement differs from related fields like method study, which focuses on analyzing and optimizing the "how" of task execution through process improvement, whereas work measurement addresses the "how long" by quantifying time once an effective method is in place.[8] Time study serves as a primary technique within this framework for direct observation and timing of tasks.[1]
Historical Development
Work measurement originated in the late 19th century as part of the scientific management movement pioneered by Frederick Winslow Taylor. While working at Midvale Steel Company in Philadelphia, Taylor rose from machinist to chief engineer by 1883 and began conducting shop-floor experiments in the 1880s to analyze worker efficiency and machinery performance.[9] These efforts laid the groundwork for time studies, where tasks were broken down into elements and timed to establish standards for productivity.[10] In the early 20th century, the development of stopwatch time study became a core technique, directly influenced by Taylor's methods before the turn of the century.[11] Concurrently, Frank and Lillian Gilbreth advanced the field through motion studies in the 1910s, using photography and film to capture and analyze worker movements in tasks like bricklaying, aiming to eliminate unnecessary motions and improve efficiency.[12] Their micromotion study technique, developed around 1907–1917, involved filming operations against a cross-sectioned background to quantify therbligs—basic motion units named after Gilbreth spelled backward.[13] Following World War II, predetermined motion time systems emerged as a significant advancement, with Methods-Time Measurement (MTM) introduced in 1948 by Harold B. Maynard, G. J. Stegemerten, and John L. 
Schwab.[14] MTM built on Gilbreth's motion analysis by assigning fixed time values to fundamental human motions, enabling the prediction of task times without direct observation and standardizing work across industries.[15] The evolution of work measurement shifted toward digital tools in the 1980s and 1990s with the advent of personal computers, allowing for computerized analysis of time and motion data to replace manual calculations.[16] By the 2000s, software adoption accelerated, integrating video recording, motion sensing, and automated PMTS calculations to enhance accuracy and efficiency in work sampling and synthesis methods.[17] This transition facilitated real-time data processing and broader application in modern manufacturing, though challenges in standardization persisted.[18]
Purposes and Uses
Core Objectives
The core objectives of work measurement center on establishing standard times that serve as benchmarks for various operational and managerial functions. These standards represent the time required for a qualified worker to complete a specified task at a defined level of performance, enabling organizations to plan tasks effectively, determine labor costs accurately, design fair incentive schemes, and assess production capacity. For instance, standard times facilitate the comparison of alternative work methods to select the most efficient one, while also providing a foundation for realistic scheduling that aligns human effort with available resources.[8] Beyond foundational planning, work measurement plays a critical role in performance evaluation and workload balancing. It allows managers to measure individual and team outputs against established standards, identifying variances that highlight inefficiencies or supervisory gaps, and ensuring equitable distribution of tasks across operations. This objective supports labor organization by contrasting actual performance with targets, thereby promoting accountability and optimization in resource allocation. In both manufacturing and service sectors, such as production lines or hospital wards, these evaluations help balance workloads to minimize idle time and maximize utilization.[8][19] The benefits of achieving these objectives include enhanced overall efficiency, tighter cost control, and a structured basis for continuous improvement initiatives. By reducing ineffective time and eliminating unnecessary activities, work measurement minimizes human effort and waste, leading to productivity gains—such as a 75% increase in labor output in certain factory processes—and more precise cost estimations for budgeting. 
These outcomes underpin incentive plans that motivate workers through fair rewards, often yielding 20-35% performance improvements above baseline rates, and integrate seamlessly with methodologies like lean manufacturing for ongoing process refinement.[8]
Applications in Industry
In manufacturing, work measurement plays a crucial role in line balancing by distributing tasks evenly across workstations to minimize idle time and optimize workflow efficiency. For instance, multiple activity charts and process analysis help achieve balanced cycles, reducing overall production variability.[8] It also supports production scheduling through time standards that enable precise planning, resource allocation, and just-in-time implementation, as seen in cases where engine stripping transports were reduced from 21 to 15 steps, enhancing throughput.[8] Additionally, work measurement establishes performance benchmarks for wage incentive schemes, allowing fair piece-rate systems that can boost output by 20-35% above base rates in repetitive operations like electrical assembly.[8][20] In the services sector, work measurement facilitates customer service timing by quantifying task durations to streamline interactions and reduce delays, such as in office invoice processing where rates improved to 6.38 per hour through method optimization.[8] For call centers, it enhances efficiency via activity sampling and predetermined time standards for short-cycle tasks like call handling, enabling standardized training and error reduction.[8] In healthcare, task allocation benefits from flow process charts that redistribute duties among staff, exemplified by hospital ward tasks where travel distances were cut by 54%, allowing more time for patient care and improving overall productivity.[8] Broader applications integrate work measurement with ergonomics to promote worker safety by eliminating unnecessary motions and incorporating relaxation allowances, such as 4% for basic fatigue in manual tasks, which lowers accident risks from handling (accounting for 30% of incidents).[8][20] In supply chain optimization, time standards aid material flow and inventory control, reducing goods-in-process and enabling reliable delivery promises, as in aircraft parts handling where 
distances shrank from 56.2 meters to 32.2 meters, supporting just-in-time coordination with suppliers.[8][20]
Measurement Techniques
Time Study
Time study is a direct observational technique in work measurement that employs a stopwatch to record the time taken by a qualified worker to perform a task under standard conditions, enabling the establishment of time standards for productivity analysis and planning.[8] The method, rooted in scientific management principles, involves systematically breaking down the task into short, measurable elements to capture precise timings and identify inefficiencies.[21] The procedure begins with selecting a representative task and dividing it into elements, such as manual operations, machine time, or delays, each defined by clear start and end points for accurate observation.[8] Observations are then conducted using a stopwatch in methods like cumulative timing, where the watch runs continuously and readings are noted at element boundaries, or flyback timing, where it is reset after each element.[8] Multiple cycles—typically 10-20—are timed to ensure reliability, with the average observed time calculated for each element to account for natural variations.[8][21] For effectiveness, the study requires a qualified worker who is experienced, trained, and capable of performing at a standard pace while maintaining quality and safety.[8] Standard conditions must prevail, including optimized methods, tools, materials, and environmental factors, to produce representative data.[8] A sufficient sample size, often determined statistically for 95% confidence and ±5% accuracy, further ensures the observations reflect typical performance.[8] Performance rating evaluates the worker's speed and effectiveness relative to a normal pace, defined as 100% for a qualified worker exerting average effort without strain.[8] Ratings are assessed using scales like the Westinghouse system, considering factors such as skill, effort, working conditions, and consistency.[21] The normal time is then derived by adjusting the observed time for this rating:
\text{Normal time} = \text{Observed time} \times \text{Rating factor}
where the rating factor is the performance rating divided by 100 (e.g., 110% yields a factor of 1.10).[8][21] To obtain the standard time, allowances for personal needs, fatigue, and delays are added to the normal time, typically ranging from 5-15% depending on task demands.[8] The formula accounts for these as a percentage of working time:
\text{Standard time} = \frac{\text{Normal time}}{1 - \text{Allowance \%}}
This adjustment ensures the time standard is realistic and achievable over a full workday.[8][21] Time study is particularly suited to repetitive manual tasks, whereas estimating methods may be applied to more complex or irregular operations.[8]
Work Sampling
Work sampling, also known as activity sampling, is a statistical technique used in work measurement to estimate the proportion of time spent on various activities by conducting random observations over an extended period, rather than continuous monitoring. This method is particularly suited for analyzing irregular or variable tasks where direct timing would be inefficient or disruptive, such as in office environments or group production settings. The methodology involves selecting random points in time to observe and record the activities being performed by workers or machines, ensuring that observations are unbiased and representative of the overall work cycle. To determine the required sample size for reliable estimates, the formula
n = \frac{Z^2 \times p \times (1-p)}{e^2}
is applied, where n is the number of observations needed, Z represents the Z-score for the desired confidence level (e.g., 1.96 for 95% confidence), p is the estimated proportion of time for the activity (often initially set at 0.5 for maximum variability if unknown), and e is the acceptable margin of error. This approach allows for probabilistic inference about time allocation without the need for exhaustive data collection. In applications, work sampling excels for irregular tasks, such as maintenance operations or service-oriented roles where activities fluctuate unpredictably, and for group studies involving multiple workers or processes, enabling broad assessments of idle time, productive work, or delays across a facility. For instance, it has been widely adopted in manufacturing to evaluate machine utilization rates and in healthcare to measure staff activity distributions, providing insights that inform process improvements.
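The sample-size formula above can be sketched in a few lines of Python. This is a minimal illustration: the function name and the worked numbers are ours, not part of any standard work-sampling library.

```python
import math

def required_observations(p: float, e: float, z: float = 1.96) -> int:
    """Observations needed so the estimated proportion p falls within
    +/- e of the true value at the confidence level implied by z."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# Worst-case variability (p = 0.5) with a +/-5-point margin at 95% confidence.
print(required_observations(p=0.5, e=0.05))  # -> 385
```

Starting with p = 0.5 gives the largest (most conservative) sample; once a pilot round of observations yields a better estimate of p, the study can be re-sized downward.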
Analysis of work sampling data involves calculating the percentage of time devoted to each activity as
\left( \frac{\text{Number of observations of the activity}}{\text{Total number of observations}} \right) \times 100
which yields an estimate of the activity's share of total available time with a confidence interval derived from the sample size formula. These proportions can then be multiplied by the total working hours to estimate absolute time expenditures, facilitating comparisons and optimization. A key advantage of work sampling lies in its non-intrusive nature, as intermittent observations minimize interference with normal operations and reduce observer bias compared to more intensive methods like continuous time studies, making it cost-effective for large-scale or long-term evaluations. This technique's statistical foundation ensures objectivity, though it requires careful random sampling to avoid temporal biases, such as overlooking peak or off-peak periods.
Predetermined Motion Time Systems
Predetermined motion time systems (PMTS) are analytical techniques used in work measurement to establish time standards by decomposing manual tasks into fundamental human motions and assigning predefined time values to each motion from standardized tables. These systems trace their roots to the work of Frank and Lillian Gilbreth, who developed therbligs—18 elemental motion units such as search, grasp, transport loaded, and position—that represent the basic building blocks of human activity in performing tasks. Therbligs enable a detailed breakdown of work sequences without requiring direct observation of workers, focusing instead on the physiological and mechanical aspects of motion to optimize efficiency and reduce fatigue.[22] The most prominent PMTS is Methods-Time Measurement (MTM), first published in 1948 by Harold B. Maynard, John L. Schwab, and G.J. Stegemerten, building on Gilbreth's therbligs to create a rigorous framework for time predetermination. MTM uses time measurement units (TMUs), where 1 TMU equals 0.00001 hours (or 0.036 seconds), to quantify motions with high precision; for instance, the MTM-1 system analyzes tasks at a micromotion level, assigning times to elements like reach, grasp, and release based on variables such as distance and object weight. To accommodate varying levels of detail, MTM includes hierarchical systems: MTM-1 for detailed, short-cycle operations requiring fine analysis, and MTM-2 for coarser, longer-cycle tasks using grouped motions to expedite the process while maintaining accuracy. 
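As a rough illustration of how a PMTS analysis is synthesized, the sketch below sums motion elements expressed in TMUs and converts the total to seconds. The element names and TMU values are invented placeholders; real values must be read from the published MTM-1 application data and vary with distance, weight, and other case variables.

```python
# 1 TMU = 0.00001 hour = 0.036 seconds (the MTM convention).
TMU_SECONDS = 0.036

# Illustrative element times in TMUs (hypothetical, not MTM-1 table values).
task_elements = [
    ("Reach 30 cm to part",   10.5),
    ("Grasp small part",       3.5),
    ("Move 30 cm to fixture", 12.2),
    ("Position part",          9.1),
    ("Release",                2.0),
]

total_tmu = sum(tmu for _, tmu in task_elements)
cycle_seconds = total_tmu * TMU_SECONDS
print(f"{total_tmu:.1f} TMU = {cycle_seconds:.3f} s per cycle")
```

Because the times come from tables rather than observation, two analysts coding the same motion sequence should arrive at the same cycle time, which is the consistency property the text describes.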
Another widely adopted system is MODAPTS (Modular Arrangement of Predetermined Time Standards), developed in the late 1960s by Chris Heyde, which simplifies analysis by coding body-part actions (e.g., move, get, put) in multiples of 0.129 seconds at a comfortable pace, emphasizing ease of application over MTM's micromotion granularity.[23][24][25] In practice, PMTS application involves observing or describing a task, segmenting it into therbligs or equivalent motion elements, selecting appropriate time values from system tables, and summing them to yield the total standard time, often incorporating allowances for rest and delays. This process ensures consistency and repeatability, as times are derived from extensive empirical data rather than subjective assessments. A key advantage of PMTS is the elimination of rating bias inherent in observational methods, making it particularly valuable for designing standards for new processes, hazardous environments, or tasks where direct timing is impractical. These motion-based times can also be synthesized into higher-level standards for broader applications.[24][26][25]
Synthesis from Standard Data
Synthesis from standard data is a work measurement technique that establishes time standards for new or modified tasks by selecting and summing pre-measured elemental times from established databases, avoiding the need for direct observation of the entire operation. These databases, often called standard data systems, contain normal time values for common work elements derived from previous direct time studies or predetermined motion time systems.[8] The method begins with breaking down the task into its constituent elements, such as machine setups, material handling, or tool adjustments, and then retrieving the corresponding times from the data bank. These elemental times are adjusted as necessary for specific conditions, including variations in worker performance, equipment, or environment, before being combined to form the total normal time. The standard time is then calculated by adding allowances for personal needs, fatigue, and unavoidable delays, using the formula
T_s = \sum t_e + A
where T_s is the standard time, \sum t_e is the sum of the selected elemental times, and A is the total allowance (typically 10-20% of the normal time). For instance, in a power press operation, times for reaching, grasping, and positioning parts can be pulled from standard tables like MTM-2 and aggregated to estimate the full cycle.[8] This approach offers significant advantages over conducting fresh time studies, particularly for repetitive elements, as it is faster, more cost-effective, and provides consistent results across similar tasks within an organization. By leveraging historical data, it minimizes subjectivity and enables rapid standard setting without halting production for observations.
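Under the additive formula above, the engineer simply sums the retrieved elemental times and adds the allowance. A minimal sketch, assuming hypothetical elemental times in minutes and an allowance expressed as a percentage of the summed normal time:

```python
def standard_time(elemental_times, allowance_pct):
    """Sum elemental normal times drawn from a standard-data bank and
    add a percentage allowance for rest, personal needs, and delays."""
    normal = sum(elemental_times)
    return normal * (1 + allowance_pct / 100)

# Hypothetical elemental times (minutes) for a press operation, retrieved
# from a standard-data bank instead of being re-timed on the shop floor.
elements = [0.12, 0.35, 0.08, 0.20]  # e.g., reach, load, actuate, unload
print(round(standard_time(elements, allowance_pct=15), 4))  # -> 0.8625
```

The same elemental values can be reused across every task that shares those elements, which is where the consistency and speed advantages over fresh time studies come from.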
Standard data may occasionally incorporate values from predetermined motion time systems as a source for elemental times.[8] The technique finds unique application in process design and the evaluation of task variants, where existing operations are reconfigured or scaled, allowing engineers to predict times reliably using proven building blocks from past studies. It is especially valuable in manufacturing environments with extensive records of elemental data, supporting planning, costing, and incentive schemes efficiently.[8]
Estimating Methods
Estimating methods in work measurement rely on expert judgment to predict task durations when direct observation or detailed analysis is impractical, such as for unique or infrequent jobs.[27] These approaches draw upon the accumulated experience of supervisors, engineers, or skilled operators to approximate the time required for performing specific tasks, often serving as a preliminary tool for planning and budgeting.[28] Unlike more structured techniques, estimation emphasizes qualitative assessment based on familiarity with similar operations, making it suitable for scenarios where historical data or standards are limited.[29] The process typically involves comparing the new task to known benchmarks from past similar activities, while adjusting for influencing factors such as task complexity, worker skill levels, environmental conditions, and material variations.[27] Experts mentally simulate the workflow, factoring in setup times, potential delays, and execution steps to arrive at a time estimate, sometimes through group consensus to reduce individual biases.[29] This method requires no specialized equipment, relying instead on professional intuition honed over years of exposure to comparable work. 
For instance, in analytical estimating, tasks may be broken into components for more refined judgments, though general estimation remains holistic.[27] Estimating is particularly prevalent in fields like construction and research and development (R&D), where projects often involve non-repetitive elements, such as custom builds or prototype development, necessitating quick approximations for scheduling and resource allocation.[29] In NASA's Kennedy Space Center operations, for example, engineers use estimation for Shuttle processing tasks akin to construction activities, incorporating "as-run" feedback from prior missions to refine predictions.[29] Accuracy tends to improve with the estimator's experience, as repeated exposure to real-world variances allows for better calibration of judgments over time.[29] Despite these advantages, estimating methods exhibit higher variability and subjectivity compared to observational techniques, leading to potential inaccuracies that can affect planning reliability.[27] They are best employed as interim measures until more precise studies can be conducted, avoiding use in incentive-based systems where objectivity is critical.[28]
Specialized Approaches
Analytical Estimating
Analytical estimating is a work measurement technique that involves breaking down a job into its constituent elements and estimating the time required for each element based on expert judgment, historical data, and knowledge of similar operations, rather than direct observation. This method enables the establishment of standard times for tasks at a defined level of performance, typically 100% rating for a qualified worker, and is particularly suited for non-repetitive or long-cycle jobs where full time studies would be inefficient. Unlike pure estimating, which relies on overall intuition, analytical estimating emphasizes structured decomposition to enhance accuracy through reasoned analysis.[8] The procedure begins with a detailed method study to identify the job's elements at natural breakpoints, considering factors such as distances involved, worker skill levels, tools used, and working conditions. For each element, times are estimated using available synthetic data from predetermined motion time systems (PMTS) where applicable, or by drawing on the estimator's experience with comparable elements; for instance, estimating the time to align a component might incorporate knowledge of reach distances and grasp motions adjusted for the operator's proficiency. These individual element times are then summed to yield the total basic time, to which relaxation and other allowances are added to determine the standard time. This approach allows errors in individual estimates to compensate across elements, resulting in an overall acceptable level of precision for planning purposes. Skilled estimators, trained in work study principles, are essential to ensure consistency and reliability.[8][30] Analytical estimating is especially valuable in skilled trades and maintenance tasks, such as repair work in tool rooms or job-order production, where applying a complete PMTS would be overly detailed and time-consuming due to the uniqueness of operations. 
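The element-by-element logic can be sketched as follows. The job breakdown, the times, and the 12% relaxation allowance are all hypothetical; in practice each "synthetic" value would come from a PMTS or standard-data table, while each "estimated" value reflects the estimator's judgment.

```python
# Hypothetical breakdown of a one-off maintenance job. Each element's basic
# time (minutes) comes either from synthetic/PMTS data ("synthetic") or from
# the estimator's experience ("estimated"); all figures here are invented.
elements = [
    ("Isolate and lock out machine",  4.0, "estimated"),
    ("Remove guard (8 bolts)",        6.5, "synthetic"),
    ("Replace drive belt",           12.0, "estimated"),
    ("Refit guard and test run",      8.5, "synthetic"),
]

basic_minutes = sum(t for _, t, _ in elements)
standard_minutes = basic_minutes * 1.12  # assumed 12% relaxation allowance
print(f"basic {basic_minutes:.1f} min, standard {standard_minutes:.1f} min")
```

Because the total is a sum of many independent judgments, overestimates on some elements tend to offset underestimates on others, which is the compensation effect the text relies on for overall accuracy.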
It provides a cost-effective alternative to direct time study, facilitating quicker quotations, scheduling, and incentive rate setting without the need for on-site observation, though its accuracy is generally lower than observational methods and depends heavily on the estimator's expertise. Periodic validation of the resulting standard times against actual performance is recommended to maintain relevance.[8][31]
Comparative Estimating
Comparative estimating is a work measurement technique used to establish standard times for tasks by comparing them to known benchmark operations with established durations, particularly when direct observation or detailed analysis is impractical. This method relies on the estimator's experience to identify similarities between the new task and reference jobs, allowing for relative assessment rather than absolute measurement. It is especially suited for non-repetitive or variable work where full time studies would be inefficient.[8] The process involves first selecting and timing a set of benchmark jobs that cover a range of typical work content, often using logarithmic time slots to categorize durations—for instance, assigning a midpoint of 15 minutes to jobs falling between 0 and 30 minutes. Similar tasks are then grouped based on shared characteristics, such as motion patterns or complexity, and their times are adjusted relative to the benchmarks using ratios or performance scales. Estimators rate the new job's difficulty or pace against the reference, applying a comparison factor to derive the estimate; for example, if a task is deemed 20% more demanding than the benchmark, the estimated time is calculated as the reference time multiplied by 1.2. This approach draws on historical data from similar operations to ensure consistency across estimates.[8][32] In practice, the formula for estimation is typically expressed as:
\text{Estimated time} = \text{Reference time} \times \text{Comparison factor}
where the comparison factor is a multiplier (e.g., 1.1 for slightly easier work or 1.3 for more complex variants) derived from qualitative judgment of differences in effort, conditions, or elements. To enhance reliability, multiple estimators may review the benchmarks and factors, though the method inherently minimizes individual bias through reliance on validated reference data.[8][33] This technique finds applications in batch production environments, where tasks vary slightly from established norms, and in scenarios with limited data, such as maintenance activities or new product introductions requiring quick bids. For instance, in aerospace manufacturing, times for low-volume assembly variants are estimated by scaling against proven operations, supporting outsourcing decisions and production planning without extensive studies. It is also valuable for indirect labor standards, like estimating uncommon maintenance functions by adjusting benchmark times for direct tasks to account for volume fluctuations.[32][33] A key advantage of comparative estimating is its speed in handling job variants, enabling rapid standardization in dynamic settings like medium-batch operations, while leveraging past experience to achieve reasonable accuracy over time through statistical averaging of estimation errors. Studies on its precision indicate that optimal interval selections for time allocation can yield practical results, though accuracy depends on the quality of benchmarks and estimator expertise. Unlike more granular methods, it avoids breakdown into elemental motions, focusing instead on holistic comparisons for efficiency.[8][34]
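The slotting-and-scaling procedure described above can be sketched as follows. The slot boundaries, midpoints, and the 1.2 factor are hypothetical illustrations of the scheme, not values from any published comparative-estimating standard.

```python
import bisect

# Hypothetical logarithmic slots: jobs up to 30 min take the 15-min midpoint,
# 30-60 min take 45, and so on; bounds and midpoints are illustrative only.
SLOT_BOUNDS = [30, 60, 120, 240]
SLOT_MIDPOINTS = [15, 45, 90, 180]

def slot_time(benchmark_minutes: float) -> float:
    """Assign a benchmark job the midpoint time of its slot."""
    i = min(bisect.bisect_left(SLOT_BOUNDS, benchmark_minutes),
            len(SLOT_MIDPOINTS) - 1)
    return SLOT_MIDPOINTS[i]

def comparative_estimate(reference_minutes: float, factor: float) -> float:
    """Estimated time = reference time x comparison factor."""
    return reference_minutes * factor

# A new job judged 20% more demanding than a benchmark slotted at 15 minutes.
print(round(comparative_estimate(slot_time(25), 1.2), 1))  # -> 18.0
```

Slotting deliberately trades per-job precision for speed: any benchmark in the 0-30 minute band is treated as 15 minutes, on the expectation that slotting errors average out across many estimates.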