
Function point

Function point analysis (FPA) is a standardized method for measuring the functional size of a software application based on the functionality it provides to the end user, derived directly from user requirements rather than physical attributes like lines of code. Developed to offer a technology-independent measure, FPA quantifies software size in terms of function points, which represent units of delivered functionality, enabling consistent comparisons across projects and lifecycles. The technique was pioneered by Allan J. Albrecht at IBM in the late 1970s, with the initial proposal presented in October 1979 as a way to assess application development productivity from the end user's viewpoint. Albrecht's approach addressed limitations in traditional metrics by focusing on logical functionality, such as data movements and user interactions, rather than implementation details. Since its introduction, FPA has evolved into an international standard, formalized by the International Function Point Users Group (IFPUG) and aligned with ISO/IEC 14143-1:2007 for functional size measurement.

At its core, FPA involves identifying and counting five basic function types: external inputs (EI) for data entering the system, external outputs (EO) for data leaving the system, external inquiries (EQ) for retrieving data from the system and presenting it to the user without maintaining internal files, internal logical files (ILF) for user-maintainable data groups, and external interface files (EIF) for data maintained by other systems. Each type is assigned a weight based on its complexity (low, average, or high), the weights are summed to yield unadjusted function points (UFP), and the total is then adjusted by a value adjustment factor (VAF) that accounts for 14 general system characteristics such as data communications and reusability. The resulting adjusted function points provide a reliable size metric for estimating effort, costs, schedules, and productivity.

FPA's primary applications include project estimation, productivity measurement (e.g., effort per function point), quality assessment (e.g., defects per function point), and benchmarking throughout the software development lifecycle. Governed by IFPUG's Function Point Counting Practices Manual (CPM), which complies with ISO/IEC 20926:2009, the method ensures repeatability and is supported by certification programs like Certified Function Point Specialist (CFPS). Its enduring relevance lies in normalizing metrics across diverse technologies, from legacy systems to modern agile environments, making it a cornerstone of software measurement practices.

Introduction

Definition and Purpose

A function point (FP) is a standardized unit of measure used to quantify the functional size of software applications from the perspective of the end user. It focuses on the functionality delivered to users, such as the processing of inputs, outputs, and inquiries, rather than the technical details of implementation like lines of code or hardware specifics. The primary purpose of function points is to provide a technology-independent metric for estimating software development effort, costs, and productivity across the entire software lifecycle. By measuring size based on user requirements, function points enable consistent comparisons between projects, regardless of the programming language, platform, or development methodology employed. This approach supports benchmarking, resource allocation, and performance analysis in software engineering. Key principles of function point analysis emphasize counting user-oriented functions, including external inputs, external outputs, external inquiries, internal logical files, and external interface files, to capture the functionality provided by the software. Unlike traditional code-based metrics, which vary with implementation choices, function points prioritize the logical functionality derived from specifications, promoting a stable and repeatable measure. Developed in the late 1970s to overcome the shortcomings of code volume metrics in managing large-scale projects, the method ensures assessments remain aligned with user needs and organizational goals.

Historical Development

Function point analysis originated in the late 1970s at IBM, where Allan J. Albrecht developed it as a method to measure software productivity independent of programming languages or technologies. Albrecht introduced the concept in October 1979 in his paper "Measuring Application Development Productivity," presented at the Joint SHARE/GUIDE/IBM Application Development Symposium. This approach addressed limitations in traditional metrics by focusing on five core function types: external inputs, outputs, inquiries, internal logical files, and external interface files. The metric gained broader adoption in the mid-1980s through the formation of the International Function Point Users Group (IFPUG), a non-profit organization dedicated to standardizing and promoting function point practices. IFPUG released its first Counting Practices Manual (CPM) in 1988 (version 1.0), providing guidelines for consistent application of Albrecht's method, with subsequent versions refining rules for accuracy and interoperability. During the 1990s and 2000s, refinements addressed ambiguities in counting complex systems, driven by user feedback and committee work, leading to more robust standardization; influential figures like Capers Jones further advanced its global promotion through research on software economics and productivity benchmarking using function points. Initially applied in mainframe environments for project estimation at IBM and other early adopters, function points expanded in the 1990s to client-server architectures as organizations sought technology-agnostic sizing. By the 2000s, adaptations extended its use to web-based and distributed applications, culminating in international recognition with the adoption of IFPUG's method in ISO/IEC 20926:2009, which formalized function point analysis as a standard for software functional size measurement.

Function Point Analysis Methodology

Core Components

Function point analysis relies on five primary base functional components to quantify the functional size of software from the user's perspective. These components, external inputs (EI), external outputs (EO), external inquiries (EQ), internal logical files (ILF), and external interface files (EIF), capture the elementary processes and data entities that deliver functionality across the application's boundary. Each component is identified and weighted based on specific criteria to ensure consistent measurement.

External inputs (EI) are elementary processes that process data or control information entering the application from outside its boundary, typically to create, update, or delete data in internal logical files or to alter system behavior without maintaining a file. Examples include data-entry screens that validate and store user input. EIs cross the boundary once and involve processing logic.

External outputs (EO) are elementary processes that generate and send derived data or control information to an external destination, often involving calculations, derivations, or maintenance of internal logical files during processing. For instance, a report generated from multiple sources with computed totals qualifies as an EO. EOs cross the boundary once and may include formatting or aggregation.

External inquiries (EQ) represent the simplest transactional components, consisting of an elementary process that retrieves data from internal sources, applies no derivations or calculations, and presents the data externally via an input and output pair crossing the boundary. A search screen displaying matching records without updates exemplifies an EQ. EQs emphasize read-only access for data retrieval.

Internal logical files (ILF) are user-identifiable groups of logically related data maintained entirely within the application's boundary through its elementary processes, such as adding, changing, or deleting records. An ILF might be a database table where the application handles all CRUD operations. ILFs do not include temporary or system-generated files without user recognition.

External interface files (EIF) are user-identifiable groups of logically related data referenced by the application but maintained by another application outside its boundary; the counted application only reads or derives data from them, without update rights. For example, an inventory system referencing a shared supplier catalog maintained elsewhere counts as an EIF. EIFs support integration but exclude any update capabilities within the scope.

Complexity for these components is classified as low, average, or high using three supporting counts: data element types (DETs), which are unique, user-recognizable, non-recursive fields of data crossing the boundary or being maintained; file types referenced (FTRs), which count each distinct ILF or EIF involved in processing (one per read or maintain action); and record element types (RETs), which are user-recognizable subgroups of data within an ILF or EIF (e.g., one for the primary record plus additional RETs for subtypes or associations). Transactional functions (EI, EO, EQ) use DETs and FTRs, while data functions (ILF, EIF) use DETs and RETs. Weights, expressed as unadjusted function points, are assigned via standardized matrices. The complexity matrix for external inputs (EI) is as follows:
DETs \ FTRs   0-1        2          3+
1-4           Low (3)    Low (3)    Avg (4)
5-15          Low (3)    Avg (4)    High (6)
16+           Avg (4)    High (6)   High (6)
For external outputs (EO) and external inquiries (EQ), the shared matrix is:
DETs \ FTRs   0-1    2-3    4+
1-5           Low    Low    Avg
6-19          Low    Avg    High
20+           Avg    High   High
Weights: EO low 4, average 5, high 7; EQ low 3, average 4, high 6. The complexity matrix for internal logical files (ILF) and external interface files (EIF) is:
DETs \ RETs   1      2-5    6+
1-19          Low    Low    Avg
20-50         Low    Avg    High
51+           Avg    High   High
Weights: ILF low 7, average 10, high 15; EIF low 5, average 7, high 10.
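
The classification can be mechanized. Below is a minimal Python sketch (illustrative only, not from any IFPUG publication) encoding the three matrices above; it uses the fact that in each matrix the complexity cell corresponds to the DET band plus the FTR/RET band, offset by one and clamped to the low/high ends:

```python
# (low, average, high) unadjusted weights per component type, per the tables above.
WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def _band(count, low_max, avg_max):
    """Map a DET/FTR/RET count to band 0, 1, or 2 given its two upper thresholds."""
    return 0 if count <= low_max else (1 if count <= avg_max else 2)

def complexity_weight(ftype, dets, refs):
    """Return (complexity, weight) for one component.

    refs means FTRs for transactional types (EI, EO, EQ)
    and RETs for data types (ILF, EIF).
    """
    if ftype == "EI":
        row, col = _band(dets, 4, 15), _band(refs, 1, 2)    # DETs 1-4/5-15/16+, FTRs 0-1/2/3+
    elif ftype in ("EO", "EQ"):
        row, col = _band(dets, 5, 19), _band(refs, 1, 3)    # DETs 1-5/6-19/20+, FTRs 0-1/2-3/4+
    else:  # ILF or EIF
        row, col = _band(dets, 19, 50), _band(refs, 1, 5)   # DETs 1-19/20-50/51+, RETs 1/2-5/6+
    idx = min(2, max(0, row + col - 1))  # reproduces the Low/Avg/High pattern of all three matrices
    return ("low", "average", "high")[idx], WEIGHTS[ftype][idx]

# Example: an EI with 7 DETs referencing 2 FTRs falls in the middle cell.
print(complexity_weight("EI", dets=7, refs=2))  # ('average', 4)
```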

Calculation Process

The calculation of function points begins with identifying the application boundary, which defines the scope of the software being measured by delineating which functionality is internal to the application and which belongs to external interfaces or other systems. This boundary is determined from the user's perspective, focusing on the logical design and the functionality perceivable by the end user, ensuring only relevant elements are counted within the project's scope. Step 1 involves identifying and counting the five core function types—External Inputs (EIs), External Outputs (EOs), External Inquiries (EQs), Internal Logical Files (ILFs), and External Interface Files (EIFs)—using predefined complexity criteria such as the number of data element types and file types referenced, as detailed in the component definitions. Each identified function type is classified as low, average, or high complexity and assigned a corresponding weight: for example, low-complexity EIs are weighted at 3, average at 4, and high at 6. The Unadjusted Function Points (UFP) are then calculated as the sum of the weighted values across all components:
\text{UFP} = \sum (\text{EI weights}) + \sum (\text{EO weights}) + \sum (\text{EQ weights}) + \sum (\text{ILF weights}) + \sum (\text{EIF weights}).
Step 2 applies the Value Adjustment Factor (VAF) to account for general system characteristics influencing the software's complexity and value. The VAF is derived from 14 General System Characteristics (GSCs), each rated on a degree of influence scale from 0 (no influence) to 5 (strong influence), covering aspects such as data communications, performance, and reusability. The Total Degree of Influence (TDI) is the sum of these ratings (ranging from 0 to 70), and the VAF is computed as:
\text{VAF} = 0.65 + 0.01 \times \text{TDI},
yielding a multiplier between 0.65 and 1.35. The 14 GSCs are:
GSC Number   Characteristic                 Description
1            Data communications            Extent of communication facilities
2            Distributed data processing    Distribution of processing components
3            Performance                    Response or throughput specifications
4            Heavily used configuration     Load on the existing hardware and operating environment
5            Transaction rate               Number of transactions per time period
6            Online data entry              Proportion of online versus batch entry
7            End-user efficiency            Efforts to make the system convenient
8            Online update                  Proportion of updates made online
9            Complex processing             Mathematical or statistical computations
10           Reusability                    Design intended for reuse
11           Installation ease              Ease of converting and installing
12           Operational ease               Ease of daily operations
13           Multiple sites                 Number of sites for one application
14           Facilitate change              Ease of future modifications
The Adjusted Function Points (AFP), representing the final functional size measure, are obtained by multiplying the UFP by the VAF:
\text{AFP} = \text{UFP} \times \text{VAF}.
For example, consider a hypothetical application whose UFP totals 100 based on counted components. If the GSCs are rated such that the TDI is 35 (e.g., as shown in the table below for illustration), the VAF is 1.0, resulting in AFP = 100.
GSC Number    Degree of Influence (0-5)
1             3
2             2
3             4
4             1
5             0
6             3
7             5
8             4
9             2
10            3
11            2
12            3
13            1
14            2
Total (TDI)   35
VAF = 0.65 + (35 × 0.01) = 1.0; AFP = 100 × 1.0 = 100.
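
The full chain from counts to AFP is small enough to script. The following Python sketch (illustrative names, not a standard tool) reproduces the worked example, where fourteen GSC ratings summing to a TDI of 35 give a VAF of 1.0:

```python
def value_adjustment_factor(gsc_ratings):
    """VAF = 0.65 + 0.01 * TDI, where TDI is the sum of 14 ratings from 0 to 5."""
    assert len(gsc_ratings) == 14 and all(0 <= g <= 5 for g in gsc_ratings)
    tdi = sum(gsc_ratings)        # Total Degree of Influence, 0..70
    return 0.65 + 0.01 * tdi      # multiplier between 0.65 and 1.35

gsc = [3, 2, 4, 1, 0, 3, 5, 4, 2, 3, 2, 3, 1, 2]  # ratings from the table above
ufp = 100                                          # unadjusted function points
vaf = value_adjustment_factor(gsc)                 # 0.65 + 0.01 * 35 = 1.0
afp = ufp * vaf                                    # adjusted function points
print(f"TDI={sum(gsc)}, VAF={vaf:.2f}, AFP={afp:.0f}")  # TDI=35, VAF=1.00, AFP=100
```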

Standards and Variations

IFPUG Standards

The International Function Point Users Group (IFPUG) was formally established in 1986 to standardize and promote the function point analysis method originally developed by Allan Albrecht. As a non-profit organization, IFPUG maintains the official guidelines for function point counting through its Counting Practices Manual (CPM), with the latest full release being version 4.3.1 in 2010, accompanied by subsequent minor updates and supplementary materials to ensure compliance with international standards. The CPM provides detailed rules for function point analysis, including boundary setting, which defines the scope based on user requirements and applies across the software development life cycle. Component identification involves categorizing functional elements such as external inputs, external outputs, external inquiries, internal logical files, and external interface files. Complexity assessment evaluates each component using standard tables based on data element types and record element types, assigning low, average, or high complexity weights. The Value Adjustment Factor (VAF) is then applied to adjust the unadjusted function point count for general system characteristics, using a 14-factor model rated from 0 to 5.

IFPUG offers certification programs to validate expertise in function point analysis, including the Certified Function Point Specialist (CFPS), which requires a minimum score of 90% overall and 80% in each exam section covering definitions, implementation, and case studies, demonstrating mastery of best practices. The Certified Function Point Practitioner (CFPP) is an entry-level certification requiring 80% overall and 70% per section, focusing on foundational skills for accurate and consistent counting. Both certifications are valid for three years, with options for extension pending major CPM updates. The IFPUG method has been internationally standardized as ISO/IEC 20926:2009, which defines the rules, steps, and definitions for applying function point analysis as a functional size measurement technique, ensuring interoperability and consistency. Post-2010 revisions and supplementary IFPUG publications, such as the 2012 Guide to IT and Software Measurement, address adaptations for modern technologies, including guidance on applying function points in agile development and cloud computing contexts to maintain relevance in iterative and distributed systems.

Other Variants and Extensions

Beyond the International Function Point Users Group (IFPUG) standard, several alternative functional size measurement (FSM) methods have emerged to address specific limitations or extend applicability to diverse software domains. These variants maintain the core principle of quantifying functionality from the user's perspective but differ in components, weighting schemes, and target applications.

COSMIC Function Points (CFP), developed by the Common Software Measurement International Consortium in 1998, provide a second-generation FSM approach suitable for all software types, including real-time and embedded systems. Unlike IFPUG's focus on data and transactional functions, CFP measures size based on four elementary data movements—entries (input to the software), exits (output from the software), reads (retrieval of data without change), and writes (storage or update of data)—each assigned a fixed size of 1 CFP. This granularity enables precise sizing in layers or processes, making it well suited to non-business applications where IFPUG may undercount control processes. The method was formalized as the ISO/IEC 19761:2011 standard, emphasizing universality across development paradigms like Agile.

NESMA Function Points, originating in the Netherlands in the late 1980s as a national standard under the Netherlands Software Metrics Association, closely resemble IFPUG but incorporate simplified estimation techniques for early project phases. The method classifies functions similarly (internal logical files, external interface files, external inputs, external outputs, external inquiries) but applies predefined weights to function types, reducing subjectivity in counting for common business applications. This approach facilitates rapid indicative and estimated sizing, particularly valuable in European outsourcing contracts where contractual benchmarks require consistent, low-effort measurements. NESMA's guidelines align with ISO/IEC 24570:2018 for software enhancement projects.

Mark II Function Points, introduced in the late 1980s by Charles Symons and detailed in his 1991 publication, represent a UK-originated method emphasizing transaction-oriented sizing for information systems. It counts logical transactions (external inputs, outputs, and inquiries) weighted by complexity, alongside an "information profile" that assesses data entities (logical data stores and access paths) to capture both processing and data aspects more holistically than early IFPUG versions. This method, standardized under ISO/IEC 20968:2002, supports broader applicability to transaction-heavy systems but has seen limited global adoption compared to newer standards.

Extensions to traditional function points have also adapted the metric for modern contexts. Web Function Points (WFP) extend IFPUG by incorporating web-specific elements, such as dynamic pages, hyperlinks, and multimedia content, to better size user interfaces and navigation in web applications where standard counts overlook interactivity. This variant assigns points to web objects like forms and static/dynamic pages, improving estimation accuracy for e-commerce and portal developments. Similarly, Agile Function Points tailor FSM for iterative environments by aligning counts with user stories and sprints, allowing incremental sizing that integrates with story points for velocity-based planning without disrupting agile workflows. These adaptations maintain core FSM principles while enhancing relevance to web and agile paradigms.
Variant           Key Components                            Weighting Scheme                     Primary Applicability
COSMIC (CFP)      Entries, Exits, Reads, Writes             Fixed (1 CFP each)                   Real-time, embedded, all software types
NESMA             ILF, EIF, EI, EO, EQ (similar to IFPUG)   Predefined for standard functions    Business apps, early estimation
Mark II           Logical transactions, data entities       Complexity-based (low/avg/high)      Transactional info systems
IFPUG (baseline)  ILF, EIF, EI, EO, EQ                      Complexity-based (low/avg/high)      Traditional business applications
This table highlights structural differences, with COSMIC's data-movement focus contrasting with IFPUG's data-and-transaction emphasis, while NESMA streamlines counting for early estimation and Mark II balances processing and data aspects.
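
To make the contrast concrete, a COSMIC count is simply the number of data movements in each functional process, at 1 CFP apiece. The sketch below uses a hypothetical process, not an example from the COSMIC manual:

```python
# Each data movement (Entry, Exit, Read, Write) contributes exactly 1 CFP.
process_movements = {
    "record sensor reading": ["Entry",   # receive the reading from a sensor
                              "Read",    # fetch stored calibration data
                              "Write",   # persist the corrected reading
                              "Exit"],   # confirm completion to the caller
}
cfp = sum(len(moves) for moves in process_movements.values())
print(f"{cfp} CFP")  # 4 CFP
```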

Applications and Benefits

Project Estimation and Benchmarking

Function points serve as a foundational sizing measure for software projects by quantifying the functional requirements in terms of user-valued features, enabling the estimation of effort through established ratios. Practitioners typically multiply the total unadjusted or adjusted function points by an organization-specific or industry-derived person-hours per function point (PH/FP) ratio to forecast effort; for instance, industry benchmarks indicate ratios ranging from 5 to 20 PH/FP, with a global average of approximately 15 PH/FP derived from large-scale studies across various languages and domains. These ratios are calibrated from historical project data, accounting for factors like programming language and team experience, to provide reliable upfront estimates in the requirements phase.

For cost estimation, function points are converted to monetary values by applying productivity rates, such as function points per person-month (FP/PM), to the effort estimate, incorporating labor rates and overheads; this approach is particularly valuable in fixed-price contracts, where it ensures scope clarity and reduces disputes over changes. In benchmarking, organizations compare their cost per function point against industry repositories, revealing efficiencies or gaps; for example, International Software Benchmarking Standards Group (ISBSG) data shows average costs varying by project type, aiding competitive analysis and vendor negotiations. Productivity measurement leverages function points to track output as delivered FP per month or per developer, normalizing for functional complexity across projects and teams; this metric supports performance evaluation by highlighting variances from benchmarks like 7-13 FP/PM in mature organizations.

Function points integrate seamlessly with traditional waterfall methodologies for comprehensive upfront sizing during requirements analysis, while adaptations for agile environments involve estimating FP within user stories or epics to inform sprint capacities and velocity forecasting. In practice, IBM's early adoption of function points for mainframe applications in the late 1970s, including those in insurance and banking sectors, demonstrated their efficacy; a study of 24 IBM projects correlated function point counts with actual work-hours at coefficients of 0.86 or higher, achieving average relative errors below 32.3% in effort predictions and thus improving schedule accuracy. Tools such as Function Point WORKBENCH automate the counting process, integrating with project management systems to streamline estimation and maintain audit trails for compliance in regulated industries.
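
The ratio arithmetic above reduces to a one-line multiplication. In this hedged Python sketch, the 15 PH/FP figure echoes the global average quoted earlier, and the labor rate is a placeholder; both would be calibrated from an organization's historical data:

```python
def estimate_effort_and_cost(fp, hours_per_fp=15.0, rate_per_hour=100.0):
    """Return (person-hours, cost) for a given function point count."""
    effort_hours = fp * hours_per_fp          # effort = size x delivery rate
    return effort_hours, effort_hours * rate_per_hour

effort, cost = estimate_effort_and_cost(fp=250)
print(f"{effort:,.0f} person-hours, ${cost:,.0f}")  # 3,750 person-hours, $375,000
```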

Advantages in Software Metrics

Function points offer a technology-independent measure of software size, focusing on the functionality delivered to users rather than implementation details such as programming language or platform. This independence allows for consistent assessment across diverse technological environments, unlike code-based metrics like lines of code (LOC), which vary significantly by language (e.g., requiring up to 2,200% more LOC in low-level languages than in high-level ones for equivalent functionality). A key advantage is the ability to apply function point analysis early in the lifecycle, during the requirements phase, enabling reliable predictions of effort, cost, and resources before design or coding begins. This early applicability supports proactive project planning and risk management, contrasting with metrics that depend on completed artifacts. The standardized counting rules of function point analysis, as defined by international standards like ISO/IEC 20926:2009, minimize subjectivity and facilitate cross-project and cross-organizational comparisons. By providing a consistent basis for sizing, function points enable benchmarking against industry data, such as that from the ISBSG repository, to evaluate productivity and process improvements objectively. Function points emphasize user value by quantifying the features and functionalities that directly address business needs and end-user requirements, aligning software metrics with delivered benefits rather than internal artifacts. This focus helps organizations assess the economic value of software assets and prioritize enhancements based on functional impact. Empirical evidence indicates that function points demonstrate reliability in predicting effort across languages, with productivity rates stabilizing at consistent function points per month regardless of technological choices. Function points also exhibit strong scalability, effectively measuring software size from small applications to large enterprise systems, including maintenance and enhancement activities. This versatility supports portfolio management and normalization of productivity metrics (e.g., function points per staff hour) across varying project scales.

Comparisons with Other Metrics

Versus Lines of Code

Lines of code (LOC) is a traditional software metric that measures the physical or logical volume of source code in a program, typically counting executable statements while accounting for variations in programming languages; for instance, a single line in a high-level language such as Python may equate to approximately 3-5 lines in C due to differences in syntax density and abstraction levels. Function points (FP) differ fundamentally from LOC by focusing on the functional size of software from a user perspective, quantifying elements like inputs, outputs, inquiries, files, and interfaces, whereas LOC emphasizes implementation details such as coding style and language specifics; consequently, FP remains stable after the design phase and is unaffected by refactoring, while LOC counts fluctuate with code optimizations or rewrites. FP offers advantages over LOC by eliminating language bias, enabling fair comparisons across technologies, and providing a more reliable basis for productivity measurement, such as consistent function points per person-month across development teams regardless of the programming language used. In contrast, while LOC is simpler for quick code reviews in small-scale tasks, it can inflate counts through inclusions like comments or blank lines, leading to misleading size estimates; for example, the same application delivering 100 function points might require about 10,700 lines in COBOL but only around 5,300 lines in Python, highlighting how LOC distorts cross-language productivity assessments. Empirical studies, including research by Capers Jones in the 2010s analyzing thousands of projects, demonstrate that FP correlates more strongly with development effort and costs than LOC, as evidenced by an IBM case where two compilers with identical function points showed vastly different code volumes (17,500 lines in assembly language versus 5,000 in PL/S) but FP better predicted the actual 2.4-times-higher productivity in the higher-level language. Approximate conversions between LOC and FP exist in models like COCOMO II, which use language-specific factors to link the metrics, such as an average of 128 source lines per function point for the C language, though these ratios vary widely (e.g., 53 for Java) and are derived from historical data rather than universal rules.
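
Backfiring can be sketched directly from the gearing factors quoted in this paragraph; the values are indicative historical averages, not universal constants:

```python
# Approximate source lines of code per function point, per the figures cited above.
SLOC_PER_FP = {"COBOL": 107, "C": 128, "Java": 53}

def backfire(fp, language):
    """Convert a functional size to an approximate SLOC estimate."""
    return fp * SLOC_PER_FP[language]

for lang in SLOC_PER_FP:
    print(lang, backfire(100, lang))  # COBOL 10700, C 12800, Java 5300
```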

Versus Object-Oriented Metrics

Object-oriented metrics, such as the Chidamber-Kemerer (CK) suite, focus on internal design attributes to assess complexity and quality in object-oriented software. The CK metrics include Weighted Methods per Class (WMC), which measures the summed complexity of a class's methods; Depth of Inheritance Tree (DIT), capturing hierarchy depth; Coupling Between Objects (CBO), quantifying inter-class dependencies; Response For a Class (RFC), indicating the number of methods that can be invoked in response to a message received by the class; Number of Children (NOC), reflecting subclass proliferation; and Lack of Cohesion of Methods (LCOM), evaluating intra-class cohesion. These metrics emphasize structural elements like encapsulation, inheritance, and polymorphism, aiding in predicting maintainability and fault-proneness during design and implementation phases.

In contrast, function points (FP) measure external functionality from the user's perspective, counting data and transactional elements without regard to internal implementation details, making them paradigm-agnostic. CK metrics, however, delve into OO-specific internals, such as class coupling and method complexity, which FP overlooks. Use case points (UCP), another OO-oriented approach, share structural similarities with FP by estimating size from behavioral specifications but differ by weighting actors (simple, average, complex) and use case scenarios (simple, average, complex) based on interaction complexity, rather than FP's focus on elementary processes and data functions. The UCP formula, UCP = UUCP × TCF × EF, where UUCP is the unadjusted use case points (the sum of actor and use case weights), TCF is the technical complexity factor, and EF is the environmental factor, parallels FP's unadjusted function points multiplied by a value adjustment factor (VAF), but UCP better aligns with early-stage modeling via use case diagrams.

FP excels in heterogeneous contexts by treating applications as black boxes, enabling consistent sizing across paradigms, including integrations where internal design varies. Hybrid methods combine FP's functional sizing with CK metrics for comprehensive lifecycle coverage, using FP for requirements and CK for design evaluation. For instance, in OO projects, UCP facilitates estimation from initial use cases, while FP proves more suitable for environments involving non-OO legacy components, as it avoids OO-specific assumptions. Empirical studies on OO adaptations of FP demonstrate enhanced prediction capabilities; one validation of Object-Oriented Function Points (OOFP) reduced normalized error in size estimation from 38% to 15% by integrating OO entities like classes and methods with FP principles. Despite these strengths, traditional FP can undervalue OO features such as inheritance, as it ignores internal hierarchies and reuse mechanisms that reduce development effort. To mitigate this, adaptations like OOFP have been proposed, extending FP to explicitly count OO elements including associations, methods, and inheritance depths, thereby providing a more tailored measure for OO analysis and design.
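
For comparison with the FP calculation, the UCP formula quoted above can be sketched as follows; the actor and use case weights are Karner's commonly cited values, while the counts, TCF, and EF below are hypothetical:

```python
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def uucp(actors, use_cases):
    """Unadjusted use case points: weighted sum of actors plus use cases."""
    return (sum(ACTOR_WEIGHTS[kind] * n for kind, n in actors.items())
            + sum(USE_CASE_WEIGHTS[kind] * n for kind, n in use_cases.items()))

unadjusted = uucp(actors={"simple": 2, "complex": 1},
                  use_cases={"average": 4, "complex": 2})  # 5 + 70 = 75
ucp = unadjusted * 1.0 * 0.95  # UCP = UUCP x TCF x EF, both factors assumed here
print(ucp)  # 71.25
```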

Criticisms and Limitations

Key Criticisms

One major criticism of function point analysis (FPA) is its subjectivity, particularly in defining system boundaries and assigning complexity weights to functions, which leads to significant variations among counters. Studies of counting reliability have shown mean relative errors (MRE) ranging from 10.5% for adjusted function point counts under later counting rules to 17.3% under earlier variants, with some categories like external interfaces exhibiting errors up to 47.5%. Further empirical work by Low and Jeffery reported coefficients of variation (standard deviation divided by mean) of 33.8% to 45.5% across projects, translating to assumed MREs of 23% to 31%, highlighting inconsistencies due to interpretive differences in counting rules.

The process of performing a full FPA count is often time-consuming, requiring detailed examination of requirements and transactions, which can make it impractical for small applications or iterative development. Manual counting methods demand substantial effort for activities like categorizing data functions and transactions, leading to proposals for simplified variants to reduce this burden. In practice, this overhead can consume notable project resources, deterring adoption in fast-paced or resource-constrained environments.

FPA is limited to measuring functional size based on user-visible features, ignoring non-functional aspects such as performance, security, usability, or reliability, which are critical in many systems. This focus on data movements and storage makes it particularly unsuitable for real-time or embedded systems, where control processes and hardware interactions dominate but are underrepresented or capped at low weights (e.g., a maximum of 7 function points per output process regardless of its internal elaboration). For instance, standard rules fail to distinguish sub-processes in tasks like engine diagnostics, treating them equivalently to simpler controls and underestimating overall effort.

Critics argue that FPA struggles with modern software paradigms, such as microservices, AI-driven applications, or cloud-native systems, where traditional user boundaries blur due to distributed architectures and dynamic behavior. In these contexts, defining elementary processes becomes ambiguous, as interactions span services or involve non-deterministic elements like machine learning models, rendering standard counting guidelines inadequate. Empirical studies from the 2010s, including systematic mappings of effort estimation techniques, indicate low adoption of FPA in agile environments (less than 2% of reviewed works), with questions raised about its relevance to actual effort amid iterative practices and evolving requirements.

Mitigation Strategies

To address subjectivity in function point counting, the International Function Point Users Group (IFPUG) offers certification programs such as Certified Function Point Specialist (CFPS) and Certified Function Point Practitioner (CFPP), which provide standardized training on counting practices to ensure consistent application of rules and boundaries across projects. These programs emphasize guidelines from the Counting Practices Manual (CPM), including detailed definitions for elementary processes and data functions, reducing variability by promoting uniform interpretation among practitioners.

Automation tools mitigate the time-intensive nature of manual counting by integrating with requirements management systems to generate function point estimates directly from specifications. The Object Management Group (OMG) Automated Function Points (AFP) specification enables consistent automated counting aligned with IFPUG practices, supporting integration with tools like requirements analyzers for rapid sizing during development. Similarly, the Consortium for IT Software Quality (CISQ) automated function points use static code analysis to derive sizes post-implementation while maintaining traceability to user requirements.

Hybrid approaches combine function points with agile techniques to handle dynamic environments, such as mapping unadjusted function points to story points for backlog estimation, allowing teams to leverage functional size for velocity-based planning. For non-functional elements not captured by traditional function points, IFPUG's Software Non-functional Assessment Process (SNAP) measures aspects like operational and performance requirements, providing a complementary size metric that integrates with function points for total effort estimation in agile iterations.

Updated variants like COSMIC function points address limitations in non-business applications, such as real-time or embedded systems, by measuring functional size through data movements (entries, exits, reads, writes) applicable to any software domain, offering a more neutral alternative to IFPUG's business-oriented focus. Organizations can further mitigate inaccuracies through empirical calibration, adjusting COSMIC counts based on historical project data to tailor predictions to specific contexts.

Best practices include conducting peer reviews during counting to validate boundaries and resolve ambiguities, fostering structured analysis similar to requirements inspections and enhancing count reliability through collaborative verification. Additionally, leveraging historical databases like the International Software Benchmarking Standards Group (ISBSG) repository allows benchmarking of function point counts against industry data, enabling organizations to calibrate metrics and identify deviations for refinement. Ongoing research post-2015 validates function points in DevOps contexts through adjusted models, such as integrating automated function points with delivery pipelines to track size changes during agile-DevOps transformations, demonstrating improved productivity and quality metrics in high-velocity deployments.
