Function point
Function point analysis (FPA) is a standardized method for measuring the functional size of a software application based on the functionality it provides to the end user, derived directly from user requirements rather than physical attributes such as lines of code.[1] Developed to offer a technology-independent metric, FPA quantifies software size in terms of function points, which represent units of delivered functionality, enabling consistent comparisons across projects and lifecycles.[1]

The technique was pioneered by Allan J. Albrecht at IBM in the late 1970s, with the initial proposal presented in October 1979 as a way to assess application development productivity from the end user's viewpoint.[2] Albrecht's approach addressed limitations in traditional metrics by focusing on logical functionality, such as data processing and user interactions, rather than implementation details.[3] Since its introduction, FPA has evolved into an international standard, formalized by the International Function Point Users Group (IFPUG) and aligned with ISO/IEC 14143-1:2007 for functional size measurement.[1]

At its core, FPA involves identifying and counting five basic function types: external inputs (EI) for data entering the system, external outputs (EO) for data leaving the system, external inquiries (EQ) for retrieving data and presenting it to the user without updating stored data, internal logical files (ILF) for user-maintainable data groups, and external interface files (EIF) for data maintained by other systems.[1] Each type is assigned a weight based on its complexity (low, average, or high); the weights are summed to yield unadjusted function points (UFP) and then adjusted by a value adjustment factor (VAF) that accounts for 14 general system characteristics such as performance and reusability.[4] The resulting adjusted function points provide a reliable size metric for estimating effort, costs, schedules, and productivity.[1]

FPA's primary applications include project estimation, benchmarking software productivity (e.g., effort per function point), quality assessment (e.g., defects per function point), and requirements management throughout the software development lifecycle.[1] Governed by IFPUG's Function Point Counting Practices Manual (CPM), which complies with ISO/IEC 20926:2009, the method ensures repeatability and is supported by certification programs such as the Certified Function Point Specialist (CFPS).[1] Its enduring relevance lies in normalizing metrics across diverse technologies, from legacy systems to modern agile environments, making it a cornerstone of software measurement practices.[2]

Introduction
Definition and Purpose
A function point (FP) is a standardized unit of measurement used to quantify the functional size of software applications from the perspective of the end user. It focuses on the functionality delivered to users, such as the processing of data inputs, outputs, and inquiries, rather than technical details of implementation like lines of code or hardware specifics.[1][5]

The primary purpose of function points is to provide a technology-independent metric for estimating software development effort, costs, and productivity across the entire software lifecycle. By measuring size based on user requirements, function points enable consistent comparisons between projects, regardless of the programming language, platform, or development methodology employed. This approach supports benchmarking, resource allocation, and performance analysis in software engineering.[1][6]

Key principles of function point analysis emphasize counting user-oriented functions, including external inputs, external outputs, external inquiries, internal logical files, and external interface files, to capture the business value provided by the software. Unlike traditional code-based metrics, which vary with implementation choices, function points prioritize the logical functionality derived from specifications, promoting a stable and repeatable measure. Developed in the 1970s to overcome the shortcomings of code-volume metrics in managing large-scale projects, the method ensures assessments remain aligned with user needs and organizational goals.[1][7]

Historical Development
Function point analysis originated in the late 1970s at IBM, where Allan J. Albrecht developed it as a method to measure software productivity independently of programming languages or technologies. Albrecht introduced the concept in October 1979 in his paper "Measuring Application Development Productivity," presented at the Joint SHARE/GUIDE/IBM Application Development Symposium.[8] This approach addressed limitations in traditional metrics by focusing on five core function types: external inputs, outputs, inquiries, internal logical files, and external interface files.[2]

The metric gained broader adoption in the mid-1980s through the formation of the International Function Point Users Group (IFPUG) in 1986, a non-profit organization dedicated to standardizing and promoting function point practices.[9] IFPUG released version 1.0 of its Counting Practices Manual (CPM) in 1988, providing guidelines for consistent application of Albrecht's method, with subsequent versions refining the rules for accuracy and interoperability.[10] During the 1980s and 1990s, refinements driven by user feedback and committee work addressed ambiguities in counting complex systems, leading to more robust standardization; influential figures such as Capers Jones further advanced its global promotion through research on software economics and productivity benchmarking using function points.[11]

Initially applied in mainframe environments for project estimation at IBM and among early adopters, function points expanded in the 1990s to client-server architectures as organizations sought technology-agnostic sizing. By the 2000s, adaptations extended its use to web and distributed applications, culminating in international recognition with the adoption of IFPUG's method in ISO/IEC 20926:2009, which formalized function point analysis as a standard for software functional size measurement.[12]

Function Point Analysis Methodology
Core Components
Function point analysis relies on five primary base functional components to quantify the functional size of software from the user's perspective. These components—external inputs (EI), external outputs (EO), external inquiries (EQ), internal logical files (ILF), and external interface files (EIF)—capture the elementary processes and data entities that deliver functionality across the application's boundary. Each component is identified and weighted based on specific criteria to ensure consistent measurement.[13]

External inputs (EI) are elementary processes that process data or control information entering from outside the application boundary, typically to create, update, or delete data in internal logical files or to alter system behavior. Examples include data entry screens that validate and store user information. EIs cross the boundary once and involve processing logic.[13][10]

External outputs (EO) are elementary processes that generate and send derived data or control information to an external destination, often involving calculations, derivations, or maintenance of internal logical files during processing. For instance, a report generated from multiple data sources with computed totals qualifies as an EO. EOs cross the boundary once and may include formatting or aggregation.[13][10]

External inquiries (EQ) are the simplest transactional components: an elementary process that retrieves data from internal sources, applies no derivations or file maintenance, and presents the information externally, with both input and output crossing the boundary. A search screen displaying matching records without updates exemplifies an EQ. EQs emphasize read-only access for information retrieval.[13][10]

Internal logical files (ILF) are user-identifiable groups of logically related data maintained entirely within the application's boundary through its elementary processes, such as adding, changing, or deleting records. An ILF might be a customer database table for which the application handles all create, read, update, and delete operations. ILFs do not include temporary data or system-generated files the user does not recognize.[13][10]

External interface files (EIF) are user-identifiable groups of logically related data referenced by the application but maintained by another application outside its boundary; the counted application only reads or derives data from them, without maintenance rights. For example, an inventory system referencing a shared supplier catalog maintained elsewhere counts the catalog as an EIF. EIFs support integration but exclude any update capability within the counted scope.[13][10]

Complexity for each component is classified as low, average, or high using three counts: data element types (DETs), the unique, user-recognizable, non-recursive fields crossing the boundary or maintained in a file; file types referenced (FTRs), the distinct ILFs or EIFs read or maintained by a transaction; and record element types (RETs), the user-recognizable subgroups of data within an ILF or EIF (e.g., one for the primary record plus one for each subtype or association). Transactional functions (EI, EO, EQ) are classified by DETs and FTRs, while data functions (ILF, EIF) are classified by DETs and RETs. Weights, expressed as unadjusted function points, are assigned via standardized matrices; because the matrices are simple band lookups, they mechanize directly, as illustrated in the sketch following the three matrices below.[13][10]

The complexity matrix for external inputs (EI), with weights in parentheses, is as follows:

| DETs \ FTRs | 0-1 | 2 | 3+ |
|---|---|---|---|
| 1-4 | Low (3) | Low (3) | Avg (4) |
| 5-15 | Low (3) | Avg (4) | High (6) |
| 16+ | Avg (4) | High (6) | High (6) |
The complexity matrix for external outputs (EO, weighted 4/5/7 for low/average/high) and external inquiries (EQ, weighted 3/4/6) is as follows:

| DETs \ FTRs | 0-1 | 2-3 | 4+ |
|---|---|---|---|
| 1-5 | Low | Low | Avg |
| 6-19 | Low | Avg | High |
| 20+ | Avg | High | High |
The complexity matrix for internal logical files (ILF, weighted 7/10/15 for low/average/high) and external interface files (EIF, weighted 5/7/10) is as follows:

| DETs \ RETs | 1 | 2-5 | 6+ |
|---|---|---|---|
| 1-19 | Low | Low | Avg |
| 20-50 | Low | Avg | High |
| 51+ | Avg | High | High |
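The classification step amounts to a table lookup over DET and FTR (or RET) bands. The following Python sketch illustrates this for external inputs, using the bands and weights from the EI matrix above; the function names and data layout are this illustration's own convention, not part of any IFPUG tooling:

```python
# Illustrative EI complexity lookup; bands and weights follow the EI
# matrix above, but names and structure are this sketch's convention.

EI_WEIGHTS = {"low": 3, "average": 4, "high": 6}

# Rows are DET bands (1-4, 5-15, 16+); columns are FTR bands (0-1, 2, 3+).
EI_MATRIX = [
    ["low",     "low",     "average"],  # 1-4 DETs
    ["low",     "average", "high"],     # 5-15 DETs
    ["average", "high",    "high"],     # 16+ DETs
]

def band(value, upper_limits):
    """Index of the first band whose upper limit covers the value."""
    for i, limit in enumerate(upper_limits):
        if value <= limit:
            return i
    return len(upper_limits)

def classify_ei(dets, ftrs):
    """Classify one external input; returns (complexity, weight in UFP)."""
    complexity = EI_MATRIX[band(dets, [4, 15])][band(ftrs, [1, 2])]
    return complexity, EI_WEIGHTS[complexity]

# A data-entry screen with 9 fields referencing 2 files:
print(classify_ei(dets=9, ftrs=2))  # -> ('average', 4)
```

The EO/EQ and ILF/EIF matrices differ only in band boundaries and weight values, so the same lookup pattern covers all five component types.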
Calculation Process
The calculation of function points begins with identifying the application boundary, which defines the scope of the software being measured by delineating what functionality is internal to the application versus external interfaces or other systems. This boundary is determined from the user's perspective, focusing on the logical design and functionality perceivable by the end user, ensuring only relevant elements are counted within the project's scope.[1][12]

Step 1 involves identifying and counting the five core function types—External Inputs (EIs), External Outputs (EOs), External Inquiries (EQs), Internal Logical Files (ILFs), and External Interface Files (EIFs)—using the complexity criteria, such as the number of data element types and file types referenced, detailed in the component definitions above. Each identified function is classified as low, average, or high complexity and assigned a corresponding weight: for example, low-complexity EIs are weighted at 3, average at 4, and high at 6. The Unadjusted Function Points (UFP) are then calculated as the sum of the weighted values across all components:

\text{UFP} = \sum (\text{EI weights}) + \sum (\text{EO weights}) + \sum (\text{EQ weights}) + \sum (\text{ILF weights}) + \sum (\text{EIF weights}). [1][10]

Step 2 applies the Value Adjustment Factor (VAF) to account for general system characteristics influencing the software's complexity and value. The VAF is derived from 14 General System Characteristics (GSCs), each rated on a degree-of-influence scale from 0 (no influence) to 5 (strong influence), covering aspects such as data communications, performance, and reusability. The Total Degree of Influence (TDI) is the sum of these ratings (ranging from 0 to 70), and the VAF is computed as:
\text{VAF} = 0.65 + 0.01 \times \text{TDI},
yielding a multiplier between 0.65 and 1.35. The 14 GSCs are:
| GSC Number | Characteristic | Description Example |
|---|---|---|
| 1 | Data communications | Extent of communication facilities |
| 2 | Distributed data processing | Distribution of processing components |
| 3 | Performance | Response or throughput specifications |
| 4 | Heavily used configuration | Degree of use of the current hardware configuration |
| 5 | Transaction rate | Number of transactions per time period |
| 6 | Online data entry | Proportion of online versus batch |
| 7 | End-user efficiency | Efforts to make system convenient |
| 8 | Online update | Proportion of updates in online mode |
| 9 | Complex processing | Mathematical or statistical computations |
| 10 | Reusability | Modularity for reuse |
| 11 | Installation ease | Ease of converting and installing |
| 12 | Operational ease | Ease of daily operations |
| 13 | Multiple sites | Number of sites for one application |
| 14 | Facilitate change | Designed to ease future modification |
[1][12][14]
Step 3 computes the Adjusted Function Points (AFP) by multiplying the unadjusted count by the adjustment factor:

\text{AFP} = \text{UFP} \times \text{VAF}. [1][10]

For example, consider a hypothetical project where the UFP totals 100 based on counted components. If the GSCs are rated such that the TDI is 35 (e.g., as shown in the table below for illustration), the VAF is 1.0, resulting in AFP = 100.
| GSC Number | Degree of Influence (0-5) |
|---|---|
| 1 | 3 |
| 2 | 2 |
| 3 | 4 |
| 4 | 1 |
| 5 | 0 |
| 6 | 3 |
| 7 | 5 |
| 8 | 4 |
| 9 | 2 |
| 10 | 3 |
| 11 | 2 |
| 12 | 3 |
| 13 | 1 |
| 14 | 2 |
| Total (TDI) | 35 |
VAF = 0.65 + (35 × 0.01) = 1.0; AFP = 100 × 1.0 = 100.[1][10]
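Once components are classified, the arithmetic of the three steps is mechanical. The following Python sketch reproduces the worked example above (UFP of 100, TDI of 35); the weights are the standard IFPUG values, while the component inventory, function names, and data layout are hypothetical and chosen for illustration:

```python
# Illustrative end-to-end FP calculation; weights follow the IFPUG
# matrices, but names and structure are this sketch's convention.

# Standard weights by component type and complexity.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},
    "EO":  {"low": 4, "average": 5,  "high": 7},
    "EQ":  {"low": 3, "average": 4,  "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_fp(counts):
    """counts maps (component type, complexity) -> number of instances."""
    return sum(WEIGHTS[ctype][cplx] * n for (ctype, cplx), n in counts.items())

def value_adjustment_factor(gsc_ratings):
    """gsc_ratings: the 14 degrees of influence, each 0-5; TDI is their sum."""
    tdi = sum(gsc_ratings)
    return 0.65 + 0.01 * tdi

# Hypothetical component inventory chosen so that UFP = 100.
counts = {
    ("EI", "average"): 10,  # 10 x 4  = 40
    ("EO", "low"):      5,  #  5 x 4  = 20
    ("EQ", "average"):  5,  #  5 x 4  = 20
    ("ILF", "average"): 1,  #  1 x 10 = 10
    ("EIF", "low"):     2,  #  2 x 5  = 10
}

# GSC ratings from the example table above; TDI = 35.
gsc = [3, 2, 4, 1, 0, 3, 5, 4, 2, 3, 2, 3, 1, 2]

ufp = unadjusted_fp(counts)                   # 100
vaf = round(value_adjustment_factor(gsc), 2)  # 1.0
print(ufp, vaf, ufp * vaf)                    # 100 1.0 100.0
```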
Standards and Variations
IFPUG Standards
The International Function Point Users Group (IFPUG) was formally established in 1986 to standardize and promote the function point analysis method originally developed by Allan Albrecht.[15] As a non-profit organization, IFPUG maintains the official guidelines for function point counting through its Counting Practices Manual (CPM), with the latest full release being version 4.3.1 in 2010, accompanied by subsequent minor updates and supplementary materials to ensure compliance with international standards.[1][16]

The CPM provides detailed rules for function point analysis, including boundary setting, which defines the scope based on user requirements and applies across the software development life cycle.[1] Component identification involves categorizing functional elements such as external inputs, external outputs, external inquiries, internal logical files, and external interface files.[1] Complexity assessment evaluates each component using standard tables based on data element types, file types referenced, and record element types, assigning low, average, or high complexity weights.[1] The Value Adjustment Factor (VAF) is then applied to adjust the unadjusted function point count for general system characteristics, using a 14-factor model rated from 0 to 5.[1]

IFPUG offers certification programs to validate expertise in function point analysis, including the Certified Function Point Specialist (CFPS), which requires a minimum score of 90% overall and 80% in each exam section covering definitions, implementation, and case studies, demonstrating mastery of best practices.[17] The Certified Function Point Practitioner (CFPP) is an entry-level certification requiring 80% overall and 70% per section, focusing on foundational skills for accurate and consistent counting.[18] Both certifications are valid for three years, with options for extension pending major CPM updates.[18]

The IFPUG method has been internationally standardized as ISO/IEC 20926:2009, which defines the rules, steps, and definitions for applying function point analysis as a functional size measurement technique, ensuring interoperability and consistency.[12] Post-2010 revisions and supplementary IFPUG publications, such as the 2012 Guide to IT and Software Measurement, address adaptations for modern technologies, including guidance on applying function points in agile development and cloud computing contexts to maintain relevance in iterative and distributed systems.[19][20]

Other Variants and Extensions
Beyond the International Function Point Users Group (IFPUG) standard, several alternative functional size measurement (FSM) methods have emerged to address specific limitations or extend applicability to diverse software domains. These variants maintain the core principle of quantifying functionality from the user's perspective but differ in components, weighting schemes, and target applications.[21][22]

COSMIC Function Points (CFP), developed by the Common Software Measurement International Consortium in 1998, provide a second-generation FSM approach suitable for all software types, including real-time and embedded systems. Unlike IFPUG's focus on data and transactional functions, CFP measures size based on four elementary data movements—entries (input to the software), exits (output from the software), reads (retrieval of data without change), and writes (storage or update of data)—each assigned a fixed size of 1 CFP; a counting sketch follows the comparison table below. This granularity enables precise sizing by layer or process, making the method well suited to non-business applications where IFPUG may undercount control processes. The method was formalized as ISO/IEC 19761:2011, emphasizing universality across development paradigms such as Agile.[23][24]

NESMA Function Points, originating in the Netherlands during the 1990s as a national standard under the Netherlands Software Metrics Association, closely resemble IFPUG but incorporate simplified estimation techniques for early project phases. The method classifies functions similarly (internal logical files, external interface files, external inputs, external outputs, external inquiries) but applies predefined weights to standard function types, reducing subjectivity in counting for common business applications. This approach facilitates rapid indicative and estimated sizing, particularly valuable in European outsourcing contracts where contractual benchmarks require consistent, low-effort measurements. NESMA's guidelines align with ISO/IEC 24570:2018 for software enhancement projects.[25][26]

Mark II Function Points, introduced in the late 1980s by Charles Symons and detailed in his 1991 publication, are a UK-originated variant emphasizing transaction-oriented sizing for information systems. The method counts logical transactions (external inputs, outputs, and inquiries) weighted by complexity, alongside an "information profile" that assesses data entities (logical data stores and access paths) to capture both processing and data aspects more holistically than early IFPUG versions. Standardized under ISO/IEC 20968:2002, it suits transaction-heavy systems but has seen limited global adoption compared with newer standards.[27]

Extensions to traditional function points have also adapted the metric to modern contexts. Web Function Points (WFP) extend IFPUG by incorporating web-specific elements, such as dynamic pages, hyperlinks, and multimedia content, to better size user interfaces and navigation in web applications where standard counts overlook interactivity. This variant assigns points to web objects like forms and static or dynamic pages, improving estimation accuracy for e-commerce and portal developments. Similarly, Agile Function Points tailor FSM to iterative environments by aligning counts with user stories and sprints, allowing incremental sizing that integrates with story points for velocity-based planning without disrupting agile workflows.
These adaptations maintain core FSM principles while enhancing relevance to web and agile paradigms.[28][29]

| Variant | Key Components | Weighting Scheme | Primary Applicability |
|---|---|---|---|
| COSMIC (CFP) | Entries, Exits, Reads, Writes | Fixed (1 CFP each) | Real-time, embedded, all software types |
| NESMA | ILF, EIF, EI, EO, EQ (similar to IFPUG) | Predefined for standard functions | Business apps, outsourcing in Europe |
| Mark II | Logical transactions, data entities | Complexity-based (low/avg/high) | Transactional info systems |
| IFPUG (baseline) | ILF, EIF, EI, EO, EQ | Complexity-based (low/avg/high) | Traditional business applications |
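Because COSMIC assigns exactly 1 CFP to every data movement, sizing a functional process reduces to enumerating its entries, exits, reads, and writes. Below is a minimal sketch, assuming each process is represented simply as a list of its movements; the representation and names are this illustration's own, not the COSMIC measurement manual's:

```python
# Illustrative COSMIC sizing: each data movement (entry, exit, read,
# write) contributes exactly 1 CFP, so size equals the movement count.

MOVEMENT_TYPES = {"entry", "exit", "read", "write"}

def cosmic_size(processes):
    """processes maps a functional process name -> list of data movements."""
    total = 0
    for name, movements in processes.items():
        unknown = set(movements) - MOVEMENT_TYPES
        if unknown:
            raise ValueError(f"{name}: unknown movement types {unknown}")
        total += len(movements)  # 1 CFP per data movement
    return total

# Hypothetical "update order" process: receive the order (entry), read
# the customer record, store the order (write), confirm to the user (exit).
print(cosmic_size({"update order": ["entry", "read", "write", "exit"]}))  # 4
```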