Standardized test
A standardized test is an assessment administered, scored, and interpreted under uniform conditions to permit reliable comparisons of performance across test-takers, typically involving fixed content, time limits, and scoring rubrics derived from empirical norming or criterion-referencing.[1] These tests emerged in the early 20th-century United States as tools for efficiently sorting students by ability amid expanding public education systems, evolving from rudimentary civil service exams to widespread use in K-12 accountability, college admissions, and professional licensing.[2] Empirically, standardized tests demonstrate strong predictive validity for academic and occupational outcomes, often outperforming alternatives such as high school grades in forecasting college GPA and graduation rates because of their resistance to grade inflation and subjective bias.[3][4] Despite controversies alleging cultural or socioeconomic bias—claims frequently amplified in academic discourse but undermined by longitudinal data showing consistent validity across demographic groups—they enable merit-based selection by quantifying cognitive skills causally linked to complex task performance, though critics argue they incentivize narrow curriculum focus at the expense of broader learning.[1][4]
Definition and Core Principles
Definition and Purpose
A standardized test is an assessment that requires all test-takers to answer the same questions, or a selection from a common question bank, under uniform administration and scoring procedures to enable consistent comparison of performance across individuals or groups.[5] This standardization ensures that variations in results reflect differences in abilities rather than discrepancies in testing conditions, with reliability established through empirical validation on large representative samples.[6] Such tests are typically objective, often featuring formats like multiple-choice items that minimize subjective scoring, though they may include constructed-response elements scored via rubrics.[5]
The core purpose of standardized testing is to measure specific knowledge, skills, or aptitudes against established norms or criteria, facilitating objective evaluations for decision-making in education, employment, and certification.[7] Norm-referenced tests compare individuals to a peer group, yielding percentile ranks or standard scores derived from a normal distribution, while criterion-referenced tests assess mastery of predefined standards independent of others' performance.[5]
These instruments support high-stakes applications, such as college admissions via exams like the SAT, which over 1.9 million U.S. students took in 2023 to demonstrate readiness, or accountability measures under policies like No Child Left Behind, which mandated annual testing in reading and mathematics for grades 3-8 from 2002 onward to track proficiency rates.[8] By providing quantifiable data, standardized tests inform resource allocation, curriculum adjustments, and identification of achievement gaps, though their validity depends on alignment with intended constructs and avoidance of cultural bias, as confirmed through psychometric analysis.[9][6]
In professional contexts, standardized tests serve selection and licensure functions, such as the Graduate Record Examination (GRE), used by over 300 graduate programs annually to predict academic success, or civil service exams that have screened applicants for U.S. federal positions since the Pendleton Act of 1883, reducing patronage by prioritizing merit-based scoring.[8] Overall, their design promotes fairness by mitigating evaluator bias, enabling large-scale assessments that individual judgments cannot match in scalability or comparability.[10]
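The two interpretation modes can be illustrated with a minimal sketch: a norm-referenced percentile rank locates a score within a peer group, while a criterion-referenced decision compares it to a fixed cut score. The norm sample, raw score, and cut score below are hypothetical.
```python
"""Minimal sketch contrasting norm-referenced and criterion-referenced
interpretation of the same raw score. All numbers are hypothetical."""


def percentile_rank(score: float, norm_sample: list[float]) -> float:
    """Percent of the norm group scoring at or below the given raw score."""
    at_or_below = sum(1 for s in norm_sample if s <= score)
    return 100.0 * at_or_below / len(norm_sample)


# Hypothetical norm group (operational norms use thousands of examinees).
norms = [42, 47, 50, 53, 55, 58, 60, 63, 66, 71]
raw = 60

# Norm-referenced interpretation: standing relative to the peer group.
print(f"percentile rank: {percentile_rank(raw, norms):.0f}")

# Criterion-referenced interpretation: mastery against a fixed cut score.
CUT_SCORE = 65  # hypothetical standard set by an expert panel
print("proficient" if raw >= CUT_SCORE else "not proficient")
```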
Key Characteristics of Standardization
Standardization in testing refers to the establishment of uniform procedures for test administration, scoring, and interpretation to ensure comparability of results across test-takers. This process mandates that all examinees encounter identical or statistically equivalent test items, receive the same instructions, adhere to consistent time limits, and complete the assessment under comparable environmental conditions, such as quiet settings and supervised proctoring.[5][11] Such uniformity minimizes extraneous variables that could influence performance, enabling scores to reflect inherent abilities or knowledge rather than situational differences.[12]
A core feature is objective scoring, in which responses are evaluated using predetermined criteria that reduce or eliminate subjective judgment, often through machine-readable formats like multiple-choice items or automated essay scoring algorithms calibrated against human benchmarks. This objectivity contrasts with teacher-made assessments, where variability in grading can introduce bias; standardized tests achieve high inter-rater reliability, typically exceeding 0.90 in psychometric evaluations, by employing fixed answer keys or rubrics validated through empirical trials.[13] Equivalent forms—alternate versions of the test with parallel difficulty and content—are developed and equated statistically to prevent advantages from prior exposure, ensuring fairness in repeated administrations such as annual proficiency exams.[14]
Norming constitutes another essential characteristic, involving administration of the test to a large, representative sample of the target population—often thousands of individuals stratified by age, gender, socioeconomic status, and geography—to derive percentile ranks, standard scores, or stanines that contextualize individual performance. For instance, norms for aptitude tests like the SAT are updated periodically using samples exceeding 1 million U.S. high school students to reflect demographic shifts and maintain relevance.[15] This process relies on psychometric techniques, including item response theory, to calibrate difficulty and discriminate among ability levels, yielding reliable metrics in which test-retest correlations often surpass 0.80 over short intervals.[16] Without rigorous norming, scores lack interpretive validity, as evidenced by historical revisions to IQ tests that adjusted for the Flynn effect—a documented rise of roughly 3 points per decade attributed to environmental factors.[17]
Finally, standardization incorporates safeguards for accessibility and equity, such as accommodations for disabilities (e.g., extended time verified through empirical validation studies) that preserve test integrity, along with ongoing validation against external criteria like academic outcomes to confirm predictive utility. These elements collectively underpin the test's reliability—consistency of scores under repeated conditions—and validity—alignment with intended constructs—hallmarks of psychometric soundness.[18][19]
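Two of the consistency checks named above, test-retest correlation and inter-rater exact agreement, can be sketched directly; the score lists and rubric ratings below are hypothetical and far smaller than operational samples.
```python
"""Minimal sketch of two consistency checks: test-retest correlation and
inter-rater exact agreement. All data are hypothetical."""

import statistics


def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.stdev(x) * statistics.stdev(y) * (len(x) - 1))


def exact_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Proportion of responses given identical scores by two raters."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)


# Hypothetical scores from the same examinees on two administrations.
first = [52, 61, 47, 70, 58, 66, 49, 73]
second = [55, 59, 50, 72, 57, 64, 51, 75]
print(f"test-retest r = {pearson_r(first, second):.2f}")

# Hypothetical rubric scores (0-6) assigned by two trained raters.
rater_a = [4, 5, 3, 6, 2, 4, 5, 3]
rater_b = [4, 5, 3, 5, 2, 4, 5, 3]
print(f"exact agreement = {exact_agreement(rater_a, rater_b):.0%}")
```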
Historical Development
Ancient and Early Modern Origins
The earliest known system of standardized testing emerged in ancient China during the Han dynasty (206 BCE–220 CE), where initial forms of merit-based selection for government officials involved recommendations and rudimentary assessments of scholarly knowledge, primarily drawn from Confucian texts.[20] This evolved into a more formalized examination process by the Sui dynasty (581–618 CE), with Emperor Wen establishing the first imperial examinations in 605 CE to recruit civil servants based on uniform evaluations of candidates' mastery of classical literature, ethics, and administrative skills.[21] These tests were administered nationwide at provincial, metropolitan, and palace levels, featuring standardized formats such as essay writing on prescribed topics from the Five Classics and policy memoranda, with anonymous grading to minimize favoritism and corruption.[22]
By the Tang dynasty (618–907 CE), the system had been further standardized, emphasizing rote memorization, poetic composition, and interpretive analysis under timed conditions, and it served as a meritocratic tool for social mobility that bypassed hereditary privilege in favor of demonstrated competence.[23] Success rates were low, with only about 1–5% of candidates passing the highest levels across dynasties, reflecting rigorous norming against elite scholarly standards. The Song dynasty (960–1279 CE) refined the process with printed question papers and multiple-choice elements in some sections, increasing its scale to thousands of examinees per cycle and institutionalizing it as a cornerstone of bureaucratic selection.[23]
In contrast, ancient Western traditions, such as those of Greece and Rome, relied on non-standardized oral examinations and rhetorical displays rather than uniform written tests. Greek education in city-states like Athens involved assessment through debates and recitations evaluated subjectively by teachers, prioritizing dialectical skill over quantifiable metrics.[24] Roman systems similarly featured public orations and legal disputations for entry into professions, lacking the centralized, anonymous scoring of the Chinese exams.[24]
During the early modern period in China (the Ming and Qing dynasties, 1368–1912 CE), the keju system persisted with enhancements such as stricter content uniformity and anti-cheating measures, including secluded testing halls, examining up to 10,000 candidates per session and maintaining predictive validity for administrative roles through empirical correlations with performance in office. In Europe, early modern assessments remained predominantly oral or essay-based in universities, with no widespread adoption of standardized formats until the 19th century, when British administrators drew indirect inspiration from Chinese models for colonial civil services.[25]
19th and Early 20th Century Innovations
In the mid-19th century, educational reformers in the United States began transitioning from oral examinations to standardized written assessments to promote uniformity and objectivity in evaluating student achievement. Horace Mann, secretary of the Massachusetts Board of Education, advocated written tests in 1845 as a means to assess pupil progress across diverse school districts, replacing subjective yearly oral exams with more consistent methods that could reveal systemic educational deficiencies.[26] This shift aligned with broader efforts to professionalize public schooling, though early implementations remained limited in scope and lacked the statistical norming of later standardized tests.[27]
Pioneering psychometric approaches emerged in the late 19th century, when Francis Galton developed early mental tests in the 1880s to quantify human abilities through anthropometric measurements of sensory discrimination, reaction times, and mental imagery assessed via questionnaires distributed to scientific acquaintances. Galton's work, influenced by his studies of heredity and individual differences, established foundational principles for measuring innate capacities empirically, though his tests correlated more with sensory acuity than with higher cognitive functions.[28] These innovations laid the groundwork for differential psychology but were critiqued for overemphasizing physiological traits over intellectual ones.
The early 20th century saw practical applications in educational and admissions testing. The College Entrance Examination Board (CEEB) was founded in 1900 by representatives from 12 universities to standardize college admissions, administering its first essay-based exams in 1901 across nine subjects including mathematics, history, and classical languages, with over 300 students tested nationwide.[29] Concurrently, in 1905, French psychologist Alfred Binet and physician Théodore Simon created the Binet-Simon scale, the first operational intelligence test, featuring 30 age-graded tasks such as following commands, naming objects, and pattern reproduction to identify children with intellectual delays for remedial education, as commissioned by the Paris Ministry of Public Instruction.[30][31] This scale introduced concepts like mental age, emphasizing practical utility over the Galtonian sensory focus, and was revised in 1908 to enhance reliability.
World War I accelerated large-scale standardization with the U.S. Army's Alpha and Beta tests, developed in 1917 under psychologist Robert Yerkes and administered to approximately 1.7 million recruits by 1918 to classify personnel by mental aptitude. The Alpha, a verbal multiple-choice exam covering arithmetic, vocabulary, and analogies, targeted literate soldiers, while the Beta used pictorial and performance tasks for illiterate or non-English-speaking recruits, enabling rapid group testing under time constraints and yielding data on national intelligence distributions that influenced postwar policy debates.[32] These military innovations demonstrated standardized tests' scalability for selection in high-stakes contexts, though the results were later contested for cultural biases favoring educated urban recruits.[33]
Mid-20th Century Expansion and Standardization
The expansion of standardized testing in the mid-20th century was propelled by the post-World War II surge in educational access, particularly in the United States, where the Servicemen's Readjustment Act of 1944—commonly known as the GI Bill—provided tuition assistance, subsistence allowances, and low-interest loans to over 7.8 million veterans by 1956, contributing to a tripling of college enrollments from 1.5 million students in 1940 to 4.6 million by 1950.[34][35] This influx overwhelmed traditional admissions methods reliant on subjective recommendations, prompting greater dependence on objective, scalable assessments like the Scholastic Aptitude Test (SAT), which had originated in 1926 but saw administrations rise from approximately 10,000 test-takers in 1941 to over 100,000 annually by the early 1950s to facilitate merit-based selection amid the applicant boom.[36][27]
A pivotal development occurred in 1947 with the founding of the Educational Testing Service (ETS) through the consolidation of testing operations from the College Entrance Examination Board, the Carnegie Foundation for the Advancement of Teaching, and the American Council on Education; this nonprofit entity, chartered by the New York State Board of Regents, centralized test development, administration, and scoring to enhance psychometric rigor, including the adoption of multiple-choice formats amenable to machine scoring and the establishment of national norms based on representative samples.[37][38] Under leaders like Henry Chauncey, ETS refined procedures for equating test forms across administrations—ensuring scores reflected consistent difficulty levels—and expanded the SAT's scope, administering it to broader demographics while integrating statistical methods like item analysis to minimize content bias and maximize reliability coefficients, which often exceeded 0.90 for total scores.[39][2]
In K-12 education, standardized achievement tests proliferated during this era, becoming embedded in school routines by the 1950s; instruments such as the Stanford Achievement Test (revised in 1941 and widely adopted postwar) and the Iowa Tests of Basic Skills (first published in 1935 and expanded in the 1940s) were administered annually to millions of students in over half of U.S. school districts to benchmark performance against grade-level norms derived from stratified national samples, enabling comparisons of instructional effectiveness across regions.[40][29] These tests emphasized criterion-referenced elements alongside norm-referencing, with subscores in subjects like reading and mathematics yielding percentile ranks that informed curriculum adjustments, though their validity hinged on empirical validation showing correlations of 0.50–0.70 with future academic outcomes.[27] The 1959 introduction of the American College Test (ACT), comprising sections in English, mathematics, social sciences, and natural sciences, further diversified higher-education assessments, competing with the SAT by offering content-specific measures scored on a 1–36 scale.[41]
Standardization processes advanced through psychometric innovations, including the widespread use of normal distribution models for norming—raw scores were converted to standardized scales (e.g., a mean of 500 and standard deviation of 100 for the SAT verbal and math sections)—facilitating inter-year comparability and predictive utility, as evidenced by longitudinal studies linking scores to college grade-point averages with coefficients around 0.50.[1] This era's emphasis on empirical reliability over anecdotal evaluation marked a shift toward data-driven educational decision-making, though it also amplified debates over test-coaching effects and socioeconomic correlations in score variances.[42]
Late 20th to Early 21st Century Reforms
In the United States, the No Child Left Behind Act (NCLB), signed into law on January 8, 2002, represented a major federal push for accountability through expanded standardized testing, requiring states to administer annual assessments in reading and mathematics to students in grades 3 through 8, as well as once in high school, with results disaggregated by subgroups including race, income, English proficiency, and disability status to identify achievement gaps.[43] The law tied school funding and sanctions to adequate yearly progress (AYP) benchmarks, aiming to ensure all students reached proficiency by 2014, which spurred states to develop or refine aligned tests while increasing overall testing volume from sporadic to systematic.[44] Empirical data post-NCLB showed modest gains in national math scores for grades 4 and 8 (rising 11 and 7 points, respectively, from 2003 to 2007 on the National Assessment of Educational Progress) and narrowed gaps between white and minority students, though critics noted incentives for narrowed curricula focused on tested subjects.[45]
Reforms in college admissions testing during this era addressed criticisms of content misalignment and predictive validity. The SAT, administered by the College Board, underwent a significant redesign in 2005, adding a writing section with an essay component that raised the maximum score from 1600 to 2400 and aimed to better reflect high school curricula amid competition from the ACT, whose share of test-takers rose from 28% in 1997 to over 40% by 2007.[46] These changes responded to research questioning the SAT's verbal analogies for cultural bias and their low correlation with college GPA (around 0.3-0.4), prompting shifts toward evidence-based reading and grammar assessment.[47] By 2016, further SAT revisions eliminated the penalty for guessing, emphasized real-world data interpretation, and aligned more closely with Common Core emphases on critical thinking, reflecting broader efforts to enhance fairness and utility. The ACT, in parallel, introduced optional writing in 2005 and expanded its science reasoning section, adapting to demands for multifaceted skill measurement.
The adoption of the Common Core State Standards (CCSS) in 2010 by 45 states and the District of Columbia catalyzed a wave of assessment reforms, replacing many state-specific tests with consortium-developed exams like the Partnership for Assessment of Readiness for College and Careers (PARCC) and Smarter Balanced, which incorporated performance tasks, open-ended questions, and computer-based delivery to evaluate deeper conceptual understanding over rote recall.[48] These standards-driven tests, rolled out from 2014-2015, prioritized skills like evidence-based argumentation in English language arts and mathematical modeling, with initial implementation showing varied state proficiency rates (e.g., 37% in math for grade 8 nationally in early trials) but facing pushback over perceptions of federal overreach and implementation costs exceeding $1 billion across states.[49]
Concurrently, computerized adaptive testing (CAT) gained traction, as seen in Smarter Balanced's format, in which question difficulty adjusts in real time based on prior responses, reducing test length by 20-30% while maintaining reliability (Cronbach's alpha >0.90) through item response theory algorithms that calibrate to individual ability levels.[50] This technological shift, piloted in state assessments post-NCLB, improved precision by minimizing floor and ceiling effects, though equitable access to computer infrastructure remained a challenge in under-resourced districts.[51]
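The adaptive loop can be illustrated with a minimal sketch under a one-parameter (Rasch) IRT model: each next item is the unused one with maximal information at the current ability estimate, and the estimate is updated after every response. The item bank, scripted responses, and simplified update rule below are hypothetical illustrations, not any operational vendor's algorithm.
```python
"""Minimal sketch of a computerized adaptive testing (CAT) loop under a
Rasch (1PL) IRT model. Item bank, responses, and update rule are
simplified, hypothetical stand-ins for operational procedures."""

import math

# Hypothetical item bank: item id -> difficulty (b) on the logit scale.
ITEM_BANK = {"q1": -1.5, "q2": -0.5, "q3": 0.0, "q4": 0.7, "q5": 1.4}


def prob_correct(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response given ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))


def item_information(theta: float, b: float) -> float:
    """Fisher information of a Rasch item; largest when b is near theta."""
    p = prob_correct(theta, b)
    return p * (1.0 - p)


def next_item(theta: float, administered: set[str]) -> str:
    """Pick the unused item that is most informative at the current theta."""
    remaining = {k: b for k, b in ITEM_BANK.items() if k not in administered}
    return max(remaining, key=lambda k: item_information(theta, remaining[k]))


def update_theta(theta: float, b: float, correct: bool, step: float = 0.7) -> float:
    """Crude ability update: move toward the observed response. Operational
    CATs instead re-estimate theta by maximum likelihood or EAP."""
    return theta + step * ((1.0 if correct else 0.0) - prob_correct(theta, b))


# Simulated session with hypothetical scripted responses.
theta, administered = 0.0, set()
scripted = {"q3": True, "q4": True, "q5": False, "q2": True}
for _ in range(4):
    item = next_item(theta, administered)
    administered.add(item)
    theta = update_theta(theta, ITEM_BANK[item], scripted.get(item, False))
    print(f"administered {item}, theta now {theta:+.2f}")
```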
Design and Technical Aspects
Test Construction and Content Development
Test construction for standardized assessments follows a rigorous, multi-stage process guided by psychometric principles to ensure the instruments measure intended constructs with high fidelity, reliability, and minimal distortion from extraneous factors. This involves collaboration among subject matter experts (SMEs), psychometricians, and statisticians, adhering to frameworks like those in the Standards for Educational and Psychological Testing, which emphasize evidence-based design, documentation of procedures, and evaluation of technical quality throughout development.[52][53] The process prioritizes alignment with validated content standards, such as state curricula or college-readiness benchmarks, to support causal inferences about examinee abilities rather than superficial knowledge recall.[54]
Initial content development entails creating a test blueprint that specifies domains, subdomains, item types (e.g., multiple-choice, constructed-response), cognitive demands (e.g., recall vs. application), and item distributions to reflect real-world task relevance. For instance, the College Board's SAT Suite derives its specifications from empirical analyses of skills predictive of postsecondary success, including algebra, problem-solving in science, and evidence-based reading.[55] SMEs, often educators or practitioners, draft items under strict guidelines: stems must pose unambiguous problems, options should include plausible distractors without clues, and content must avoid cultural or linguistic biases that could confound ability measurement.[56] Educational Testing Service (ETS) employs interdisciplinary teams for this work, with items prototyped to target precise difficulty levels—typically 0.3 to 0.7 on the p-value scale (proportion correct)—to optimize information yield across ability ranges.[57]
Draft items undergo iterative reviews for content accuracy, clarity, and fairness, including sensitivity panels to detect potential adverse impacts on demographic subgroups. ETS protocols mandate multiple blind reviews and empirical checks for differential item functioning (DIF), in which statistical models like Mantel-Haenszel or logistic regression identify items performing discrepantly across groups after controlling for overall ability, leading to revision or deletion if discrepancies exceed thresholds (e.g., standardized DIF >1.5).[56][57] Pretesting follows on representative samples—often thousands of examinees mirroring the target population in age, ethnicity, and socioeconomic status—to gather empirical data. Item analyses compute metrics such as point-biserial correlations (ideally >0.3 for discrimination) and internal consistency via Cronbach's alpha (>0.8 for high-stakes tests), informing the selection of items for operational forms.[53]
Final assembly balances statistical properties with content coverage, using algorithms to equate forms for comparability across administrations via methods like item response theory (IRT), which models item parameters (difficulty, discrimination) on a latent trait scale. This ensures scores reflect stable ability estimates, with equating studies verifying mean score invariance within 0.1 standard deviations.[58] Ongoing validation, including post-administration analyses, refines future iterations; for example, ACT and ETS conduct annual reviews correlating item performance with external criteria like GPA to confirm construct validity.[59] These procedures, while reducing measurement error, cannot eliminate all sources of variance, as real-world causal factors like motivation or prior exposure influence outcomes, underscoring the need for multifaceted interpretation of scores.[52]
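The classical item-analysis step described above can be sketched with two statistics: item difficulty (the p-value, or proportion correct) and a corrected item-total point-biserial discrimination. The 0/1 response matrix below is hypothetical and far smaller than a real pretest sample.
```python
"""Minimal sketch of classical item analysis on pretest data: p-values
and corrected item-total point-biserial correlations. Hypothetical data."""

import statistics

# Rows = examinees, columns = items; 1 = correct, 0 = incorrect (hypothetical).
RESPONSES = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
]


def p_value(item: int) -> float:
    """Proportion of examinees answering the item correctly (difficulty)."""
    return statistics.mean(row[item] for row in RESPONSES)


def point_biserial(item: int) -> float:
    """Correlation between the item score and the total score on the
    remaining items (corrected item-total discrimination)."""
    item_scores = [row[item] for row in RESPONSES]
    rest_scores = [sum(row) - row[item] for row in RESPONSES]
    mi, mr = statistics.mean(item_scores), statistics.mean(rest_scores)
    cov = sum((i - mi) * (r - mr) for i, r in zip(item_scores, rest_scores))
    return cov / ((len(item_scores) - 1)
                  * statistics.stdev(item_scores) * statistics.stdev(rest_scores))


for j in range(4):
    print(f"item {j}: p = {p_value(j):.2f}, r_pb = {point_biserial(j):+.2f}")
```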
Administration and Scoring Procedures
Standardized tests demand uniform administration protocols to guarantee score comparability and minimize extraneous influences on performance. These protocols, outlined in professional guidelines such as the Standards for Educational and Psychological Testing (2014), require test administrators to adhere strictly to developer-specified instructions, including precise timing of sections, standardized verbal directions, and controlled environmental conditions like quiet rooms and proper lighting.[52][53] Trained proctors oversee sessions to enforce rules against unauthorized aids, communication, or disruptions, with at least one proctor per group of examinees typically mandated to uphold security.[60] Irregularities, such as suspected cheating, trigger documented incident reports and potential score invalidation to preserve test integrity.[61] Accommodations for disabilities follow established criteria, ensuring equivalent access without altering the constructs being tested, per guidelines emphasizing fairness over advantage.[52] Test security extends to handling materials before and after administration, with secure storage and chain-of-custody procedures to prevent tampering or leaks.[62]
Scoring procedures prioritize objectivity and consistency so that scores reflect ability rather than rater variability. Objective items, such as multiple-choice questions, undergo automated scoring via optical mark recognition or digital systems, yielding raw scores as the count of correct responses, sometimes adjusted for guessing by deducting a fraction of a point for each incorrect answer.[63] Raw scores are converted to scaled scores through equating processes—statistical methods like linear or equipercentile equating—that account for differences in form difficulty, maintaining score meaning across administrations and yielding metrics like percentiles or standard scores with means of 100 and standard deviations of 15 or 20.[64][65] Constructed-response items employ analytic rubrics with predefined criteria, scored by trained human raters under dual-rating systems in which interrater agreement targets 80-90% exact or adjacent matches, with adjudication of discrepancies.[66] ETS guidelines for such scoring stress rater calibration sessions, ongoing monitoring, and empirical checks for bias to keep reliability coefficients above 0.80.[66] Final scores aggregate section results, sometimes weighted, and undergo psychometric review for anomalies before release, typically within weeks via secure portals.[67]
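Two of these scoring steps, formula scoring with a guessing correction and linear equating onto a reference form's scale, can be sketched briefly; the item counts, option count, and form statistics below are hypothetical.
```python
"""Minimal sketch of formula scoring (guessing correction) and linear
equating of raw scores onto a reference form. Constants are hypothetical."""


def formula_score(correct: int, incorrect: int, options: int = 5) -> float:
    """Classic guessing correction: deduct 1/(k-1) point per wrong answer
    on k-option multiple-choice items; omitted items are not penalized."""
    return correct - incorrect / (options - 1)


def linear_equate(raw: float, new_mean: float, new_sd: float,
                  ref_mean: float, ref_sd: float) -> float:
    """Map a raw score from a new form onto the reference form's scale by
    matching means and standard deviations (linear equating)."""
    return ref_mean + ref_sd * (raw - new_mean) / new_sd


# 44 correct, 12 wrong, 4 omitted on 60 five-option items (hypothetical).
raw = formula_score(correct=44, incorrect=12)
print(f"formula-corrected raw score: {raw:.1f}")

# Hypothetical form statistics: the new form ran slightly harder.
equated = linear_equate(raw, new_mean=38.0, new_sd=9.5,
                        ref_mean=40.0, ref_sd=10.0)
print(f"equated to reference-form scale: {equated:.1f}")
```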
Standardization and Norming Processes
Standardization in psychological and educational testing entails establishing uniform protocols for test administration, scoring, and interpretation to ensure comparability across individuals and groups. This process begins with the development of test items through rigorous procedures, including content validation by subject-matter experts and pilot testing to refine items for clarity and difficulty. The test is then field-tested on a large, representative sample under controlled conditions—such as standardized instructions, timing, and environment—to collect empirical data for scaling and norm establishment.[68]
Norming follows field testing and involves administering the test to a norm group, typically a stratified random sample of thousands of individuals matched to the target population's demographics, including age, gender, ethnicity, socioeconomic status, and geographic region. For national tests like the SAT, the norm group comprises over 200,000 college-bound high school seniors annually, reflecting the intended test-taker pool. Raw scores from this group are analyzed statistically to derive descriptive statistics, such as the mean and standard deviation, often assuming a normal distribution for transforming scores into standard scores (e.g., a mean of 100 and standard deviation of 15 for intelligence tests like the Wechsler Adult Intelligence Scale). Percentile ranks, stanines, and other derived metrics are computed to indicate relative standing within the norm group.[69][70]
Norm-referenced standardization, prevalent in aptitude and achievement tests, interprets scores relative to the norm group's performance, enabling comparisons of an individual's standing (e.g., placement in the top 10%). In contrast, criterion-referenced norming evaluates mastery against fixed performance standards, such as proficiency cut scores determined by expert panels using methods like Angoff or bookmark procedures, without direct peer comparison. Many modern standardized tests hybridize these approaches; for instance, state accountability exams under the U.S. Every Student Succeeds Act (2015) set criterion-based proficiency levels but may report norm-referenced percentiles for additional context. Norms must be updated periodically—every 5–15 years—to account for shifts in population abilities, as seen in IQ tests where the Flynn effect necessitates upward adjustments of approximately 3 IQ points per decade. Failure to update norms can lead to score inflation or deflation, undermining validity.[71][72]
Equating ensures comparability across multiple test forms or administrations, using techniques like equipercentile methods or item response theory (IRT) to adjust for minor content variations while preserving the underlying ability scale. This is critical for high-stakes tests, where statistical linking maintains score stability; for example, the Graduate Record Examination (GRE) employs IRT-based equating on a continuous scale derived from field-test data. Overall, these processes prioritize empirical rigor to minimize measurement error, though critiques note potential biases if norm groups inadequately represent subgroups, prompting ongoing refinements via diverse sampling and differential item functioning analyses.[73]
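The core norming arithmetic can be sketched as follows: raw scores from a norm sample are transformed into deviation standard scores (mean 100, SD 15), percentile ranks, and z-based stanines. The tiny norm sample below is a hypothetical stand-in for the stratified samples of thousands used operationally.
```python
"""Minimal sketch of deriving norm-referenced metrics from a norm sample:
standard scores, percentile ranks, and stanines. Hypothetical data."""

import statistics

# Hypothetical raw scores from a norm group.
NORM_SAMPLE = [31, 35, 38, 40, 42, 44, 45, 47, 49, 52, 54, 58]
MEAN = statistics.mean(NORM_SAMPLE)
SD = statistics.stdev(NORM_SAMPLE)


def to_norms(raw: float) -> dict:
    """Convert a raw score into derived norm-referenced metrics."""
    z = (raw - MEAN) / SD
    standard = 100 + 15 * z                      # deviation-score scale
    pct = 100 * sum(s <= raw for s in NORM_SAMPLE) / len(NORM_SAMPLE)
    stanine = min(9, max(1, round(2 * z + 5)))   # nine-point band score
    return {"standard": round(standard), "percentile": round(pct),
            "stanine": stanine}


for raw in (38, 45, 54):
    print(raw, to_norms(raw))
```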
Validity, Reliability, and Empirical Foundations
Statistical Validity and Reliability Metrics
Standardized tests assess reliability through metrics that quantify score consistency, such as internal consistency via Cronbach's alpha (or Kuder-Richardson 20 for dichotomous items), test-retest correlations, alternate-forms reliability, and inter-rater agreement for constructed-response sections. Cronbach's alpha measures how well items correlate to form a unidimensional scale, with values above 0.70 deemed acceptable and above 0.90 indicating excellent reliability; for major admissions tests, alphas typically exceed 0.90, reflecting low measurement error and high precision.[74][75] For the ACT Composite score, reliability estimates reach 0.95 for 10th graders and 0.96 for 11th graders, based on large-scale administrations.[76] Similarly, SAT sections show coefficients from 0.89 to 0.93 across internal consistency and test-retest methods.[75] GRE sections exhibit Verbal Reasoning reliability of 0.92, Quantitative Reasoning of 0.93, Analytical Writing of 0.79 (lower due to subjective scoring), and combined Verbal+Quantitative of 0.96.[77]
Test-retest reliability, evaluating score stability over short intervals (e.g., 2-3 weeks), is particularly relevant for aptitude-oriented standardized tests measuring relatively stable cognitive traits, yielding coefficients often above 0.80 in achievement contexts.[78] Alternate-forms reliability, used when parallel test versions exist, similarly supports consistency, as seen in equating processes for tests like the SAT designed to minimize form-to-form variance. These metrics collectively ensure that true-score variance dominates error variance, with reliability informing standard error of measurement calculations (e.g., SEM = SD * sqrt(1 - reliability)), which for high-reliability tests like the ACT yield narrow confidence intervals around scores.[79]
Validity metrics evaluate whether tests measure intended constructs, encompassing content validity (alignment to domain specifications via expert judgment), criterion validity (correlations with external outcomes), and construct validity (convergent/discriminant evidence, factor structure). Predictive criterion validity for college admissions tests is gauged by correlations with first-year GPA (FYGPA), ranging from 0.51 to 0.67 for undergraduate tests when corrected for range restriction; SAT and ACT scores alone predict FYGPA at approximately 0.30-0.40 observed, rising to 0.50+ adjusted, and combining them with high school GPA raises prediction to 0.50-0.60.[80][75] For graduate tests like the GRE, observed correlations with first-year law GPA are 0.33 for Verbal+Quantitative, adjusting to 0.54 after correcting for selection effects.[77] Construct validity evidence includes factor analyses confirming loading on general cognitive ability ("g"), with standardized tests correlating 0.70-0.80 with other g-loaded measures, supporting their role in assessing reasoning over narrow skills.[81]
| Test Section | Reliability Coefficient (e.g., Cronbach's α or Equivalent) | Source |
|---|---|---|
| SAT (overall sections) | 0.89-0.93 | [75] |
| ACT Composite (11th grade) | 0.96 | [76] |
| GRE Verbal+Quantitative | 0.96 | [77] |
| GRE Analytical Writing | 0.79 | [77] |
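As a minimal sketch of how the internal-consistency and SEM figures above are computed, the following applies Cronbach's alpha (equivalent to KR-20 for 0/1 items) and SEM = SD * sqrt(1 - reliability) to a small hypothetical response matrix; with so few items the resulting alpha falls well below the 0.90+ values reported for operational tests.
```python
"""Minimal sketch of Cronbach's alpha (KR-20 for dichotomous items) and
the standard error of measurement. The response matrix is hypothetical."""

import statistics

# Rows = examinees, columns = items; 1 = correct, 0 = incorrect (hypothetical).
RESPONSES = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
]


def cronbach_alpha(matrix: list) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = len(matrix[0])
    item_vars = [statistics.variance([row[j] for row in matrix]) for j in range(k)]
    totals = [sum(row) for row in matrix]
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))


def sem(sd_total: float, reliability: float) -> float:
    """Standard error of measurement for a given score SD and reliability."""
    return sd_total * (1 - reliability) ** 0.5


alpha = cronbach_alpha(RESPONSES)
sd_total = statistics.stdev([sum(row) for row in RESPONSES])
print(f"alpha = {alpha:.2f}, SEM = {sem(sd_total, alpha):.2f} raw-score points")
```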