
Overall Position

The Overall Position (OP) was a numerical rank employed in Queensland, Australia, to assess senior secondary students' eligibility for tertiary entrance, assigning ranks from 1 (highest achievers, representing roughly the top 2–3% statewide) to 25 based on moderated internal school assessments across multiple subjects and performance in the Queensland Core Skills (QCS) Test. Introduced in 1992 as a departure from purely exam-based national models, the OP emphasized school-based evaluations calibrated through statewide moderation processes to account for variations in grading standards between institutions. The system's design aimed to provide a holistic measure of student capability by integrating subject-specific achievements into an overall rank, with Field Positions (FPs) offering supplementary rankings in five broad study areas for targeted admissions. Universities primarily used OP ranks for selection, where lower numbers (e.g., OP 1–5) granted access to competitive programs, though non-OP pathways existed for vocational or mature-age applicants. This Queensland-unique approach persisted for over two decades, distinguishing the state from interstate systems reliant on external exams such as the Higher School Certificate, but faced criticism for potential inconsistencies arising from heavy dependence on internal assessments, which some argued could inflate ranks in less rigorous school environments despite moderation efforts. In 2015, the Queensland Government announced the OP's phase-out to align with national standards, culminating in its replacement by the Australian Tertiary Admission Rank (ATAR) for students completing Year 12 from 2020 onward, incorporating a greater proportion of external examinations to enhance comparability and perceived objectivity. The transition addressed longstanding concerns about interstate equity, as OP ranks were not directly convertible to ATAR equivalents without approximations, limiting mobility for Queensland graduates.
While the OP facilitated broader access to university for regional and diverse student cohorts through its moderation framework, its abolition marked a shift toward standardized, exam-heavy evaluation amid debates over balancing teacher judgment with verifiable rigor.

History and Development

Origins and Introduction in 1992

The Overall Position (OP) system in Queensland was introduced in 1992 as a replacement for the previous Tertiary Entrance (TE) score, which had been used since the early 1970s to determine university admissions via a single aggregate number derived from scaled school assessments. The TE score aimed to quantify achievement but faced criticism over the fairness of its scaling and its limited account of variations in school-based assessments and broader senior secondary achievements. The shift to OPs sought to establish a more robust statewide mechanism that integrated moderated internal assessments with a common external reference, emphasizing relative positions among students rather than absolute scores. The origins of the OP system trace directly to an independent review of Queensland's tertiary entrance procedures commissioned in 1990 by the Minister for Education and conducted by Professor Nancy Viviani. Viviani's report, titled The Review of Tertiary Entrance in Queensland 1990, highlighted systemic issues including public distrust in the TE system's fairness, inconsistencies in subject scaling, and inadequate representation of diverse student pathways in senior studies. Among its 10 key recommendations, the review advocated a banded rank-order system to better capture overall achievement across Authority-approved subjects, incorporating statistical moderation to ensure comparability across schools and regions. These proposals addressed concerns that the TE score overly penalized students in less advantaged settings and failed to align with evolving educational practices emphasizing continuous assessment. Implementation occurred under the oversight of the Board of Senior Secondary School Studies, which developed the operational guidelines for calculating OPs on a scale from 1 (highest achieving) to 25 (lowest), derived from students' positions in up to six senior subjects. The first cohort of students received OP ranks in 1992, marking the system's debut for tertiary selection purposes.
This introduction coincided with increased retention rates and a push for equitable access to higher education, with OPs designed to provide universities a standardized metric less susceptible to raw-score distortions. Early distributions showed a broad spread, with approximately 15% of eligible students attaining OP 1 or 2, reflecting the system's intent to differentiate top performers while grouping lower bands to minimize marginal distinctions.

Evolution and Key Reforms Until 2019

The Overall Position (OP) system, established in 1992 following recommendations from the 1990 Viviani Review of Tertiary Entrance, initially relied on school-based assessments in Authority subjects, summarized as Subject Achievement Indicators (SAIs) and combined with the Queensland Core Skills (QCS) Test for inter-school scaling. Early refinements in the 1990s addressed potential inconsistencies, including the introduction of monitoring processes to identify and mitigate manipulation of SAIs by schools, such as paired-comparison methods for verifying achievement levels. Additionally, procedures for subjects with small cohorts (fewer than 10 students) were adjusted to ensure reliable rankings without undue influence from limited data. These measures aimed to preserve the system's emphasis on moderated internal assessments while maintaining statewide comparability. By 1994, annual statewide reviews of school-assigned grades via random sampling of student work were implemented to verify moderation accuracy and standardize reporting across institutions. Participation patterns evolved modestly over the 2000s, with extension subjects increasing from under 3% to 6.5% of OP-eligible students between 2003 and 2012, yet the core ranking methodology—aggregating SAIs from up to six subjects and applying QCS-derived scaling—remained intact without structural overhaul. A 2001 review of senior certification, building on earlier Pitman analyses, highlighted emerging challenges from diverse study pathways but prompted no immediate recalibration of OP calculations. The QCS Test itself underwent review in 2011, which reaffirmed its role in providing a common metric for adjusting for cohort differences across schools, thereby supporting equitable OP assignments.
Despite rising concerns over the system's alignment with national standards—evidenced by a 2014 review noting its dated reliance on categorical ranks amid shifting enrollment trends—the OP framework persisted unchanged for the final cohort in 2019, with over 30,000 students receiving ranks from 1 (highest) to 25 (lowest) based on the established process. This stability reflected the system's design resilience but also underscored critiques of its resistance to the broader psychometric updates seen elsewhere in Australia.

Assessment and Scaling Methodology

Subject Achievement Indicators (SAIs) and Subject Results

Subject Achievement Indicators (SAIs) represent a numerical measure of an OP-eligible student's relative performance within their school's subject-group for a specific subject, serving as the foundational input derived from subject results for calculating Overall Positions. Teachers assign SAIs on a scale from 400, indicating the highest achiever in the subject-group, down to 200 for the lowest, with intermediate values reflecting the student's position relative to peers based on overall achievement across the two years of study. These indicators are determined holistically from subject results, which encompass school-based assessments moderated for consistency and, where applicable, external examinations, ensuring SAIs capture comparative standing rather than absolute marks. In subject-groups with 14 or more OP-eligible students, SAIs are mandatory; smaller groups may use alternative, very-small-group procedures to approximate relative achievement. Subject results feed into SAI assignment by providing the empirical basis for teacher judgments of student ranking within the cohort, emphasizing comparative performance over time rather than isolated exam outcomes. For instance, a student excelling consistently in assessments for a subject like English would receive a high SAI, such as 350–400, positioning them favorably against school peers, independent of the subject's statewide difficulty. This process occurs annually for Authority subjects, with SAIs submitted to the Queensland Curriculum and Assessment Authority (QCAA) as raw data for subsequent scaling. Unlike letter grades (e.g., A to E), SAIs offer finer granularity, allowing nuanced differentiation among students in the same subject, which is critical for aggregating results across multiple subjects—typically six for OP eligibility—into a composite measure.
The integration of SAIs from subject results ensures that overall achievement reflects breadth across disciplines while accounting for school-specific contexts, though this relies on teacher accuracy in ranking, moderated by QCAA oversight to maintain comparability. SAIs do not directly equate to statewide standings but are adjusted later via within-school and inter-school scaling using Queensland Core Skills (QCS) Test data to normalize for cohort strength. Empirical reviews of the system, such as those by the Australian Council for Educational Research, have noted that SAIs effectively capture relative subject mastery but can be influenced by school practices, underscoring the need for rigorous moderation to uphold validity. By prioritizing cohort-relative indicators over raw scores, this approach aims to mitigate variability in assessment rigor across schools while preserving the role of sustained subject performance in OP determination.
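As an illustration of how subject results might be spread across the 400–200 SAI band, the following sketch linearly spaces students by their overall results. This is a simplification for exposition only: actual SAI assignment rested on holistic teacher judgment, not a formula, and the spacing here is an assumption.

```python
def assign_sais(results, top=400, bottom=200):
    """Map overall subject results (higher = better) onto the SAI band,
    giving the best student 400 and the weakest 200 while preserving
    relative gaps between students. Purely illustrative."""
    lo, hi = min(results.values()), max(results.values())
    span = (hi - lo) or 1  # guard against all-identical results
    return {student: round(bottom + (r - lo) / span * (top - bottom))
            for student, r in results.items()}

# Hypothetical subject-group results out of 100
cohort = {"A": 92.0, "B": 85.5, "C": 85.5, "D": 61.0}
sais = assign_sais(cohort)  # A -> 400, D -> 200; B and C tie
```

Note that ties in the underlying results produce identical SAIs, mirroring the system's allowance for equal placements within a subject-group.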

Within-School Scaling

Within-school scaling, the initial phase of the Overall Position (OP) calculation process, standardizes Subject Achievement Indicators (SAIs) across different subjects within a single school to enable equitable comparisons of student performance. SAIs, ranging from 400 for the highest achiever to 200 for the lowest, are assigned by teachers based on school-based assessments in Authority subjects, with at least 14 OP-eligible students per subject-group required for standard scaling. This adjustment accounts for variations in subject difficulty and cohort performance within the school, using the Queensland Core Skills (QCS) Test as a common benchmark to align SAIs on a shared scale. The process employs QCS Test results—transformed into scaling scores fitted to a Gaussian distribution with a mean of 175 and a standard deviation of 25 (typically ranging from 75 to 275)—to derive parameters for each subject-group. For each group, the mean and variability of the SAIs are mapped linearly to match those of the corresponding QCS scaling scores, preserving the relative rank order and internal gaps among students within the subject while repositioning the group's overall level. The formula for this linear transformation is x'_i = (x_i - \mu) / \sigma \cdot \sigma' + \mu', where x_i is the original SAI, \mu and \sigma are the SAI mean and standard deviation, and \mu' and \sigma' are the QCS-derived targets. This produces scaled SAIs that reflect a student's performance relative to the school's overall ability, as measured by QCS. Scaled SAIs from the best 20 semester units of credit (requiring at least three subjects studied over four semesters) are then averaged to yield a student's Overall Achievement Indicator (OAI), indicating their aggregate position within the school cohort. In small subject-groups (fewer than 10 students), scaling defaults to parameters estimated from large statewide groups to ensure reliability, while intermediate groups (10–13 students) blend school-specific and statewide methods.
This within-school OAI serves as input for subsequent inter-school scaling, mitigating biases from school-specific subject selections or assessment rigor. The approach, implemented by the Queensland Curriculum and Assessment Authority (QCAA), was designed to reduce distortions from uneven subject cohort strengths, though it relies on the QCS Test's validity as a neutral anchor.
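The linear mapping above can be sketched in a few lines, assuming simple group means and standard deviations stand in for the QCAA's actual parameter estimation:

```python
import statistics

def scale_group(sais, qcs_scores):
    """Within-school step: linearly map a subject-group's SAIs onto the
    mean and spread of the same group's QCS scaling scores, preserving
    rank order and relative gaps (x' = (x - mu)/sigma * sigma' + mu')."""
    mu, sigma = statistics.mean(sais), statistics.pstdev(sais)
    mu_t, sigma_t = statistics.mean(qcs_scores), statistics.pstdev(qcs_scores)
    return [(x - mu) / sigma * sigma_t + mu_t for x in sais]

group_sais = [400, 300, 200]       # teacher-assigned SAIs for the group
group_qcs = [200.0, 175.0, 150.0]  # the same group's QCS scaling scores
scaled = scale_group(group_sais, group_qcs)
```

Because the transformation is linear, the students' order and the ratios of the gaps between them are unchanged; only the group's location and spread move to match its QCS performance.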

Inter-School Scaling Using QCS Test

The Queensland Core Skills (QCS) Test, administered to OP-eligible students, served as a common external benchmark to facilitate inter-school scaling of achievement indicators, ensuring that internal school assessments could be compared fairly across the state. The test consisted of four papers assessing 49 common curriculum elements through multiple-choice, short-response, and extended writing tasks, with raw scores transformed into scaling scores fitted to a Gaussian distribution with a mean of 175 and a standard deviation of 25. These group-level QCS scaling scores, rather than individual results, informed adjustments to Subject Achievement Indicators (SAIs) and Overall Achievement Indicators (OAIs), mitigating over-reliance on a single high-stakes external exam while aligning school-specific rankings to statewide standards. Inter-school scaling occurred in a two-stage process. In the first stage, within each school, teacher-assigned SAIs (ranging from 400 down to 200 based on relative rankings in subject-groups) were scaled to a common metric using the group's aggregate QCS performance, producing scaled SAIs on a statewide scale (mean 175, standard deviation 25) via the linear transformation X' = (X - M) \cdot \frac{S'}{S} + M', where M and S represent the original mean and spread, and M' and S' are the target state parameters. OAIs were then derived by weighting and averaging the best 20 semester units of these scaled SAIs (requiring at least three subjects studied for four semesters each). In the second stage, for schools with 20 or more OP-eligible students, OAIs underwent inter-school adjustment by factors derived from the school's adjusted QCS mean and standard deviation relative to state averages, effectively calibrating for the school's overall cohort strength. Smaller schools (fewer than 16 OP-eligible students) received no second-stage adjustment, to avoid distortion from limited data, while intermediate-sized schools (16–19 students) used weighted averages of raw and scaled OAIs.
This QCS-based approach assumed a sufficiently strong aggregate correlation between test performance and overall achievement, enabling its combination with school-based SAIs and Levels of Achievement for robust statewide banding into 25 OP ranks. Subject-groups were similarly categorized by size—small groups (under 10 students) relied on parameters estimated from larger cohorts and teacher placements, while intermediate groups (10–13 students) blended methods—to maintain consistency without over-penalizing variability in enrollment. The methodology prioritized group parameters over individual QCS outcomes, as the test's design targeted curriculum-wide skills rather than subject-specific content, supporting equitable inter-school moderation until the OP system's phase-out after the 2019 cohort.
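The school-size rules above can be sketched as follows. The blend weights for the 16–19-student case are an assumption for illustration; the QCAA's actual weighting scheme was not published in this form.

```python
def second_stage_oai(first_stage_oai, qcs_adjusted_oai, n_op_eligible):
    """Apply the second (inter-school) stage according to school size:
    fewer than 16 OP-eligible students: keep the first-stage OAI;
    16-19: weighted average of the two (illustrative linear weights);
    20 or more: use the fully QCS-adjusted OAI."""
    if n_op_eligible < 16:
        return first_stage_oai
    if n_op_eligible < 20:
        w = (n_op_eligible - 15) / 5.0  # hypothetical blend weight
        return (1 - w) * first_stage_oai + w * qcs_adjusted_oai
    return qcs_adjusted_oai
```

The point of the size threshold is statistical: with very few students, a school's QCS mean and spread are too noisy to serve as reliable adjustment factors, so the first-stage result is retained unchanged.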

Final Assignment of Overall Positions (OPs)

The final assignment of OPs occurs after the computation of scaled Overall Achievement Indicators (OAIs) for OP-eligible students, establishing a statewide rank order based on these indicators. OP eligibility requires full-time completion of Year 12, study of at least three Authority subjects for four semesters each, accumulation of at least 20 semester units of credit in Authority subjects, and participation in the Queensland Core Skills (QCS) Test. Scaled OAIs, derived from the weighted mean of the best 20 semester units of scaled Subject Achievement Indicators (SAIs), incorporate both within-school and inter-school adjustments via QCS Test data to ensure comparability across the cohort. This statewide ranking places all eligible students in a single ordered list, preserving relative achievements while accounting for variations in school performance and subject difficulty. To derive the 25 OP ranks (OP1 as the highest, representing superior achievement, to OP25 as the lowest), the ranked scaled OAIs are divided into 25 bands, with boundaries determined annually to maintain consistent standards of achievement across years rather than fixed cohort percentages. Banding employs statistical methods such as multiple regression using dummy variables for Authority subjects, followed by linear transformations to align distributions and minimize year-to-year differences in regression coefficients. QCS Test scores, equated across years using item response theory (IRT), further inform boundary setting through linear matching of OAI distributions, combining estimates from the regression and equating approaches for reliability. This ensures that OP bands reflect enduring levels of overall academic performance, with the 25-band structure justified as providing sufficient precision for tertiary entrance decisions without over-differentiating minor variations.
Anomaly detection scrutinizes individual assignments by comparing a student's OP with peers' results in similar subject combinations, SAIs, and QCS scores; outliers may be adjusted by a Queensland Studies Authority (now QCAA) committee, typically by one position, to address inconsistencies arising from scaling anomalies or errors. For small schools (fewer than 16 OP-eligible students), second-stage scaling is omitted, relying instead on first-stage scaled SAIs for OAI computation and final ranking. The resulting OPs thus aggregate scaled achievements into discrete ranks, enabling statewide comparability for university admissions until the system's replacement by the ATAR in 2020.
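Once the annual boundaries are fixed, band assignment itself reduces to counting how many cut-offs a student's scaled OAI clears. The sketch below uses four invented cut-offs (producing five bands) purely to keep the example short; the real system used 24 cut-offs for 25 bands.

```python
import bisect

def assign_op(scaled_oai, cutoffs):
    """Assign a band given cut-offs listed from the top band's threshold
    downward. An OAI at or above a cut-off clears it; clearing the top
    cut-off yields band 1, clearing none yields the lowest band.
    Cut-off values used with this function are invented examples."""
    ascending = sorted(cutoffs)
    cleared = bisect.bisect_right(ascending, scaled_oai)  # cut-offs met
    return 1 + (len(cutoffs) - cleared)

cuts = [240.0, 230.0, 220.0, 210.0]  # hypothetical annual boundaries
```

Because the boundaries are re-derived each year to hold achievement standards constant, the same OAI value could fall in different bands in different years; the function above takes that year's boundaries as input rather than hard-coding them.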

Interpretation and Application

OP Bands and Percentile Equivalents

The Overall Position (OP) system ranks eligible students into 25 bands, from OP 1 (highest achieving) to OP 25 (lowest), based on their aggregated and scaled performances across subjects. These bands do not represent fixed percentile intervals, as the proportion of students within each band varies annually depending on cohort size, achievement levels, and the statistical processes of within- and between-school scaling. Instead, the bands reflect a statewide rank order where the top bands capture a small share of exceptional performers and the middle bands encompass the majority. Percentile equivalents for OP bands are cohort-specific and derived from cumulative distributions of eligible students. In 2019, for example, among 17,638 OP-eligible students, OP 1 included 503 students (2.85% of the cohort), placing recipients in the top 2.85%; OP 1–6 cumulatively covered the top 28.55%; and OP 1–15 the top 82.92%. The following table summarizes the 2019 state distribution, illustrating typical band sizes and cumulative percentiles:
OP Band | Students (%) | Cumulative Percentile (Top %)
1       | 2.85         | 2.85
2       | 4.03         | 6.88
3       | 4.61         | 11.49
4       | 5.34         | 16.83
5       | 5.67         | 22.50
6       | 6.05         | 28.55
7       | 6.20         | 34.74
8       | 6.60         | 41.34
9       | 6.55         | 47.90
10      | 6.40         | 54.29
11      | 6.16         | 60.45
12      | 5.98         | 66.42
13      | 5.74         | 72.17
14      | 5.52         | 77.69
15      | 5.23         | 82.92
16      | 4.96         | 87.87
17      | 4.10         | 91.97
18      | 2.81         | 94.78
19      | 1.98         | 96.76
20      | 1.61         | 98.37
21–25   | 1.63         | 100.00
For interstate comparability, OP bands have been approximately equated to Australian Tertiary Admission Rank (ATAR) percentiles, which directly represent the percentage of the cohort outperformed (e.g., ATAR 90.00 indicates the top 10%). QTAC provided a guide for 2019 OPs based on the lowest ATAR scores within each band, showing OP 1 aligning with ATAR 98.85 or higher (a top ~1.15% threshold, though actual cohort placement varied), OP 5 with 91.15 or higher, and OP 21–25 with 30.00 or below. These mappings are not exact translations, as the OP derives from subject-specific scaling while the ATAR uses aggregated scaled marks, but they facilitated cross-jurisdictional admissions until the OP's phase-out in 2020. Similar 2018 equivalences from UAC confirmed OP 1 corresponding to ATAR 99.00–99.95 and OP 6 to 90.00–91.00.
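The cumulative column in the table above is simply a running sum of the per-band shares, which can be reproduced directly. Because the published per-band shares are rounded to two decimals, the running sums can drift a few hundredths from the published cumulative figures:

```python
from itertools import accumulate

# 2019 statewide band shares (%) for OP1..OP20 plus the combined OP21-25 band
shares = [2.85, 4.03, 4.61, 5.34, 5.67, 6.05, 6.20, 6.60, 6.55, 6.40,
          6.16, 5.98, 5.74, 5.52, 5.23, 4.96, 4.10, 2.81, 1.98, 1.61, 1.63]
cumulative = [round(c, 2) for c in accumulate(shares)]
# cumulative[0] is the OP1 share (top 2.85%); cumulative[5] covers OP1-6
```

Reading cumulative[k] as "top X% of the cohort" gives the percentile interpretation of holding OP (k+1) or better in that year.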

Field Positions (FPs) for Specialized Ranks

Field Positions (FPs) supplemented the Overall Position (OP) by providing a field-specific rank of achievement on a scale from 1 (highest) to 10 (lowest), derived from scaled Subject Achievement Indicators (SAIs) in relevant subjects. These positions were calculated for up to five distinct fields, each emphasizing particular knowledge and skills, using unequally weighted combinations of SAIs from subjects aligned with the field's focus, aggregated into Field Achievement Indicators (FAIs) that were then ranked statewide among eligible students. Eligibility required students to be OP-eligible and to have accumulated at least 60 weighted units of credit in subjects contributing to a given field, with subject weights ranging from 0 to 5 based on their relevance to the field's core competencies. The five fields were defined as follows: Field A (extended written expression), Field B (short written communication), Field C (basic numeracy), Field D (complex problem-solving involving mathematics or science), and Field E (practical performance in physical or creative arts). For instance, subjects like English contributed heavily to Fields A and B because of their emphasis on communication skills, while Mathematics and Physics weighed more in Fields C and D for their quantitative and analytical demands. Unlike the OP, which integrated all eligible subjects into a holistic rank, FPs isolated performance in these domains without inter-field scaling via the QCS Test, relying instead on the initial within- and between-school scaling of SAIs. In tertiary entrance applications, FPs served to differentiate candidates with identical OPs, particularly for competitive programs requiring domain-specific aptitudes, such as engineering (favoring strong Field C or D positions) or arts degrees (emphasizing Fields A, B, or E). Universities applied FPs at their discretion, often as a tie-breaker when selection pressure was high, though their utility diminished in less competitive scenarios where OPs sufficed.
This mechanism aimed to align admissions with course prerequisites but was critiqued for its potential to undervalue broader capabilities in favor of narrow field strengths, as evidenced by the limited reliance on FPs in final selection decisions. Students received FPs on their Senior Statement alongside their OP, enabling targeted use in QTAC applications from the system's introduction in 1992 through its phase-out in 2019.
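The FAI aggregation described above can be sketched as a weighted mean of scaled SAIs. The subject weights below are invented for illustration; the official 0–5 weight tables were published by QTAC and are not reproduced here.

```python
def field_achievement_indicator(scaled_sais, field_weights):
    """Weighted mean of a student's scaled SAIs for one field, using
    per-subject weights of 0-5. Returns None when none of the studied
    subjects contribute to the field. Weights are hypothetical."""
    total = sum(field_weights.get(subj, 0) for subj in scaled_sais)
    if total == 0:
        return None
    return sum(sai * field_weights.get(subj, 0)
               for subj, sai in scaled_sais.items()) / total

# Hypothetical Field C (basic numeracy) weights and one student's scaled SAIs
weights_field_c = {"Mathematics B": 5, "Physics": 4, "English": 1}
student_sais = {"Mathematics B": 320.0, "Physics": 300.0, "English": 280.0}
fai = field_achievement_indicator(student_sais, weights_field_c)  # 308.0
```

The unequal weights are what make an FP field-specific: a strong Mathematics result dominates this student's Field C indicator, while the same SAIs combined under a Field A weighting would emphasize English instead.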

Strengths and Empirical Outcomes

Advantages in Reducing Exam Stress and Promoting Broad Assessment

The OP system's reliance on school-based assessments for subject achievement indicators minimized dependence on external, high-stakes examinations, thereby distributing evaluative pressure across an extended period rather than concentrating it on discrete testing events. Queensland's framework explicitly avoided public subject exams, allowing students to demonstrate mastery through ongoing tasks that accommodated varied preparation timelines and reduced both last-minute cramming and the performance volatility caused by acute stressors such as test-day anxiety. This continuous model promoted sustained academic engagement, as evidenced by the system's design to triangulate multiple data points—including teacher judgments of fine-grained differences—over single-snapshot metrics prone to amplifying pressure. By incorporating diverse assessment instruments such as folios, practical applications, and oral components, the process enabled a multifaceted evaluation of proficiency, extending beyond rote recall to include applied skills and the contextual understanding inherent in subject curricula. Teachers exercised flexibility in selecting techniques tailored to local contexts and individual students, yielding a broader profile of abilities that aligned with holistic educational goals rather than exam-centric formats. Moderation mechanisms, achieving over 91% agreement in random sampling by 2013, ensured these varied inputs contributed reliably to Overall Positions without sacrificing comparability. The integration of the QCS Test primarily for inter-school scaling, rather than direct achievement scoring, further buffered students from subject-specific exam burdens, preserving emphasis on school-level breadth while curbing systemic overemphasis on standardized testing. This configuration supported student motivation by centering judgments with educators familiar with individual trajectories, fostering environments where broad competencies could be nurtured without the disincentives of narrow, high-pressure evaluations.

Evidence of Predictive Validity for Tertiary Success

Studies conducted at Queensland universities have established that the Overall Position (OP) rank exhibits moderate to strong predictive validity for first-year tertiary performance, including grade point average (GPA) and retention outcomes. In an analysis of 461 first-year students at one Queensland university, OP scores demonstrated a significant positive correlation (r = 0.516, p < 0.001) with combined fail and withdrawal rates, indicating that higher (worse) OP bands were associated with elevated academic risk. Specifically, students with OP scores of 12 or higher (representing 17% of the cohort) accounted for 51% of failures and withdrawals, with a 57% fail/withdraw rate compared to 11% for those with OP 1–11. Multiple regression models from the same study confirmed the OP as a significant independent predictor of first-semester GPA, alongside readiness assessments, with course-specific correlations between OP and GPA reaching r ≈ -0.7 (negative because a lower numerical OP indicates higher achievement). A regression fit for mean fail/withdraw rates by OP band yielded an R² of 0.917, underscoring robust explanatory power for identifying at-risk students early in the semester. Researchers concluded that the OP effectively flags students likely to underperform, though its utility is enhanced when combined with other diagnostics, as approximately one-third of the cohort lacked OP data due to non-traditional entry pathways. Further evidence from Queensland University of Technology (QUT) supports this, where entry ranking positively correlated with overall program GPA (r = 0.38, p < 0.001), explaining 14% of variance in long-term achievement across cohorts. These correlations align with broader patterns in Australian admissions, where secondary rank orders like the OP typically predict initial university success at levels comparable to interstate equivalents such as the ATAR, though predictive strength diminishes over subsequent years as factors like study habits intervene.
Empirical data thus affirm the OP's role in stratifying students by readiness, with bands 1–5 consistently linked to GPAs above 5.0 (on a 7-point scale) and lower attrition, while bands 16–25 showed GPAs below 4.0 and failure rates exceeding 50% in high-risk groups.

Criticisms and Limitations

Disadvantages Including Peer Influence and School Disparities

The OP system in Queensland ranked students relative to their school cohort in each subject, which introduced disparities arising from differences in cohort quality and school performance levels. Students in high-achieving schools, often characterized by higher socio-economic status (SES) and selective enrollment, faced stiffer internal competition, making it harder to attain top relative positions (e.g., the highest SAI in a subject-group) despite strong absolute performance, whereas peers in lower-performing schools could more readily secure high rankings within weaker cohorts. This cohort effect undermined comparability, as scaling via the Queensland Core Skills (QCS) Test aimed to adjust for school differences but struggled with reduced participation ranges and varying subject enrollments across schools, leading to inconsistencies in OP calculations. Empirical data from 1992 to 2012 showed declining proportions of OP-eligible students, with smaller schools exhibiting greater variability in outcomes tied to cohort size and regional factors, exacerbating inequities for students in under-resourced or rural institutions. Peer influence amplified these disparities through the system's reliance on within-school comparisons, where the academic caliber of classmates directly shaped individual rankings. In selective or high-SES schools, exposure to more capable peers intensified competition, potentially demotivating mid-tier students or pressuring them into narrower study patterns to optimize relative standings, though the system assumed stable cohort participation that increasingly failed to hold amid diverse enrollment trends such as part-time or vocational studies. Conversely, in disadvantaged schools with lower peer achievement, students benefited from easier relative positioning but risked underdeveloped skills due to less rigorous competition, with evidence of weakening QCS correlations indicating that cohort variance was eroding the scaling mechanism's effectiveness.
School-based assessments, which determined subject results in full (with the QCS Test used only for scaling), were vulnerable to inflation or manipulation in environments with lax oversight, further disadvantaging students whose schools prioritized higher internal marks over consistent standards, despite Queensland Studies Authority (QSA) moderation efforts. These issues contributed to broader inequities, as schools with small cohorts (increasing in number over time) produced unreliable positions, and regional or SES-linked differences in subject offerings limited comparable opportunities for students. For instance, in later years over 2,500 OP-ineligible students were still offered tertiary places, highlighting systemic gaps in fair ranking tied to school-level variations rather than individual merit alone. Critics noted that while the OP sought to leverage local teacher judgment, the resultant dependence on peer cohorts perpetuated advantages for those in balanced, high-performing cohorts and penalized others, prompting the 2014 review that informed the system's phase-out.

Concerns Over Complexity, Transparency, and Potential Bias in Internal Assessments

The calculation of OPs in Queensland's system rested on school-based internal assessments, moderated and then calibrated against the external Queensland Core Skills (QCS) Test. These internal assessments, summarized as Subject Achievement Indicators (SAIs), were derived from teacher judgments moderated through review panels, random sampling, and alignment with statewide Levels of Achievement. However, the multi-stage process for integrating these assessments—including within-school scaling, between-school scaling via the QCS Test, and the use of polyscores and weighted subject means—introduced significant complexity, rendering the methodology opaque to students, parents, and educators despite detailed procedural documentation by the Queensland Curriculum and Assessment Authority (QCAA). Critics highlighted that this intricacy undermined public confidence, as explanations of non-parametric adjustments and bivariate scaling models were technically demanding and prone to misconceptions, such as erroneous beliefs in subject-specific "scaling scores" or arbitrary weightings. The reliance on teacher-derived SAIs raised concerns over subjectivity and potential bias, as assessments depended on professional judgments susceptible to influences like parental pressure, school reputation maintenance, or inadvertent favoritism toward certain students, even with moderation checks achieving high agreement rates (e.g., 91% in 2013 sampling of over 3,000 folios). Small subject cohorts (fewer than 10 students) further complicated fair scaling, potentially amplifying inconsistencies across schools. Transparency was further limited by the discretionary elements in anomaly resolution, such as decisions by the review committee, which compared individual outcomes against peers with similar profiles but lacked full public disclosure of rationales.
While QCS scaling aimed to calibrate internal assessments against a common external benchmark, the exclusion of certain groups (e.g., some visa students' scores, owing to cohort disparities) raised equity questions, and the system's assumption of stable school participation ranges eroded over time, exacerbating scaling biases in diverse educational contexts. These factors contributed to perceptions of inherent vulnerabilities in internal assessments, prompting systemic reviews that noted risks of subjectivity and unequal outcomes despite built-in safeguards.

Empirical Data on Inequities and Grade Inflation

The Overall Position (OP) system demonstrated persistent inequities in outcomes across school sectors, with independent (private) schools outperforming public and Catholic schools in achieving top ranks. In 2018, 15 of the 20 secondary schools ranked highest by percentage of students attaining OP 1–5—ranks qualifying for entry to most competitive courses—were independent institutions, reflecting advantages in resources, selective enrollment, and targeted preparation. These disparities extended to regional and socio-economic variations, where schools in higher socio-economic status (SES) areas and urban centers reported higher OP eligibility rates, while rural and low-SES schools exhibited greater variability in student performance due to limited access to advanced subjects and support. Empirical analyses confirmed that school size and sector influenced OP distributions, with larger independent schools achieving higher mean scores on the external Queensland Core Skills (QCS) Test—the component used for scaling—and correspondingly elevated OP eligibility. For example, between 2009 and 2013, the number of OP-ineligible Year 12 completers rose from 1,506 to 2,511, disproportionately affecting smaller schools with narrower participation ranges, which skewed internal moderation processes. Such patterns indicated that the system's dependence on school-based assessments (feeding OP calculations through Subject Achievement Indicators, or SAIs) amplified inequities, as better-resourced schools could optimize internal grading and subject selections to boost aggregate outcomes. Grade inflation emerged as a documented risk in the OP framework, stemming from its reliance on internal school assessments moderated against QCS results. Reviews identified upward biases in SAIs, where schools faced incentives to inflate grades amid competitive pressures and parental expectations, potentially undermining the scaling mechanism's corrective role.
Although statewide moderation panels adjusted for discrepancies, empirical critiques noted persistent misalignment, with some schools exhibiting patterns suggestive of strategic SAI adjustments to favor high-achieving cohorts, eroding comparability across institutions. This inflation tendency was exacerbated by declining OP eligibility (from over 50% in the early 2000s to 49.7% by 2018), which compressed the performance distribution and incentivized boundary-pushing in internal evaluations to secure top bands. The 2014 ACER review of senior assessment processes cited these issues, including evidence of inconsistent moderation efficacy, as factors contributing to the system's replacement by the ATAR in 2020.

Transition to ATAR and QCE System

Rationale for Replacement in 2020

The Queensland Government commissioned the Australian Council for Educational Research (ACER) in 2014 to review senior assessment and tertiary entrance processes, and the review identified the need for reforms to achieve greater rigour, simplicity, and alignment with evolving educational priorities. The review highlighted limitations in the OP system, introduced in 1992 in place of the Tertiary Entrance (TE) score after external exams were abolished in the 1970s, noting that its reliance on moderated school-based assessments and the Queensland Core Skills (QCS) Test had led to inconsistencies in scaling and comparability over time. In August 2015, the government announced the abolition of the OP system, with the Australian Tertiary Admission Rank (ATAR) replacing it for graduates from 2020 onward, following ACER's recommendations for external assessments to restore objectivity. A core rationale was to harmonize Queensland's tertiary entrance with the national standard, as the state was the only jurisdiction not using the ATAR; adopting it enabled direct comparability of student rankings across states and territories and supported interstate mobility for admissions. This alignment addressed concerns that the state-specific rank hindered equitable access to tertiary places in competitive national markets. The new system mandated external examinations in addition to school-based assessments for ATAR-eligible students, typically weighted at 25% of the subject result in General subjects (50% in mathematics and science subjects), to bolster rigour and mitigate risks of grade inflation or school-specific biases inherent in the OP's moderation processes. The ATAR's finer granularity (percentile ranks from 0.00 to 99.95 in 0.05 increments) offered more precise differentiation than the OP's 25-band scale, facilitating better matching of students to university entry requirements. Queensland universities endorsed the shift, viewing the ATAR as a reliable predictor consistent with interstate practices.
These reforms were integrated into the Queensland Certificate of Education (QCE) system, implemented from 2019 for Year 11 students, to emphasize verifiable skills and reduce systemic pressures such as within-school competition under the OP, while preserving multiple pathways beyond tertiary entrance. Under the new arrangements, rankings are calculated and administered by the Queensland Tertiary Admissions Centre (QTAC) rather than the Queensland Curriculum and Assessment Authority (QCAA), enhancing transparency in calculation.

Key Structural Differences from OP System

The Overall Position (OP) system ranked students in 25 bands (1 being the highest) based on moderated school-based assessments across five Authority subjects, incorporating 20 semester units and using the QCS Test for inter-subject and inter-school comparability, with no subject-specific external examinations. In contrast, the ATAR under the QCE system provides a percentile rank from 0.00 to 99.95 in 0.05 increments, calculated by the Queensland Tertiary Admissions Centre (QTAC) from scaled scores in the best 10 units of achievement, typically drawn from four General subjects (each contributing up to 4 units over two years) plus optional units from Applied subjects or Vocational Education and Training (VET) at Certificate III level or higher. A primary structural shift lies in assessment composition and weighting: the OP relied on fully internal assessments (five to seven per subject, school-developed and statistically moderated by the QCAA), whereas General subjects in the QCE system allocate 75% to three internal assessments (set and marked by schools but QCAA-endorsed for instrument quality) and 25% to a mandatory external examination set and marked by the Queensland Curriculum and Assessment Authority (QCAA), with mathematics and science subjects weighting the external exam at 50%. This introduces direct external validation per subject, eliminating the QCS Test (discontinued after the final OP cohort in 2019) and relying instead on statistical scaling of internal and external results for equity. Applied subjects, ineligible for OP calculation, can now contribute a limited number of units to ATAR eligibility, broadening pathways, while a minimum grade of C in an English subject is required for ATAR access. Moderation processes also diverge: the OP used post-assessment statistical adjustment via QCS data to align school results statewide, whereas the QCE employs pre-assessment endorsement of internal instruments and post-assessment data-driven scaling without a common skills test, aiming for enhanced consistency amid greater student choice in subjects and pathways. These changes, implemented from 2020, align Queensland with national standards while prioritizing flexibility over the OP's rigid focus on academic subjects.
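The granularity difference between the two scales can be made concrete with a small sketch that maps a raw percentile onto the ATAR's 0.05-step reporting scale. This is illustrative only: the real calculation by QTAC involves inter-subject scaling and aggregation of a student's best 10 units, which this function does not attempt.

```python
def to_atar_band(percentile):
    """Map a raw percentile (0-100) onto the ATAR's reporting scale:
    values from 0.00 to 99.95 in steps of 0.05.

    Illustrative only; the real ATAR is computed by QTAC from scaled
    subject results, not from a single raw percentile.
    """
    # Truncate to the nearest 0.05 increment below, then cap at 99.95
    # (the top band covers the highest-achieving students).
    band = int(percentile / 0.05) * 0.05
    return round(min(band, 99.95), 2)

print(to_atar_band(99.97))  # capped at the top band: 99.95
print(to_atar_band(87.32))  # -> 87.3
```

With 2,000 distinct values, this scale separates students far more finely than the OP's 25 bands, which is the differentiation gain the transition documents emphasize.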

Ongoing Evaluations and Adjustments as of 2025

The Queensland Curriculum and Assessment Authority (QCAA) initiated an independent evaluation of the new Queensland Certificate of Education (QCE) system in mid-2023, engaging a university-based evaluation research center to examine the system's design, implementation, effectiveness, sustainability, and value for money. As of October 2025, this evaluation remains ongoing, with the final report scheduled for delivery in November 2025; no interim findings or recommendations have been publicly released. In August 2025, QCAA announced minor adjustments to specific assessment components, including revisions to the Drama external assessment effective from 2026, which now requires student responses to unseen dramatic excerpts under timed exam conditions, replacing reliance on a prescribed list of works to enhance adaptability and reduce predictability. Updated specifications were also issued for Internal Assessment 3 in Economics, accompanied by a dedicated webinar on 1 September 2025 to guide implementation. These changes aim to refine assessment alignment with syllabus objectives without altering the overall structure of internal (75%) and external (25%) components contributing to the Australian Tertiary Admission Rank (ATAR). The Education (Queensland Curriculum and Assessment Authority) Regulation 2025, commencing 1 September 2025, remakes and updates prior provisions for senior assessment administration, including new fees for services under the Senior Assessment and Tertiary Entrance (SATE) framework and adjustments to existing fees to cover expanded QCAA operations. Procedural enhancements continue, such as the ancillary phase for Common Internal Assessments in Essential English and Essential Mathematics extending to 12 September 2025, and Confirmation Event 3 submissions of provisional Unit 3 and 4 marks due by 19 August 2025, to support QCAA moderation and quality assurance. The 2025 Senior External Examination timetable was finalized on 1 September 2025, incorporating updates for language and non-language subjects to facilitate smoother delivery.
QCAA also expanded recruitment of teachers as markers for external assessments, with applications opening in June 2025 to handle annual marking volumes. These measures reflect iterative refinements amid stable system operation since the 2020 transition, pending broader insights from the forthcoming evaluation report.
