Selection sort
Selection sort is a simple, in-place comparison-based sorting algorithm that builds a sorted array (or list) one element at a time by repeatedly selecting the smallest (or largest, for descending order) element from the unsorted portion of the array and placing it at the beginning of the sorted portion.[1] This process divides the input into two regions: a growing sorted subarray and a shrinking unsorted subarray, until the entire array is sorted.[2]
The algorithm operates through nested loops: an outer loop iterates over each position in the array to establish the boundary between sorted and unsorted regions, while an inner loop scans the unsorted region to find the index of the minimum element, which is then swapped with the first element of the unsorted region.[3] For an array of length n, this results in exactly n-1 passes, with the i-th pass performing exactly n-i comparisons.[2] Here is a pseudocode representation:
procedure selectionSort(array A of size n)
    for i from 0 to n-2 do
        minIndex = i
        for j from i+1 to n-1 do
            if A[j] < A[minIndex] then
                minIndex = j
        swap A[i] with A[minIndex]
This implementation guarantees at most n-1 swaps, regardless of the input, which is advantageous when element swaps are costly (e.g., for large data structures).[1] However, selection sort is not stable, meaning it does not preserve the relative order of elements with equal keys.[1]
In terms of performance, selection sort performs Θ(n²) comparisons and at most n-1 swaps in the worst, average, and best cases, resulting in an overall time complexity of Θ(n²), making it quadratic and unsuitable for large inputs compared to more efficient algorithms like quicksort or mergesort.[3] Its auxiliary space complexity is Θ(1), as it requires no additional storage beyond the input array.[1] Despite its inefficiency, selection sort is valued in educational contexts for illustrating fundamental sorting principles and is occasionally used in scenarios where simplicity and low swap counts outweigh speed.[2]
Overview
Definition and Principle
A sorting algorithm is a procedure that takes a collection of elements and rearranges them into a specific order, such as non-decreasing, producing a permutation of the input that satisfies the desired ordering relation. These algorithms typically operate on data structures like arrays or lists and rely on comparisons or other operations to achieve the sorted result.[4]
Selection sort is an in-place comparison-based sorting algorithm that divides the input list into a sorted sublist, initially empty, and an unsorted sublist, initially the entire list. It builds the sorted sublist incrementally by repeatedly identifying the minimum element from the unsorted portion and placing it at the boundary between the sorted and unsorted parts. The algorithm assumes a total order on the elements, meaning a binary relation that is reflexive, antisymmetric, transitive, and total, allowing consistent comparisons between any pair of elements.[5]
The core principle of selection sort involves, for each position i from the start of the list up to the second-to-last element, scanning the unsorted sublist to locate the smallest element and swapping it with the element at position i to extend the sorted sublist. This process ensures that after each iteration, the prefix of the list up to i is sorted, and the remaining suffix contains the unsorted elements.[5][6]
Key properties of selection sort include its in-place nature, requiring only a constant amount of extra memory beyond the input array, and its reliance on comparisons without additional data structures. It is not stable, as swaps can alter the relative order of equal elements. While straightforward to implement and educational, selection sort exhibits quadratic time complexity in all cases, making it inefficient for large datasets compared to more advanced algorithms.[5][1]
Historical Development
Selection sort traces its origins to the early days of electronic computing, where simple selection-based methods were developed to organize data efficiently on limited hardware. Although no single inventor is attributed, the algorithm's principles resemble manual sorting techniques employed in punch-card data processing systems of the 1920s and 1930s, such as those by IBM, which involved iteratively selecting and separating cards based on specific criteria.[7] These early practices, while not establishing an absolute origin, mark a pivotal early precedent amid the rapid growth of computing in the mid-20th century.[8]
Donald E. Knuth provided a comprehensive analysis in The Art of Computer Programming, Volume 3: Sorting and Searching (1973), dedicating a section to "Sorting by Selection" and emphasizing its role as an accessible example for understanding comparison-based sorting. Knuth's treatment, grounded in historical context and efficiency analysis, solidified its place in algorithms textbooks, where it was presented alongside insertion and exchange sorts for instructional clarity.[9][5]
The core algorithm has remained largely static since its formalization, with evolution limited to pedagogical refinements rather than major redesigns. In the 1970s and 1980s, minor optimizations—such as tournament-based selection to reduce comparisons or bidirectional scanning—were explored in research papers, but these variants retained the inherent quadratic time complexity and did not gain widespread adoption beyond academic interest.[10] Instead, selection sort's enduring legacy lies in its simplicity, making it a foundational topic in introductory computer science education worldwide since the 1970s, where it illustrates key concepts like iterative selection without the complexities of more advanced algorithms.[5]
Algorithm Mechanics
Step-by-Step Process
Selection sort operates by repeatedly selecting the minimum element from the unsorted portion of the array and placing it in its correct position within the growing sorted portion, building the sorted array incrementally from the beginning.[6]
The process begins with an array of n elements: the sorted portion is initially empty and the unsorted portion is the entire array; the sorted segment then expands by one element per iteration until it encompasses the entire array.[11]
The core iteration proceeds as follows: for each index i ranging from 0 to n-2, the algorithm scans the subarray from position i to n-1, comparing elements to identify the index min_idx of the smallest element in that subarray.[6][11]
Upon locating the minimum, if min_idx differs from i, the algorithm swaps the element at position i with the element at min_idx, thereby fixing the next smallest element in its sorted position and extending the sorted portion by one.[6][11]
This iteration repeats n-1 times, after which the array is fully sorted in ascending order, as the last element is automatically in place once all preceding minima have been positioned.[6]
For edge cases, an empty array (n=0) requires no operations and is considered sorted by default, while a single-element array (n=1) is already sorted and triggers no iterations or swaps.[12] Additionally, input in descending order demands no modifications to the algorithm, as it consistently selects the overall minimum each time, performing the same scans and swaps regardless of initial arrangement.[6]
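These edge cases are easy to confirm empirically. The following minimal Python sketch (mirroring the Python implementation given later in this article, with a small test harness of our own) exercises the empty, single-element, and descending inputs:
python
def selection_sort(arr):
    # Standard in-place selection sort, as described above.
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]

# Empty and single-element arrays trigger no iterations; a descending
# input is handled by the same scans and swaps as any other order.
for data in ([], [42], [5, 4, 3, 2, 1]):
    selection_sort(data)
    print(data)  # [], [42], [1, 2, 3, 4, 5]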
Pseudocode Representation
The standard pseudocode for the selection sort algorithm, as presented in foundational algorithms literature, is a language-agnostic representation that highlights its iterative process of identifying and placing the minimum element in the unsorted portion of the array.
procedure selection_sort(A: array of integers)
    n = length(A)
    for i = 0 to n-2 do
        min_idx = i
        for j = i+1 to n-1 do
            if A[j] < A[min_idx] then
                min_idx = j
            end if
        end for
        if min_idx != i then
            swap A[i] and A[min_idx]
        end if
    end for
end procedure
This pseudocode operates on an array A of length n. The outer loop, iterating from i = 0 to n-2, progressively builds a sorted prefix of the array by fixing the position of the next smallest element at index i. Within each iteration of the outer loop, the inner loop scans from j = i+1 to n-1 to locate the index min_idx of the minimum value in the unsorted subarray starting at i, updating min_idx whenever a smaller element is found. Finally, a conditional swap exchanges the elements at positions i and min_idx only if they differ, thereby avoiding redundant operations when the minimum is already in place.
The pseudocode can be easily adapted for sorting in descending order by reversing the comparison in the inner loop from A[j] < A[min_idx] to A[j] > A[min_idx], which selects the maximum element instead of the minimum for placement at the beginning of the unsorted subarray.
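For illustration, a minimal Python sketch of this descending-order adaptation (the function name is ours) looks like this:
python
def selection_sort_desc(arr):
    # Identical structure to the ascending version; only the inner-loop
    # comparison is reversed, so the scan tracks the maximum instead.
    n = len(arr)
    for i in range(n - 1):
        max_idx = i
        for j in range(i + 1, n):
            if arr[j] > arr[max_idx]:  # reversed comparison
                max_idx = j
        if max_idx != i:
            arr[i], arr[max_idx] = arr[max_idx], arr[i]

data = [64, 25, 12, 22, 11]
selection_sort_desc(data)
print(data)  # [64, 25, 22, 12, 11]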
Practical Example
Numerical Illustration
To illustrate the selection sort algorithm, consider the input array [64, 25, 12, 22, 11].[13]
In the first pass, the minimum value 11 is found at index 4 and swapped with the element at index 0 (64), resulting in [11, 25, 12, 22, 64]. This fixes the smallest element in position, growing the sorted prefix to length 1 while shrinking the unsorted portion to the remaining 4 elements.[13]
In the second pass, over indices 1 to 4, the minimum value 12 is at index 2 and swapped with the element at index 1 (25), yielding [11, 12, 25, 22, 64]. The sorted prefix now extends to length 2.[13]
In the third pass, over indices 2 to 4, the minimum value 22 is at index 3 and swapped with the element at index 2 (25), producing [11, 12, 22, 25, 64]. The sorted prefix grows to length 3.[13]
In the fourth pass, over indices 3 to 4, the minimum value 25 is at index 3, requiring no swap, so the array remains [11, 12, 22, 25, 64]. With only one element left, the process completes, leaving the fully sorted array [11, 12, 22, 25, 64].[13]
For this array of length n = 5, the algorithm performs 4 passes (one fewer than n), involving a total of 10 comparisons (4 + 3 + 2 + 1) and 3 swaps (in the first three passes). This numerical tracing demonstrates how selection sort progressively builds the sorted prefix from the beginning, selecting and placing the next smallest element each time.[13]
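The trace above can be reproduced with a small instrumented Python sketch (the printing wrapper is ours):
python
def traced_selection_sort(arr):
    # Prints the array after every pass and tallies operations.
    comparisons = swaps = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
        print(f"pass {i + 1}: {arr}")
    print(f"{comparisons} comparisons, {swaps} swaps")

traced_selection_sort([64, 25, 12, 22, 11])
# pass 1: [11, 25, 12, 22, 64]
# pass 2: [11, 12, 25, 22, 64]
# pass 3: [11, 12, 22, 25, 64]
# pass 4: [11, 12, 22, 25, 64]
# 10 comparisons, 3 swaps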
Visual Step-by-Step
To visualize the selection sort process, consider an initial array represented as a horizontal sequence of rectangular bars or boxes, where each bar's height corresponds to the element's value (e.g., taller bar for 64, shorter for 11), positioned from left to right by index. The example array [64, 25, 12, 22, 11] starts with all bars in blue to indicate the fully unsorted portion, with no sorted elements yet.[14]
In the first pass (i=0), a pointer marks the current index i at the leftmost position, and the scan moves rightward to find the minimum value (11 at index 4), which is highlighted in yellow or green. The unsorted portion remains blue, while an arrow or animated line illustrates the swap between the minimum (11) and the element at i (64). After the swap, the leftmost bar (now 11) turns red to denote the growing sorted prefix, and the remaining bars [64, 25, 12, 22] stay blue. Subsequent passes repeat this pattern: for i=1, the scan over [25, 12, 22] finds 12 (highlighted yellow/green), swaps it with 25, and reddens the second bar, progressively expanding the red sorted region rightward while shrinking the blue unsorted tail. After the second pass the array is [11, 12, 25, 22, 64], with the first two bars red and [25, 22, 64] blue; the third pass swaps 22 with 25, yielding [11, 12, 22, 25, 64] with the first three bars red and [25, 64] blue; the fourth pass requires no swap, as the minimum (25) is already in position, reddening the fourth bar and leaving the last (64) to complete the sort trivially.[15][14]
Key visual elements include color-coding (blue for unsorted, red for sorted, yellow/green for the current minimum, and temporary black during swaps for emphasis), pointers or labels for i (scanning the unsorted region) and min_idx (tracking the minimum's location), and directional arrows showing element exchanges to clarify the partitioning mechanism.[14][15]
In digital formats, this can be animated stepwise with delays (e.g., 500ms per scan or swap) to reveal the iterative selection and placement, allowing viewers to pause and inspect intermediate states; for static diagrams, side-by-side panels depict before-and-after each pass, tracing the array's evolution from [64, 25, 12, 22, 11] (all blue) to [11, 12, 22, 25, 64] (all red).[15]
Such visualizations aid intuition by demonstrating how selection sort partitions the array into sorted and unsorted subarrays through repeated minimum extractions, making the algorithm's simplicity and quadratic behavior more tangible without relying on abstract code.[14]
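A rough text-mode approximation of such a visualization (all names and formatting here are illustrative, not from the cited sources) can be produced by printing bar heights after each pass:
python
def show_bars(arr, sorted_len):
    # '#' bars encode values; '*' marks bars in the sorted prefix.
    for idx, value in enumerate(arr):
        mark = '*' if idx < sorted_len else ' '
        print(f"{mark} {value:3d} {'#' * (value // 4)}")
    print()

arr = [64, 25, 12, 22, 11]
n = len(arr)
show_bars(arr, 0)                      # initial state: all unsorted
for i in range(n - 1):
    min_idx = min(range(i, n), key=arr.__getitem__)
    arr[i], arr[min_idx] = arr[min_idx], arr[i]
    show_bars(arr, i + 1)              # sorted prefix grows by one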
Implementation Details
In-Place Implementation
Selection sort operates as an in-place algorithm, utilizing only O(1) extra space by performing all operations directly on the original array through element swaps, thereby avoiding the need for additional auxiliary arrays or data structures.[16]
The core code structure employs a double-loop mechanism to maintain efficiency in data movements. The outer loop runs from the first element (index 0) to the second-to-last element (index n-2), where n is the array length, designating each position i as the boundary between the sorted and unsorted portions. Within each outer iteration, the inner loop scans the unsorted subarray from index i+1 to n-1, identifying the index of the minimum value (min_idx) by comparing elements sequentially. This design ensures that at most one swap occurs per outer iteration, placing the smallest remaining element at position i and thereby minimizing the total number of exchanges to at most n-1.[5]
The swap mechanics are implemented using a single temporary variable to exchange elements without data loss: a temporary variable temp is assigned the value at A[i], then A[i] receives the value from A[min_idx], and finally A[min_idx] is set to temp; this swap is executed conditionally only if min_idx ≠ i, further reducing operations when the minimum is already correctly positioned.[5]
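In code, the described exchange amounts to the following small Python sketch (the helper name is ours; Python would ordinarily use tuple unpacking, but the temporary variable makes the mechanics explicit):
python
def swap_if_needed(A, i, min_idx):
    # Three-assignment exchange via a temporary variable, executed
    # only when the minimum is not already in position i.
    if min_idx != i:
        temp = A[i]
        A[i] = A[min_idx]
        A[min_idx] = temp

A = [25, 12, 64]
swap_if_needed(A, 0, 1)
print(A)  # [12, 25, 64]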
Language-Specific Adaptations
Selection sort implementations vary across programming languages due to differences in syntax, built-in functions, and data structures, but all maintain the core in-place swapping mechanism to minimize memory usage.[17] These adaptations ensure the algorithm's simplicity while leveraging language-specific features for efficiency and readability.
In Python, the implementation uses tuple unpacking for swaps, emphasizing the language's concise syntax. The following function sorts an array in ascending order:
python
def selection_sort(arr):
    n = len(arr)
    for i in range(n-1):
        min_idx = i
        for j in range(i+1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
This code directly mirrors the pseudocode structure, with Python's dynamic typing allowing seamless operation on lists of integers or other comparable elements.[17]
For C++, the algorithm is typically implemented with vectors and standard library utilities like std::swap from <algorithm>, which provides a safe and efficient exchange operation. The example below uses a std::vector<int> and traditional for-loops:
cpp
#include <vector>
#include <algorithm>

void selection_sort(std::vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n-1; ++i) {
        int min_idx = i;
        for (int j = i+1; j < n; ++j) {
            if (arr[j] < arr[min_idx]) {
                min_idx = j;
            }
        }
        std::swap(arr[i], arr[min_idx]);
    }
}
This approach highlights C++'s emphasis on explicit memory management and performance, where indices are zero-based and the vector is passed by reference to enable in-place modification.[17]
In Java, selection sort can use primitive arrays like int[] or collections such as ArrayList<Integer>, requiring manual swap logic since no built-in swap function exists for arrays. The example uses an int[] for simplicity:
java
public static void selectionSort(int[] arr) {
    int n = arr.length;
    for (int i = 0; i < n-1; i++) {
        int min_idx = i;
        for (int j = i+1; j < n; j++) {
            if (arr[j] < arr[min_idx]) {
                min_idx = j;
            }
        }
        // Manual swap
        int temp = arr[min_idx];
        arr[min_idx] = arr[i];
        arr[i] = temp;
    }
}
Java's strict typing and the absence of a built-in swap function for arrays necessitate this explicit temporary variable for swapping, distinguishing it from more fluid languages like Python.[17]
To handle generics or custom objects, both Java and C++ allow defining comparators for flexible sorting beyond primitive integers. In Java, a Comparator can be supplied and invoked in the inner loop's comparison, enabling sorting by specific attributes (e.g., Comparator.comparing(Student::getGPA) adapted to selection sort).[18] Similarly, C++ can accept a comparison functor or std::function through a template parameter, as in template<typename T, typename Cmp> void selection_sort(std::vector<T>& arr, Cmp cmp) { ... if (cmp(arr[j], arr[min_idx])) ... }.[19] These adaptations support sorting heterogeneous data without altering the algorithm's O(n²) nature.
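In Python, the same flexibility is conventionally expressed with a key function rather than a comparator; a sketch (the function and data names are ours):
python
def selection_sort_by(arr, key=lambda x: x):
    # Selection sort parameterized by a key function, analogous to a
    # Java Comparator or a C++ comparison functor.
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if key(arr[j]) < key(arr[min_idx]):
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]

students = [("Ada", 3.9), ("Bob", 3.2), ("Cy", 3.7)]
selection_sort_by(students, key=lambda s: s[1])  # sort by GPA
print(students)  # [('Bob', 3.2), ('Cy', 3.7), ('Ada', 3.9)]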
Selection sort is inherently iterative, avoiding recursion to prevent stack overflow in large arrays and maintain constant space complexity, a design choice preserved across these language implementations.[20]
Complexity Analysis
Time Complexity
The time complexity of selection sort is determined primarily by the number of comparisons performed in its nested loop structure, where an outer loop iterates over the array from the first to the second-to-last element, and an inner loop scans the remaining unsorted portion to find the minimum value.[21] This results in a worst-case and average-case time complexity of O(n^2), as the algorithm performs approximately n(n-1)/2 comparisons regardless of the input arrangement.[22]
To derive this, consider an array of length n: the outer loop runs for n-1 iterations, and in the i-th iteration (starting from i=1), the inner loop performs n-i comparisons to identify the minimum in the unsorted subarray. The total number of comparisons is thus the sum $\sum_{i=1}^{n-1} (n-i) = \sum_{k=1}^{n-1} k = \frac{n(n-1)}{2}$, which simplifies asymptotically to O(n^2).[23]
In the best case, when the input is already sorted, the algorithm still executes the full inner loops without early termination, yielding the same O(n^2) complexity, as no optimizations skip scans in the standard implementation.[24] The number of swaps is at most n-1, contributing only O(n) to the overall time, which is dominated by the quadratic comparisons.[21]
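Both the closed form and this input-independence are easy to verify by instrumenting the comparison count (a sketch; the wrapper is ours):
python
def count_comparisons(arr):
    # Returns the number of element comparisons selection sort makes.
    arr = list(arr)
    comparisons = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return comparisons

for n in (10, 100, 1000):
    # Same count for sorted and reverse-sorted input: n(n-1)/2.
    assert count_comparisons(list(range(n))) == n * (n - 1) // 2
    assert count_comparisons(list(range(n, 0, -1))) == n * (n - 1) // 2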
Asymptotically, this quadratic time complexity renders selection sort inefficient for large inputs, typically becoming impractical for arrays exceeding 1000 elements due to the rapid growth in operations.[25]
Space Complexity and Stability
Selection sort operates as an in-place algorithm, utilizing only a constant amount of auxiliary space—specifically O(1)—for temporary variables during swaps, while the total space complexity remains O(n) due to the input array itself.[2] This minimal extra memory requirement makes it suitable for environments with limited resources, such as embedded systems, where avoiding additional data structures is advantageous.[26]
Despite its space efficiency, selection sort is not stable, meaning it does not preserve the relative order of elements with equal keys in the sorted output. Instability arises because the algorithm swaps the selected minimum (or maximum) element with the first position in the unsorted subarray, potentially moving one equal element past another through non-adjacent exchanges. For instance, consider an unsorted subarray starting at index k: [5_a, 5_b, 3], where 5_a and 5_b are equal values distinguished by their original positions (a before b). During the pass for position k, the minimum 3 at index k+2 is identified and swapped with 5_a at k, resulting in [3, 5_b, 5_a]. Here, the relative order of the two 5s is inverted, with 5_b now preceding 5_a.[27] This behavior stems from the selection process choosing the first encountered minimum for swapping, without regard for the positions of subsequent equal elements.[28]
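The inversion in this example can be reproduced directly; in the snippet below, letter tags record the original order of the equal keys, and the comparison looks at keys only:
python
# Records share the key 5; the letter tags record their original order.
records = [(5, 'a'), (5, 'b'), (3, 'c')]
n = len(records)
for i in range(n - 1):
    min_idx = i
    for j in range(i + 1, n):
        if records[j][0] < records[min_idx][0]:  # compare keys only
            min_idx = j
    records[i], records[min_idx] = records[min_idx], records[i]
print(records)  # [(3, 'c'), (5, 'b'), (5, 'a')] -- equal keys reordered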
The proof of instability follows directly from this mechanism: in each iteration, the algorithm scans the unsorted portion to find the index of the minimum value, starting from the current position, and performs a direct swap regardless of distance, which can reorder equal elements encountered later in the scan.[4] Consequently, applications requiring stability—such as sorting records by multiple keys where secondary order must be maintained—should opt for stable alternatives like merge sort, despite selection sort's in-place advantage.[29]
Beyond these properties, selection sort is neither adaptive nor online. It is not adaptive, as it always executes n-1 full scans of the remaining unsorted subarray, irrespective of the input's partial order, leading to no performance gain on nearly sorted data.[30] Similarly, it is not online, requiring access to the complete array upfront to perform its selection passes, unlike algorithms that can process elements incrementally as they arrive.[31]
Comparisons
With Insertion Sort
Both selection sort and insertion sort are quadratic-time comparison-based sorting algorithms that perform in-place operations and are straightforward to implement, making them suitable for small arrays or educational purposes. They share the property of incrementally constructing a sorted prefix of the array, one element at a time, through iterative passes over the data.[5]
A key difference lies in their approaches to placing elements: selection sort identifies the minimum element in the unsorted subarray via a full linear scan and swaps it directly into its final position at the boundary of the sorted prefix, ensuring exactly one swap per pass. In contrast, insertion sort takes the next unsorted element and inserts it into the correct position within the sorted prefix by shifting larger elements rightward until the spot is found, which can require multiple shifts but no full scans of the unsorted portion. This makes insertion sort more efficient for partially sorted inputs, as it avoids unnecessary scans.[32][5]
Performance-wise, selection sort exhibits fixed behavior across all inputs, with approximately $\frac{n^2}{2}$ comparisons and at most n-1 swaps in the average and worst cases, providing consistent predictability. Insertion sort, however, adapts to the data: it achieves a best-case time complexity of O(n) with only n-1 comparisons and no shifts for sorted input, though its worst case matches selection sort's $\frac{n^2}{2}$ comparisons and it incurs about $\frac{n^2}{4}$ shifts on average. Empirical studies confirm insertion sort often outperforms selection sort on nearly sorted datasets due to fewer operations overall.[32][33]
Selection sort is typically favored for uniformly random data where its invariant comparison count minimizes variability, while insertion sort is better suited for nearly sorted arrays or online/streaming applications where elements are processed incrementally.[33][5]
Consider a reverse-sorted array such as [5, 4, 3, 2, 1]: selection sort conducts complete minimum searches across the shrinking unsorted subarray in each of the four passes, yielding $\frac{n(n-1)}{2} = 10$ comparisons, whereas insertion sort performs extensive shifts for each of the four insertions (shifting 1, 2, 3, and 4 elements respectively), resulting in the maximum $\frac{n(n-1)}{2} = 10$ shifts alongside comparable comparisons.[32]
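These operation counts can be checked with two small instrumented sketches (the counting wrappers are ours):
python
def selection_ops(arr):
    # Returns (comparisons, swaps) for selection sort.
    arr = list(arr)
    comparisons = swaps = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return comparisons, swaps

def insertion_ops(arr):
    # Returns (comparisons, shifts) for insertion sort.
    arr = list(arr)
    comparisons = shifts = 0
    for i in range(1, len(arr)):
        key, j = arr[i], i - 1
        while j >= 0:
            comparisons += 1
            if arr[j] > key:
                arr[j + 1] = arr[j]  # shift larger element right
                shifts += 1
                j -= 1
            else:
                break
        arr[j + 1] = key
    return comparisons, shifts

print(selection_ops([5, 4, 3, 2, 1]))  # (10, 2): full scans, few swaps
print(insertion_ops([5, 4, 3, 2, 1]))  # (10, 10): maximum shifting
print(selection_ops([1, 2, 3, 4, 5]))  # (10, 0): scans anyway
print(insertion_ops([1, 2, 3, 4, 5]))  # (4, 0): adaptive best case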
With Bubble Sort
Selection sort and bubble sort are both straightforward, comparison-based sorting algorithms with a time complexity of O(n²) in the worst and average cases, making them suitable for introductory computer science education where they are frequently taught together to illustrate basic sorting principles.[34][35]
The primary difference lies in their mechanisms: bubble sort operates by making multiple passes through the array, repeatedly swapping adjacent elements if they are in the wrong order to gradually "bubble" the largest (or smallest) elements to the end, whereas selection sort identifies the global minimum (or maximum) element in the unsorted portion during each pass and swaps it directly into its final position without relying on adjacent exchanges.[35][36]
In terms of performance, bubble sort is adaptive and can terminate early if no swaps occur in a pass, indicating the array is sorted, which improves efficiency on nearly sorted data; selection sort lacks this adaptability and always completes all passes regardless of input order.[37] On reverse-sorted inputs, bubble sort performs Θ(n²) swaps due to extensive adjacent exchanges, while selection sort requires at most n-1 swaps, at most one per pass to place each minimum element.[38]
Bubble sort benefits from optimizations like the cocktail shaker variant, which alternates pass directions to move elements toward both ends more efficiently, whereas selection sort is more challenging to optimize due to its fixed pass structure and direct minimum selection.[39][40]
Bubble sort is often preferred in educational contexts for its intuitive, step-by-step adjacent swapping that visually demonstrates disorder resolution, while selection sort excels in scenarios with costly swaps, where its at most n-1 swaps provide an advantage over bubble sort's potentially far higher swap count, especially on random data.[41][42]
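The swap-count contrast on a reverse-sorted input can be seen with a short sketch (the counting wrapper is ours; compare the selection sort counts shown in the previous subsection):
python
def bubble_swaps(arr):
    # Returns the number of adjacent swaps optimized bubble sort makes.
    arr = list(arr)
    swaps = 0
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swaps += 1
                swapped = True
        if not swapped:
            break  # early exit: a pass with no swaps means sorted
    return swaps

print(bubble_swaps([5, 4, 3, 2, 1]))  # 10 adjacent swaps (Θ(n²) pattern)
# Selection sort on the same input performs at most n - 1 = 4 swaps.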
Variants
Bidirectional Selection Sort
Bidirectional selection sort is a variant of the standard selection sort algorithm that enhances efficiency by simultaneously identifying the minimum and maximum elements in the unsorted portion of the array during each iteration.[43] This approach alternates between placing the minimum element at the left end of the unsorted region and the maximum element at the right end, thereby shrinking the unsorted middle section from both sides until the left and right pointers meet.[43]
The algorithm initializes two pointers: low at index 0 and high at index n-1, where n is the length of the array. While low < high, it scans the subarray from low to high to locate the indices of the minimum and maximum values. The minimum is swapped into position low, and the maximum is swapped into position high. The pointers are then updated by incrementing low and decrementing high. This process repeats, effectively sorting the array by progressively fixing elements at both extremities of the remaining unsorted portion.[43]
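A Python sketch of this variant (the function name is ours; note the corner case where the maximum initially sits at position low and is displaced by the first swap):
python
def bidirectional_selection_sort(arr):
    # Each pass scans arr[low..high] once, then fixes the minimum at
    # low and the maximum at high, shrinking the region from both ends.
    low, high = 0, len(arr) - 1
    while low < high:
        min_idx = max_idx = low
        for j in range(low + 1, high + 1):
            if arr[j] < arr[min_idx]:
                min_idx = j
            elif arr[j] > arr[max_idx]:
                max_idx = j
        arr[low], arr[min_idx] = arr[min_idx], arr[low]
        if max_idx == low:          # the swap above moved the maximum
            max_idx = min_idx
        arr[high], arr[max_idx] = arr[max_idx], arr[high]
        low += 1
        high -= 1

data = [64, 25, 12, 22, 11]
bidirectional_selection_sort(data)
print(data)  # [11, 12, 22, 25, 64]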
One key benefit of bidirectional selection sort is the reduction in the number of comparisons required, which is approximately halved in the average case to about n^2/4 compared to n^2/2 for the standard selection sort, although the worst-case time complexity remains O(n^2). This improvement arises from performing roughly half as many passes, as the unsorted region diminishes twice as quickly.
Despite these gains, the algorithm introduces drawbacks such as increased implementation complexity due to the dual tracking of extremes and the need to handle potential overlaps in minimum and maximum positions.[43] It remains unstable, like its predecessor, potentially altering the relative order of equal elements during swaps.[43]
Bidirectional selection sort is particularly useful for medium-sized arrays seeking a modest performance boost over basic selection sort without introducing the space overhead or coding intricacy of more sophisticated algorithms.
Tournament Method Variant
The tournament method variant of selection sort utilizes a binary tournament tree, also known as a winner tree, to efficiently identify the minimum element in the unsorted portion of the array. In this structure, the leaves of the complete binary tree correspond to the array elements, and each internal node stores the smaller (for min-selection) of its two children, propagating the minima upward until the root holds the global minimum. This setup mimics a single-elimination tournament where elements "compete" pairwise, with winners advancing.[44]
The sorting process begins with constructing the initial tournament tree in O(n) time through bottom-up pairwise comparisons. For each of the n passes, the minimum is extracted from the root and swapped into its correct position in the sorted prefix of the array. To maintain the tree for the remaining unsorted elements, the extracted leaf is replaced with a sentinel value (e.g., infinity for min-selection), and the "loser" from the final comparison along the path to the root is promoted to fill the vacancy, followed by O(log n) comparisons to rebuild the affected path. This selective update avoids full tree reconstruction, enabling repeated minimum selections.[45][44]
This variant achieves O(n log n) time complexity in the worst, average, and best cases for both comparisons and swaps, a significant improvement over the standard selection sort's O(n²) due to the logarithmic cost of each minimum extraction after the initial build. It retains the selection sort's flavor of iteratively placing the next minimum but enhances efficiency through the tree structure, making it particularly suitable for parallel computing environments where multiple comparisons at the same tree level can occur simultaneously using O(n) processors in O(log n) time per phase.[45][46]
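A compact sketch of the idea, using an array-based winner tree whose internal nodes store leaf indices (names and layout are ours, not from the cited sources; for clarity this version returns a new sorted list rather than swapping into a prefix):
python
def tournament_sort(items):
    # O(n log n) selection via a winner tree: leaves hold values,
    # internal nodes hold the index of the smaller child's leaf.
    n = len(items)
    if n == 0:
        return []
    size = 1 << (n - 1).bit_length()  # pad leaf count to a power of two
    INF = float('inf')
    leaves = list(items) + [INF] * (size - n)
    tree = [0] * (2 * size)           # tree[1] is the root
    for i in range(size):
        tree[size + i] = i            # leaf slots store leaf indices
    for i in range(size - 1, 0, -1):  # initial build: O(n) comparisons
        l, r = tree[2 * i], tree[2 * i + 1]
        tree[i] = l if leaves[l] <= leaves[r] else r
    out = []
    for _ in range(n):
        winner = tree[1]
        out.append(leaves[winner])
        leaves[winner] = INF          # sentinel replaces the extracted leaf
        i = (size + winner) // 2      # replay matches on the root path
        while i >= 1:                 # O(log n) per extraction
            l, r = tree[2 * i], tree[2 * i + 1]
            tree[i] = l if leaves[l] <= leaves[r] else r
            i //= 2
    return out

print(tournament_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]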
The tournament method draws from heap sort concepts introduced by J. W. J. Williams in 1964 but emphasizes the tournament tree data structure, formalized by Alexander Stepanov and Aaron Kershenbaum in their 1986 technical report on using tournament trees for sorting, which generalizes earlier binomial tree ideas for optimal comparison-based sorting. While it offers better asymptotic performance, the variant requires O(n) additional space for the tree, increasing from the O(1) auxiliary space of standard selection sort, and the setup overhead can make it less advantageous for small input sizes where simpler linear scans suffice.[45][46]