# Linear search
Linear search, also known as sequential search, is a fundamental algorithm in computer science used to locate a specific element within a list or array by examining each element one by one in order, starting from the first position, until the target is found or the end of the list is reached.[1] This method does not require the data to be sorted, making it applicable to unordered collections, and it typically returns the index of the matching element or an indicator (such as -1) if the element is absent.[2] The algorithm's simplicity stems from its straightforward implementation, often involving a loop that iterates through the structure while comparing each item to the search key.[3]
In terms of efficiency, linear search exhibits a time complexity of O(n) in the worst and average cases, where n is the number of elements in the list, as it may need to inspect every item if the target is at the end or not present; the best-case complexity is O(1) if the target is the first element.[1] Space complexity is O(1), requiring only a constant amount of additional memory regardless of input size.[2] While advantageous for small datasets or when simplicity is prioritized—such as in educational examples or real-world scenarios like scanning a short document—the algorithm becomes inefficient for large-scale applications due to its linear growth in execution time.[3]
Linear search serves as a baseline for understanding more advanced searching techniques, such as binary search, which leverages sorted data for logarithmic performance, and it remains relevant in contexts where data ordering is impractical or the list size is minimal.[4] Variants, like self-organizing linear search, adapt the list order based on access frequency to potentially improve repeated queries over time.[5]
## Fundamentals

### Definition
Linear search is a sequential search algorithm that examines each element of a data structure in a linear order, starting from the beginning, until it either locates the target element or determines that the target is not present by reaching the end of the structure.[6] This approach involves iteratively comparing the target value against successive elements without any optimization based on prior knowledge of the data's arrangement.[7]
As the simplest form of search method, linear search is versatile and can be applied to any iterable data structure, such as arrays or linked lists, regardless of whether the elements are sorted.[8] It requires no preprocessing of the data, making it straightforward to implement in various programming contexts.[9]
Linear search originated as a fundamental technique in early computing practices. A seminal reference is found in Donald Knuth's The Art of Computer Programming, Volume 3: Sorting and Searching (1973), which explicitly discusses and analyzes the algorithm as a baseline for more advanced searching methods.[10]
In terms of performance characteristics, linear search requires checking all elements in the worst case, such as when the target is absent or positioned at the end, but it achieves the best-case scenario by identifying the target on the first comparison if it occupies the initial position.[6]
### Basic Principles
Linear search presupposes access to linear data structures, such as arrays, linked lists, or other sequences, where elements are organized in a traversable order via indices or pointers, enabling straightforward sequential access without prior organization like sorting.[11] These structures form the foundational prerequisite, as the algorithm relies on the ability to iterate through elements in a linear fashion from start to end.[12]
At its core, linear search operates on the principle of sequential traversal: it begins at the initial element and proceeds one by one, comparing each to the target value using an equality check until either a match occurs or all elements have been examined.[11] This methodical, step-by-step progression distinguishes it as a straightforward, exhaustive inspection of the data collection.[12]
If the target is absent, the algorithm handles the not-found case by continuing the traversal until the structure's end is reached, at which point it signals failure, such as by returning a sentinel value or null indicator.[11] Unlike parallel searches that may distribute checks across multiple processors or randomized approaches that introduce probabilistic sampling, linear search maintains a deterministic, single-pass execution in a strictly sequential manner.[11]
Unique edge cases underscore its simplicity: for an empty structure, the search terminates immediately with a not-found result, requiring no comparisons; in a single-element structure, it performs exactly one comparison to determine presence or absence.[12][11]
## Algorithm

### Unsorted Array
Linear search on an unsorted array, also known as sequential search, involves systematically examining each element in the array from the beginning until the target value is found or the entire array has been traversed.[13] The process begins by initializing an index to 0, then iterating through the array by comparing the element at the current index to the target; if a match is found, the index is returned, otherwise the loop continues until the end of the array, at which point a value indicating not found (such as -1) is returned.[14] This relies on the basic principles of sequential access to memory locations in the array.[13]
The following pseudocode illustrates the basic linear search procedure for an unsorted array, returning the index of the first occurrence of the target or -1 if absent:
```
function linearSearch(array, target):
    for i from 0 to length(array) - 1:
        if array[i] == target:
            return i
    return -1
```
[](https://www.csus.edu/college/engineering-computer-science/student-success/peer-assisted-learning/_internal/_documents/pal_worksheet_ch13_searching.pdf)
For example, consider searching for the value 7 in the unsorted array [3, 1, 7, 4]. The algorithm checks index 0 (3 ≠ 7), index 1 (1 ≠ 7), and index 2 (7 == 7), returning 2 upon match.[](https://www.csus.edu/college/engineering-computer-science/student-success/peer-assisted-learning/_internal/_documents/pal_worksheet_ch13_searching.pdf) In unsorted data, early termination is only possible upon finding a match, and the worst case always requires a full scan of the array.[](https://cse.unl.edu/~cbourke/searchingSorting.pdf) Unlike searches on sorted arrays, this method does not exploit any ordering to skip elements or enable early exits beyond a match.[](https://cse.unl.edu/~cbourke/searchingSorting.pdf)
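The pseudocode above translates directly into a runnable form. The Python sketch below is illustrative (the function name and the comparison counter are our additions, not part of the standard algorithm); it reproduces the trace of the example just given.

```python
def linear_search(array, target):
    """Return (index, comparisons); index is -1 if the target is absent."""
    comparisons = 0
    for i in range(len(array)):
        comparisons += 1          # one equality check per element inspected
        if array[i] == target:
            return i, comparisons
    return -1, comparisons

index, comparisons = linear_search([3, 1, 7, 4], 7)
# 3 ≠ 7, 1 ≠ 7, 7 == 7: index 2 after three comparisons
```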
### With Sentinel
The sentinel variant of linear search introduces an optimization to the basic procedure by temporarily placing the target value at the end of the array as a "sentinel," which eliminates the need for separate boundary checks in the loop condition. This approach ensures that the search loop always terminates upon finding the target—either the actual match or the sentinel—thus simplifying the iteration logic while maintaining the same worst-case time complexity.
In the modified steps, the algorithm first saves the original value at the array's end position and overwrites it with the target to serve as the sentinel. It then begins scanning from the start of the array, advancing the index until the target is encountered. After the loop, the original value is restored to the end position. If the index where the target was found is within the original array bounds, that index is returned as the location of the match; otherwise, the target is deemed absent, as the sentinel was hit. This process assumes a zero-based indexing scheme and works on unsorted arrays, focusing solely on sequential examination without leveraging any order.
The following pseudocode illustrates the sentinel linear search:
```
SENTINEL-LINEAR-SEARCH(A, length, target)
    // Assumes A has one spare slot at index `length` (zero-based indexing)
    original ← A[length]
    A[length] ← target         // place the sentinel one past the last element
    i ← 0
    while A[i] ≠ target:
        i ← i + 1
    A[length] ← original       // restore the original value
    if i < length:
        return i               // found at index i
    else:
        return not found       // target not present; only the sentinel matched
```
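As a concrete illustration of the steps above, here is a hedged Python sketch (function name is illustrative). A Python list can grow and shrink, so the spare slot is modeled by appending the sentinel and popping it afterwards, leaving the input unchanged.

```python
def sentinel_linear_search(array, target):
    """Sentinel variant: append the target so the scan needs no bounds check.

    The list is temporarily extended by one element and restored before
    returning, mirroring the spare-slot assumption in the pseudocode.
    """
    n = len(array)
    array.append(target)           # place the sentinel one past the end
    i = 0
    while array[i] != target:      # single comparison per iteration
        i += 1
    array.pop()                    # restore the original list
    return i if i < n else -1      # i == n means only the sentinel matched

data = [9, 4, 7, 2]
# a hit stops at the real 7 (index 2); a miss stops only at the sentinel
```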
A key advantage of this variant is the reduction in per-iteration overhead, as the loop performs only a single comparison (for equality to the target) rather than dual checks for both equality and array bounds, potentially improving performance by a small constant factor in environments sensitive to branch instructions or tight loops. It is particularly beneficial in low-level implementations, such as assembly code, where minimizing conditional branches can enhance execution speed on older or resource-constrained hardware.
Despite these gains, the sentinel approach has notable drawbacks: it requires the array to be modifiable, as the end position must be altered and restored, which precludes its use with read-only data structures like constant arrays or memory-mapped files. Additionally, the temporary modification introduces minor extra time and space overhead for saving and restoring the sentinel position, and it may necessitate an enlarged array buffer if the original structure cannot be directly extended. These limitations make the variant less versatile than the standard linear search in modern, high-level programming contexts where array immutability or safety checks are prioritized.
### Ordered Array
When applying linear search to an ordered array, the algorithm proceeds sequentially from the first element but includes an early termination condition: if the current element exceeds the target value, the search halts, as all remaining elements are larger due to the ascending sort order.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html)[](https://opendsa-server.cs.vt.edu/ODSA/Books/CS4104/html/SortedSearch.html) This modification exploits the sorted structure to avoid unnecessary comparisons, especially beneficial when the target is absent or positioned early.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html)
The steps involve initializing the search at index 0 and iterating through the array. For each position *i*, compare the element *array*[*i*] with the target: if equal, return *i*; if *array*[*i*] > target, return not found immediately; otherwise, advance to the next index. If the end of the array is reached without a match or early stop, conclude the target is absent.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html)[](https://opendsa-server.cs.vt.edu/ODSA/Books/CS4104/html/SortedSearch.html)
The following pseudocode illustrates this process:
```
function orderedLinearSearch(array, target):
    for i from 0 to length(array) - 1:
        if array[i] == target:
            return i
        if array[i] > target:
            return -1  // or not found
    return -1  // or not found
```
[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html)
For example, in the sorted array `[0, 1, 2, 8, 13, 17, 19, 32, 42]`, searching for 3 involves checking 0, 1, and 2 (all less than 3), then reaching 8 (greater than 3) and terminating without further checks, determining the target is absent after four comparisons.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html) In contrast, searching for 13 finds it at index 4 after five comparisons.
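A runnable counterpart of the ordered variant (Python, function name illustrative) reproduces the early-exit behavior just described:

```python
def ordered_linear_search(array, target):
    """Linear search over an ascending array with early termination."""
    for i in range(len(array)):
        if array[i] == target:
            return i
        if array[i] > target:      # every later element is larger too
            return -1
    return -1

sorted_data = [0, 1, 2, 8, 13, 17, 19, 32, 42]
# searching for 3 stops as soon as 8 is seen; 13 is found at index 4
```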
This approach enhances average-case efficiency for absent targets in uniformly distributed sorted data, typically requiring about *n*/2 comparisons when the item is not present, versus *n* for the unsorted variant.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html) Nonetheless, the worst-case time complexity stays *O*(*n*), occurring when the target is at the end or absent after full traversal.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html)[](https://opendsa-server.cs.vt.edu/ODSA/Books/CS4104/html/SortedSearch.html)
Although binary search is typically favored for sorted arrays due to its logarithmic performance, the ordered linear search offers simpler implementation and may be preferable for very small arrays, where binary search overhead can make it slower.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html)[](https://courses.cs.washington.edu/courses/cse332/25wi/lectures/cse332-25wi-lec02-AlgorithmAnalysis.pdf) It also suits scenarios with occasional searches or constraints limiting recursive or branching logic.[](https://courses.cs.washington.edu/courses/cse332/25wi/lectures/cse332-25wi-lec02-AlgorithmAnalysis.pdf)
## Analysis
### Time Complexity
The time complexity of linear search is determined primarily by the number of element comparisons required to locate the target or confirm its absence. In the best-case scenario, the target is found at the first position, requiring only a single comparison and yielding a time complexity of $O(1)$.[](https://www.cs.toronto.edu/~david/course-notes/csc110-111/19-average-case/02-linear-search-analysis.html) The worst-case scenario occurs when the target is absent or positioned at the end of the array, necessitating a full scan of all $n$ elements and performing $n$ comparisons, resulting in $O(n)$ time complexity.[](https://courses.cs.duke.edu/cps102/fall13/lects/cps230-131010/lecture13-131010algorithms2-4up.pdf)
For the average case, the analysis assumes a successful search where the target is equally likely to appear in any position within the array of $n$ elements. Under this uniform distribution, the expected number of comparisons is the mean of the positions from 1 to $n$, which simplifies to $\frac{n+1}{2}$.[](https://www.geeksforgeeks.org/dsa/searching-and-sorting-algorithm-notes-for-gate-exam/)
$$
\frac{1}{n} \sum_{k=1}^{n} k = \frac{1}{n} \cdot \frac{n(n+1)}{2} = \frac{n+1}{2}
$$
Thus, the average-case time complexity is also $O(n)$.[](https://www.cs.toronto.edu/~david/course-notes/csc110-111/19-average-case/02-linear-search-analysis.html)
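The $\frac{n+1}{2}$ figure can be checked empirically. The short Python experiment below is purely illustrative (the array contents and the choice of $n$ are arbitrary); it averages the comparison count over every possible target position.

```python
def comparisons_to_find(array, target):
    """Number of equality checks a linear search makes before returning."""
    for count, value in enumerate(array, start=1):
        if value == target:
            return count
    return len(array)

n = 100
array = list(range(n))
# average over a target equally likely to sit at each of the n positions
average = sum(comparisons_to_find(array, t) for t in array) / n
# average equals (n + 1) / 2 = 50.5 for n = 100
```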
The impact of data distribution on time complexity varies by array organization. In unsorted arrays, the algorithm must always potentially scan all $n$ elements in the worst case, with no opportunity for early termination beyond finding the target.[](https://courses.cs.duke.edu/cps102/fall13/lects/cps230-131010/lecture13-131010algorithms2-4up.pdf) In ordered arrays, however, the search can incorporate early termination for unsuccessful queries: the algorithm stops upon identifying the first element exceeding the target value (in ascending order), which can reduce comparisons for absent targets smaller than some elements in the array.[](https://runestone.academy/ns/books/published/pythonds/SortSearch/TheSequentialSearch.html)
Factors such as hardware characteristics also influence practical performance. Linear search's sequential access pattern promotes strong spatial locality in array-based implementations, minimizing cache misses and favoring efficiency for small $n$ where the entire array fits within cache.[](https://pages.cs.wisc.edu/~jignesh/publ/Revenge_of_the_Interpolation_Search.pdf)
### Space Complexity
The space complexity of the linear search algorithm, when implemented iteratively on an unsorted array, is O(1) auxiliary space, as it requires only a constant amount of extra memory for a loop variable and index, without allocating additional data structures proportional to the input size.[](https://www.geeksforgeeks.org/dsa/linear-search/) This in-place nature ensures that the algorithm operates directly on the given array, minimizing memory overhead beyond the input itself.[](https://www.programiz.com/dsa/linear-search)
In the sentinel variant of linear search, where a temporary copy of the target element is appended to the end of the array to simplify boundary checks, the auxiliary space remains O(1), since the addition involves only a single extra element regardless of the array length.[](https://www.geeksforgeeks.org/sentinel-linear-search/)
When performing linear search on a linked list, the space complexity is also O(1) auxiliary space, as the traversal relies solely on pointer updates to navigate nodes without copying the list or using extra storage.[](https://www.geeksforgeeks.org/dsa/time-and-space-complexity-of-linked-list/)
Although recursive implementations of linear search exist, they are uncommon due to their O(n) space complexity from the recursion stack, which grows linearly with the depth of calls equal to the list size; iterative versions are thus preferred for their constant space efficiency.[](https://www.geeksforgeeks.org/dsa/recursive-c-program-linearly-search-element-given-array/)
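For contrast with the iterative version, a recursive Python sketch (names illustrative) shows where the O(n) stack usage comes from: each element inspected adds one call frame before any of them return.

```python
def recursive_linear_search(array, target, i=0):
    """Recursive linear search; recursion depth grows with the index reached."""
    if i >= len(array):       # past the end: target absent
        return -1
    if array[i] == target:
        return i
    # one additional stack frame per element inspected
    return recursive_linear_search(array, target, i + 1)

# Long inputs can exhaust the interpreter's recursion limit,
# which is one reason the iterative form is preferred in practice.
```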
This constant auxiliary space usage makes linear search highly memory-efficient, rendering it particularly suitable for embedded systems where resources are constrained and additional data structures cannot be afforded.[](https://www.upgrad.com/blog/introduction-to-linear-search-algorithm/)
## Implementations
### Pseudocode
The pseudocode for linear search provides a high-level, language-agnostic description of the algorithm's logic, facilitating understanding across different implementations. It typically involves iterating through elements sequentially until a match is found or the end is reached. Best practices in pseudocode include explicitly checking for empty or null inputs to avoid errors, such as returning an indicator for "not found" immediately if the structure is empty.[](https://faculty.cs.niu.edu/~mcmahon/CS241/Notes/Search_Algorithms/linear_search.html)
### Basic Unsorted Array
The following pseudocode implements linear search on an unsorted array, using a for loop to iterate through indices and return the first matching index or -1 if not found. Comments explain each step for clarity.
```
function linearSearch(array, n, target)
    // Handle empty array case
    if n <= 0 or array is null
        return -1  // Not found
    // Iterate through each element
    for i from 0 to n-1
        // Compare current element with target
        if array[i] == target
            // Return index if match found
            return i
    // No match found after full scan
    return -1
```
This formulation ensures the loop bounds are respected and terminates early upon finding the target.[](https://faculty.cs.niu.edu/~mcmahon/CS241/Notes/Search_Algorithms/linear_search.html)
### With Sentinel
Sentinel linear search optimizes the basic version by temporarily placing the target value at the end of the array, eliminating the need to check array bounds within the loop. This reduces comparisons in cases where the target is absent. The original last element is saved and restored if necessary. Comments detail the modification and restoration logic.
```
function sentinelLinearSearch(array, n, target)
    // Handle empty array case
    if n <= 0 or array is null
        return -1  // Not found
    // Save the original last element
    last = array[n-1]
    // Place sentinel (target) at the end
    array[n-1] = target
    i = 0
    // Loop until target is found (guaranteed by sentinel)
    while array[i] != target
        i = i + 1
    // Restore original last element
    array[n-1] = last
    // Verify if found within original bounds
    if i < n-1 or (i == n-1 and last == target)
        return i
    else
        return -1  // Not found
```
This approach assumes the array can be modified temporarily; in read-only scenarios, a copy may be used instead.[](https://cs.brynmawr.edu/Courses/cs337/spring2023/Labs/Lab02.pdf)
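A concrete version of the sentinel technique, sketched here in Python (function name illustrative), mutates the list temporarily and restores it before returning:

```python
def sentinel_search(arr, target):
    """Sentinel linear search; temporarily mutates arr, returns index or -1."""
    n = len(arr)
    if n == 0:
        return -1
    last = arr[-1]            # save the original last element
    arr[-1] = target          # install the sentinel
    i = 0
    while arr[i] != target:   # no bounds check needed: sentinel stops the loop
        i += 1
    arr[-1] = last            # restore the original last element
    if i < n - 1 or last == target:
        return i
    return -1
```

Because the sentinel guarantees the loop terminates, the per-iteration bounds test of the basic version is eliminated; the final check distinguishes a genuine match from a stop at the sentinel slot.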
### Ordered Array
For a sorted array in ascending order, linear search can include an early exit condition: if the current element exceeds the [target](/page/Target), the search stops as no further matches are possible. This variant maintains the sequential scan but improves average-case efficiency for absent targets. Comments highlight the additional comparison.
```
function orderedLinearSearch(array, n, target)
    // Handle empty array case
    if n <= 0 or array is null
        return -1  // Not found
    // Iterate through each element
    for i from 0 to n-1
        // Early exit if current element > target (sorted ascending)
        if array[i] > target
            return -1  // Target not present (would appear before this point)
        // Compare current element with target
        if array[i] == target
            // Return index if match found
            return i
    // No match found after full scan
    return -1
```
The early exit leverages the sorted property without altering the worst-case behavior.[](https://opendsa-server.cs.vt.edu/ODSA/Books/CS4104/html/SortedSearch.html)
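The same early-exit idea can be sketched in Python (illustrative name) for an ascending-sorted list:

```python
def ordered_search(arr, target):
    """Linear search over an ascending-sorted list with early exit."""
    for i, value in enumerate(arr):
        if value > target:   # sorted order: target cannot appear later
            return -1
        if value == target:
            return i
    return -1
```

For an absent target that would fall in the middle of the range, the scan stops at the first larger element instead of reading to the end.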
### Variations for Lists
Linear search on a singly [linked list](/page/Linked_list) uses a [while loop](/page/While_loop) to traverse nodes via next pointers, starting from the head. It checks for [null](/page/Null) to handle empty lists and terminates at the end of the list or upon a match. This avoids index-based access, making it suitable for dynamic structures. Comments describe the traversal.
```
function listLinearSearch(head, target)
    // Handle empty list case
    if head is null
        return null  // Not found
    current = head
    // Traverse until end or match
    while current is not null and current.data != target
        // Move to next node
        current = current.next
    // Return node if found, else null
    return current
```
This returns the matching node (or null if absent), allowing access to the element's position via prior traversal if needed. For doubly linked lists, traversal can proceed from either end based on estimated position, but the core logic remains similar.[](https://home.cs.colorado.edu/~main/lab/java4.pdf)
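A minimal Python rendering of this traversal (the `Node` class here is an illustrative stand-in for any singly linked node type):

```python
class Node:
    """Minimal singly linked list node (illustrative)."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def list_search(head, target):
    """Traverse from head; return the matching node, or None if absent."""
    current = head
    while current is not None and current.data != target:
        current = current.next    # advance via the next pointer
    return current
```

The loop condition doubles as the empty-list check: a `None` head skips the loop entirely and `None` is returned.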
### Real-World Examples
In practical applications, linear search is often coded directly in programming languages for small datasets or when simplicity outweighs performance concerns. The following examples illustrate standard implementations in popular languages, incorporating error handling such as bounds checking to avoid runtime errors.
### Python
[Python](/page/Python) provides a straightforward linear search using a [for loop](/page/For_loop) with `enumerate` to track indices, returning the index of the first match or -1 if not found. Python's dynamic typing keeps the function generic over element types, and list indexing raises an exception on out-of-bounds access rather than reading invalid memory.
```python
def linear_search(arr, target):
    for index, value in enumerate(arr):
        if value == target:
            return index
    return -1

# Example usage
my_list = [10, 20, 30, 40]
result = linear_search(my_list, 30)  # Returns 2
print(result)  # Output: 2
```
Additionally, Python's built-in `in` operator for lists performs a linear search under the hood via CPython's `list_contains` function, which iterates sequentially and returns a boolean without raising index errors.[15]
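For example, membership testing with `in` performs the same sequential scan as the function above, just returning a boolean instead of an index:

```python
my_list = [10, 20, 30, 40]
print(30 in my_list)   # True: the scan stops at index 2
print(99 in my_list)   # False: the full list is scanned without a match
```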
### Java
In Java, linear search on an array uses a traditional for loop with the array's length for bounds checking, returning -1 on failure to indicate the element was not found. This ensures type safety through compile-time checks and prevents array index out-of-bounds exceptions via the loop condition.
```java
public class LinearSearch {
    public static int linearSearch(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] myArray = {10, 20, 30, 40};
        int result = linearSearch(myArray, 30); // Returns 2
        System.out.println(result); // Output: 2
    }
}
```
### C++
C++ implements linear search on a std::vector using iterators for traversal, which automatically handles bounds via the end() iterator to avoid dereferencing invalid memory. A sentinel value can be incorporated if the data includes a known terminator (e.g., -1 for positive integers), but standard cases omit it for generality.
```cpp
#include <vector>
#include <iostream>

int linearSearch(const std::vector<int>& vec, int target) {
    auto it = vec.begin();
    while (it != vec.end()) {
        if (*it == target) {
            return std::distance(vec.begin(), it);
        }
        ++it;
    }
    return -1;
}

int main() {
    std::vector<int> myVec = {10, 20, 30, 40};
    int result = linearSearch(myVec, 30); // Returns 2
    std::cout << result << std::endl; // Output: 2
    return 0;
}
```
### Rust
Modern languages like Rust promote safe linear search through iterator methods on slices or vectors, which prevent buffer overflows and index panics by respecting collection bounds without manual index arithmetic. The `position` method returns the index of the first match or `None` if absent, with compile-time guarantees for type safety.
```rust
fn linear_search(vec: &[i32], target: i32) -> Option<usize> {
    vec.iter().position(|&x| x == target)
}

fn main() {
    let my_vec = vec![10, 20, 30, 40];
    match linear_search(&my_vec, 30) {
        Some(index) => println!("Found at index: {}", index), // Output: Found at index: 2
        None => println!("Not found"),
    }
}
```[](https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.position)
## Applications
### Common Use Cases
Linear search is particularly suitable for small datasets, where the number of elements is typically less than 100, as its straightforward implementation avoids the preprocessing overhead required for more complex algorithms like binary search, making it faster in practice for such scales.[](https://www.evanjones.ca/linear-search.html) This approach leverages its O(1) [space complexity](/page/Space_complexity) and simple logic, which minimizes constant factors in execution time for limited data sizes.[](https://dl.acm.org/doi/10.1145/2980976)
In scenarios involving unsorted or dynamic data, linear search excels due to its ability to handle real-time insertions without the need for maintaining sorted order, such as in resolving hash collisions via [linear probing](/page/Linear_probing) in hash tables.[](https://www.geeksforgeeks.org/dsa/open-addressing-collision-handling-technique-in-hashing/)[](https://dl.acm.org/doi/pdf/10.5555/324493.324618) For instance, it is commonly applied in small caches where frequent updates occur, allowing quick sequential checks without restructuring the data.[](https://www.geeksforgeeks.org/dsa/linear-search/)
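To illustrate the linear-probing connection, a minimal open-addressing sketch in Python (illustrative names, not a production hash table) resolves collisions by scanning slots sequentially from the hashed index:

```python
def probe_insert(table, key, value):
    """Insert (key, value) by probing slots linearly from hash(key)."""
    n = len(table)
    start = hash(key) % n
    for step in range(n):
        i = (start + step) % n
        if table[i] is None or table[i][0] == key:  # empty slot or same key
            table[i] = (key, value)
            return
    raise RuntimeError("table full")

def probe_lookup(table, key):
    """Scan from hash(key) until the key or an empty slot is found."""
    n = len(table)
    start = hash(key) % n
    for step in range(n):
        slot = table[(start + step) % n]
        if slot is None:          # empty slot: key was never inserted
            return None
        if slot[0] == key:        # found the (key, value) pair
            return slot[1]
    return None                   # table full, key absent
```

The probe loop is exactly a bounded linear search over the slot array, which is why small, mostly empty tables resolve collisions in very few comparisons.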
Embedded systems often employ linear search for its low computational overhead in resource-constrained environments.[](https://www.geeksforgeeks.org/dsa/linear-search/)
### Limitations and Alternatives
Linear search exhibits significant inefficiencies when applied to large datasets, as its [time complexity](/page/Time_complexity) of O(n) requires examining each element sequentially in the worst case, contrasting sharply with the O(log n) performance of [binary](/page/Binary) search on sorted arrays. This linear scaling makes it impractical for datasets exceeding thousands of elements, where the number of comparisons can grow prohibitively.[](https://research.ijcaonline.org/volume121/number3/pxc3904495.pdf) Additionally, linear search derives no advantage from [data preprocessing](/page/Data_Preprocessing) such as [sorting](/page/Sorting), rendering it suboptimal in environments where data organization is feasible.[](https://research.ijcaonline.org/volume121/number3/pxc3904495.pdf)
It is advisable to avoid linear search in scenarios involving sorted large arrays, where binary search provides logarithmic efficiency, or when searches occur frequently on static data, in which case pre-sorting followed by indexing or [binary search](/page/Binary_Search) is preferable.[](https://research.ijcaonline.org/volume121/number3/pxc3904495.pdf) In contrast to its utility in small or unsorted collections, these limitations highlight its restricted applicability in performance-critical systems.[](https://cs50.harvard.edu/college/2022/fall/notes/3/)
Prominent alternatives include binary search for sorted datasets, which halves the search space iteratively to achieve O(log n) time; hash tables, offering average-case O(1) lookups through key hashing despite potential worst-case degradation; and [interpolation search](/page/Interpolation_search), which estimates the target position based on uniform key distribution for potentially superior average-case performance over binary search in evenly spaced data.[](https://research.ijcaonline.org/volume121/number3/pxc3904495.pdf)[](https://dl.acm.org/doi/10.1145/359545.359557)
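For contrast, a logarithmic search over sorted data is available from Python's standard `bisect` module; the wrapper below (illustrative) mirrors the linear-search interface of returning an index or -1:

```python
import bisect

def binary_search(arr, target):
    """O(log n) lookup in a sorted list; returns index or -1 if absent."""
    i = bisect.bisect_left(arr, target)   # leftmost insertion point
    if i < len(arr) and arr[i] == target:
        return i
    return -1
```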
Under non-uniform access patterns following [Zipf's law](/page/Zipf's_law)—where a small [subset](/page/Subset) of elements receives disproportionately high access frequency—linear search can underperform even more severely without adaptations like [self-organization](/page/self-organization), as frequent items buried deep in the list incur repeated full traversals, elevating average costs beyond uniform expectations.[](https://ics.uci.edu/~dan/pubs/searchsurv.pdf) Modern programming libraries default to linear search implementations primarily for tiny datasets or unsorted structures, reserving more efficient methods for scalable needs.
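One such adaptation is the move-to-front heuristic, sketched here in Python (illustrative, not taken from the cited survey): each found element is promoted to the head of the list, so items accessed frequently under a skewed distribution migrate toward the cheap end of the scan.

```python
def mtf_search(items, target):
    """Linear search with move-to-front; mutates items, returns True if found."""
    for i, value in enumerate(items):
        if value == target:
            items.insert(0, items.pop(i))  # promote the hit to the front
            return True
    return False
```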
The core trade-off of linear search lies in its implementation simplicity, requiring minimal code and no data prerequisites, against its poor scalability; in contemporary applications like cloud-based [stream processing](/page/Stream_processing), it serves mainly as a fallback for unsortable or dynamically changing [data](/page/Data) where sorting overhead would dominate.[](https://cs50.harvard.edu/college/2022/fall/notes/3/)[](https://www-cs-faculty.stanford.edu/~knuth/taocp.html)