Array
An array is a systematic arrangement of similar objects, usually in rows and columns.[1] The term is used across various fields. In mathematics and computing, it refers to ordered collections of data, such as matrices or data structures for efficient storage and access. In physical sciences and engineering, arrays describe configurations like antenna or telescope arrays for signal processing. Biological applications include DNA microarrays for gene analysis and protein arrays for diagnostics. Other uses appear in music (e.g., sound arrays) and military contexts (e.g., historical formations).

Mathematics and Computing
Mathematical arrays
In mathematics, an array is defined as a systematic arrangement of numbers, symbols, or expressions organized in rows and columns, forming a rectangular structure that facilitates organized data representation and computation.[1] This concept is often used interchangeably with the term "matrix" in elementary contexts, where a matrix specifically denotes a two-dimensional array equipped with algebraic operations, though arrays can extend to higher dimensions as ordered lists of lists with uniform lengths at each level.[1] One-dimensional arrays, such as row vectors (arranged horizontally) or column vectors (arranged vertically), represent linear sequences, while multidimensional arrays generalize this to tensors in advanced settings.[2]

The historical development of mathematical arrays traces back to early tabulations in the 18th and 19th centuries, where they served as tools for organizing complex calculations. Leonhard Euler, in his work around 1782, explored square arrays of symbols known as Graeco-Latin squares, which are orthogonal arrangements ensuring unique pairings in rows and columns, laying groundwork for combinatorial designs. Carl Friedrich Gauss advanced their application in 1809 through his "Theoria Motus Corporum Coelestium," where he employed array-like structures to solve systems of linear equations via least squares methods, treating observations as rectangular tabulations for astronomical data reduction.[3] These early uses evolved into the matrix theory formalized by Arthur Cayley in the 1850s, which established arrays as foundational objects of linear algebra.[3]

Key properties of mathematical arrays include indexing, which assigns a position to each element (denoted A_{ij} for the element in the i-th row and j-th column of a two-dimensional array), and support for various operations when dimensions align. For addition, if two arrays A and B have the same dimensions, their sum C = A + B is defined component-wise such that C_{ij} = A_{ij} + B_{ij} for all i, j.[2] Scalar multiplication scales each element by a constant k, yielding (kA)_{ij} = k \cdot A_{ij}, while transposition swaps rows and columns, producing A^T where (A^T)_{ij} = A_{ji}.[2] These operations preserve the array structure and enable manipulations like solving linear systems, where a coefficient array A (e.g., a 2×2 matrix \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}) multiplies a column vector \mathbf{x} to equal another vector \mathbf{b}, as in A\mathbf{x} = \mathbf{b}.[2]

Examples illustrate these concepts: a row vector like [1, 2, 3] arrays scalars horizontally for sequence representation, a column vector \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} does so vertically for vector spaces, and a simple 2D array such as \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} models coefficients in linear equations, solvable via methods like Gaussian elimination.[2]

In statistics, arrays underpin contingency tables, which are two-dimensional count arrays displaying frequencies of categorical variables' joint occurrences, enabling analyses like chi-squared tests for independence. Data matrices, as rectangular arrays of observations and variables, further support multivariate statistical techniques, such as principal component analysis.[4]
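As a brief worked illustration of these operations, take A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} together with a second array B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} and a right-hand side \mathbf{b} = \begin{pmatrix} 5 \\ 11 \end{pmatrix}; the values of B and \mathbf{b} are arbitrary choices made for the example.

A + B = \begin{pmatrix} 1+5 & 2+6 \\ 3+7 & 4+8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}, \qquad 2A = \begin{pmatrix} 2 & 4 \\ 6 & 8 \end{pmatrix}, \qquad A^T = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}

For the linear system A\mathbf{x} = \mathbf{b}, one step of Gaussian elimination (subtracting 3 times the first row from the second) gives -2 x_2 = -4, so x_2 = 2 and, by back-substitution, x_1 = 5 - 2 \cdot 2 = 1.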
Arrays in computer science

In computer science, an array is a fundamental data structure that stores a fixed-size collection of elements of the same data type in a contiguous block of memory, allowing efficient access via indices that typically start from 0.[5][6] This structure enables direct indexing to retrieve or modify elements, making it suitable for scenarios where the number of elements is known in advance and random access is frequent.[7]

Arrays come in various types to accommodate different needs. Static arrays have a fixed size determined at compile time, with memory allocated once and not resizable during execution, while dynamic arrays allow resizing at runtime through mechanisms like heap allocation, though this may involve reallocating and copying the elements.[8] One-dimensional arrays represent linear sequences, such as a list of numbers, whereas multidimensional arrays represent grids or matrices, like a 2D array of image pixels accessed with two indices such as array[row][column].[9] Jagged arrays, a variant of multidimensional arrays, consist of sub-arrays of varying lengths within a single dimension, enabling irregular structures without wasting space on a rectangular layout.[10]

Key operations on arrays include initialization, which sets all elements to a default value; access and modification, both achieving O(1) time complexity due to direct index calculation; insertion and deletion, which require shifting elements and thus take O(n) time in the worst case; and searching, which is O(n) for a linear scan but O(log n) for binary search on a sorted array.[11][12] Traversal, a common operation, can be implemented with a simple loop, as shown in the following pseudocode:

for i from 0 to length-1:
    process array[i]

This iterates through all elements sequentially in O(n) time.[13]

Memory management for arrays relies on contiguous allocation, where elements occupy sequential addresses to facilitate cache-friendly access and constant-time indexing via offset calculations. Many programming languages incorporate bounds checking to verify indices before access, preventing buffer overflows that could lead to security vulnerabilities or crashes, though this adds overhead in performance-critical code.[14]

The concept of arrays originated in the 1950s with the development of FORTRAN, the first high-level programming language designed for scientific computing, where arrays enabled efficient numerical processing on early computers.[15] Over time, arrays evolved to support parallelism in modern languages, such as through coarrays in Fortran standards, allowing distributed memory access across processors for high-performance computing applications.[16] Static arrays face limitations due to their fixed size, which can lead to inefficiency or failure if the required capacity changes unpredictably, prompting alternatives like linked lists that offer dynamic sizing at the cost of slower access times.[17][18]
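The following C sketch illustrates the basic operations described above on a fixed-size array; the element values, capacity, and insertion position are arbitrary choices for the example rather than part of any standard interface.

#include <stdio.h>

#define CAPACITY 8   /* fixed capacity of the static array */

int main(void) {
    int a[CAPACITY] = {10, 20, 30, 40};  /* contiguous storage; unused slots are zero */
    int length = 4;                      /* number of slots currently in use */

    /* Access and modification are O(1): the address is computed from the index. */
    printf("a[2] before: %d\n", a[2]);
    a[2] = 35;

    /* Insertion at index 1 shifts later elements to the right, O(n) in the worst case. */
    for (int i = length; i > 1; i--) {
        a[i] = a[i - 1];
    }
    a[1] = 15;
    length++;

    /* Traversal visits each element once, O(n). */
    for (int i = 0; i < length; i++) {
        printf("%d ", a[i]);   /* prints: 10 15 20 35 40 */
    }
    printf("\n");
    return 0;
}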
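To contrast with the fixed-size limitation just noted, here is one possible sketch in C of a dynamically growing array; the doubling growth strategy and initial capacity are illustrative assumptions, not a prescribed scheme.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t capacity = 2;   /* arbitrary starting capacity */
    size_t length = 0;
    int *data = malloc(capacity * sizeof *data);
    if (data == NULL) return 1;

    /* Append ten values, doubling the buffer whenever it fills.
       realloc may move and copy the whole block, so one append is
       O(n) in the worst case but O(1) amortized over many appends. */
    for (int value = 1; value <= 10; value++) {
        if (length == capacity) {
            capacity *= 2;
            int *grown = realloc(data, capacity * sizeof *grown);
            if (grown == NULL) { free(data); return 1; }
            data = grown;
        }
        data[length++] = value;
    }

    /* Indexed access works exactly as with a static array. */
    for (size_t i = 0; i < length; i++) {
        printf("%d ", data[i]);
    }
    printf("\n");

    free(data);
    return 0;
}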