
Index notation

Index notation, also known as tensor index notation or indicial notation, is a mathematical formalism that represents vectors, matrices, and higher-order tensors using indices to denote components, enabling compact expression of algebraic operations and equations in physics and engineering. Building on earlier work in tensor calculus by Gregorio Ricci-Curbastro and Tullio Levi-Civita, it relies on the Einstein summation convention, where a repeated index in a product implies an implicit summation over its range (typically 1 to 3 in Cartesian coordinates), eliminating the need for explicit summation symbols and simplifying multi-dimensional calculations. The Einstein summation convention, a key aspect of index notation, was introduced by Albert Einstein in 1916 during his development of general relativity to handle tensor equations efficiently, providing a coordinate-independent framework that extends naturally to higher dimensions and facilitates operations like dot products (\vec{A} \cdot \vec{B} = A_i B_i), cross products ((\vec{A} \times \vec{B})_k = \epsilon_{ijk} A_i B_j, using the Levi-Civita symbol), and gradient or divergence computations. Key rules include distinguishing free indices (which vary and define the tensor rank) from dummy indices (repeated and summed over), ensuring no index appears more than twice in a term unless specified, and using comma notation (u_{i,j}) for partial derivatives. This notation is foundational in fields such as continuum mechanics, where it describes stress tensors (\sigma_{ij}) and equilibrium equations (\sigma_{ij,j} = 0), electromagnetism for field equations, and general relativity for spacetime metrics.
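For example, expanding the first component of the cross product makes the implicit sums explicit: (\vec{A} \times \vec{B})_1 = \epsilon_{ij1} A_i B_j = \epsilon_{231} A_2 B_3 + \epsilon_{321} A_3 B_2 = A_2 B_3 - A_3 B_2, since \epsilon_{231} = 1, \epsilon_{321} = -1, and every other \epsilon_{ij1} vanishes.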

In Mathematics

Vectors and One-Dimensional Arrays

In index notation, one-dimensional arrays, commonly referred to as vectors in linear algebra, are represented as ordered sequences of scalar components denoted by a lowercase letter with a single subscript, such as a_i, where the index i specifies the position and ranges from 1 to n, the dimension of the vector. This notation treats the vector as a list of numbers, allowing precise reference to individual elements without explicit coordinate systems beyond the indexing. For instance, consider the vector \mathbf{a} = (10, 8, 9, 6, 3, 5), which has n = 6; its components are then a_1 = 10, a_2 = 8, a_3 = 9, a_4 = 6, a_5 = 3, and a_6 = 5. Such a representation facilitates the handling of vectors as arrays in algebraic manipulations. Basic operations on vectors in index notation emphasize component-wise actions. Element-wise addition of two vectors \mathbf{a} and \mathbf{b} to form \mathbf{c} is expressed as c_i = a_i + b_i for each i = 1 to n, yielding n independent scalar equations that collectively define the resulting vector. Similarly, scalar multiplication follows (k \mathbf{a})_i = k a_i. In mathematical contexts, the index range conventionally begins at 1, aligning with the natural numbering of positions in sequences, though adjustments may occur in specific applications. Summation over indices is not implied in this basic form and requires explicit indication, distinguishing it from more advanced conventions. The subscript notation for vector components emerged in 19th-century linear algebra texts to denote coordinate representations, with early systematic use appearing in Hermann Grassmann's Die lineale Ausdehnungslehre (1844), which employed indices for vector components in higher-dimensional extensions. This approach was further developed by J. Willard Gibbs in his vector analysis notes around 1881–1884, solidifying its role in coordinate-based algebra.
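A minimal Python sketch of these component-wise rules, using the example vector above (the values of b and k are arbitrary illustrations; Python indices start at 0 rather than 1):

    # Component-wise operations written in index notation:
    # c_i = a_i + b_i and (k*a)_i = k * a_i, with i = 1..n (0..n-1 in Python).
    a = [10, 8, 9, 6, 3, 5]
    b = [1, 2, 3, 4, 5, 6]

    n = len(a)
    c = [a[i] + b[i] for i in range(n)]   # element-wise addition
    ka = [2 * a[i] for i in range(n)]     # scalar multiplication with k = 2

    print(c)   # [11, 10, 12, 10, 8, 11]
    print(ka)  # [20, 16, 18, 12, 6, 10]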

Matrices and Two-Dimensional Arrays

In linear algebra, matrices are defined as rectangular arrays of numbers arranged in rows and columns, represented using double-index notation with two subscripts a_{ij}, where i denotes the row ranging from 1 to m (the number of rows) and j denotes the column ranging from 1 to n (the number of columns). This structure allows precise access to individual elements, such as the entry in the i-th row and j-th column. Matrices are typically denoted by uppercase letters in boldface, such as \mathbf{A}, to distinguish them from scalars and vectors, which use italicized lowercase letters or bold lowercase with a single index. Building on vector notation, a matrix can be viewed as an array of row vectors, where the i-th row is itself a vector \mathbf{a}_i = (a_{i1}, a_{i2}, \dots, a_{in}). For example, consider the 4 \times 3 matrix \mathbf{A} = \begin{pmatrix} 9 & 8 & 6 \\ 1 & 2 & 7 \\ 4 & 9 & 2 \\ 6 & 0 & 5 \end{pmatrix}, where a_{11} = 9 is the element in the first row and first column, and a_{23} = 7 is the element in the second row and third column. The double-subscript notation for matrices has roots in early 19th-century work on determinants and linear systems, with Augustin-Louis Cauchy using a_{ij} as early as 1812. It was systematized by Arthur Cayley in his 1858 memoir on the theory of matrices, establishing matrices as algebraic objects with indexed components. A fundamental operation on matrices is element-wise addition, applicable to two matrices \mathbf{A} and \mathbf{B} of the same dimensions m \times n. The resulting matrix \mathbf{C} = \mathbf{A} + \mathbf{B} has entries given by c_{ij} = a_{ij} + b_{ij} for all i = 1, \dots, m and j = 1, \dots, n, producing m \times n independent equations that define the sum. This operation preserves the rectangular structure and is both commutative and associative, mirroring scalar addition.
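A short Python sketch of double-index access and element-wise addition, using nested lists and the 4 \times 3 matrix above (\mathbf{B} is an arbitrary same-shape matrix chosen for illustration):

    # The entry a_{ij} of the example matrix; Python indices start at 0,
    # so a_{23} (row 2, column 3 in math convention) is A[1][2].
    A = [[9, 8, 6],
         [1, 2, 7],
         [4, 9, 2],
         [6, 0, 5]]
    assert A[0][0] == 9   # a_11
    assert A[1][2] == 7   # a_23

    # Element-wise addition: c_{ij} = a_{ij} + b_{ij} for all i, j.
    B = [[1, 1, 1],
         [2, 2, 2],
         [3, 3, 3],
         [4, 4, 4]]
    m, n = len(A), len(A[0])
    C = [[A[i][j] + B[i][j] for j in range(n)] for i in range(m)]
    print(C)  # [[10, 9, 7], [3, 4, 9], [7, 12, 5], [10, 4, 9]]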

Tensors and Higher-Dimensional Arrays

In index notation, tensors extend the concepts of vectors and matrices to multi-dimensional arrays that require multiple indices to specify their components. A tensor of order k (also referred to as rank k) is formally defined as a multidimensional or N-way array, where N = k \geq 0, with elements accessed via indices i_1, i_2, \dots, i_k, each ranging from 1 to the size of the corresponding dimension (rank-0 for scalars, rank-1 for vectors, rank-2 for matrices, and higher for more complex structures). This notation allows tensors to represent data with more than two dimensions, capturing relationships in multi-way structures such as those arising in scientific computing and physical modeling. The order or rank of a tensor indicates the number of indices needed to identify a unique component, distinguishing it from lower-order objects: rank-1 tensors correspond to vectors (one index), rank-2 to matrices (two indices), and higher ranks to more complex multi-way data arrays. For instance, a rank-3 tensor T_{ijk} in three dimensions has 3^3 = 27 components, with each index i, j, k typically running from 1 to 3. Components of tensors are conventionally denoted using lowercase letters with multiple subscripts placed below the symbol, such as t_{i_1 i_2 \dots i_k}, where the absence of repeated indices implies no summation is performed. The index notation for tensors was developed in the late 19th and early 20th centuries as part of absolute differential calculus, primarily by Gregorio Ricci-Curbastro in works from the 1880s to 1900, with refinements by Tullio Levi-Civita, enabling the compact handling of multilinear forms in differential geometry and physics. Basic operations on tensors, such as addition, are performed component-wise across matching indices. For a rank-3 tensor addition, if A and B are tensors with dimensions m \times n \times p, the resulting tensor C has components given by c_{ijk} = a_{ijk} + b_{ijk}, where i = 1 to m, j = 1 to n, and k = 1 to p. This preserves the multi-dimensional structure without altering the index ranges. Tensors find applications in continuum mechanics, where higher-order tensors describe complex physical phenomena, such as the stress tensor that quantifies internal forces within materials, though the standard stress tensor is rank-2; extensions to higher ranks appear in advanced models of material behavior.
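A brief NumPy sketch of these definitions (the shapes and values are arbitrary illustrations):

    import numpy as np

    # A rank-3 tensor t_{ijk} with each index running over 3 values
    # has 3**3 = 27 components.
    T = np.arange(27).reshape(3, 3, 3)
    print(T.ndim, T.size)    # 3 27
    print(T[0, 1, 2])        # the component t_{123} (zero-based in Python) -> 5

    # Component-wise addition c_{ijk} = a_{ijk} + b_{ijk} for m x n x p tensors.
    A = np.ones((2, 3, 4))
    B = 2 * np.ones((2, 3, 4))
    C = A + B                # every component equals 3
    print(C.shape)           # (2, 3, 4)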

Advanced Mathematical Applications

Einstein Summation Convention

The Einstein summation convention, also known as Einstein notation, is a notational convention in tensor calculus where a repeated index in a mathematical expression implies an implicit summation over that index, eliminating the need for explicit summation symbols. Specifically, when an index appears exactly twice in a term—typically once as a subscript (lower index) and once as a superscript (upper index) in contexts involving metrics, or both as subscripts in Cartesian spaces—the expression denotes a sum over all possible values of that index, ranging from 1 to the dimension of the space. For instance, the scalar product of two vectors \mathbf{a} and \mathbf{b} in three dimensions is written as a_i b_i = \sum_{i=1}^3 a_i b_i, where the repeated i indicates the summation without the explicit \sum symbol. This convention was introduced by Albert Einstein in his 1916 paper "Die Grundlage der allgemeinen Relativitätstheorie," where it served to streamline the complex tensor equations of general relativity by suppressing summation signs and reducing notational clutter. Einstein employed it to handle the multi-index contractions essential for describing spacetime curvature, marking a significant simplification over earlier explicit summation methods used in tensor calculus. The rules of the convention are precise to ensure unambiguous interpretation: exactly one pair of repeated indices per term triggers the summation, and no more than two occurrences of the same index are allowed in a single term to avoid confusion; indices that appear only once in an expression are termed "free indices" and determine the rank (or tensor order) of the resulting object. For example, in the expression for the j-th component of a cross product or higher operations, free indices like i or j remain unsummed, while dummy indices (repeated ones) are contracted. Violation of these rules, such as triple repetition, requires explicit clarification or reformulation. Illustrative examples highlight its application. The dot product of vectors \mathbf{a} and \mathbf{b} is compactly expressed as \mathbf{a} \cdot \mathbf{b} = a_i b^i, where the metric raises one index, implying summation over i. For matrix multiplication, the ij-component of the product \mathbf{C} = \mathbf{A} \mathbf{B} is c_{ij} = a_{ik} b_{kj}, with summation over the repeated k (from 1 to the shared matrix dimension), transforming what would be a verbose triple loop into a single concise equation. The primary advantages of the Einstein summation convention lie in its ability to reduce the verbosity of tensor equations, making them more readable and less prone to notational errors in fields like physics and engineering, where multi-index manipulations are routine. By implicitly handling contractions, it facilitates quicker derivations and manipulations in covariant formulations, as seen in general relativity and continuum mechanics, without altering the underlying mathematics.
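A brief NumPy sketch of the convention in executable form, using np.einsum, whose subscript strings mirror the index notation (the numerical values are illustrative):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    # Dot product a_i b_i: the repeated index i is summed implicitly.
    print(np.einsum('i,i->', a, b))        # 32.0

    # Matrix product c_ij = a_ik b_kj: k is the dummy (summed) index,
    # while i and j are free indices that survive in the result.
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])
    print(np.einsum('ik,kj->ij', A, B))    # [[19. 22.] [43. 50.]]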

Covariant and Contravariant Indices

In tensor notation, indices are distinguished as covariant or contravariant based on their transformation properties under changes of coordinates. Covariant indices, denoted by subscripts (e.g., T_{ij}), transform in the same manner as the coordinate basis vectors, following the rule T'_{ij} = \frac{\partial x^k}{\partial x'^i} \frac{\partial x^l}{\partial x'^j} T_{kl}. Contravariant indices, denoted by superscripts (e.g., T^{ij}), transform inversely to the basis, as T'^{ij} = \frac{\partial x'^i}{\partial x^k} \frac{\partial x'^j}{\partial x^l} T^{kl}. For a covector, such as the differential form \omega_i, the components carry a lower index and transform covariantly, while a position vector x^i has contravariant components with an upper index. Tensors of higher rank incorporate both types of indices, with the total transformation determined by the number of each. A tensor with two contravariant indices, T^{ij}, is fully contravariant, while T_{kl} is fully covariant; mixed tensors, such as a (1,1) tensor T^i_j, have one of each and transform as T'^i_j = \frac{\partial x'^i}{\partial x^k} \frac{\partial x^l}{\partial x'^j} T^k_l. This notation ensures the tensor's multilinearity and invariance under coordinate changes, preserving the geometric object it represents. The metric tensor g_{ij} provides a mechanism to raise or lower indices, interconverting between covariant and contravariant forms. To raise a covariant index, one uses the contravariant metric g^{ij}, as in v^i = g^{ij} v_j, where the inverse metric satisfies g^{ik} g_{kj} = \delta^i_j. Conversely, lowering a contravariant index yields v_i = g_{ij} v^j. In Euclidean space with the flat metric g_{ij} = \delta_{ij}, this operation is trivial, as covariant and contravariant components coincide numerically. However, in non-Euclidean settings like the spacetime of relativity, the distinction is essential. The Minkowski metric \eta_{\mu\nu} = \operatorname{diag}(1,-1,-1,-1) governs 4-vectors, such as the infinitesimal displacement dx^\mu = (c dt, dx, dy, dz), whose contravariant components transform under Lorentz boosts, while the covariant form dx_\mu = \eta_{\mu\nu} dx^\nu includes the metric's signature to maintain invariance of the interval ds^2 = \eta_{\mu\nu} dx^\mu dx^\nu. This ensures physical quantities like the interval remain coordinate-independent. These index conventions are fundamental in general relativity and differential geometry, where they facilitate the description of curved spacetimes and ensure tensor equations are form-invariant. In general relativity, the metric g_{\mu\nu} defines geometry, enabling covariant derivatives and the Einstein field equations, which rely on precise index positioning to capture gravitational effects. In differential geometry, they underpin the transformation laws for tensors on manifolds, preserving intrinsic properties like distances and angles under diffeomorphisms.
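A minimal NumPy sketch of index lowering and the invariant interval (the 4-vector components are illustrative values, and np.einsum stands in for the implied summation):

    import numpy as np

    # Minkowski metric eta_{mu nu} = diag(1, -1, -1, -1); for this metric
    # the inverse (contravariant) form has the same components.
    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    # A contravariant 4-vector x^mu = (c*t, x, y, z); values are illustrative.
    x_up = np.array([2.0, 1.0, 0.5, 0.0])

    # Lowering the index: x_mu = eta_{mu nu} x^nu flips the spatial signs.
    x_down = np.einsum('mn,n->m', eta, x_up)
    print(x_down)                              # [ 2.  -1.  -0.5 -0. ]

    # The interval ds^2 = eta_{mu nu} x^mu x^nu = x_mu x^mu is invariant.
    print(np.einsum('m,m->', x_down, x_up))    # 2*2 - 1 - 0.25 - 0 = 2.75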

In Computing

Array Indexing Principles

In computing, array indexing refers to the process of accessing individual elements stored in contiguous memory locations using integer offsets from a base address. The memory address of an element in a one-dimensional array is calculated using the formula: address = base address + (index × element size), where the index denotes the position of the element relative to the start of the array and the element size is the number of bytes occupied by each item. For instance, consider a one-dimensional array of integers (each 4 bytes) with a base address of 3000; the address of the element at index i is given by 3000 + 4i. A key distinction in array indexing arises between zero-based and one-based conventions. In zero-based indexing, prevalent in languages such as C, indices run from 0 to n-1 for an array of n elements, aligning with pointer arithmetic where the first element is at offset 0 from the base address. This contrasts with one-based indexing, common in mathematics where sequences often start at 1; computing generally favors zero-based indexing for its simplicity in address calculations, as the index multiplies directly by the element size without adjustment. For multidimensional arrays, elements are linearized into a one-dimensional sequence in memory using either row-major or column-major order. In row-major order, elements of each row are stored contiguously, with the address of element a_{ij} in a 2D array calculated as base address + (i \times \text{number of columns} + j) \times \text{element size}. Column-major order, conversely, stores elements of each column contiguously, reversing the index progression for better locality in column-wise operations. These storage conventions ensure efficient traversal but require careful mapping to compute correct addresses. Bounds checking verifies that indices fall within valid ranges (e.g., 0 to n-1 for zero-based arrays) to prevent out-of-bounds access, which can lead to memory corruption, security vulnerabilities, or program crashes. Such checks are crucial in preventing exploits like buffer overflows, a prevalent class of memory errors in low-level languages. The evolution of array indexing traces back to the 1950s with Fortran, which adopted one-based indexing to mirror mathematical notation and simplify scientific computations. Subsequent languages like BCPL in the 1960s introduced zero-based indexing for efficient memory addressing, influencing B and then C in the 1970s, which popularized zero-based defaults in modern computing for their alignment with hardware offsets. Column-major storage originated in early Fortran implementations to optimize vector operations on hardware of the era.
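A small Python sketch of the address formula (the helper name and the base address are illustrative; NumPy's strides expose the same bookkeeping):

    # Flat offset of element a[i][j] in row-major storage:
    # address = base + (i * n_cols + j) * element_size.
    def row_major_address(base, i, j, n_cols, element_size):
        return base + (i * n_cols + j) * element_size

    # 4-byte integers, base address 3000, a 2D array with 3 columns:
    print(row_major_address(3000, 0, 0, 3, 4))  # 3000
    print(row_major_address(3000, 1, 2, 3, 4))  # 3000 + (1*3 + 2)*4 = 3020

    # NumPy exposes the same idea through strides (bytes per index step).
    import numpy as np
    a = np.zeros((3, 3), dtype=np.int32)
    print(a.strides)  # (12, 4): 12 bytes to the next row, 4 to the next column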

Multidimensional Arrays in Programming

In programming, multidimensional arrays extend index notation to represent data structures with multiple dimensions, such as matrices or higher-order tensors, through declarations that specify sizes in brackets. For instance, in C, the declaration int a[3][4]; creates a two-dimensional array capable of holding 12 elements, arranged as 3 rows of 4 columns each, with individual elements accessed using the notation a[i][j] where i ranges from 0 to 2 and j from 0 to 3. These arrays are typically allocated as a single contiguous block in memory using row-major order, where elements within each row are stored sequentially before moving to the next row. This layout optimizes sequential access along rows, as seen in cache-friendly operations common in numerical computing. For a 3x3 integer matrix declared as int m[3][3];, the memory address of element m[i][j] is computed as the base address plus (i \times 3 + j) \times \text{sizeof(int)}, ensuring efficient linear indexing into the flattened storage. In scientific computing and machine learning, index notation, particularly the Einstein summation convention, is implemented to handle tensor operations efficiently. Libraries such as NumPy, TensorFlow, and PyTorch provide the einsum function, which uses index notation to specify tensor contractions, outer products, and other operations compactly. For example, in NumPy, the dot product of two vectors \vec{A} and \vec{B} can be computed as np.einsum('i,i->', A, B), where repeated indices like i imply summation, mirroring the mathematical convention without explicit loops. This approach simplifies complex expressions in applications like neural network layers and physical simulations. Element-wise operations on multidimensional arrays rely on nested loops to iterate over indices, enabling straightforward manipulations like addition or scaling without requiring specialized libraries for basic cases. For example, adding two 2D arrays a and b of size [m][n] involves a double loop: for each i from 0 to m-1 and j from 0 to n-1, compute c[i][j] = a[i][j] + b[i][j], which directly translates index notation to sequential memory access, as sketched below. A key challenge arises with jagged arrays, which are implemented as arrays of arrays (e.g., int** jagged in C) allowing rows of unequal lengths, leading to non-contiguous memory that requires explicit stride calculations—such as offsets based on each sub-array's size—for efficient access and traversal. Modern languages and libraries extend support to higher dimensions, facilitating three-dimensional or greater arrays in applications like scientific simulations, where a 3D array might model spatial grids in fluid dynamics or volumetric data in medical imaging, with a declaration like double vol[10][20][30]; and access via vol[x][y][z].
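The double loop just described, sketched with plain Python lists (the sizes and values are placeholders):

    # c[i][j] = a[i][j] + b[i][j], iterating rows (i) then columns (j),
    # which matches the row-major traversal order of the underlying storage.
    m, n = 2, 3
    a = [[1, 2, 3], [4, 5, 6]]
    b = [[10, 20, 30], [40, 50, 60]]
    c = [[0] * n for _ in range(m)]

    for i in range(m):
        for j in range(n):
            c[i][j] = a[i][j] + b[i][j]

    print(c)  # [[11, 22, 33], [44, 55, 66]]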

Variations Across Languages

In C and C++, array indexing is zero-based, meaning the first element is accessed at index 0, and multidimensional arrays follow row-major order where elements are stored contiguously by rows. This allows efficient pointer arithmetic, such as accessing the i-th element of a one-dimensional array via *(a + i), which directly computes the memory offset from the base address. Python employs zero-based indexing for both built-in lists and NumPy arrays, supporting flexible slicing operations like a[i:j] for subsets and a[i,j] for two-dimensional access, with additional features such as broadcasting to handle operations across arrays of different shapes without explicit loops. NumPy's advanced indexing, including boolean and integer array indices, further enhances tensor-like manipulations in scientific computing, complemented by einsum for Einstein notation-based operations. Java also uses zero-based indexing for arrays, with strict static typing requiring declaration of array dimensions at compile time, as in int[][] matrix = new int[rows][cols]; followed by access via matrix[i][j]. Unlike C++, Java lacks direct pointer manipulation, emphasizing bounds checking to prevent runtime errors from invalid indices. Fortran, particularly in its modern standards, defaults to one-based indexing where the first element is at index 1, and arrays are stored in column-major order, optimizing access for scientific simulations by prioritizing contiguous storage along columns. This convention, retained from early versions, allows explicit specification of index bounds, such as REAL :: array(-1:1, 0:10), supporting non-standard ranges for domain-specific modeling. Julia maintains one-based indexing by default, aligning with mathematical conventions, and supports custom index offsets for arrays, including multidimensional tensors via Cartesian indexing like A[i,j,k]. In GPU-accelerated contexts, libraries like CuArrays extend this indexing to devices, preserving Julia's semantics for high-performance tensor operations without shifting to zero-based conventions. A notable trend since the 1970s, influenced by the design of C, has been the widespread adoption of zero-based indexing in most general-purpose languages to simplify pointer arithmetic and memory addressing, though domain-specific languages like Fortran and Julia retain one-based indexing for compatibility with scientific and mathematical workflows. Modern libraries, such as NumPy in Python, bridge these differences by providing tensor support that abstracts away storage order variations.
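As a minimal illustration of two of these conventions side by side, NumPy defaults to zero-based indexing with C-style row-major storage but can emulate Fortran's column-major layout (the array contents are illustrative):

    import numpy as np

    # Zero-based indexing and slicing, as in C, C++, Java, and Python itself.
    a = np.arange(12, dtype=np.int64).reshape(3, 4)
    print(a[0, 0], a[2, 3])      # 0 11 -> first and last elements
    print(a[1, 1:3])             # [5 6] -> a slice of row 1

    # The same values in Fortran-style column-major layout: the strides
    # (bytes to step per index) swap from (32, 8) to (8, 24) for 8-byte ints.
    f = np.asfortranarray(a)
    print(a.strides, f.strides)  # (32, 8) (8, 24)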
