
NumPy

NumPy is the fundamental package for scientific computing with Python, providing a powerful multidimensional array object called ndarray along with a collection of functions for numerical operations, including mathematical, statistical, linear algebra, and Fourier transform routines. It enables efficient handling of large datasets through vectorized operations, broadcasting, and indexing, achieving near-C speeds while maintaining Python's simplicity and readability. As an open-source library licensed under the BSD License, NumPy serves as the de facto standard for array computing in Python and underpins the broader scientific ecosystem, including libraries like SciPy, Pandas, and Matplotlib.

The origins of NumPy trace back to the mid-1990s with the development of the Numeric package at MIT, which introduced array objects and numerical functions for scientific applications in Python. In the early 2000s, Numarray emerged as an enhanced reimplementation of Numeric, adding support for features like memory-mapped files for handling large datasets from projects such as the Hubble Space Telescope. To resolve compatibility issues between these competing libraries, NumPy was created in 2005 through a community effort that unified and extended their capabilities, incorporating contributions from students, faculty, and researchers. This merger, driven by the need for a stable, high-performance array interface, was led by developer Travis Oliphant, who integrated Numarray's features into Numeric with significant modifications.

Since its inception, NumPy has evolved into a community-maintained project hosted on GitHub, supported by diverse contributors and funding from organizations such as the Moore and Sloan Foundations. Its core ndarray object supports homogeneous data types, fixed-size dimensions, and optimized C-based implementations for operations like sorting and I/O, making it interoperable with distributed computing, GPU acceleration, and sparse array libraries. NumPy's adoption has made it indispensable in fields ranging from astronomy—such as imaging the M87 black hole—to gravitational-wave detection and machine learning, establishing it as a cornerstone of modern scientific workflows in Python.

History

Origins and Early Projects

In the mid-1990s, Python was emerging as a versatile programming language, but its scientific computing ecosystem faced significant limitations compared to established tools like MATLAB and Fortran, particularly in handling efficient array operations and high-dimensional data structures for numerical tasks. To address these gaps and foster discussions on matrix and array processing in Python, Jim Hugunin and other developers initiated the matrix-sig mailing list in 1995, which served as a collaborative forum for exploring extensions to enable numerical computations within Python's general-purpose framework. This effort was motivated by the need to blend Python's scripting strengths with performant numerical capabilities, such as vectorized operations, to support applications like data analysis on supercomputers without relying solely on specialized languages.

Building on these discussions, Jim Hugunin, then a graduate student at MIT, developed Numeric in 1995 as the first comprehensive package for numerical computing in Python. Numeric introduced a multi-dimensional array object with a C-level API, enabling efficient element-wise operations and basic linear algebra routines, which allowed Python users to perform array manipulations at speeds comparable to compiled languages. It also supported broadcasting for operations on arrays of different shapes and provided mechanisms for interfacing with existing numerical libraries, marking a pivotal shift toward Python as a viable platform for scientific programming.

Key early contributors to Numeric and the matrix-sig included Paul F. Dubois, who served as the first maintainer and contributed to its documentation and extensions, and Konrad Hinsen, who provided core patches for array functionality and prototyping. Their involvement helped refine Numeric's design, drawing from collaborative ideas within the matrix-sig to overcome Python's initial shortcomings in memory efficiency and multidimensional indexing. These foundational efforts laid the groundwork for the ndarray concept, evolving Numeric's array structure into a more robust data model for later libraries.

Development of Numeric and Numarray

Numeric, first released in 1995 by Jim Hugunin at MIT, underwent significant expansion through the late 1990s and into the early 2000s, incorporating contributions from developers such as Jim Fulton, David Ascher, Paul Dubois, and Konrad Hinsen. This growth included enhancements to its core N-dimensional array object, a C-API for extensions, and support for efficient array operations like slicing and basic linear algebra, which laid the foundation for scientific computing in Python. In its later releases, Numeric reached version 23.8, with improved integration into Python 2.x releases, enabling seamless compatibility with the evolving Python ecosystem from Python 2.0 (2000) onward. Early versions of Numeric also featured precursors to universal functions (ufuncs), such as element-wise operations and reductions like add.reduce, which provided high-performance vectorized computations on arrays.

In 2001, Numarray was introduced by a team at the Space Telescope Science Institute, led by Perry Greenfield, Todd Miller, and Rick White, as a more flexible alternative to Numeric aimed at handling larger datasets and complex data structures. Motivated by Numeric's limitations in memory efficiency and flexibility, Numarray emphasized advanced memory handling, including support for memory-mapped files and object arrays that could store heterogeneous data types. Key innovations included record arrays for structured data and views, allowing efficient manipulation without data duplication, which improved performance for astronomical and scientific applications such as processing Hubble Space Telescope data.

The primary differences between Numeric and Numarray centered on design philosophy and capabilities: Numeric relied on fixed-size, homogeneous arrays optimized for speed on smaller datasets, while Numarray introduced record arrays and advanced views for greater flexibility with variable-sized and structured data. However, Numarray's incompatible C-API and slower performance for basic operations led to a split in the community, as developers faced challenges maintaining compatibility across the two libraries, fragmenting efforts in the scientific Python ecosystem. Travis Oliphant became involved in 2001, contributing to Numarray's development while ensuring compatibility with Numeric through his work on SciPy, which helped bridge the growing divide between the projects.

Unification and NumPy 1.0

In 2005, Travis Oliphant led efforts to unify the divergent Numeric and Numarray libraries into a single, comprehensive package called NumPy, aiming to incorporate the performance advantages of Numeric for small arrays and its rich C API alongside Numarray's advanced features such as flexible indexing and memory mapping. This initiative addressed the community fragmentation caused by the two competing implementations, fostering a standardized foundation for numerical computing in Python. The unification culminated in the release of NumPy 1.0 on October 26, 2006, which maintained backward compatibility with existing Numeric code while adopting Numarray's views and structured array capabilities to enable efficient data manipulation without unnecessary copying. The central ndarray object served as the unifying data structure, providing a versatile, multi-dimensional array with consistent behavior across operations.

Following the release, NumPy shifted to community-driven development under the SciPy project umbrella, with an initial core team led by Oliphant coordinating development through community contributions hosted on scipy.org. This structure evolved into the formal NumPy Steering Council by around 2012, ensuring sustained maintenance and decision-making by key contributors. NumPy saw early adoption in the SciPy library, which had originated in 2001 using Numeric but transitioned to NumPy post-unification to leverage its enhanced array functionality for scientific routines. The project progressed through milestones, including versions 1.1 to 1.5, with improvements in ufunc performance and datetime support.

Modern Era and NumPy 2.0

Following the unification under NumPy 1.0, the project entered a period of steady evolution from 2010 onward, with regular releases enhancing functionality, performance, and compatibility. A key milestone was NumPy 1.6.0 on May 14, 2011, which introduced a new iterator interface, generalized universal functions (gufuncs) enabling ufuncs to operate over multiple dimensions, and functions like einsum for optimized tensor operations. Subsequent versions built on this foundation through incremental improvements, including optimized linear algebra routines via integration with libraries like OpenBLAS starting in NumPy 1.14.0 (2018), which boosted performance for matrix operations on various hardware.

In 2020, NumPy 1.19.0 marked the end of Python 2 support, aligning with the broader ecosystem's shift to Python 3 and allowing developers to focus on modern features without legacy constraints. This release and those following, such as NumPy 1.20.0 (2021), emphasized type annotations for better type hinting and compatibility with evolving Python versions (3.7–3.9), while ongoing maintenance releases addressed performance bottlenecks, such as faster sorting algorithms and improved memory efficiency in array manipulations. These updates ensured NumPy remained a robust backend for scientific Python, with steady enhancements in vectorization and parallelism to handle larger datasets efficiently.

The modern era culminated in NumPy 2.0.0, released on June 16, 2024, representing the first major version bump since 2006 and introducing breaking changes for long-term consistency. This release overhauled the dtype system with dedicated StringDType and BytesDType for better handling of text and binary data, stabilized APIs for core functions like array creation and promotion, and deprecated inconsistent behaviors (e.g., lax type promotion rules) to prevent future ambiguities. An ABI break was included to enable foundational refactoring, though most user-facing code required minimal updates via a migration guide. These changes improved compatibility with downstream libraries and enhanced performance in string operations, which previously relied on object dtypes.

As of November 2025, the latest stable release is NumPy 2.3.5 (November 16, 2025), a patch version emphasizing reliability with bug fixes for edge cases in dtype handling, SIMD (single instruction, multiple data) optimizations for faster computations on modern CPUs, and enhancements to the random module for more efficient generation of random numbers and distributions. NumPy's adoption has surged, with billions of downloads annually on PyPI as of 2025, reflecting its central role in ecosystems like machine learning, where frameworks such as TensorFlow and PyTorch rely on NumPy arrays for tensor operations and data interchange.

Core Data Structures and Design

The ndarray Object

The ndarray (N-dimensional array) is NumPy's fundamental data structure, serving as a multidimensional, homogeneous container for fixed-size items of the same type. It organizes data along one or more dimensions, or axes, where each axis represents a direction of array traversal, and the overall structure is defined by a shape specifying the size of each dimension. Element access in an ndarray is managed through strides, a tuple of byte offsets indicating the step size for traversing each dimension in the underlying contiguous block of memory. This design enables efficient, vectorized operations without the need for explicit loops, forming the basis for NumPy's high-performance numerical computing capabilities.

Key attributes of an ndarray provide essential metadata about its structure and contents. The ndim attribute returns the number of dimensions as an integer, such as 2 for a matrix-like array. The shape attribute is a tuple of integers representing the lengths of each dimension, for example (3, 4) for an array with 3 rows and 4 columns. The size attribute gives the total number of elements across all dimensions, calculated as the product of the shape values. Additionally, itemsize specifies the memory footprint of a single element in bytes, depending on the dtype. These attributes are read-only properties that allow users to inspect array structure programmatically without altering the underlying data.

Arrays are typically created using the np.array() constructor, which takes an iterable (such as a list or nested sequence) and converts it into an ndarray, inferring the dtype if not specified. A core requirement is homogeneity: all elements must share the same data type (dtype), which can be explicitly set during creation to ensure consistency, such as np.array([[1, 2], [3, 4]], dtype=np.float64). This enforcement promotes uniform memory allocation and optimized computations, distinguishing ndarray from more flexible Python structures.

In contrast to Python lists, which are heterogeneous, dynamically resizable collections with per-element type checking, ndarray objects store data in a single contiguous block without per-element type verification. This contiguous layout and lack of per-object overhead yield significant efficiency gains, including reduced memory usage (often by a factor of 3-5 for numerical data) and faster access times due to cache-friendly locality. For instance, operations on large arrays can be orders of magnitude quicker than equivalent list-based computations, making ndarray ideal for scientific and data-intensive applications. Broadcasting extends these efficiencies by allowing operations between arrays of compatible shapes without data duplication.
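These attributes can be inspected directly on any array; a minimal sketch:
python
import numpy as np

a = np.array([[1, 2], [3, 4]], dtype=np.float64)  # 2x2 array of 64-bit floats
print(a.ndim)      # 2: number of dimensions
print(a.shape)     # (2, 2): length of each dimension
print(a.size)      # 4: total number of elements
print(a.itemsize)  # 8: bytes per float64 element
print(a.strides)   # (16, 8): bytes to step along each axis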

Data Types (dtypes)

NumPy provides a rich system of scalar data types, known as dtypes, which define the interpretation of data stored in arrays, ensuring homogeneity and efficient computation. These dtypes encompass numeric types for integers, floats, and complex numbers, as well as non-numeric types like booleans, objects, and strings. The core numeric dtypes include signed integers (int8, int16, int32, int64), unsigned integers (uint8 to uint64), floating-point numbers (float16, float32, float64), and complex numbers (complex64, complex128). Booleans are represented by bool (1 byte, values True or False), while the object dtype allows storage of arbitrary Python objects, though it sacrifices performance due to lack of homogeneity. String and bytes types include fixed-length strings (str_ or <U with specified width, e.g., <U10) and bytes (bytes_ or S with fixed width, e.g., S10), as well as void for arbitrary byte sequences.

The dtype object, an instance of numpy.dtype, encapsulates the properties of these types, including item size, byte order, and alignment. It can be created explicitly, such as np.dtype('float64') for a 64-bit floating-point type or np.dtype('i4') for a 32-bit signed integer, using type codes (e.g., f for float, i for signed integer) or full names. Type promotion rules determine the resulting dtype during operations between arrays or scalars of different types; for instance, adding an integer array to a float array promotes the result to float, as in np.int32(1) + np.float64(2.0) yielding a float64. These rules follow a hierarchy where integers promote to floats, and floats to complex, prioritizing precision while avoiding unnecessary upcasting.

Structured dtypes extend scalar types by composing them into records with named fields, enabling array elements to behave like lightweight structs. For example, np.dtype([('name', 'S10'), ('age', 'i4')]) defines a dtype with a 10-byte string field name and a 32-bit integer field age, allowing access via field names like arr['name']. This is particularly useful for heterogeneous data, such as tabular records, without resorting to the slower object dtype.

In NumPy 2.0, significant updates enhance dtype flexibility and consistency: a new StringDType supports variable-length UTF-8 strings, replacing inefficient object arrays for text data and integrating with a numpy.strings namespace for optimized ufuncs like concatenation and searching. Additionally, per NEP 50, promotion rules adopt stricter semantics by basing outcomes solely on dtypes rather than scalar values, removing legacy value-based upcasting (e.g., np.uint8(1) + 2 now yields uint8 instead of int64) to prevent silent precision loss and align with standards like the array API. Legacy dtype aliases such as int0 and str0 were removed for clarity. These changes, while breaking some older code, promote more predictable behavior in ndarray operations.
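A short sketch illustrating dtype construction, structured dtypes, and promotion as described above:
python
import numpy as np

# Explicit dtype construction
dt = np.dtype('float64')
print(dt.itemsize)  # 8 bytes per element

# Structured dtype with named fields
person = np.dtype([('name', 'S10'), ('age', 'i4')])
people = np.array([(b'Alice', 30), (b'Bob', 25)], dtype=person)
print(people['age'])  # [30 25]

# Type promotion: int32 combined with float64 yields float64
x = np.array([1, 2], dtype=np.int32)
y = np.array([0.5, 1.5], dtype=np.float64)
print((x + y).dtype)  # float64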

Memory Layout and Views

NumPy arrays, specifically the ndarray object, store data in a contiguous block of memory that is interpreted as multidimensional through metadata such as shape and strides. This memory layout allows for efficient access and manipulation by mapping N-dimensional indices to offsets in the one-dimensional memory buffer. The layout can follow either row-major order, also known as C order, where elements in the last dimension vary fastest and data is stored row by row, or column-major order, known as Fortran order, where elements in the first dimension vary fastest and data is stored column by column.

Strides define the memory layout by specifying the number of bytes to advance in the buffer to move one step along each axis. For an element at indices (i[0], i[1], ..., i[n]), the byte offset from the array's base is calculated as the sum of i[k] * strides[k] over each dimension k. In a contiguous C-order array of float64 elements (8 bytes each) with shape (2, 3), the strides would be (24, 8), meaning 24 bytes to the next row and 8 bytes to the next column. The itemsize of the data type influences stride calculations, as it determines the byte size of each element.

Arrays can be contiguous or non-contiguous in memory. A contiguous array occupies a single, uninterrupted block, checked via flags such as C_CONTIGUOUS for row-major layout (where the stride of the last axis equals the itemsize) or F_CONTIGUOUS for column-major layout (where the stride of the first axis equals the itemsize). Non-contiguous arrays arise from operations like transposing or slicing, where strides adjust to reinterpret the same buffer without copying data, potentially leading to strided access patterns. Both flags can be true for one-dimensional or empty arrays, or when certain dimensions have size 1.

Views provide a mechanism for creating new ndarray instances that share the underlying data buffer without duplication, enabling zero-copy operations for efficiency. For instance, arr.view() produces a view with potentially altered metadata like dtype or shape, while arr.T creates a transposed view by swapping axes and adjusting strides. Modifications to a view propagate to the original array and vice versa, as they reference the same memory. To manage memory ownership and prevent premature garbage collection of the shared buffer, views track the original array through the base attribute. If an ndarray is a view, base points to the owning array; otherwise, it is None for independent arrays or copies. This reference ensures the data buffer remains valid as long as any view exists.
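A brief sketch of strides, contiguity flags, views, and the base attribute in action:
python
import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)
print(a.strides)                 # (24, 8): 24 bytes to the next row, 8 to the next column
print(a.flags['C_CONTIGUOUS'])   # True: row-major, uninterrupted buffer

t = a.T                          # transposed view: strides swapped, no data copied
print(t.strides)                 # (8, 24)
print(t.base is a)               # True: t shares a's buffer

t[0, 1] = 99.0                   # writing through the view...
print(a[1, 0])                   # 99.0: ...modifies the original array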

Fundamental Operations

Array Creation and Initialization

NumPy offers a variety of functions to create instances of the ndarray object, the core data structure for multidimensional arrays, allowing users to generate arrays from iterables, fill them with specific values, produce sequences, or derive them from external sources or custom computations. These creation routines support specification of the array's shape, typically as a tuple defining dimensions (e.g., (3, 4) for a 3x4 array), and dtype, which determines the data type of elements (e.g., float64, int32) to ensure memory efficiency and precision.

The np.array() function is the fundamental way to create an ndarray from Python sequences like lists or tuples, inferring the shape from the input's nesting and defaulting to a copy of the data unless specified otherwise. For example, np.array([1, 2, 3]) produces a one-dimensional array of integers, while np.array([[1, 2], [3, 4]], dtype=float) yields a two-dimensional array with floating-point elements. In contrast, np.asarray() performs a similar conversion but avoids copying if the input is already an ndarray with a compatible dtype, promoting views for efficiency when working with existing arrays.

For arrays filled with constant values, np.zeros(shape, dtype=None, order='C') allocates memory and initializes all elements to zero, with the default dtype being float64 and order specifying the memory layout (C for row-major). Similarly, np.ones(shape, dtype=None, order='C') fills the array with ones, and np.full(shape, fill_value, dtype=None, order='C') uses a user-specified fill_value, which can be numeric or otherwise compatible with the dtype. These functions are essential for initializing arrays of arbitrary dimensions, such as np.zeros((3, 4), dtype=int) for a matrix of integers. np.empty(shape, dtype=None, order='C'), however, creates an array without initializing its contents, leaving arbitrary values from allocated memory, which is faster but requires immediate overwriting to avoid undefined behavior.

To generate sequences, np.arange(start=0, stop, step=1, dtype=None) produces a one-dimensional array of evenly spaced values up to but excluding stop, analogous to Python's range but returning an ndarray. For instance, np.arange(0, 10, 2) yields [0, 2, 4, 6, 8] with dtype inferred as int. np.linspace(start, stop, num=50, endpoint=True, dtype=None) instead creates an array with a fixed number of equally spaced samples between start and stop, inclusive by default, useful for uniform partitioning; np.linspace(0, 1, 5) results in [0.0, 0.25, 0.5, 0.75, 1.0].

Specialized creation includes np.eye(N, M=None, k=0, dtype=float, order='C'), which generates a two-dimensional identity-like matrix with ones on the main diagonal (or offset by k) and zeros elsewhere, where M defaults to N for square matrices. For example, np.eye(3, dtype=int) produces a 3x3 identity matrix of integers. Arrays can also be created from external data sources, such as np.fromfile(file, dtype=float, count=-1, sep='', offset=0), which reads raw binary or text data from a file into an ndarray, interpreting it according to the specified dtype and optional separator for text files. This is particularly useful for loading large datasets, as in np.fromfile('data.bin', dtype=float).

For programmatic generation, np.fromfunction(func, shape, dtype=None) constructs an array by applying a user-defined function to the indices of each position, with shape defining the output dimensions; for example, np.fromfunction(lambda i, j: i * j, (3, 3), dtype=int) creates a 3x3 array where each element is the product of its row and column indices.
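A compact sketch exercising the creation routines discussed above:
python
import numpy as np

z = np.zeros((2, 3), dtype=int)    # 2x3 array of zeros
f = np.full((2, 2), 7.5)           # constant-filled array
r = np.arange(0, 10, 2)            # [0 2 4 6 8]
l = np.linspace(0, 1, 5)           # [0.   0.25 0.5  0.75 1.  ]
i = np.eye(3, dtype=int)           # 3x3 identity matrix

g = np.fromfunction(lambda i, j: i * j, (3, 3), dtype=int)
print(g)
# [[0 0 0]
#  [0 1 2]
#  [0 2 4]]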

Indexing, Slicing, and Boolean Masking

NumPy provides several mechanisms for accessing and modifying elements or subsets of arrays, enabling efficient data manipulation without necessarily creating new arrays. Basic indexing allows selection of single elements using integer indices, similar to Python lists but extended to multidimensional arrays. For a one-dimensional array x = np.arange(10), x[2] retrieves the element at position 2 (value 2), while negative indices like x[-2] access from the end (value 8). In multidimensional arrays, such as x reshaped to (2, 5), x[1, 3] selects the element at row 1, column 3 (value 8). Basic indexing expressions that return subarrays produce views of the original array, meaning changes to the selected elements modify the source data, which aligns with NumPy's memory-efficient design using contiguous storage.

Slicing extends basic indexing to ranges, using the Python syntax start:stop:step across multiple dimensions. For instance, in the array x = np.arange(10), x[1:7:2] yields [1, 3, 5], selecting every second element from index 1 to 6. Omitting start or stop defaults to the array's beginning or end, and a step of -1 enables reversal, such as x[::-1] for the full array in reverse order. Like other basic indexing, slices return views, preserving a reference to the original data for potential mutability; to obtain an independent copy, the .copy() method must be invoked explicitly. The ellipsis (...) shorthand expands to full slices (:) for all unspecified dimensions, simplifying access in higher-dimensional arrays—for example, x[..., 0] is equivalent to x[:, :, 0] in a 3D array, selecting the first element along the last axis.

Advanced indexing, also known as fancy indexing, employs arrays or lists of integers to select non-contiguous or multiple elements simultaneously, always resulting in a copy rather than a view to ensure data independence. Using the same x array, x[[1, 3]] returns [1, 3], gathering elements at the specified positions. In two dimensions, the index arrays are broadcast against each other and paired: for y = np.arange(16).reshape(4, 4), y[np.array([0, 3]), np.array([0, 3])] yields [0, 15], the top-left and bottom-right elements. The np.newaxis object (or None) inserts a new axis of length 1, altering the dimensionality; for example, x[:, np.newaxis] transforms a 1D array into a column vector with shape (10, 1). This feature is particularly useful for reshaping selections without copying data unnecessarily, though combined with advanced indexing, it still produces copies.

Boolean masking facilitates conditional selection by applying a boolean array of the same shape as the target, where True positions are included and False are excluded, returning a flattened copy. For x = np.array([-2, 1, 5, 3]), x[x > 0] selects [1, 5, 3], filtering positive values. Masks can combine with slices or other indices, such as x[1:3][x[1:3] > 0] to apply conditions to subsets. Negation via ~ inverts the mask, e.g., x[~np.isnan(x)] removes NaN entries from an array with missing values. Unlike basic indexing, boolean operations always yield copies, ensuring modifications do not affect the original array, which supports safe conditional manipulations in scientific computing workflows.
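The view-versus-copy distinction can be verified directly; a minimal sketch:
python
import numpy as np

x = np.arange(10)
s = x[2:5]            # basic slice: a view into x
s[0] = 99
print(x[2])           # 99: modifying the view changed the original

y = np.arange(10)
f = y[[1, 3, 5]]      # fancy indexing: an independent copy
f[0] = -1
print(y[1])           # 1: the original is unchanged

m = y[y > 6]          # boolean mask: also a copy
print(m)              # [7 8 9]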

Broadcasting Rules

Broadcasting in NumPy is a mechanism that enables arithmetic operations between arrays of different shapes by implicitly expanding the smaller array along specified dimensions to match the shape of the larger one, without creating copies of the data. This implicit expansion allows for efficient vectorized computations, where loops are performed in compiled C code rather than in interpreted Python.

The broadcasting rules dictate compatibility between array shapes, starting from the trailing (rightmost) dimensions and proceeding leftward. Two dimensions are compatible if they are equal or if one of them is 1, in which case the array with the dimension of size 1 is conceptually replicated along that dimension to match the other. If the arrays have different numbers of dimensions, the shape of the shorter array is padded with ones on the left until both shapes have the same length for comparison. The resulting shape of the operation is determined by taking the maximum size in each dimension across the input shapes. In cases of incompatibility, NumPy raises a ValueError indicating that the operands could not be broadcast together. For example, adding a scalar (shape ()) to an array of shape (3, 3) treats the scalar as having shape (1, 1), which expands to (3, 3) by replicating the value across all elements.
python
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = 2.0
result = a + b  # Shape (3, 3), no error
# result = [[3., 4., 5.], [6., 7., 8.], [9., 10., 11.]]
In contrast, attempting to add arrays of shapes (3,) and (4,) fails because the dimensions (3 and 4) are neither equal nor 1.
python
c = np.array([1, 2, 3])
d = np.array([1, 2, 3, 4])
# result = c + d  # Raises ValueError: operands could not be broadcast together
A more complex compatible case involves a 3D array of shape (256, 256, 3) and a 1D array of shape (3,), where the latter expands along the leading dimensions to (256, 256, 3). Similarly, a 2D array of shape (5, 4) added to a scalar of shape (1,) results in shape (5, 4). Broadcasting can be made explicit using np.newaxis (equivalent to None) during indexing to insert a dimension of size 1, facilitating operations between otherwise incompatible arrays. For instance, adding a 1D array of shape (4,) to another of shape (3,) by inserting a new axis on the first yields a 2D result of shape (4, 3).
python
a = np.array([0.0, 10.0, 20.0, 30.0])
b = np.array([1.0, 2.0, 3.0])
result = a[:, np.newaxis] + b  # Shape (4, 3)
# result = [[ 1.,  2.,  3.],
#           [11., 12., 13.],
#           [21., 22., 23.],
#           [31., 32., 33.]]
Additionally, the np.broadcast function provides an explicit way to create a broadcast object from input arrays, which encapsulates the resulting shape and allows iteration over the broadcasted indices without performing the operation. This is useful for low-level control or when preparing output arrays for custom computations.
python
x = np.array([[1], [2], [3]])  # Shape (3, 1)
y = np.array([4, 5, 6])        # Shape (3,)
b = np.broadcast(x, y)
out = np.empty(b.shape)
out.flat = [xi + yi for (xi, yi) in b]  # iterating b yields broadcasted element pairs
# out = [[5., 6., 7.],
#        [6., 7., 8.],
#        [7., 8., 9.]]

Mathematical and Scientific Computing

Universal Functions (ufuncs)

Universal functions, or ufuncs, in NumPy are vectorized functions that perform element-wise operations on ndarrays, enabling efficient computation across arrays without explicit loops. These functions, such as np.add and np.sin, apply operations to each element individually while supporting automatic broadcasting to handle arrays of compatible shapes, type casting between data types, and other optimizations that enhance performance. Ufuncs are implemented in C for speed and integrated into Python, allowing seamless operation on large datasets and promoting extensibility through user-defined functions.

NumPy provides a wide array of built-in ufuncs categorized as unary or binary based on the number of input arguments. Unary ufuncs operate on a single array, including trigonometric functions like np.sin and np.cos for computing sines and cosines element-wise, and arithmetic functions such as np.negative for negation. Binary ufuncs take two arrays and produce an output array of the same shape, encompassing arithmetic operations like np.add for element-wise addition and np.subtract for subtraction, as well as comparison operations such as np.greater to check if elements in one array exceed those in another and np.equal for equality checks. These ufuncs integrate with broadcasting rules to align arrays of different dimensions effortlessly, ensuring vectorized execution remains efficient even for mismatched shapes.

Beyond direct element-wise application, ufuncs support advanced aggregation through methods like reduce, which iteratively applies the function along a specified axis to collapse dimensions, such as using np.add.reduce(arr, axis=0) to compute the sum of array elements along the first axis. For generating pairwise combinations, the outer method computes the outer product by applying the ufunc to every pair of elements from two input arrays, as in np.multiply.outer(a, b) to produce a matrix of products. Additionally, accumulate enables cumulative operations by maintaining a running result along an axis; for example, np.add.accumulate(arr) generates prefix sums that track the progressive total of elements. These methods extend ufunc versatility for tasks like statistical reductions and convolution-like computations, all while preserving the efficiency of vectorized processing.
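A short sketch of the reduce, accumulate, and outer methods:
python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6]])

# reduce: collapse an axis by repeated application of the ufunc
print(np.add.reduce(arr, axis=0))        # [5 7 9] (column sums)

# accumulate: running result along an axis
print(np.add.accumulate([1, 2, 3, 4]))   # [ 1  3  6 10] (prefix sums)

# outer: apply the ufunc to every pair of elements
print(np.multiply.outer([1, 2, 3], [10, 20]))
# [[10 20]
#  [20 40]
#  [30 60]]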

Linear Algebra Capabilities

NumPy's linalg submodule provides efficient implementations of common linear algebra operations, leveraging optimized BLAS and LAPACK libraries for performance on multidimensional arrays (ndarrays) with compatible shapes. These functions support batched operations on stacked arrays, enabling computations over multiple matrices simultaneously without explicit loops.

Basic matrix operations include the dot product via np.dot(a, b), which computes the sum of products of corresponding elements for vectors or performs matrix multiplication for 2D arrays, with inputs of shapes where the last dimensions are compatible (e.g., (m, n) and (n, p) yielding (m, p)). For dedicated matrix multiplication, np.matmul(a, b) was introduced in NumPy 1.10 and implements the semantics of Python 3.5's @ operator, handling 2D matrices as well as higher-dimensional tensors by treating the last two dimensions as matrices. Transposition is accessible directly via the arr.T attribute, swapping the two axes of a 2D array (e.g., shape (m, n) becomes (n, m)), or more generally via np.transpose(arr).

Key decompositions encompass eigenvalue computation with np.linalg.eig(a), which returns eigenvalues and right eigenvectors for square matrices of shape (..., M, M), solving the characteristic equation \det(A - \lambda I) = 0. Singular value decomposition is provided by np.linalg.svd(a), factoring a matrix of shape (..., M, N) into U \Sigma V^H where U and V^H are unitary, and \Sigma is diagonal with non-negative singular values, useful for dimensionality reduction and pseudoinverses. For solving systems, np.linalg.solve(a, b) efficiently computes the solution to the linear system A \mathbf{x} = \mathbf{b} for a coefficient matrix a of shape (..., M, M) and right-hand side b of shape (..., M) or (..., M, K), returning \mathbf{x} of shape (..., M) or (..., M, K), assuming A is invertible.
python
import numpy as np
A = np.array([[3, 2, 0], [1, -1, 0], [0, 5, 1]])
b = np.array([2, 4, -1])
x = np.linalg.solve(A, b)
print(x)  # Output: [ 2. -2.  9.]
The least-squares solution for overdetermined systems is handled by np.linalg.lstsq(a, b), minimizing \| A \mathbf{x} - \mathbf{b} \|^2 for a coefficient matrix a of shape (M, N) with M \geq N and b of compatible shape, returning the solution \mathbf{x}, the residuals, the rank of a, and its singular values. Additionally, the determinant is computed via np.linalg.det(a) for square matrices, using LU factorization internally to evaluate \det(A), which is zero if A is singular.
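As an illustration of lstsq, the following sketch fits a line y = m*x + c to four points by least squares; the data values are illustrative:
python
import numpy as np

# Columns of A are [x, 1], so A @ [m, c] approximates y
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-1.0, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T        # shape (4, 2), overdetermined

m, c = np.linalg.lstsq(A, y, rcond=None)[0]  # first element is the solution
print(m, c)  # approximately 1.0 and -0.95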

Statistical and Random Functions

NumPy provides a suite of functions for computing descriptive statistics on arrays, enabling efficient calculations such as means, standard deviations, and medians, which can be performed along specified axes or on flattened arrays. The np.mean function computes the arithmetic mean of array elements, supporting parameters like axis to specify the dimension for reduction and keepdims to preserve the array's shape. Similarly, np.std and np.var calculate the standard deviation and variance, respectively, with options for delta degrees of freedom (ddof) to adjust for sample versus population statistics, and they operate axis-wise or globally via flattening. The np.median function determines the middle value, sorting elements internally and handling even-length arrays by averaging the two central values, also configurable along axes. These functions leverage NumPy's vectorized operations, allowing seamless application to multidimensional arrays.

For correlation and covariance analysis, NumPy offers np.corrcoef and np.cov, which are essential for examining relationships in multivariate data represented as arrays. The np.corrcoef function returns the Pearson correlation coefficient matrix for input arrays, treating rows or columns as variables based on the rowvar parameter. In contrast, np.cov estimates the covariance matrix, accommodating optional frequency weights (fweights) and observation weights (aweights), and handles both single and paired input arrays to produce a symmetric output matrix. These functions are optimized for large datasets and center the data internally before computing products.

The numpy.random module facilitates pseudo-random number generation, supporting reproducibility and a variety of probability distributions for simulations and sampling. Seeding is managed via np.random.seed, which initializes the legacy global random state with an integer value to ensure deterministic outputs across runs, though modern usage recommends the default_rng constructor for better entropy handling. Basic distributions include np.random.normal for Gaussian samples, parameterized by mean (loc) and standard deviation (scale), and np.random.uniform for values in a specified interval [low, high). Permutations are handled by np.random.shuffle, which reorders array elements in-place without returning a new array. These generators produce array outputs of arbitrary shape via the size parameter, enabling direct integration with NumPy arrays.

Advanced features in the random module, introduced in NumPy 1.17, include independent random streams via the Generator class and BitGenerators like PCG64, which enhance parallelism and state management for high-performance applications. The np.random.multivariate_normal function draws samples from a multivariate Gaussian distribution, requiring a mean vector and covariance matrix as inputs, with batch generation controlled by the size parameter. This upgrade, outlined in NEP 19, shifts from a global state to independent generators, improving thread-safety and scalability in computational workflows.
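A minimal sketch of the modern Generator interface, with illustrative seed and distribution parameters:
python
import numpy as np

rng = np.random.default_rng(seed=42)       # modern Generator (PCG64), reproducible
u = rng.uniform(0.0, 1.0, size=5)          # 5 samples in [0, 1)
n = rng.normal(loc=0.0, scale=1.0, size=(2, 3))  # Gaussian samples

mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5], [0.5, 2.0]])
mv = rng.multivariate_normal(mean, cov, size=4)  # 4 draws from a 2D Gaussian
print(u.shape, n.shape, mv.shape)          # (5,) (2, 3) (4, 2)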

Advanced Usage and Integration

Multidimensional Array Operations

NumPy offers powerful tools for reorganizing and traversing multidimensional arrays, allowing users to reshape dimensions, combine or split arrays along axes, and iterate efficiently over elements while respecting the array's memory layout and broadcasting rules. These operations are essential for data preparation in scientific computing, as they enable flexible manipulation without unnecessary data copying, often returning views into the original array for performance gains.

Reshaping Operations

Reshaping functions in NumPy modify the dimensional structure of arrays without changing the underlying data elements, which is particularly useful for adapting arrays to different computational requirements in multidimensional contexts. The np.reshape function reconfigures an array to a specified new shape, provided the total number of elements remains the same; it first flattens the array in the specified order (default 'C' for row-major) and then refills the new shape. For instance, a 2D array of shape (3, 4) with 12 elements can be reshaped to (2, 6) as follows:
python
import numpy as np
a = np.arange(12).reshape(3, 4)
b = np.reshape(a, (2, 6))
print(b)
# Output:
# [[ 0  1  2  3  4  5]
#  [ 6  7  8  9 10 11]]
This returns a view if possible, avoiding data duplication. To reduce dimensionality to one, np.ravel flattens a multidimensional array into a 1D array in row-major order by default, returning a view when the array is contiguous in memory; alternatively, a.flatten() always returns a copy. For a 2D array like np.array([[1, 2], [3, 4]]), np.ravel(a) yields [1, 2, 3, 4]. Unlike np.reshape to a 1D shape, ravel is optimized for multi-index traversal and is commonly used before other operations requiring linear access. Axis permutation is handled by np.transpose, which rearranges the dimensions of a multidimensional array according to a specified order of axes; for a 2D array, transposing swaps rows and columns, while for higher dimensions, it enables arbitrary reordering. For example, np.transpose(a, (1, 0)) on a (3, 4) array yields a (4, 3) array, preserving element order along the permuted axes. This is crucial for aligning data with algorithms expecting specific dimensional layouts, such as in linear algebra operations.
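A brief sketch of ravel and transpose:
python
import numpy as np

a = np.arange(12).reshape(3, 4)
flat = np.ravel(a)               # view when possible: [ 0  1  2 ... 11]
t = np.transpose(a, (1, 0))      # axes permuted: shape (4, 3)
print(t.shape)                   # (4, 3)
print(t[0])                      # [0 4 8]: first column of a becomes first row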

Joining and Splitting Operations

Joining functions combine multiple arrays into a single multidimensional structure, either along existing axes or by introducing new ones, facilitating the assembly of composite datasets. The np.concatenate function joins a sequence of arrays along a specified existing axis, requiring compatible shapes except in the concatenation dimension; for 2D arrays, setting axis=0 appends rows vertically. An example merges two (2, 3) arrays along axis 0 to form a (4, 3) array: np.concatenate([arr1, arr2], axis=0). This general-purpose tool is foundational for building larger arrays from subcomponents.

Specialized stacking functions simplify common cases: np.vstack stacks arrays vertically by concatenating along the first axis after promoting 1D arrays to rows, ideal for data like images or tables, while np.hstack stacks horizontally along the second axis, concatenating 1D arrays end to end. For two (2, 3) arrays, np.vstack([a, b]) produces a (4, 3) result, and np.hstack([a, b]) yields a (2, 6) array. These are equivalent to np.concatenate with fixed axes but handle shape promotion automatically for convenience in multidimensional workflows. For introducing a new axis, np.stack joins arrays along a fresh dimension at the specified position, transforming, for example, two (3, 4) arrays into a (2, 3, 4) 3D array when axis=0:
python
a = np.ones((3, 4))
b = np.zeros((3, 4))
c = np.stack([a, b], axis=0)
print(c.shape)  # (2, 3, 4)
This is distinct from concatenation, as it adds dimensionality, useful for batching data in machine learning pipelines. Splitting operations partition a single array into subarrays along a chosen axis, enabling division for batch processing or data segmentation. The np.split function divides an array into equal parts based on a section count, or into arbitrary parts based on explicit indices; for a (6, 3) array split into three (2, 3) subarrays along axis=0, np.split(a, 3, axis=0) returns a list of three arrays. It requires the axis length to be divisible by the number of sections for even splits, while supporting uneven partitioning via index lists, which is vital for handling variable-sized multidimensional chunks.
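A short sketch of even and uneven splitting (the index list [1, 4] is illustrative):
python
import numpy as np

a = np.arange(18).reshape(6, 3)
parts = np.split(a, 3, axis=0)         # three (2, 3) subarrays
print([p.shape for p in parts])        # [(2, 3), (2, 3), (2, 3)]

uneven = np.split(a, [1, 4], axis=0)   # rows 0:1, 1:4, and 4:6
print([p.shape for p in uneven])       # [(1, 3), (3, 3), (2, 3)]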

Iteration Techniques

Iteration over multidimensional arrays in NumPy goes beyond simple Python loops by leveraging the np.nditer object, introduced in version 1.6, which provides a flexible, efficient iterator for visiting elements in a controlled manner, including support for nested structures and multiple arrays. This iterator traverses arrays in a systematic fashion, by default following an order that matches the array's memory layout, and can be customized with flags to enable modification, buffering, or external loop control for performance in large-scale computations. For nested loops, nditer unifies traversal, avoiding explicit index management; for a 2D array a = np.arange(6).reshape(2, 3), basic iteration yields elements sequentially:
python
it = np.nditer(a)
for x in it:
    print(x, end=' ')
# Output: 0 1 2 3 4 5 (each element is a zero-dimensional array)
The multi_index flag allows access to indices during iteration, simulating nested loops: flags=['multi_index'] provides a tuple of coordinates at each step. Broadcasting integration in nditer extends iteration to multiple arrays of compatible shapes, applying NumPy's broadcasting rules to align dimensions automatically; this is key for element-wise operations across mismatched arrays, such as adding a 1D array to each row of a 2D array. For arrays a (shape (2, 3)) and b (shape (3,)), np.nditer([a, b], flags=['multi_index'], op_flags=[['readwrite'], ['readonly']]) iterates over broadcasted pairs, enabling in-place updates like accumulation. Flags like reduce_ok permit reduction operations during traversal, enhancing efficiency for tasks like summing over dimensions without explicit loops. This approach scales well for high-dimensional data, reducing overhead compared to pure Python loops.
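A minimal sketch of broadcast-aware iteration with nditer, writing results into a preallocated output array:
python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.array([10, 20, 30])            # shape (3,), broadcast across rows

out = np.empty_like(a)
it = np.nditer([a, b, out],
               op_flags=[['readonly'], ['readonly'], ['writeonly']])
for x, y, z in it:
    z[...] = x + y                    # element-wise sum over broadcast pairs
print(out)
# [[10 21 32]
#  [13 24 35]]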

Interfacing with Other Libraries

NumPy's ndarray serves as a foundational data structure that facilitates seamless interoperability across the scientific Python ecosystem, allowing libraries to leverage its efficient array operations without extensive data conversion. SciPy builds directly on NumPy arrays to provide advanced scientific computing capabilities, using them as the core building blocks for specialized functionalities. For instance, the scipy.sparse submodule offers sparse matrix representations that store only non-zero elements, enabling efficient handling of large datasets where most values are zero, such as in network graphs. The scipy.optimize submodule utilizes NumPy arrays for optimization algorithms, including minimization routines like least-squares fitting, which operate on array inputs to solve nonlinear problems in fields like physics and engineering. Similarly, the scipy.signal submodule employs NumPy arrays for signal processing tasks, such as filtering and Fourier transforms, to analyze time-series data in applications like audio processing.

Pandas relies on NumPy arrays as the underlying storage mechanism for its primary data structures, the Series and DataFrame, which store homogeneous columns using NumPy's efficient ndarray for numerical data types like floats and integers. This integration ensures that Pandas benefits from NumPy's vectorized operations and memory efficiency, with DataFrames preserving NumPy dtypes during computations unless upcasting is required for mixed types. Interoperability is achieved through methods like DataFrame.to_numpy(), which extracts the underlying NumPy array, and the older .values attribute, allowing direct manipulation or transfer to other NumPy-based tools without unnecessary copying for compatible dtypes.

In machine learning, TensorFlow incorporates a subset of the NumPy API through tf.experimental.numpy, enabling users to prepare data with familiar NumPy operations before converting arrays to tensors via tf.convert_to_tensor(), which supports seamless type promotion and acceleration on GPUs. PyTorch achieves interoperability by sharing memory between CPU tensors and NumPy arrays, using torch.from_numpy() to create tensors from arrays and tensor.numpy() for the reverse conversion, facilitating data loading and preprocessing workflows in deep learning models. JAX extends NumPy with an accelerated, NumPy-compatible interface via jax.numpy, allowing the same array code to run on accelerators like GPUs or TPUs through just-in-time compilation, which is particularly valuable for machine learning research.

For computer vision, OpenCV's Python bindings convert all image and matrix structures to NumPy arrays, enabling efficient processing; for example, cv2.imread() loads images directly as NumPy ndarray objects with shape (height, width, channels), ready for manipulation in vision pipelines. This compatibility extends to tasks like nearest-neighbor searches, where SciPy's scipy.spatial submodule provides data structures such as KDTree and cKDTree that operate on NumPy arrays of points, supporting fast queries in applications like feature matching.
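As an illustration of the zero-copy bridge to PyTorch described above, the following sketch assumes PyTorch is installed; torch.from_numpy and tensor.numpy are the documented conversion routines for CPU tensors:
python
import numpy as np
import torch  # assumes PyTorch is installed

a = np.zeros(3, dtype=np.float32)
t = torch.from_numpy(a)        # shares memory with the NumPy array
t[0] = 1.0                     # mutating the tensor...
print(a)                       # [1. 0. 0.]: ...is visible in the array

back = t.numpy()               # zero-copy view back to NumPy (CPU tensors only)
print(np.shares_memory(a, back))  # True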

Extending NumPy with Compiled Code

NumPy provides several mechanisms to extend its functionality with compiled code, allowing users to incorporate custom high-performance routines in languages like C, C++, or Fortran, or to accelerate existing Python code using just-in-time (JIT) compilation. These extensions integrate seamlessly with NumPy's ndarray objects and universal functions (ufuncs), enabling the creation of new array methods or optimized computations that leverage low-level optimizations while maintaining Python's ease of use. This approach is particularly useful for performance-critical applications where pure Python or even standard NumPy operations may not suffice.

One primary method involves the NumPy C API, which exposes functions like PyArray_* for direct manipulation of ndarrays within C or C++ extension modules. Developers define Python-callable functions using the PyMethodDef structure and register them in the module initialization function, which imports the array API via import_array(). For instance, PyArray_FromAny converts Python objects to ndarrays with specified types and flags, while PyArray_NewFromDescr creates new arrays from custom descriptors; these can be used to build wrappers that handle input validation and output array creation, ensuring compatibility with NumPy's memory management through reference counting with Py_INCREF and Py_DECREF. The API also provides access to array data via PyArray_DATA and strides via PyArray_STRIDES, facilitating efficient element-wise operations in custom routines.

For leveraging legacy or specialized code, F2PY serves as NumPy's interface generator, compiling Fortran subroutines into Python extension modules that can operate on ndarrays, including as ufuncs. The process begins with a command such as f2py -m module_name source.f, which scans the Fortran file, generates a signature file if needed, and builds the module; this allows direct calls to Fortran routines from Python, with automatic handling of array dimensions and data types via intent specifications like intent(in) or intent(out). F2PY supports Fortran 77 through modern standards with iso_c_binding, enabling wrappers for numerical algorithms that pass NumPy arrays as arguments without manual data copying.

Cython offers a Python-like syntax for writing extensions that compile to C, with strong support for typed NumPy arrays to achieve near-C speeds. By cimporting numpy as cnp and declaring variables as cdef cnp.ndarray[DTYPE_t, ndim=N], users enable efficient buffer access and eliminate Python overhead in loops or array operations; for example, a 2D integer array can be typed as cdef cnp.ndarray[cnp.int64_t, ndim=2] arr, allowing efficient indexing like arr[i, j] without bounds checking when directives such as @cython.boundscheck(False) are applied. This approach is ideal for wrapping complex computations while retaining readability, as Cython seamlessly bridges NumPy's buffer protocol for array passing.

Numba provides JIT compilation for NumPy-intensive Python code using the @njit decorator, which translates loops and array operations into optimized machine code without requiring manual C or Fortran writing. Applied to a function that loops over an array (for example, doubling each element in place), Numba compiles it on first invocation in nopython mode, supporting a wide subset of NumPy functions and eliminating interpreter overhead for numerical loops. This makes it accessible for accelerating existing NumPy scripts, particularly those with explicit iterations over arrays, as sketched below.
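A minimal Numba sketch along the lines described above, assuming Numba is installed (the function name double_in_place is illustrative):
python
import numpy as np
from numba import njit  # assumes Numba is installed

@njit  # compiled to machine code on first call (nopython mode)
def double_in_place(arr):
    for i in range(arr.shape[0]):  # explicit loop, fast under Numba
        arr[i] *= 2
    return arr

x = np.arange(5, dtype=np.float64)
print(double_in_place(x))  # [0. 2. 4. 6. 8.]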

Limitations and Performance Considerations

Known Limitations

NumPy arrays, known as ndarrays, are designed with a fixed size that cannot be altered after creation without explicit operations such as creating a resized copy. This constraint stems from the array's underlying contiguous memory allocation, which prioritizes efficient access and computation but prevents dynamic resizing akin to Python lists.

A core limitation of NumPy arrays is their requirement for homogeneous data types, meaning all elements must share the same dtype to enable optimized vectorized operations. While an object dtype can serve as a fallback to store mixed types, such arrays forfeit the performance benefits of NumPy's compiled loops and do not release the global interpreter lock (GIL) during operations, resulting in slower execution compared to numeric dtypes.

In standard builds, due to CPython's GIL, NumPy's threading support is restricted for certain CPU-bound universal functions (ufuncs), as the lock prevents true parallelism in operations that hold it, necessitating alternatives like multiprocessing or Dask for scalable computation. NumPy has supported free-threaded CPython builds (without the GIL) since version 2.1 in 2024, with improvements as of version 2.3 in June 2025, enabling greater parallelism potential. Many ufuncs release the GIL to allow concurrent execution, but shared array mutations across threads can lead to race conditions or crashes without additional safeguards.

The release of NumPy 2.0 introduced stricter dtype promotion rules aligned with NEP 50, which base promotions solely on dtypes rather than data values, potentially altering results and breaking legacy code that relied on prior behaviors. Additionally, NumPy lacks built-in support for GPU acceleration, confining its operations to CPU memory and requiring external libraries for hardware-specific optimizations.

Optimization Techniques

NumPy's performance can be significantly enhanced through targeted optimization techniques that leverage its underlying compiled C code and efficient memory management. These strategies focus on minimizing Python-level overhead, reducing memory allocations, and exploiting hardware capabilities such as SIMD instructions, enabling computations on large arrays to approach native speeds.

Vectorization is a core optimization in NumPy, where operations on arrays are performed element-wise without explicit Python loops, delegating the iteration to optimized C code that often utilizes SIMD instructions for parallel processing within a single core. This approach avoids the interpretive overhead of Python for-loops, resulting in substantial speedups; for instance, multiplying two large arrays via a * b executes at near-C speeds compared to equivalent looped Python code. Preferring universal functions (ufuncs) like np.add or np.sin over loops ensures these benefits, as ufuncs are implemented to harness SIMD where possible, such as SSE2 or AVX instructions for floating-point operations. Broadcasting complements vectorization by allowing operations on arrays of different shapes without explicit reshaping, further reducing the need for loops.

In-place operations provide another key avenue for gains by modifying arrays directly, thereby avoiding the creation of temporary arrays that consume additional memory and time. NumPy ufuncs support an out parameter to specify an output array, such as np.add(a, b, out=a), which computes a + b and stores the result back in a, preventing unnecessary allocations. This is particularly effective for chained operations; for example, G = A * B; np.add(G, C, out=G) is faster than G = A * B + C because it eliminates intermediate temporaries. Augmented assignment forms like += or -= also perform in-place updates when possible, though users should verify compatibility to avoid unexpected copies due to overlapping data.

Memory efficiency is crucial for handling large datasets, and NumPy offers tools like views, slices, and careful dtype selection to minimize footprint and access costs. Views and basic slicing (e.g., a[1:5]) create lightweight references to the original data buffer without copying, allowing modifications that propagate back to the source array while saving memory—essential for operations on datasets approaching available RAM. Ensuring arrays are contiguous (via np.ascontiguousarray) improves cache locality and SIMD utilization, as non-contiguous strides can degrade performance by up to an order of magnitude in iterative access patterns. Selecting appropriate dtypes further enhances efficiency; for example, using float32 instead of the default float64 halves memory usage and can accelerate computations on memory-bound tasks, provided precision requirements permit, as is common in machine learning workflows.

For parallelism within expressions, NumPy's np.einsum function includes built-in optimization for tensor contractions, evaluating the lowest-cost contraction order to minimize intermediate array sizes and memory costs. The optimize parameter, when set to 'optimal' or 'greedy', can yield dramatic speedups—up to 10x in multi-operand expressions—by avoiding inefficient paths, though it trades some upfront planning time for overall efficiency.
For large-scale parallelism beyond single-machine cores, integrating NumPy with external tools like Dask enables out-of-core processing; Dask arrays wrap NumPy arrays into chunked, lazy collections distributed across processes or clusters, achieving near-linear scaling for operations like reductions on terabyte-scale data while adhering to best practices such as chunk sizes around 100 MB to balance overhead. Similarly, Python's multiprocessing module can parallelize NumPy workflows by sharing arrays via shared memory or memory mapping (e.g., multiprocessing.shared_memory), but requires careful management to mitigate data-transfer costs and ensure thread-safety in BLAS-linked operations.
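A short sketch of the in-place and einsum optimizations discussed above (array sizes are illustrative):
python
import numpy as np

n = 1000
A = np.random.rand(n, n)
B = np.random.rand(n, n)
C = np.random.rand(n, n)

# Avoid the temporary produced by A * B + C by reusing the output buffer
G = A * B
np.add(G, C, out=G)          # in-place accumulation, no extra allocation

# einsum with path optimization for a multi-operand contraction
x = np.random.rand(30, 40)
y = np.random.rand(40, 50)
z = np.random.rand(50, 60)
r = np.einsum('ij,jk,kl->il', x, y, z, optimize='optimal')
print(G.shape, r.shape)      # (1000, 1000) (30, 60)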

  43. [43]
  44. [44]
  45. [45]
    Linear algebra — NumPy v2.3 Manual
    ### Summary of Key Linear Algebra Functions in NumPy's linalg Submodule
  46. [46]
    Statistics — NumPy v2.1 Manual
    Order statistics; Averages and variances; Correlating; Histograms. © Copyright 2008-2024, NumPy Developers. Created using Sphinx 7.2.6. Built with the PyData ...Missing: downloads 2025
  47. [47]
  48. [48]
  49. [49]
  50. [50]
  51. [51]
    NEP 19 — Random number generator policy - NumPy
    May 24, 2018 · For the past decade, NumPy has had a strict backwards compatibility policy for the number stream of all of its random number distributions.
  52. [52]
    Array manipulation routines — NumPy v2.3 Manual
    Changing array shape# ; reshape (a, /[, shape, order, newshape, copy]). Gives a new shape to an array without changing its data. ; ravel (a[, order]). Return a ...Numpy.concatenate · Numpy.append · Numpy.unique · Numpy.repeat
  53. [53]
  54. [54]
    numpy.transpose — NumPy v2.2 Manual
    Returns an array with axes transposed. For a 1-D array, this returns an unchanged view of the original array, as a transposed vector is simply the same vector.Numpy.ndarray.transpose · numpy.ndarray.T · Numpy.permute_dimsMissing: reshaping | Show results with:reshaping
  55. [55]
  56. [56]
    numpy.vstack — NumPy v2.4.dev0 Manual
    numpy.vstack stacks arrays vertically (row wise), equivalent to concatenation along the first axis, and is best for arrays with up to 3 dimensions.
  57. [57]
    numpy.hstack — NumPy v2.1 Manual
    numpy.hstack stacks arrays horizontally (column wise), equivalent to concatenation along the second axis, except for 1-D arrays.
  58. [58]
    numpy.stack — NumPy v2.4.dev0 Manual
    If provided, the destination to place the result. The shape must be correct, matching that of what stack would have returned if no out argument were specified.
  59. [59]
    numpy.split — NumPy v2.3 Manual
    concatenate. Join a sequence of arrays along an existing axis. stack. Join a sequence of arrays along a new axis. hstack. Stack arrays in sequence horizontally ...Numpy.array_split · Numpy.hsplit · Numpy.vsplit
  60. [60]
    Iterating over arrays — NumPy v2.3 Manual
    The iterator object nditer, introduced in NumPy 1.6, provides many flexible ways to visit all the elements of one or more arrays in a systematic fashion.
  61. [61]
    Interoperability with NumPy
    The first set of interoperability features from the NumPy API allows foreign objects to be treated as NumPy arrays whenever possible.1. Using Arbitrary Objects... · 2. Operating On Foreign... · Interoperability Examples
  62. [62]
    Essential basic functionality — pandas 2.3.3 documentation
    For many types, the underlying array is a numpy. ... For the most part, pandas uses NumPy arrays and dtypes for Series or individual columns of a DataFrame.Pandas 1.5.2 documentation · Dev · 2.1 · 2.0
  63. [63]
    pandas.DataFrame.to_numpy — pandas 2.3.3 documentation
    Convert the DataFrame to a NumPy array. By default, the dtype of the returned array will be the common NumPy dtype of all types in the DataFrame.
  64. [64]
    NumPy API on TensorFlow
    Aug 15, 2024 · TensorFlow implements a subset of the NumPy API, available as tf.experimental.numpy. This allows running NumPy code, accelerated by TensorFlow.Setup · TensorFlow NumPy ND array · TF NumPy and TensorFlow
  65. [65]
    torch.from_numpy — PyTorch 2.7 documentation
    Creates a Tensor from a numpy. ndarray . The returned tensor and ndarray share the same memory.
  66. [66]
    Quickstart: How to think in JAX - JAX documentation
    JAX provides a unified NumPy-like interface to computations that run on CPU, GPU, or TPU, in local or distributed settings. JAX features built-in Just-In-Time ( ...Key concepts · JAX 101 · Installation · Just-in-time compilation
  67. [67]
    Introduction to OpenCV-Python Tutorials
    All the OpenCV array structures are converted to and from Numpy arrays. This also makes it easier to integrate with other libraries that use Numpy such as ...
  68. [68]
    Spatial algorithms and data structures (scipy.spatial) — SciPy v1.16.2 Manual
    **Summary of scipy.spatial for Nearest-Neighbor Searches and NumPy Arrays:**
  69. [69]
    How to extend NumPy
    The purpose of this section is not to document this tool but to document the more basic steps to writing an extension module that this tool depends on.Defining Functions · Dealing With Array Objects · Converting An Arbitrary...
  70. [70]
    Array API — NumPy v2.4.dev0 Manual
    The unaligned loops are currently only used in casts and will never be picked in ufuncs (ufuncs create a temporary copy to ensure aligned inputs). These ...
  71. [71]
    F2PY user guide and reference manual - NumPy
    The purpose of the F2PY –Fortran to Python interface generator– utility is to provide a connection between Python and Fortran.Using F2PY · F2PY reference manual · F2PY and Windows · F2PY test suite
  72. [72]
    Working with NumPy — Cython 3.2.0 documentation
    The main purpose of typing things as ndarray is to allow efficient indexing of single elements, and to speed up access to a small number of attributes such as .
  73. [73]
    5 minute guide to Numba
    Numba is a just-in-time compiler for Python that works best on code that uses NumPy arrays and functions, and loops. The most common way to use Numba is through ...Creating NumPy universal... · Performance Tips · Automatic parallelization
  74. [74]
    NumPy: the absolute basics for beginners
    NumPy is a Python library for multidimensional arrays (ndarray). Import it with `import numpy as np`. Arrays are homogeneous and rectangular. Import with ` ...Array Fundamentals · How To Create An Array From... · Creating Matrices
  75. [75]
    Thread Safety — NumPy v2.3 Manual
    ### Summary of GIL Limitations for Threading in NumPy, Especially for Ufuncs
  76. [76]
    Interoperability with NumPy — NumPy v2.2 Manual
    Since NumPy only supports CPU, it can only convert objects whose data exists on the CPU. But other libraries, like PyTorch and CuPy, may exchange data on GPU ...1. Using Arbitrary Objects... · 2. Operating On Foreign... · Interoperability Examples
  77. [77]
    NEP 38 — Using SIMD optimization instructions for performance
    Nov 25, 2019 · Traditionally NumPy has depended on compilers to generate optimal code specifically for the target architecture. However few users today compile ...
  78. [78]
    NumPy 1.8.0 Release Notes
    Performance improvements via SSE2 vectorization​​ Several functions have been optimized to make use of SSE2 CPU SIMD instructions. Float32 and float64:
  79. [79]
  80. [80]
    NEP 10 — Optimizing iterator/UFunc performance - NumPy
    Nov 25, 2010 · This NEP proposes to replace the NumPy iterator and multi-iterator with a single new iterator, designed to be more flexible and allow for more cache-friendly ...Missing: threading | Show results with:threading
  81. [81]
    numpy.einsum — NumPy v2.3 Manual
    ### Optimization in `numpy.einsum`
  82. [82]
    Best Practices — Dask documentation
    ### Summary: Integrating Dask with NumPy for Large-Scale Parallelism and Performance