
Discretization

Discretization is the process of converting continuous mathematical objects—such as functions, variables, domains, or equations—into discrete approximations, enabling numerical analysis and computation where exact continuous solutions are infeasible. This technique bridges the gap between theoretical continuous models and practical discrete implementations on computers, minimizing information loss while simplifying problem-solving.

In numerical analysis, discretization plays a central role in solving differential equations by replacing derivatives with finite differences or integrals over discrete elements, transforming partial differential equations (PDEs) into systems of algebraic equations. Key methods include the finite difference method (FDM), which approximates derivatives using point-wise differences on a grid; the finite element method (FEM), which divides the domain into subdomains (elements) for variational approximations, particularly suited for complex geometries; and the finite volume method (FVM), which ensures conservation properties by integrating over control volumes, widely used in computational fluid dynamics. These approaches are critical for modeling physical phenomena such as fluid flow and heat transfer, with accuracy depending on grid resolution and error analysis techniques such as local truncation error estimation. Stability, consistency, and convergence are fundamental properties evaluated to ensure reliable solutions, as per the Lax equivalence theorem in numerical PDE theory.

In data mining and machine learning, discretization preprocesses continuous attributes by partitioning them into finite intervals or bins, converting numerical data into categorical forms to enhance algorithm efficiency and interpretability. Common techniques encompass unsupervised methods like equal-width binning, which divides the range into uniform intervals, and equal-frequency binning, which ensures equal numbers of instances per bin; supervised approaches, such as entropy-based minimum description length partitioning (MDLP), leverage class labels to optimize cut points for predictive accuracy. This process reduces data complexity, accelerates learning in algorithms like decision trees, and uncovers patterns in datasets, though it risks information loss if bin boundaries are poorly chosen. Applications span time-series analysis, where it simplifies trend detection, to preprocessing in classification tasks.

Overall, discretization's versatility underscores its foundational role across disciplines, balancing computational tractability with fidelity to continuous realities, and it continues to evolve with advances in adaptive gridding and data-driven binning methods.

General Concepts

Definition and Purpose

Discretization refers to the process of approximating continuous mathematical models, functions, or domains—such as real numbers representing time or other variables—by mapping them onto finite or countable sets, while aiming to preserve key properties like structural integrity or statistical characteristics. This transformation converts infinite-dimensional continuous problems into manageable finite representations suitable for computational handling. In essence, it bridges the gap between theoretical continuous frameworks and practical implementations in fields like numerical analysis and signal processing.

Historically, discretization traces its roots to early numerical methods for solving differential equations, with Leonhard Euler introducing a foundational approach in 1768 through his work on integral calculus, where he approximated solutions by stepping through discrete increments. This method, now known as Euler's method, marked an initial shift toward discretizing continuous dynamics for practical computation, predating widespread digital tools but laying groundwork for modern simulations driven by the need for efficiency in processing complex systems.

The primary purpose of discretization is to enable the solution of continuous problems on digital computers, which inherently operate with finite, discrete representations, thereby simplifying continuous problems and reducing them to finite, solvable ones. For instance, it facilitates converting analog signals into digital formats for digital processing, or approximating continuous fields via grid-based models in simulations, making intractable continuous computations feasible. In control and dynamical systems, it supports controller design and stability assessments by transforming continuous-time models into discrete-time structures. A key trade-off in discretization involves balancing loss of fidelity—manifested as approximation errors that deviate from the exact continuous behavior—with gains in tractability, allowing efficient numerical computation and storage.
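As a minimal illustration of the discrete stepping idea behind Euler's method, the following Python sketch advances an assumed test equation dx/dt = -2x (not taken from the source) in fixed increments and compares the result with the exact solution:

```python
import numpy as np

def euler_step(f, t, x, dt):
    """Advance the ODE dx/dt = f(t, x) by one discrete step of size dt."""
    return x + dt * f(t, x)

# Assumed illustrative problem: dx/dt = -2x on [0, 1] with step dt = 0.1
f = lambda t, x: -2.0 * x
dt, x = 0.1, 1.0
ts = np.arange(0.0, 1.0 + dt, dt)   # discrete time grid
for t in ts[:-1]:
    x = euler_step(f, t, x, dt)

print(x, np.exp(-2.0))  # discrete approximation vs. exact value e^{-2}
```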

Fundamental Methods

Sampling represents a primary technique for discretizing continuous-time signals by capturing their values at discrete instants. In uniform sampling, samples are taken at fixed time intervals T, resulting in a sampling rate f_s = 1/T. To faithfully represent the signal without aliasing, the Nyquist-Shannon sampling theorem requires that f_s exceed twice the highest frequency component f_{\max} in the signal's spectrum, i.e., f_s > 2f_{\max}. Non-uniform sampling, by contrast, employs irregular intervals, which can reduce the total number of samples for bandlimited signals while preserving reconstructibility under certain conditions, such as when samples are sufficiently dense on average.

Quantization discretizes the range of a continuous or sampled signal by mapping values to a finite set of representation levels. Uniform quantization divides the range into equally spaced intervals of width \Delta, rounding each value to the nearest level; the resulting distortion is commonly quantified by the mean squared error (MSE), approximated as \frac{\Delta^2}{12} for signals uniformly distributed over the interval, assuming overload is negligible. Non-uniform schemes, such as logarithmic quantization, use smaller intervals for low-amplitude values and larger ones for high amplitudes to better match perceptual or statistical signal characteristics, thereby reducing MSE for non-uniform distributions at the same number of levels.

Partitioning extends discretization to data domains by segmenting a continuous range into intervals (bins) or cells. Equal-width partitioning divides the range into bins of identical length, promoting simplicity and uniformity in coverage regardless of the data distribution. Equal-frequency partitioning, conversely, adjusts boundaries so each bin contains roughly the same number of observations, which helps in datasets with varying densities but may produce uneven bin widths.

Interpolation serves as the inverse operation to discretization, reconstructing an approximate continuous signal from discrete points. Nearest-neighbor interpolation assigns to any point the value of its closest sample, offering computational efficiency but introducing discontinuities. Linear interpolation, a first-order method, estimates values along straight lines connecting adjacent samples, providing smoother results with minimal overhead. These techniques approximate the ideal reconstruction process, which for bandlimited signals involves sinc interpolation to achieve perfect recovery when the Nyquist criterion is met. Such methods also find brief application in discretizing continuous state-space models by selecting time steps for approximation.
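The following Python sketch illustrates uniform sampling and uniform quantization as described above; the signal frequency, sampling rate, and bit depth are arbitrary illustrative choices, and the empirical distortion is compared against the \Delta^2/12 approximation:

```python
import numpy as np

# Sample a 5 Hz sinusoid well above the Nyquist rate, then apply a uniform
# mid-rise quantizer and check the empirical MSE against Delta^2 / 12.
f_sig, f_s, T = 5.0, 1000.0, 1.0               # assumed illustrative values
t = np.arange(0.0, T, 1.0 / f_s)               # uniform sampling instants
x = np.sin(2 * np.pi * f_sig * t)              # sampled signal values

n_bits = 8
delta = 2.0 / (2 ** n_bits)                    # step size for full-scale range [-1, 1]
x_q = delta * np.floor(x / delta) + delta / 2  # uniform (mid-rise) quantization

mse = np.mean((x - x_q) ** 2)
print(mse, delta ** 2 / 12)                    # empirical MSE vs. Delta^2 / 12
```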

Discretization in Dynamical Systems

Continuous to Discrete Time Conversion

The conversion of continuous-time dynamical systems to discrete-time equivalents is a fundamental step in digital control and signal processing, enabling the implementation of algorithms on sampled-data platforms. This typically assumes a zero-order hold (ZOH) on the input, where the signal remains constant between sampling instants, transforming differential equations into difference equations. Under the ZOH assumption, the input u(t) is held fixed at u_k for kT \leq t < (k+1)T, where T is the sampling period, allowing exact derivation of the discrete model for linear systems and approximate methods for nonlinear ones.

For linear systems described by \dot{x} = Ax + Bu, the exact discrete-time equivalent under ZOH is obtained by solving the differential equation over one sampling period. The state evolution becomes x_{k+1} = e^{A \Delta t} x_k + \int_0^{\Delta t} e^{A(\Delta t - \tau)} B u(\tau) \, d\tau, where \Delta t = T is the sampling interval and the integral accounts for the input contribution. Since ZOH holds u(\tau) = u_k constant, the input term simplifies to \left( \int_0^{\Delta t} e^{A(\Delta t - \tau)} B \, d\tau \right) u_k, yielding the discrete matrices A_d = e^{A \Delta t} and B_d = \int_0^{\Delta t} e^{A \sigma} B \, d\sigma (via the substitution \sigma = \Delta t - \tau). This formulation provides an exact sampled equivalent without approximation errors in the state updates at sampling instants, though inter-sample behavior is not captured.

The choice of sampling period \Delta t critically influences the accuracy and stability of the discrete model. It is typically selected based on the system's bandwidth or rise time, with a common guideline of \Delta t \approx 1/10 of the rise time (from 10% to 90% of the steady-state response) to ensure sufficient resolution of the dynamics. Smaller \Delta t enhances approximation accuracy by closely mimicking continuous behavior and preserving stability (as discrete poles z_i = e^{s_i \Delta t} remain inside the unit circle if continuous poles s_i have negative real parts), but increases computational demands. Conversely, larger \Delta t may introduce aliasing, degrade accuracy, or, for approximate discretization schemes, induce instability if discrete poles move outside the unit circle, particularly in systems with fast modes.

Discretizing nonlinear systems presents greater challenges, as closed-form solutions like the matrix exponential do not exist, necessitating numerical integration over each \Delta t. Methods such as Runge-Kutta schemes approximate the state transition by evaluating the vector field at multiple points within the interval; for instance, a second-order Runge-Kutta discretization converts a continuous nonlinear model \dot{x} = f(x, u) into a discrete form x_{k+1} = x_k + \frac{\Delta t}{2} [f(x_k, u_k) + f(x_k + \Delta t f(x_k, u_k), u_k)], assuming ZOH on u. Higher-order variants, like fourth-order Runge-Kutta, offer improved accuracy for stiff or highly nonlinear dynamics but require careful tuning of \Delta t to balance error accumulation and stability, often verified through Lyapunov analysis.
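A minimal sketch of the second-order Runge-Kutta update above, applied under a zero-order hold to an assumed illustrative nonlinear system (a pendulum with input torque, not from the source), might look as follows in Python:

```python
import numpy as np

def f(x, u):
    """Assumed pendulum dynamics: x = [theta, omega], \dot{x} = f(x, u)."""
    theta, omega = x
    return np.array([omega, -np.sin(theta) + u])

def rk2_step(x_k, u_k, dt):
    """One second-order Runge-Kutta (Heun) step with u held constant (ZOH)."""
    k1 = f(x_k, u_k)
    k2 = f(x_k + dt * k1, u_k)
    return x_k + 0.5 * dt * (k1 + k2)

dt = 0.01
x = np.array([0.1, 0.0])
for _ in range(1000):          # simulate 10 s with constant input u = 0
    x = rk2_step(x, 0.0, dt)
print(x)
```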

Linear State Space Models

Linear state space models provide a framework for representing dynamical systems in continuous time, where the state evolution and output are described by differential equations. The continuous-time linear time-invariant (LTI) model is given by \dot{x}(t) = A x(t) + B u(t) + w(t), y(t) = C x(t) + v(t), where x(t) \in \mathbb{R}^n is the state vector, u(t) \in \mathbb{R}^m is the input, y(t) \in \mathbb{R}^p is the output, w(t) represents process noise, v(t) is measurement noise, and A, B, C are constant matrices of appropriate dimensions. The solution to the state equation, starting from an initial state x(0), is x(t) = e^{A t} x(0) + \int_0^t e^{A(t-\tau)} \left( B u(\tau) + w(\tau) \right) d\tau, which captures the propagation of the state through the matrix exponential and the integrated effect of inputs and noise.

Discretization arises when implementing control on digital computers, converting the continuous model to a discrete-time equivalent sampled at intervals \Delta t = T. Assuming a zero-order hold (ZOH) on the input, where u(t) remains constant between samples (u(t) = u_k for kT \leq t < (k+1)T), the discrete state update becomes x_{k+1} = \Phi x_k + \Gamma u_k + w_k, with \Phi = e^{A T} as the state transition matrix, \Gamma = \int_0^T e^{A \tau} B \, d\tau as the input matrix, and w_k the integrated process noise over the interval. The output at sample times is y_k = C x_k + v_k. Under the ZOH assumption, this yields an exact discrete equivalent of the deterministic LTI dynamics, matching the continuous state at the sampling instants for piecewise constant inputs.

Computing \Phi and \Gamma involves evaluating the matrix exponential, which can be done via the power series expansion e^{A T} = I + A T + \frac{(A T)^2}{2!} + \cdots, suitable for small T or when A has favorable structure. For general cases, diagonalization (if A is diagonalizable) yields e^{A T} = V e^{D T} V^{-1}, where D is diagonal with the eigenvalues of A; alternatively, the Cayley-Hamilton theorem allows expressing the exponential as a finite polynomial in A using its characteristic equation. These methods ensure numerical stability for moderate-sized systems.

The discretization maps continuous-time stability to discrete-time stability: if all eigenvalues \lambda_i of A satisfy \operatorname{Re}(\lambda_i) < 0, then the eigenvalues \mu_i = e^{\lambda_i T} of \Phi satisfy |\mu_i| < 1, preserving asymptotic stability for any T > 0. Pathological sampling periods—those for which distinct continuous eigenvalues map to the same discrete eigenvalue, for example when imaginary parts differ by an integer multiple of 2\pi/T—must be avoided because they can destroy controllability or observability, but such cases are isolated.

Discretization also affects system properties like controllability and observability. The discrete pair (\Phi, \Gamma) is controllable if the continuous pair (A, B) is controllable and the sampling period avoids values where the rank of the discrete controllability matrix drops, specifically requiring \operatorname{rank} \begin{bmatrix} \Gamma & \Phi \Gamma & \cdots & \Phi^{n-1} \Gamma \end{bmatrix} = n. Similarly, observability of (\Phi, C) holds if \operatorname{rank} \begin{bmatrix} C \\ C \Phi \\ \vdots \\ C \Phi^{n-1} \end{bmatrix} = n, preserved under generic sampling from a continuous observable system. These rank conditions ensure the discrete model retains the structural properties necessary for state feedback design and observer construction. Noise handling in the discrete model extends these properties by incorporating covariance propagation, though detailed approximations fall outside exact ZOH discretization.
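One common numerical route to \Phi and \Gamma is a single matrix exponential of an augmented matrix, whose top blocks are e^{AT} and \int_0^T e^{A\tau} B \, d\tau. The sketch below uses scipy.linalg.expm and an assumed double-integrator example; it is one illustrative approach, not the only one:

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    """Exact ZOH discretization via expm of the augmented matrix [[A, B], [0, 0]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]          # Phi = e^{AT}, Gamma = int_0^T e^{A tau} B dtau

# Assumed illustrative system: double integrator, sampling period T = 0.1 s
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Phi, Gamma = zoh_discretize(A, B, 0.1)
print(Phi)     # expected [[1, 0.1], [0, 1]]
print(Gamma)   # expected [[0.005], [0.1]]
```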

Noise and Approximation Techniques

In the discretization of linear state-space models for dynamical systems, process noise introduces stochasticity that must be accurately propagated from continuous to discrete time. Consider a continuous-time model \dot{x}(t) = A x(t) + B u(t) + w(t), where w(t) is zero-mean white noise with power spectral density Q_c. The corresponding discrete-time noise w_k at sampling interval \Delta t has covariance Q_d = \int_0^{\Delta t} e^{A \tau} Q_c e^{A^T \tau} \, d\tau, which exactly captures the integrated effect of the noise over the sampling period for linear systems. This integral arises from solving the stochastic differential equation and ensuring the discrete model preserves the statistical properties of the continuous noise. Computing Q_d directly can be challenging for high-dimensional systems, but Van Loan's method provides an efficient numerical approach by evaluating the matrix exponential of an augmented matrix \Theta = \begin{pmatrix} -A & Q_c \\ 0 & A^T \end{pmatrix}, from which Q_d is obtained as the transpose of the (2,2) block of e^{\Theta \Delta t} multiplied by its (1,2) block, i.e., e^{A \Delta t} times the off-diagonal block.

Measurement noise in discretized models, appearing in the output equation y_k = C x_k + v_k, is handled similarly but often with simplifying assumptions. For continuous-time white measurement noise with power spectral density R_c, the discrete covariance is R_d = R_c / \Delta t to account for the sampling bandwidth. However, in many practical applications, especially when measurements are inherently discrete or the sampling rate is sufficiently high, v_k is modeled as zero-mean white noise with constant covariance R_d, independent of \Delta t, to facilitate implementation in filters like the Kalman filter. This assumption holds well for systems where noise sources are dominated by sensor characteristics rather than continuous integration effects.

When exact discretization is intractable—due to computational cost or model complexity—approximation techniques are employed to derive discrete equivalents. The forward Euler method provides a simple explicit update: x_{k+1} = (I + A \Delta t) x_k + B \Delta t u_k, derived from a first-order Taylor expansion of the matrix exponential and suitable only for sufficiently small \Delta t, especially in stiff systems. The backward Euler method offers an implicit alternative: x_{k+1} = (I - A \Delta t)^{-1} (x_k + B \Delta t u_{k+1}), which enhances numerical stability for larger steps but requires solving a linear system at each step. For frequency-domain preservation, the bilinear (Tustin) transform s = \frac{2}{\Delta t} \frac{1 - z^{-1}}{1 + z^{-1}} maps continuous-time transfer functions to discrete ones, maintaining stability and approximating the frequency response up to the Nyquist frequency; in state-space form, it yields discrete matrices built from the inverse (I - A \Delta t / 2)^{-1}.

These approximations introduce errors that must be managed for reliable performance. Truncation errors dominate in Euler methods, with local truncation error O(\Delta t^2) from neglecting higher-order terms in the state derivative, accumulating to global error O(\Delta t). Stability issues arise particularly in forward Euler, where the method can diverge if \Delta t exceeds bounds related to the eigenvalues of A (e.g., \Delta t < 2 / |\lambda_{\max}| for a stable scalar mode), violating the discrete model's boundedness even if the continuous system is stable. The bilinear transform avoids such divergence but introduces frequency warping and phase errors at high frequencies, requiring pre-warping for critical poles. In stochastic contexts, these errors propagate through the noise covariances, potentially inflating Q_d or R_d estimates if \Delta t is not tuned appropriately.
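The following Python sketch implements Van Loan's construction as described above; the system and noise matrices are assumed illustrative values rather than taken from the source:

```python
import numpy as np
from scipy.linalg import expm

def van_loan_Qd(A, Qc, dt):
    """Discrete process noise covariance Q_d via Van Loan's augmented exponential."""
    n = A.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = -A
    M[:n, n:] = Qc
    M[n:, n:] = A.T
    F = expm(M * dt)
    Phi = F[n:, n:].T            # transpose of the (2,2) block equals e^{A dt}
    Qd = Phi @ F[:n, n:]         # Q_d = e^{A dt} times the (1,2) block
    return Phi, Qd

# Assumed illustrative system and continuous-time noise spectral density
A = np.array([[0.0, 1.0], [0.0, -0.5]])
Qc = np.diag([0.0, 0.1])
Phi, Qd = van_loan_Qd(A, Qc, dt=0.1)
print(Qd)                        # symmetric positive semidefinite covariance
```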

Discretization in Data Analysis

Continuous Feature Transformation

Continuous feature transformation, also known as discretization of continuous variables, is a preprocessing technique in data mining and machine learning that converts real-valued features into categorical or ordinal representations by partitioning their range into discrete intervals or bins. This process is particularly motivated in contexts where algorithms such as Naive Bayes classifiers and decision trees require discrete inputs to compute probabilities or splits efficiently, as continuous features can complicate probability estimation or lead to excessive branching in trees. Additionally, discretization helps mitigate overfitting by smoothing noise in continuous data and simplifying model complexity, enabling better generalization in predictive tasks.

Unsupervised methods for continuous features rely solely on the distribution of the feature values without considering class labels, making them suitable for exploratory analysis or when labels are unavailable. Equal-width binning divides the range of the feature into a fixed number of intervals of equal size, determined by the minimum and maximum values, which is straightforward but sensitive to outliers that can skew bin boundaries. In contrast, equal-frequency binning, also called quantile binning, partitions the data into bins containing approximately the same number of observations, ensuring balanced representation across categories and better handling of skewed distributions, though it may produce uneven interval widths. These approaches provide a simple way to approximate continuous distributions with categorical proxies, often serving as baselines in preprocessing pipelines.

Supervised methods incorporate class labels to guide the partitioning, aiming to maximize information gain or minimize predictive error, which typically yields more effective discretizations for classification tasks. The Fayyad-Irani method, an entropy-based algorithm, recursively selects cut-points that minimize class entropy within bins and applies the minimum description length (MDL) criterion to stop splitting when further divisions do not sufficiently compress the data description. Similarly, the ChiMerge algorithm uses a bottom-up merging strategy based on the chi-squared statistic to combine adjacent intervals whose class distributions do not differ significantly, ensuring statistical homogeneity while preserving relevant boundaries. These methods, rooted in information theory and statistics, adapt binning to the underlying class structure, often outperforming unsupervised alternatives in classification scenarios. Recent developments, such as the Max-Relevance-Min-Divergence (MRmD) criterion (2024), further enhance supervised discretization by maximizing discriminant information while minimizing divergence for better generalization in classifiers like Naive Bayes, showing superior performance on benchmark datasets.

The transformation of continuous features via discretization impacts model performance by influencing predictive accuracy and interpretability, with well-designed bins preserving key relationships such as monotonicity between the feature and target. For instance, monotonic binning ensures that bin labels maintain the original feature's ordering, preventing reversal of trends that could degrade classifier performance in tasks like credit scoring. Empirical studies demonstrate that entropy-based supervised discretization can improve Naive Bayes accuracy by up to 10-15% on benchmark datasets compared to raw continuous inputs, as it reduces variance without substantial bias. However, excessive binning may introduce information loss, underscoring the need for methods that balance granularity and fidelity to the original data distribution.
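As an illustrative sketch of the unsupervised baselines discussed above, the following Python snippet contrasts equal-width and equal-frequency binning using pandas' cut and qcut on synthetic skewed data (an assumed example):

```python
import numpy as np
import pandas as pd

# Synthetic right-skewed feature (assumed example data)
rng = np.random.default_rng(0)
x = pd.Series(rng.exponential(scale=2.0, size=1000), name="feature")

equal_width = pd.cut(x, bins=5)   # 5 intervals of identical length
equal_freq = pd.qcut(x, q=5)      # 5 quantile intervals, ~200 observations each

print(equal_width.value_counts().sort_index())  # counts vary strongly with skew
print(equal_freq.value_counts().sort_index())   # counts are roughly balanced
```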

Binning and Quantization Strategies

In signal processing, uniform scalar quantization discretizes a continuous signal by dividing its range into equal intervals of step size \Delta, where each interval is represented by a discrete level, typically the midpoint. This approach assumes a uniform distribution of the quantization error, modeled as additive noise with variance \Delta^2 / 12. For an N-bit quantizer applied to a full-scale sinusoidal input, the signal-to-quantization-noise ratio (SQNR) is given by \text{SQNR} = 6.02N + 1.76 \, \text{dB}, where the term 6.02N arises from the doubling of signal power per bit relative to the noise, and the 1.76 dB offset accounts for the sinusoidal signal's RMS value being its peak amplitude divided by \sqrt{2}.

Adaptive binning strategies enhance discretization by adjusting interval boundaries to the underlying data distribution, avoiding the limitations of fixed-width bins. One method employs kernel density estimation (KDE) to select boundaries non-parametrically; KDE approximates the probability density using a kernel centered at each data point, with bandwidth chosen via cross-validation to match bin width, enabling cut-points that minimize the squared difference between the binned density and the KDE estimate. This approach adapts to skewness and multimodality, significantly outperforming equal-width and equal-frequency binning on 27% to 39.5% of attributes in UCI datasets, based on cross-validated log-likelihood tests. Complementing this, supervised adaptive binning uses dynamic programming to find optimal cuts that minimize class entropy, defined as the weighted average of E = -\sum_{i=1}^{c} p_i \log_2 p_i across partitions, where p_i is the proportion of class i in a bin; the algorithm recursively evaluates thresholds to reduce total entropy while applying the minimum description length principle to halt splitting and prevent overfitting. This entropy-based method, with complexity O(m \log m + k^2 m) for m instances and k intervals, produces partitions with higher class purity than greedy alternatives.

For multi-dimensional discretization, axis-aligned strategies partition the feature space into rectangular bins by independently binning each dimension, such as through equal-width or entropy-minimizing cuts per dimension, which preserve interpretability but suffer from the curse of dimensionality as bin volume explodes. In contrast, clustering-based approaches like k-means perform vector quantization by iteratively assigning multi-dimensional points to k centroids that minimize the within-cluster sum of squared distances, effectively partitioning the space into Voronoi regions that adapt to the data without axis constraints. This method, rooted in optimal quantization theory, converges to a local minimum and is particularly effective for non-uniform distributions, though it requires specifying k via heuristics like the elbow method.

Evaluation of discretization strategies emphasizes metrics that balance utility and robustness. Consistency measures reproducibility across data samples, computed as the proportion of identical partitions obtained when resampling the data, ensuring stable boundaries for reliable downstream modeling. Simplicity quantifies compactness via the number of bins or intervals, favoring fewer partitions to reduce complexity while maintaining information preservation, often optimized under criteria like minimum description length. These metrics, alongside accuracy on held-out data, guide selection by trading off overfitting risks against descriptive power.
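The clustering-based strategy can be sketched with scikit-learn's KMeans on synthetic two-dimensional data (an assumed example); the cluster labels act as discrete bin indices and the centroids as representative levels:

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed example: two Gaussian blobs in a 2-D feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=[0, 0], scale=0.5, size=(200, 2)),
               rng.normal(loc=[3, 3], scale=1.0, size=(200, 2))])

# k-means partitions the space into k Voronoi cells (non axis-aligned bins)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.labels_               # discrete bin index for every point
centers = km.cluster_centers_     # representative value of each bin

# Quantization error: within-cluster sum of squares (the k-means objective)
print(km.inertia_)
print(centers)
```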

Discretization in Numerical Methods

Function Discretization on Grids

Function discretization on grids involves representing a continuous function f(x) defined on a domain \Omega \subseteq \mathbb{R}^d by its values at discrete points arranged in a grid, enabling numerical computations such as solving differential equations or simulating physical processes. This approach approximates the function through basis functions or operators that interpolate or project values across the grid, facilitating efficient evaluation and manipulation in computational frameworks. Uniform Cartesian grids, consisting of equally spaced points in one or more dimensions, form the foundation for many such discretizations, allowing straightforward implementation of schemes on rectangular domains.

In one dimension (1D), a uniform grid partitions the interval [a, b] into N subintervals of width h = (b - a)/N, with points x_i = a + i h for i = 0, \dots, N. This extends naturally to higher dimensions: in two dimensions, a tensor product of 1D grids forms a mesh \{(x_i, y_j)\}; in n dimensions, it creates a lattice structure. Such grids simplify indexing and neighbor searches, making them well suited to efficient array-based computation in large-scale simulations. For problems involving partial differential equations (PDEs) like the Navier-Stokes equations in computational fluid dynamics, staggered grids offset variables across cell faces to enhance numerical stability and conserve mass—velocities are stored at cell faces, while pressures reside at cell centers. This arrangement, introduced in the marker-and-cell (MAC) method, prevents odd-even decoupling and spurious oscillations in pressure fields.

Interpolation schemes reconstruct the continuous function from grid values, ensuring accurate point evaluations between nodes. Lagrange polynomials provide a classical basis for this, where the interpolant p(x) = \sum_{j=0}^n f(x_j) \ell_j(x) uses basis functions \ell_j(x) = \prod_{k \neq j} \frac{x - x_k}{x_j - x_k} to match f exactly at the points x_0, \dots, x_n. For low-order approximations, linear basis functions, such as the hat function \phi_i(x) = \max(1 - |x - x_i|/h, 0), enable piecewise linear interpolation on uniform 1D grids, forming the building blocks for finite element methods. In barycentric form, polynomial interpolation achieves numerical stability on Chebyshev-distributed nodes, avoiding the Runge phenomenon associated with equispaced points.

For periodic functions, spectral methods employ a Fourier basis to discretize the function globally, representing f(x) as \sum_{k=-M}^M \hat{f}_k e^{i k x} on a uniform grid over [-\pi, \pi]. The coefficients \hat{f}_k are computed via the discrete Fourier transform, yielding exponential convergence for smooth periodic data and enabling precise differentiation through multiplication by i k in frequency space. This approach excels in applications requiring high accuracy with fewer degrees of freedom compared to local methods.

Collocation methods enforce the discretized equations directly at grid points, transforming continuous problems into algebraic systems. For ordinary differential equations (ODEs), like u'(x) = g(x, u), collocation sets u(x_i) \approx \sum_j c_j \phi_j(x_i) and requires the residual to vanish at selected points, often using polynomial bases for high-order accuracy. In PDE contexts, such as elliptic equations with random inputs, sparse grid collocation evaluates solutions at tensor-product nodes, mitigating the curse of dimensionality while approximating expectations efficiently.

These techniques find widespread use in image processing, where continuous scenes are discretized onto grids—rectangular arrays of pixel values—to enable filtering and reconstruction. Each pixel represents an averaged function value over a small area, facilitating operations like convolution for smoothing and edge detection.
In computer graphics, voxelization discretizes 3D models into volumetric grids of voxels, supporting ray tracing and collision detection by converting polygonal surfaces into discrete occupancy maps with controlled connectivity.
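A brief sketch of Fourier-basis discretization and spectral differentiation on a uniform periodic grid is given below, using an assumed test function f(x) = sin(x); differentiation reduces to multiplying the DFT coefficients by i k:

```python
import numpy as np

# Discretize a periodic function on a uniform grid and differentiate spectrally.
N = 32
x = 2 * np.pi * np.arange(N) / N            # uniform periodic grid on [0, 2*pi)
f = np.sin(x)                               # assumed smooth periodic test function

k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers in FFT ordering
f_hat = np.fft.fft(f)                       # discrete Fourier coefficients
df = np.real(np.fft.ifft(1j * k * f_hat))   # derivative via multiplication by i*k

print(np.max(np.abs(df - np.cos(x))))       # error near machine precision
```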

Error Analysis and Convergence

In the context of numerical discretization of smooth functions on grids, truncation error arises from the approximation of continuous operators by discrete ones, such as finite differences. The local truncation error represents the discrepancy at a single grid point or step, typically derived via Taylor series expansion of the function around that point, while the global truncation error accumulates over the entire domain or time interval, often scaling with the number of steps. For a p-th order finite difference scheme approximating a derivative, the local truncation error is of order O(h^p), where h denotes the grid spacing; consequently, the global error for solving initial value problems or boundary value problems is also O(h^p) under suitable stability conditions.

Convergence of discrete approximations to the continuous solution requires both consistency and stability of the discretization scheme. Consistency ensures that the local truncation error approaches zero as h \to 0, meaning the discrete operator converges to the continuous one in an appropriate norm. Stability prevents error amplification across iterations or grid points; for linear methods applied to well-posed partial differential equations (PDEs), the Lax equivalence theorem states that consistency and stability together are equivalent to convergence of the numerical solution to the exact solution as h \to 0. Stability is often assessed using von Neumann analysis, which decomposes the solution into Fourier modes and examines the amplification factor for each mode, ensuring that the magnitude of the amplification remains bounded independently of h.

Round-off errors stem from finite-precision arithmetic, where each operation introduces a relative error bounded by the machine epsilon \epsilon, typically around 10^{-16} for double precision. These errors accumulate during computations, particularly in iterative schemes or over large grids, often behaving like a random walk with magnitude scaling as \sqrt{N} \epsilon, where N is the number of operations, proportional to 1/h. To balance truncation and round-off errors, grid refinement must be optimized; for first-order approximations, such as forward differences in numerical differentiation, the total error is minimized when h \sim \sqrt{\epsilon}, yielding an optimal error of order \epsilon^{1/2}. In practice, excessive refinement increases round-off dominance, while coarse grids amplify truncation errors, necessitating adaptive choices based on problem smoothness and precision limits.

Asymptotic analysis provides rigorous bounds on convergence rates for smooth functions by expanding the exact solution in Taylor series and comparing it to the discrete approximation. For the composite midpoint rule in numerical quadrature, which discretizes the integral \int_a^b f(x) \, dx over subintervals of width h, the Taylor expansion of f around the midpoint x_i + h/2 reveals that the local error per subinterval is O(h^3), leading to a global error of O(h^2) over a fixed interval as h \to 0. Specifically, \int_{x_i}^{x_i + h} f(x) \, dx = h f\left(x_i + \frac{h}{2}\right) + \frac{h^3}{24} f''(\xi_i) for some \xi_i \in [x_i, x_i + h], confirming the second-order convergence for twice-differentiable f. Similar expansions underpin error estimates for finite difference stencils, where higher-order terms dictate the method's accuracy, assuming sufficient smoothness of the underlying function.
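The truncation/round-off trade-off can be observed numerically; the sketch below sweeps the step size of a forward-difference derivative of an assumed test function (exp at x = 1) and shows the total error bottoming out near h \approx \sqrt{\epsilon}:

```python
import numpy as np

# Forward-difference approximation of f'(x0) for f = exp at x0 = 1, swept over h.
# Truncation error shrinks with h while round-off error grows as h decreases,
# so the total error is smallest around h ~ sqrt(machine epsilon) ~ 1e-8.
f, x0 = np.exp, 1.0
exact = np.exp(1.0)

for h in np.logspace(-1, -15, 15):
    approx = (f(x0 + h) - f(x0)) / h
    print(f"h = {h:.1e}   error = {abs(approx - exact):.2e}")
```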
