
Covering number

In mathematics, the covering number of a subset F of a metric space (E, \rho), denoted \mathcal{N}(\epsilon, F, \rho) for \epsilon > 0, is the smallest cardinality of a set C \subset E such that every element of F lies within distance less than \epsilon of some point in C; equivalently, F is covered by the union of open balls of radius \epsilon centered at the points of C. This concept, also known as the \epsilon-covering number, quantifies the minimal number of such balls required to cover F, and it is infinite if no finite cover exists. Closely related is the packing number \mathcal{M}(\epsilon, F, \rho), the largest cardinality of an \epsilon-separated subset of F, i.e., one in which any two distinct points are at distance at least \epsilon under \rho. Covering and packing numbers are intertwined by the inequalities \mathcal{M}(2\epsilon, F, \rho) \leq \mathcal{N}(\epsilon, F, \rho) \leq \mathcal{M}(\epsilon, F, \rho), providing bounds on the "size" or complexity of F in the metric space. For example, for the unit ball of the Euclidean space \mathbb{R}^d under the \ell_2-norm, the covering number satisfies (1/\epsilon)^d \leq \mathcal{N}(\epsilon) \leq (3/\epsilon)^d for \epsilon < 1.

Covering numbers play a central role in statistical learning theory and empirical process theory, where they measure the complexity of function classes (e.g., via the metric entropy, defined as \log \mathcal{N}(\epsilon, F, \rho)) and yield bounds on generalization error and convergence rates for algorithms such as empirical risk minimization. In the context of convex functions, for the class of uniformly bounded convex functions on [0,1]^d, the logarithm of the covering number under the L_p-metric grows at the sharp rate \epsilon^{-d/2} for small \epsilon > 0, which informs optimal rates in convexity-constrained estimation problems such as convex regression.

Fundamentals

Definition

In a metric space (X, d), the external covering number N_r^{\text{ext}}(K) of a subset K \subset X at radius r > 0 is defined as the smallest cardinality of a collection of open balls of radius r, centered at arbitrary points of X, whose union covers K. This measures the minimal number of such balls needed to encompass all points in K, allowing centers outside K. The internal covering number N_r^{\text{int}}(K) is defined in the same way, but with the additional requirement that all centers lie within K itself. Thus N_r^{\text{ext}}(K) \leq N_r^{\text{int}}(K), since the external variant permits more flexible placement of centers. The entropy H_r(K) = \log_2 N_r(K), the base-2 logarithm of the covering number (where N_r(K) typically denotes the external covering number unless specified otherwise), quantifies the "size" or complexity of K in terms of how finely it can be approximated by finite discretizations. In the literature the radius is often denoted \varepsilon > 0 instead of r, and for subsets K of a normed linear space the metric d is commonly the one induced by the norm \|\cdot\|, yielding N_\varepsilon(K) = N_\varepsilon(K, \|\cdot\|). Packing numbers provide a complementary notion, representing the maximal number of points in K whose pairwise distances are at least 2r.
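For small finite sets, the external and internal covering numbers can be computed exactly by brute force. The sketch below (hypothetical helper names; a toy example, not an efficient algorithm) shows four collinear points where the two quantities differ, because external centers may sit between points of the set:

```python
import itertools

def external_covering_number(K, ambient, r, dist):
    """Smallest number of open r-balls centered in `ambient` covering K (brute force)."""
    for n in range(1, len(ambient) + 1):
        for centers in itertools.combinations(ambient, n):
            if all(any(dist(x, c) < r for c in centers) for x in K):
                return n
    return float("inf")

def internal_covering_number(K, r, dist):
    """Same, but centers are required to lie in K itself."""
    return external_covering_number(K, K, r, dist)

dist = lambda x, y: abs(x - y)
K = [0.0, 1.0, 2.0, 3.0]                  # four points on the line, spacing 1
ambient = [i / 2 for i in range(-2, 9)]   # candidate external centers on a half-integer grid

# With open balls of radius 1.0, external centers 0.5 and 2.5 cover everything,
# but an internal center covers only itself (neighbors sit at distance exactly 1).
print(external_covering_number(K, ambient, 1.0, dist))  # 2
print(internal_covering_number(K, 1.0, dist))           # 4
```

This also illustrates why N_r^{\text{ext}}(K) \leq N_r^{\text{int}}(K): every internal cover is in particular an external one.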

Historical Background

The concept of covering numbers emerged from foundational ideas in measure theory during the first half of the twentieth century, where the focus was on quantifying the size and complexity of sets through coverings by small balls or sets. Influential work by Felix Hausdorff in 1919 introduced outer measures based on coverings, which were further developed by Abram Besicovitch in the following decades to establish the modern theory of Hausdorff measure and dimension, emphasizing efficient coverings to capture fractal-like structures in metric spaces. These early contributions laid the groundwork for assessing set complexity via minimal coverings, influencing later quantitative measures without yet formalizing the logarithmic entropy associated with covering numbers. In the 1950s, Andrey N. Kolmogorov introduced the notion of ε-entropy as a tool for classifying compact sets in metric spaces according to their intrinsic complexity, defined logarithmically in terms of the minimal number of ε-balls needed to cover the set. This was first presented in his 1956 paper on the asymptotic characteristics of totally bounded metric spaces, providing a metric invariant that distinguishes sets by their "massiveness" under small-scale approximations. Kolmogorov's formulation built directly on the covering principles of measure theory while shifting the emphasis toward information-theoretic interpretations of set coverings. Kolmogorov collaborated with Vladimir M. Tikhomirov in the late 1950s, expanding the framework through joint work that established fundamental bounds on ε-entropy and introduced related concepts such as ε-capacity for sets in function spaces. Their seminal 1959 paper systematically developed these ideas and applied them to function spaces and approximation problems; it was later published in English translation. This collaboration marked a pivotal evolution, integrating covering-based entropy into broader analysis and solidifying its role beyond mere classification.
Prior to the explicit terminology of covering numbers, these ideas found early applications in mid-20th-century approximation theory, where coverings quantified the complexity of function classes, and in capacity theory, which used similar minimal covering constructions to study potential-theoretic capacities of sets. These uses, emerging in the 1940s and 1950s, predated Kolmogorov's formalization but anticipated modern formulations by linking covering efficiency to analytical bounds in infinite-dimensional spaces. By the 1960s, the concepts had matured into a core tool for analyzing set complexity across analysis, probability, and approximation theory.

Mathematical Properties

Relations to Packing and Entropy

The packing number N_r^{\text{pack}}(K) of a subset K of a metric space, for r > 0, is defined as the maximum cardinality of a subset of K such that the open balls of radius r centered at these points are pairwise disjoint. This quantity intuitively measures the "spread-out" nature of K, providing a lower bound on its complexity by ensuring no two selected points are closer than 2r. Covering numbers and packing numbers are interconnected through a chain of inequalities that highlights their complementary roles: N_{2r}^{\text{met}}(K) \leq N_r^{\text{pack}}(K) \leq N_r^{\text{ext}}(K) \leq N_r^{\text{int}}(K) \leq N_r^{\text{met}}(K), where N_r^{\text{met}}(K) denotes the maximum cardinality of an r-separated subset of K (pairwise distances at least r), N_r^{\text{ext}}(K) the external covering number (minimal number of r-balls with centers anywhere in the ambient space needed to cover K), and N_r^{\text{int}}(K) the internal covering number (centers required to lie in K). Intuitively, packings bound covering numbers from below because a single r-ball can absorb at most one point of a well-separated set, while covering numbers bound complexity from above by allowing overlapping balls to enclose K efficiently.

The primary link between these concepts is the metric entropy, defined as H_r(K) = \log N_r(K), where N_r(K) is typically the external covering number. As r \to 0, the asymptotic growth of H_r(K) quantifies dimension-like properties of K, such as its effective dimensionality in high-dimensional spaces; slower growth indicates lower intrinsic complexity. This measure is central in learning theory for bounding estimation and convergence rates. The duality between covering and packing arises from allowing versus prohibiting overlaps: coverings permit intersecting balls for minimal enclosure, while packings enforce separation for maximal dispersion. The two notions coincide for a finite set K when r is sufficiently small relative to the minimal pairwise distance in K, as each point then requires its own isolated ball, equating the maximal packing size with the minimal covering size.
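The inner part of this chain, M_{2r}(K) \leq N_r^{\text{int}}(K) \leq M_r(K), can be checked exactly on small finite sets by exhaustive search. A minimal sketch (synthetic random points; helper names are illustrative):

```python
import itertools, math, random

def max_separated(points, s):
    """Maximum cardinality of an s-separated subset (pairwise distance >= s)."""
    for n in range(len(points), 0, -1):
        for sub in itertools.combinations(points, n):
            if all(math.dist(a, b) >= s for a, b in itertools.combinations(sub, 2)):
                return n
    return 0

def min_internal_cover(points, r):
    """Minimum number of open r-balls, centered at points of the set, covering it."""
    for n in range(1, len(points) + 1):
        for centers in itertools.combinations(points, n):
            if all(any(math.dist(p, c) < r for c in centers) for p in points):
                return n
    return len(points)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(7)]
r = 0.25
M2r = max_separated(pts, 2 * r)   # 2r-separated set: at most one point per r-ball
Nint = min_internal_cover(pts, r)
Mr = max_separated(pts, r)        # a maximal r-separated set is also an r-net
assert M2r <= Nint <= Mr          # the chain M_{2r}(K) <= N_r^int(K) <= M_r(K)
print(M2r, Nint, Mr)
```

The two assertions mirror the proofs: each r-ball contains at most one point of a 2r-separated set, and the r-balls around a maximal r-separated set already cover K.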

Bounds and Inequalities

The covering number N_r(K) of a set K in a metric space, defined as the minimal number of balls of radius r needed to cover K, exhibits several fundamental properties that govern its behavior under variations in parameters and transformations. For the external covering number, which allows centers anywhere in the ambient space, N_r^{\text{ext}}(K) is non-increasing in the radius r: if r_1 \leq r_2, then N_{r_1}^{\text{ext}}(K) \geq N_{r_2}^{\text{ext}}(K), since larger balls subsume smaller ones, so fewer are needed to cover the set. Similarly, the packing number M_r(K), the maximal cardinality of an r-separated subset of K, is non-increasing in r. Moreover, both N_r^{\text{ext}}(K) and M_r(K) are non-decreasing with respect to the size of K: if K \subseteq L, then N_r^{\text{ext}}(K) \leq N_r^{\text{ext}}(L) and M_r(K) \leq M_r(L), as any cover or packing of the larger set restricts to one for the smaller. A key scaling property holds for the external covering number in normed spaces: for any c > 0, N_{c r}^{\text{ext}}(c K) = N_r^{\text{ext}}(K), where c K = \{c x : x \in K\}. This follows from the homogeneity of the norm, which scales distances by c, so the rescaled balls of radius c r around rescaled centers exactly match the original configuration of radius-r balls around the original centers. In Euclidean spaces, the covering number is additionally invariant under isometries such as translations and rotations, since these preserve the metric and thus the minimal number of balls required for coverage. Central inequalities relate covering numbers to packing numbers and separated sets. Specifically, in any metric space, the chain M_{2r}(K) \leq N_r^{\text{ext}}(K) \leq N_r^{\text{int}}(K) \leq M_r(K) holds, where N_r^{\text{int}}(K) is the internal covering number requiring centers in K, and M_r(K) is the packing number (also the size of a maximal r-separated set).
To see N_r^{\text{ext}}(K) \leq N_r^{\text{int}}(K), note that any internal cover, with centers in K, is also a valid external cover, since the ambient space contains K; thus the minimal external cardinality cannot exceed the internal one. For the link to packing, a maximal r-separated set S \subseteq K of size M_r(K) has the property that the r-balls around its points cover K (maximality ensures no point of K lies at distance more than r from S), yielding N_r^{\text{int}}(K) \leq M_r(K) and hence N_r^{\text{ext}}(K) \leq M_r(K). Conversely, if S is 2r-separated with |S| = M_{2r}(K), then any ball of radius r contains at most one point of S (two points in the same r-ball would lie at distance less than 2r), so any cover of K by r-balls must use at least M_{2r}(K) balls, giving M_{2r}(K) \leq N_r^{\text{ext}}(K). These bounds connect covering numbers to entropy measures such as the r-entropy H_r(K) = \log N_r(K). In normed spaces, rough estimates for covering numbers can be obtained from volume arguments when applicable, as in \mathbb{R}^d. For the unit ball B in the Euclidean norm, a lower bound arises from volume considerations: N_r^{\text{ext}}(B) \geq \frac{\text{vol}(B)}{\text{vol}(B_r)} = r^{-d}, since the total volume of the covering balls must be at least that of B, providing a scale for growth in finite dimensions. Upper bounds of the same order follow from packing arguments or explicit constructions, so the volume ratio establishes the order (1/r)^d.
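The same volume reasoning applies to the unit cube [0,1]^d, where an explicit grid cover is easy to write down: the volume ratio is a lower bound on any cover, and a grid whose cells have diagonal 2r is a valid cover, hence an upper bound. A small numerical check (illustrative, d = 2):

```python
import math

def ball_volume(r, d):
    """Volume of the Euclidean r-ball in R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * r ** d

def grid_cover_size(r, d):
    """Explicit cover of [0,1]^d by l2 balls of radius r: a grid whose cells
    have diagonal 2r, so each cell fits inside a ball centered at its midpoint."""
    side = 2 * r / math.sqrt(d)
    return math.ceil(1 / side) ** d

d = 2
for r in [0.2, 0.1, 0.05]:
    lower = 1 / ball_volume(r, d)     # volume bound: N_r >= vol([0,1]^d) / vol(B_r)
    upper = grid_cover_size(r, d)     # size of a valid cover, hence >= N_r
    assert upper >= lower
    print(r, round(lower, 1), upper)  # both scale like (1/r)^2
```

Both columns grow at the rate (1/r)^d, matching the order established by the volume ratio.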

Examples and Computations

Finite-Dimensional Cases

In finite-dimensional spaces, covering numbers for compact sets such as balls and hypercubes admit explicit computations and bounds, often derived from volumetric arguments or constructions. Consider the unit ball B_2^d = \{ x \in \mathbb{R}^d : \|x\|_2 \leq 1 \} equipped with the Euclidean metric. The covering number satisfies \left( \frac{1}{r} \right)^d \leq N_r(B_2^d) \leq \left(1 + \frac{2}{r}\right)^d for 0 < r \leq 1. The lower bound follows from a volume comparison: the covering balls must have total volume at least \text{vol}(B_2^d), so N_r \geq \text{vol}(B_2^d)/\text{vol}(r B_2^d) = (1/r)^d. The upper bound follows from a packing argument: a maximal r-separated set yields a cover by r-balls, and since the disjoint r/2-balls around its points fit inside (1 + r/2) B_2^d, comparing volumes bounds their number by (1 + 2/r)^d. A basic one-dimensional case is the closed interval [-k, k] on the real line with the metric \rho(x, y) = |x - y|. The external covering number (allowing centers outside the set) satisfies N_r^{\text{ext}}([-k, k]) \leq \lceil k / r \rceil + 1. This bound is obtained by dividing the interval into subintervals of length at most 2r and placing centers at their midpoints, with at most one additional center to handle endpoint overhang. For d-dimensional hypercubes, grid-based methods provide effective upper bounds on covering numbers in \ell_p norms (1 \leq p \leq \infty). For the cube K = [-k, k]^d with the \ell_\infty metric, N_r(K) \leq (k / r + 1)^d. This follows from a uniform grid construction that discretizes each coordinate into roughly k / r + 1 points spaced by 2r, so that every point of the cube lies within \ell_\infty-distance r of a grid point; for finite p, analogous grid constructions give upper bounds with an additional dimension-dependent factor, since the \ell_p distance can exceed the \ell_\infty distance by up to d^{1/p}. In finite grids, which form discrete subsets of \mathbb{R}^d, the internal covering number (requiring centers in the set) equals the external covering number.
For a finite set S with minimum distance \delta > 0 between distinct points, when 0 < r < \delta / 2, each point of S must be covered by its own ball (a single r-ball cannot contain two points of S, since they would then lie within 2r < \delta of each other), so N_r^{\text{int}}(S) = N_r^{\text{ext}}(S) = |S|.
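The grid construction for the \ell_\infty cube can be checked directly: build the per-coordinate net, verify the (k/r + 1)^d cardinality bound, and spot-check coverage on random points. A minimal sketch (illustrative helper names, k = 1, r = 0.3, d = 3):

```python
import itertools, random

def linf_grid_net(k, r):
    """Per-coordinate centers for a sup-norm r-net of [-k, k]^d:
    points spaced 2r apart cover each axis to within distance r."""
    pts, x = [], -k + r
    while x < k + r:
        pts.append(min(x, k))   # clip the last center to the cube boundary
        x += 2 * r
    return pts

k, r, d = 1.0, 0.3, 3
axis = linf_grid_net(k, r)
net = list(itertools.product(axis, repeat=d))   # the d-dimensional product grid
assert len(net) <= (k / r + 1) ** d             # the grid bound N_r <= (k/r + 1)^d

random.seed(1)
for _ in range(1000):   # every sampled point is within sup-distance r of the net
    p = [random.uniform(-k, k) for _ in range(d)]
    assert min(max(abs(a - b) for a, b in zip(p, c)) for c in net) <= r + 1e-9
```

The per-coordinate net has 4 points here, so the full net has 4^3 = 64 centers, comfortably below (1/0.3 + 1)^3 ≈ 81.4.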

Infinite-Dimensional Cases

In infinite-dimensional Banach spaces, covering numbers are defined as in the finite-dimensional case but are finite only for totally bounded subsets: the full unit ball in a space such as C[0,1] with the supremum norm admits no finite \varepsilon-cover for sufficiently small \varepsilon > 0, since it is not totally bounded (by the Arzelà–Ascoli theorem, bounded subsets of C[0,1] are relatively compact only when equicontinuous). Compact subsets, however, possess finite covering numbers for every \varepsilon > 0, enabling the study of their metric entropy \log N(X, \varepsilon), which quantifies their "effective dimension" through asymptotic growth rates. These rates often reflect the smoothness of the functions in the set and are crucial for understanding approximation and estimation properties in function spaces. A prominent example arises in classical function spaces on [0,1]^d. For the unit ball H of the Hölder space C^{0,\alpha}([0,1]^d) equipped with the supremum norm, where 0 < \alpha \leq 1, the covering number satisfies \log N(H, r, \|\cdot\|_\infty) \asymp (1/r)^{d/\alpha} asymptotically as r \to 0, so the number of \varepsilon-balls needed grows exponentially in a power of 1/r modulated by the smoothness parameter \alpha and the domain dimension d. Similarly, for the unit ball of the Sobolev space W_p^\alpha([0,1]^d) (with \alpha > 0, 1 \leq p < \infty) embedded into C([0,1]^d) or L_q, the asymptotic behavior is \log N(H, r, \|\cdot\|_\infty) \asymp (1/r)^{d/\alpha} under suitable conditions on p, q, and \alpha > d(1/p - 1/q)_+, as established through estimates for the embedding operators. These polynomial rates in the entropy exponent show that smoother classes have slower entropy growth than rougher ones, facilitating bounds in approximation theory. Entropy numbers play a central role in characterizing covering numbers in infinite-dimensional settings, particularly via compact operators on Banach or Hilbert spaces. The n-th entropy number e_n(T) of a compact operator T: X \to Y is defined so that the image of the unit ball T(B_X) admits a cover by at most 2^n balls of radius e_n(T) in Y, linking directly to N(T(B_X), e_n(T)) \leq 2^n.
In Hilbert spaces, for finite-rank approximations of the identity operator up to dimension n, the covering number of the unit ball of an n-dimensional subspace with Euclidean norm satisfies (1/r)^n \leq N(B_2^n, r) \leq (1 + 2/r)^n, so \log N(B_2^n, r) \asymp n \log(1/r) for small r > 0, demonstrating exponential growth with the dimension n. This exponential scaling underscores the "curse of dimensionality" in infinite dimensions: full unit balls admit no finite cover below certain radii, but finite-rank projections yield tractable exponential bounds. Challenges in infinite dimensions stem from the potential infinitude of covering numbers for non-compact sets, such as unbounded or non-equicontinuous subsets of C[0,1] or L^p spaces, where no finite \varepsilon-net exists for small \varepsilon. To address this, regularization techniques restrict attention to compact approximations, such as finite-rank operators in Hilbert spaces or smoothness-constrained subclasses like Hölder or Sobolev balls, ensuring finite covers while preserving essential structure for applications in statistics and learning theory.
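The Hölder rate \log N \asymp (1/r)^{d/\alpha} can be illustrated in its simplest case d = \alpha = 1, i.e., bounded 1-Lipschitz functions on [0,1]: a standard net discretizes the domain into steps of \varepsilon and the values onto an \varepsilon-grid, letting successive values move at most one grid step (this yields an order-\varepsilon net in sup norm, up to constants). Counting these paths gives \log N = \Theta(1/\varepsilon). A small sketch of the count (an illustrative construction, not a tight bound):

```python
import math

def lipschitz_net_log_size(eps):
    """log of the size of the path-counting net for 1-Lipschitz f: [0,1] -> [-1, 1]:
    (2/eps + 1) starting values on an eps-grid, then at most 3 moves
    (down one step, stay, up one step) for each of ceil(1/eps) domain steps."""
    m = math.ceil(1 / eps)
    return math.log(2 / eps + 1) + m * math.log(3)

for eps in [0.1, 0.01, 0.001]:
    log_n = lipschitz_net_log_size(eps)
    print(eps, log_n, eps * log_n)  # eps * log N -> log 3, i.e. log N = Theta(1/eps)
```

The product \varepsilon \cdot \log N converges to \log 3, confirming the linear-in-1/\varepsilon entropy growth that the Hölder bound predicts for d = \alpha = 1.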

Applications

Machine Learning and Statistics

In statistical learning theory, covering numbers serve as a key tool for measuring the complexity of hypothesis classes, enabling bounds on the generalization error of empirical risk minimization (ERM). For a function class \mathcal{F}, the covering number N_\epsilon(\mathcal{F}) quantifies the minimal size of an \epsilon-net that approximates \mathcal{F} in a suitable metric, such as the empirical L_2 norm over the samples. This complexity measure controls the uniform deviation \sup_{f \in \mathcal{F}} |\hat{R}(f) - R(f)|, where \hat{R}(f) denotes the empirical risk over m samples and R(f) the expected risk, ensuring that the ERM solution generalizes beyond the training data with high probability. Within VC theory and Rademacher complexity analyses, covering numbers provide explicit bounds on this uniform deviation. For functions in \mathcal{F} bounded in supremum norm by M, the deviation satisfies \sup_{f \in \mathcal{F}} |\hat{R}(f) - R(f)| \leq O\left(\sqrt{\frac{M^2}{m} \log N_\epsilon(\mathcal{F})}\right), where \epsilon is scaled to the desired level (e.g., \epsilon \approx M / \sqrt{m}). This follows from applying Massart's finite-class lemma to the \epsilon-net together with a union bound, with the empirical Rademacher complexity \hat{\mathcal{R}}_m(\mathcal{F}) similarly bounded by O\left(\sqrt{\frac{M^2 \log N_\epsilon(\mathcal{F})}{m}}\right), linking covering numbers directly to agnostic learning rates. In kernel methods, covering numbers of balls in reproducing kernel Hilbert spaces (RKHS) underpin generalization guarantees for regularized ERM, such as in support vector machines (SVMs). For the unit ball in an RKHS \mathcal{H}_K with Mercer kernel K, the \epsilon-covering number in the L_\infty norm on a compact domain satisfies \log N_\epsilon(\mathcal{H}_K \cap B(0,1)) \lesssim \left(\frac{1}{\epsilon}\right)^{d/s} for input dimension d and smoothness parameter s > 1/2, enabling bounds on the effective dimension and regularization parameter choice to achieve excess risk O(1/\sqrt{m}).
These estimates, derived from the eigenvalue decay of the kernel integral operator, ensure that SVMs with \ell_2-regularization control overfitting while approximating the true risk minimizer. The Dudley entropy integral refines these complexity measures for continuous function classes in generalization bounds. Defined as \int_0^D \sqrt{\log N_r(\mathcal{F}, \|\cdot\|_{L_2(P_n)})} \, dr, where D is a diameter bound and P_n the empirical measure, this integral upper-bounds the Rademacher complexity: \mathcal{R}_m(\mathcal{F}) \leq O\left(\frac{1}{\sqrt{m}} \int_0^D \sqrt{\log N_r(\mathcal{F})} \, dr \right), yielding fast convergence rates for classes with decaying entropy. This chaining argument, extending Pollard's inequality, is particularly effective for VC classes or parametric models, where covering numbers exhibit polynomial growth, providing tighter control than the VC dimension alone in non-i.i.d. or high-dimensional settings. For neural networks, covering numbers of the associated function classes bound sample complexity in deep learning, including for overparameterized models. For ReLU networks of width W and depth L with bounded weights, bounds of the form \log N_\epsilon(\mathcal{F}_{NN}) \lesssim W^2 L \log\left( \frac{(W+1)^L }{\epsilon} \right) hold, where \mathcal{F}_{NN} denotes the class of piecewise-linear functions realized by the networks; this polynomial-logarithmic dependence implies generalization error O\left( \sqrt{ W^2 L \log (W L) / m } \right) via Rademacher bounds, helping explain why wide networks can avoid overfitting despite millions of parameters when trained on modest datasets. Such results highlight the role of architectural constraints in scaling laws for modern deep learning.
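The net-plus-Massart step can be made concrete for a finite class: Massart's finite-class lemma bounds the empirical Rademacher complexity of N prediction vectors by \max_f \|f\|_2 \sqrt{2 \log N} / m. A small Monte Carlo check on a synthetic class (illustrative data, not a real learning problem):

```python
import math, random

random.seed(0)
m, N = 100, 32   # sample size and size of the finite class
# synthetic finite class: N vectors of predictions in [-1, 1]^m
F = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(N)]

def empirical_rademacher(F, trials=1000):
    """Monte Carlo estimate of E_sigma sup_f (1/m) sum_i sigma_i f(x_i)."""
    m = len(F[0])
    total = 0.0
    for _ in range(trials):
        sigma = [random.choice((-1, 1)) for _ in range(m)]
        total += max(sum(s * fx for s, fx in zip(sigma, f)) for f in F) / m
    return total / trials

# Massart's lemma: R_hat <= max_f ||f||_2 * sqrt(2 log N) / m
massart = max(math.sqrt(sum(fx * fx for fx in f)) for f in F) \
          * math.sqrt(2 * math.log(N)) / m
est = empirical_rademacher(F)
assert est <= massart
print(est, massart)
```

Applying this bound to an \epsilon-net of an infinite class \mathcal{F}, with N = N_\epsilon(\mathcal{F}), and adding the \epsilon approximation error recovers the covering-number bounds quoted above.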

Approximation and Functional Analysis

In nonlinear approximation theory, covering numbers provide essential bounds on the error of best n-term approximation by quantifying the complexity of the approximating manifold. For functions in Besov spaces B^s_{\tau q}(L_{\tau}(\Omega)) with 1/\tau = s/d + 1/p, where d is the dimension and s > 0 the smoothness parameter, the n-term approximation error satisfies \sigma_n(f)_p \leq C n^{-s/d} |f|_{B^s_{\tau q}(L_{\tau}(\Omega))}, and this rate is linked to the entropy growth \log N_r(K) \sim (1/r)^{d/s} for the unit ball K of the space, implying \inf_{\|f_n\| \leq 1} \|f - f_n\| \lesssim n^{-s/d} in low-smoothness regimes dominated by dimensionality. This connection arises because the covering number N_r(K) measures the minimal number of balls of radius r needed to cover K, directly influencing the degrees of freedom required for accurate nonlinear encoding and approximation. In approximation theory, Jackson and Bernstein inequalities leverage covering numbers to estimate approximation widths and operator compactness. These inequalities relate the modulus of smoothness to best approximation errors, with covering-based entropy moduli providing sharp bounds: for a compact set K in a Banach space, the Jackson-type estimate bounds the deviation from finite-dimensional subspaces via the metric entropy H(\varepsilon, K) = \log N_\varepsilon(K), while the Bernstein-type inverse links subspace norms to smoothness via \|T\| \leq C n^\alpha for operators T whose entropy growth is controlled by N_r. Such results extend the classical inequalities to general nonlinear settings, enabling precise control over approximation rates in Hilbert and Banach spaces. Covering numbers also play a key role in embedding theorems for Sobolev spaces, serving as compactness criteria through their asymptotic growth.
For the embedding W^{k,p}(\Omega) \hookrightarrow L^q(\Omega) with 1 \leq p \leq q \leq \infty and k > 0, compactness holds when the entropy numbers e_n (defined via N_{e_n}(K) \leq 2^{n-1} for the unit ball K) satisfy e_n \to 0 as n \to \infty, with explicit asymptotics e_n \asymp n^{-k/d} on bounded domains \Omega \subset \mathbb{R}^d, confirming the Rellich–Kondrachov theorem quantitatively. This criterion ensures that bounded sets in W^{k,p} have finite \varepsilon-entropy for every \varepsilon > 0, facilitating applications to PDE existence and regularity theory. The relation between covering numbers and Kolmogorov n-widths formalizes optimal finite-dimensional approximation: d_n(K) \asymp \inf \{ r : N_r(K) \leq 2^n \} in many settings, where d_n(K) = \inf_{\dim V_n = n} \sup_{f \in K} \operatorname{dist}(f, V_n) measures the minimal worst-case deviation of K from n-dimensional subspaces V_n. In particular, for the unit ball of W^{k,2}([0,1]), d_n \sim n^{-k}, reflecting the covering number's role in bounding the deviation achievable by polynomials of degree at most n-1.
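The inversion between entropy growth and entropy-number decay used here is a one-line computation (a standard heuristic, assuming the stated asymptotics for the unit ball K of W^{k,p}):

```latex
\log N_r(K) \asymp r^{-d/k}
\quad\Longrightarrow\quad
n \asymp \log N_{e_n}(K) \asymp e_n^{-d/k}
\quad\Longrightarrow\quad
e_n \asymp n^{-k/d},
```

so a polynomial entropy exponent d/k in the radius translates into the polynomial decay rate k/d for the entropy numbers, matching the Rellich–Kondrachov asymptotics quoted above.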
