Bounded quantification
Bounded quantification is a construct in type theory that enables universal or existential quantifiers over type variables restricted by subtyping bounds, allowing polymorphic functions to be defined over a constrained set of related types in programming languages.[1] This mechanism integrates parametric polymorphism with subtyping, providing a foundation for generic programming while ensuring type safety through explicit constraints on the quantified types.[2]
Introduced in the late 1980s as part of efforts to model object-oriented features in typed lambda calculi, bounded quantification addresses limitations in earlier systems like System F by incorporating subtype bounds, such as in the form \forall t \leq \tau. t \to t, which types functions applicable to any subtype of \tau.[1] A significant extension, known as F-bounded quantification, generalizes this by allowing the bound to depend recursively on the type variable itself, as in \forall t \leq F[t]. \sigma, where F[t] is a type expression involving t.[2] This innovation, proposed by Canning, Cook, Hill, Olthoff, and Mitchell in 1989, resolves challenges with recursive types, such as typing binary methods (e.g., equality or comparison functions) that require the input and output types to match the same recursive structure.[1]
In practice, bounded quantification underpins features in modern programming languages, including Java's bounded type parameters (e.g., <T extends Comparable<T>> for self-comparable types) and Scala's support for F-bounded polymorphism in trait definitions.[3] These applications facilitate expressive type-safe code for object hierarchies, inheritance, and generic algorithms like sorting over partially ordered types.[2] Theoretically, systems incorporating bounded quantification, such as F_{<:} and its F-bounded variant, exhibit properties like subject reduction (type preservation under reduction), though subtyping remains undecidable in general.[4] Ongoing research explores decidable fragments, such as subclassing-bounded quantification, to improve practical implementability in compilers and type checkers.[5]
Background in Type Theory
Universal and Existential Quantification
In type theory, universal quantification, denoted as \forall T. \tau, enables the definition of polymorphic functions that can operate uniformly over any type T, without depending on the specific properties of that type. This construct allows a term to be abstracted over a type variable T, resulting in a type that is applicable to arbitrary instantiations of T. The introduction of universal quantification occurs through type abstraction, where a term \Lambda T. e is formed, assuming T is a fresh type variable not appearing free in the context. The typing rule for introduction is: if \Gamma, T \vdash e : \tau, then \Gamma \vdash \Lambda T. e : \forall T. \tau, where \Gamma is the typing context and \tau may contain T free.[6] Elimination, or type application, instantiates the quantified type with a specific type S: if \Gamma \vdash e : \forall T. \tau and S is a well-formed type, then \Gamma \vdash e[S] : \tau[S/T]. This mechanism ensures type safety while providing flexibility for generic programming.[6]
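Universal quantification has a direct analogue in Java's generic methods. The following minimal sketch (class and method names are illustrative) treats the type parameter as the \Lambda-abstraction and the explicit type witness as type application:
java
// The type parameter T corresponds to the type abstraction ΛT,
// and supplying a type argument corresponds to type application e[S].
final class Poly {
    // Type: ∀T. T -> T, written as a generic method.
    static <T> T identity(T x) {
        return x;
    }

    public static void main(String[] args) {
        // Explicit type application, analogous to identity[String]:
        String s = Poly.<String>identity("hello");
        // The type argument is usually inferred by the compiler:
        Integer n = identity(42);
        System.out.println(s + " " + n);
    }
}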
Existential quantification, denoted as \exists T. \tau, abstracts over an unknown type T, allowing the encapsulation of a concrete type and its associated value while hiding the type's identity from the client. This is particularly useful for modeling data abstraction, where the implementation type is existentially quantified to prevent direct access. Introduction via packing constructs a package \mathsf{pack}\, S,\, v as \exists T. \tau by choosing a witness type S and a value whose type matches the body with S substituted for T. The typing rule is: if S is a well-formed type and \Gamma \vdash v : \tau[S/T], then \Gamma \vdash \mathsf{pack}\, S,\, v : \exists T. \tau. Elimination through unpacking, typically via a let-binding or case expression, extracts the hidden type and value into a scope where the type remains opaque: if \Gamma \vdash e : \exists T. \tau and \Gamma, T, x : \tau \vdash body : \sigma (with T fresh and not free in \sigma), then \Gamma \vdash \mathsf{let}\, \mathsf{pack}\, T, x = e\ \mathsf{in}\ body : \sigma. This duality to universal quantification supports modular designs by separating interfaces from implementations.
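Java lacks first-class existentials, but wildcard types behave similarly: List<?> reads roughly as \exists T. List<T>. A rough sketch under that analogy (class and method names are illustrative):
java
import java.util.List;

// A wildcard type List<?> hides its element type the way ∃T. List<T> does;
// wildcard capture plays the role of the unpack/let-binding rule.
final class Exists {
    // "Unpacking": the generic helper binds the hidden type to a fresh
    // variable T whose identity stays opaque to callers.
    static <T> int unpackAndMeasure(List<T> packed) {
        return packed.size(); // T may only be used abstractly
    }

    static int measure(List<?> existential) {
        // Capture conversion opens the package with a fresh type variable.
        return unpackAndMeasure(existential);
    }

    public static void main(String[] args) {
        System.out.println(measure(List.of(1, 2, 3))); // prints 3
    }
}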
Universal quantification in System F serves as the foundation for parametric polymorphism, enabling type-generic functions in practical programming languages. These concepts were introduced in Jean-Yves Girard's System F in 1972, establishing a basis for higher-order typed lambda calculi that influenced subsequent developments in type theory.[7]
Parametric Polymorphism
Parametric polymorphism enables the development of code that operates uniformly across a variety of types, treating the type parameter as an abstract entity without relying on type-specific operations or knowledge of the concrete type. This form of polymorphism, also known as genericity, allows functions and data structures to be defined generically, where the behavior remains consistent regardless of the instantiated type, ensuring that the code does not discriminate based on the particular type provided.[8][9]
In programming languages such as Standard ML, parametric polymorphism manifests through polymorphic functions, such as the map function that applies a given function to each element of a list without regard to the list's element type, and through functors, which are parametric modules that generate structure instances based on input signatures. Similarly, in Haskell, unbounded type classes support parametric polymorphism by allowing instances where no additional constraints are imposed, enabling generic definitions like the id function of type a -> a that works for any type a without requiring type-specific methods. These mechanisms facilitate the creation of reusable components that abstract over types, as seen in ML's module system where functors parameterize structures over type variables.[10][11][12]
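A comparable definition can be written in Java; this sketch defines map once, treating its element types as fully abstract:
java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// map is parametrically polymorphic: it never inspects A or B, so one
// definition works uniformly for every instantiation.
final class Parametric {
    static <A, B> List<B> map(List<A> xs, Function<A, B> f) {
        List<B> out = new ArrayList<>();
        for (A x : xs) {
            out.add(f.apply(x)); // the only permitted use of x is passing it to f
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(map(List.of(1, 2, 3), n -> n * 2));      // [2, 4, 6]
        System.out.println(map(List.of("a", "bb"), String::length)); // [1, 2]
    }
}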
The primary benefits of parametric polymorphism include enhanced type safety by preventing type errors at compile time through uniform treatment of types, greater abstraction by hiding implementation details behind generic interfaces, and significant code reuse, as a single definition suffices for multiple type instantiations without duplication. However, it has limitations in scenarios requiring type hierarchies or constraints, such as when operations like comparison must be available on the parameterized type, as pure parametric polymorphism cannot enforce such requirements without additional mechanisms. This approach draws from universal quantification in type theory, where polymorphic functions embody types of the form \forall a. \tau, quantifying over all possible types.[13][8][14]
Core Concepts of Bounded Quantification
Bounded Type Variables
Bounded type variables introduce constraints on type parameters in polymorphic types, restricting them to a subset of possible types defined by a subtyping relation. This mechanism extends parametric polymorphism, the unbounded precursor where type variables range over all types, by allowing programmers to specify that a type variable T must be a subtype of a given type U. The syntax for bounded universal quantification is typically expressed as \forall (T <: U). \tau, where T is the bounded type variable, U is the upper bound, and \tau is the type body that may depend on T. This notation originates from early explorations in type theory aimed at supporting inheritance and subtyping in object-oriented contexts.[15]
The typing rules for bounded quantification ensure type safety by enforcing the subtype constraint during type application. Specifically, to apply a polymorphic term of type \forall (T <: U). \tau to an actual type T', the judgment requires T' <: U, after which the resulting type is \tau[T'/T] (substituting T' for T in \tau). This rule prevents invalid instantiations, such as applying a function expecting subtypes of a base class to an unrelated type, thereby maintaining the integrity of subtyping hierarchies.[16]
Bounded type variables play a crucial role in enabling operations on type parameters that rely on shared interfaces or behaviors among subtypes. For instance, a generic function can invoke methods defined in the upper bound U, assuming all possible T implement those methods, such as accessing a common field in record types representing objects. This allows uniform processing of hierarchical types, like drawing operations on shapes where T ranges over subtypes of a base "Shape" type.[15]
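As a concrete sketch of the shape scenario just described (the Shape interface, Circle class, and drawAll method are illustrative assumptions):
java
import java.util.List;

// Shape and Circle are illustrative stand-ins for a class hierarchy.
interface Shape {
    void draw();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public void draw() { System.out.println("circle with radius " + radius); }
}

final class Canvas {
    // T is bounded above by Shape, so draw() is available on every element.
    static <T extends Shape> void drawAll(List<T> shapes) {
        for (T s : shapes) {
            s.draw(); // safe without casts: the bound guarantees the method
        }
    }

    public static void main(String[] args) {
        drawAll(List.of(new Circle(1.0), new Circle(2.0)));
    }
}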
In formal semantics, bounded quantification is modeled within typed lambda calculi such as F_{<:}, an extension of System F that incorporates subtyping. The operational semantics preserve subtyping relations under quantification: if a term is well-typed under a bounded quantifier, reductions maintain the subtype constraints, ensuring subject reduction and progress properties hold. This preservation guarantees that type applications remain valid throughout evaluation, supporting decidable type checking in restricted fragments.[16]
Subtyping Constraints
In bounded quantification, subtyping rules for universally quantified types, such as \forall X \leq B. T, compare both the bounds and the bodies. In the full variant of F_{<:}, the rule is: if B_2 \leq B_1 and, under the assumption X \leq B_2, T_1 \leq T_2, then \forall X \leq B_1. T_1 \leq \forall X \leq B_2. T_2.[17] For existential quantifiers, the dual rule makes the bound covariant, while contravariance still governs negative positions such as function arguments.[17]
These rules make the upper bound contravariant for universal quantification, allowing a type with the wider (more permissive) bound to subtype one with a narrower bound, while the bodies are compared covariantly under the narrower bound.[18] Contravariance likewise governs method parameters within the quantified body, ensuring the Liskov substitution principle holds across polymorphic invocations.[18]
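Java cannot compare quantified types directly, but its use-site wildcards echo the same positive/negative position distinctions; a hedged analogy (class names are illustrative):
java
import java.util.List;
import java.util.function.Consumer;

final class Variance {
    // Covariant (positive) position: elements are only read out as Numbers.
    static double sum(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) {
            total += n.doubleValue();
        }
        return total;
    }

    // Contravariant (negative) position: Integers are only fed in.
    static void feed(Consumer<? super Integer> sink) {
        sink.accept(42);
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3))); // List<Integer> is accepted
        Consumer<Object> printer = System.out::println;
        feed(printer);                             // Consumer<Object> is accepted
    }
}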
A key challenge is that this subtyping relation can be undecidable: with the contravariant bound rule, a subtyping query can reduce to a strictly larger instance of itself, so a checking procedure need not terminate.[19] Bounds involving recursive types exacerbate the problem, since checking may require unfolding infinite expansions that never converge without additional restrictions.[18]
In type inference, constraint propagation handles these rules by lifting types to minimal forms that satisfy bounds, such as computing the smallest type \tau \upharpoonright \alpha free of certain variables to resolve local subtyping obligations.[5] This technique, applied in systems restricting bounds to class names, ensures decidable checks by separating subclassing from general subtyping and iteratively refining assumptions.[5]
The theoretical foundation integrates these rules with Mitchell-Wright-style subtyping in polymorphic lambda calculi, using coercion semantics to interpret subtypes as type-directed coercions that preserve operational behavior in calculi like F_{<:}.[5] Bounded type variables syntactically enable this integration by restricting quantification to subtype lattices.[17]
F-Bounded Quantification
Motivation and Origins
F-bounded quantification emerged in the late 1980s as an extension to standard bounded quantification in type theory, specifically to address challenges in modeling recursive subtyping within object-oriented programming paradigms. Introduced by Peter Canning, William Cook, Walt Hill, John Mitchell, and Walter Olthoff in their 1989 paper, it built upon earlier work on bounded quantification, such as that by Luca Cardelli and Peter Wegner, which allowed polymorphic functions to operate uniformly over subtypes but struggled with self-referential types common in object-oriented designs. This innovation was motivated by the need to type functions that work polymorphically over families of types involving inheritance hierarchies, where standard bounds like T \leq U proved insufficient for recursive constraints.
The core motivation stemmed from the failure of conventional bounded approaches to handle self-referential type definitions, such as implementing a comparable interface where the type parameter must extend itself, exemplified by Comparable<T extends Comparable<T>>. In such cases, simple bounded quantification leads to inconsistencies or overly restrictive typing, preventing the expression of binary methods like equality or ordering that operate on instances of the same type family. F-bounded quantification resolves this by permitting bounds of the form T \leq F[T], where F is a type constructor or functor, enabling recursive subtyping while preserving type safety and polymorphism. This approach was particularly vital for encoding object-oriented features like inheritance and method overriding in statically typed languages.
Over time, F-bounded quantification has profoundly shaped type systems for encoding complex inheritance patterns and binary methods, serving as a cornerstone for features in modern object-oriented languages and proving essential for maintaining type soundness in the presence of recursion. Bounded type variables, as a simpler precursor, provided the initial framework for subtype polymorphism but lacked the recursive power of F-bounds.
Formal Definition
In the typed lambda calculus, F-bounded quantification extends the syntax of System F to include bounded universal types where the bound may recursively depend on the quantified type variable. The syntax for types includes type variables \alpha, the top type \top, function types A \to B, and bounded universal types \forall \alpha \leq A. B, where A (the bound) may contain free occurrences of \alpha, effectively taking the form \forall \alpha \leq F[\alpha]. \tau with F a type constructor and \tau the body.[2][18] Terms extend System F with type abstractions \Lambda \alpha \leq A. a and type applications a \{A'\}, alongside standard variables x, abstractions \lambda x : A. a, and applications a(a).[18]
The typing rules enforce a self-subtyping condition to handle the recursive nature of F-bounds. For type abstraction, if \Gamma, \alpha \leq A, \Delta \vdash a : B with \alpha not free in the value environment \Delta, then \Gamma, \Delta \vdash \Lambda \alpha \leq A. a : \forall \alpha \leq A. B, where the assumption \alpha \leq A (with A = F[\alpha]) implies the self-condition \alpha \leq F[\alpha].[18] For type application, if \Gamma, \Delta \vdash a : \forall \alpha \leq A. B and \Gamma \vdash A' \leq A[\alpha \leftarrow A'], then \Gamma, \Delta \vdash a \{A'\} : B[\alpha \leftarrow A'], ensuring instantiation preserves the bound via substitution and the self-subtyping A' \leq F[A'].[18] Subsumption allows weakening types via subtyping: if \Gamma, \Delta \vdash a : A and \Gamma \vdash A \leq B, then \Gamma, \Delta \vdash a : B. Standard rules for functions and variables complete the system, with well-formedness requiring environments \Gamma (for types) to satisfy \Gamma \vdash \diamond and bounds to be subtypes under assumptions.[18]
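As a worked instance of the type-application rule (a sketch: the record-type functor F[\alpha] and the ambient subtyping fact Int \leq F[Int] are assumptions, not drawn from the cited sources):
latex
% Let F[\alpha] = \{ eq : \alpha \to Bool \} and suppose
%   a : \forall \alpha \leq F[\alpha].\ \alpha \to Bool.
% Instantiating at Int requires the self-subtyping premise Int \leq F[Int]:
\frac{\Gamma, \Delta \vdash a : \forall \alpha \leq F[\alpha].\ \alpha \to Bool
      \qquad
      \Gamma \vdash Int \leq F[Int] = \{ eq : Int \to Bool \}}
     {\Gamma, \Delta \vdash a\,\{Int\} : Int \to Bool}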
The operational semantics consists of \beta- and \eta-reductions for terms and types: (\lambda x : A. b)(a) \to b[x \leftarrow a], (\Lambda \alpha \leq A. b)\{A'\} \to b[\alpha \leftarrow A'], and corresponding \eta-rules like \lambda x : A. b(x) \to b when x is not free in b. These ensure subject reduction: if \Gamma, \Delta \vdash a : A and a reduces to a', then \Gamma, \Delta \vdash a' : A' for some A' with \Gamma \vdash A' \leq A. They likewise ensure progress, preventing stuck configurations in closed well-typed terms despite recursion.[18] Denotationally, the system is modeled using partial equivalence relations (PERs) over a universal domain \omega, where types denote PERs and terms denote elements thereof; for instance, \llbracket \forall \alpha \leq A. B \rrbracket^\gamma = \bigcap \{ \llbracket B \rrbracket^{\gamma[\alpha \mapsto p]} \mid p \subseteq \llbracket A \rrbracket^{\gamma[\alpha \mapsto p]} \}, capturing the intersection over realizers satisfying the self-referential bound. This validates typing and supports recursive types via fixed points in the subtype lattice.[18]
F-bounded quantification arises as a conservative extension of System F_{\leq}, the second-order calculus with subtyping and non-recursive bounds \forall \alpha \leq B. \tau (where B lacks \alpha). Allowing recursive dependence in bounds preserves the expressiveness of F_{\leq} for parametric polymorphism while adding support for self-referential subtyping, proven via normal-form derivations that introduce no new inconsistencies.[20] Subtyping constraints address recursion by enforcing fixed-point inclusions like T \leq F[T] in the subtype relation.[18]
Theoretical Properties
Expressiveness and Limitations
Bounded quantification significantly enhances the expressiveness of type systems by integrating subtyping constraints with polymorphism, allowing for more precise modeling of type hierarchies in object-oriented contexts. In particular, it enables the encoding of recursive types through self-referential bounds, such as in the form \forall t \leq F[t]. \sigma, where F is a type-level functor that captures structural recursion. This construct characterizes types with recursive structure akin to \mathsf{Rec}\, t. F[t], facilitating the typing of operations that preserve subtype information, like geometric transformations on recursive data structures such as points or shapes.[2]
A key strength lies in supporting binary methods, which operate on pairs of objects of the same type while respecting subtyping. For instance, an equals method can be typed as \forall t \leq \mathsf{Comparable}[t].\ t \times t \to \mathsf{Boolean}, ensuring that subtypes like Circle or Rectangle can implement equality without violating type safety across hierarchies. Similarly, bounded quantification accommodates mixin-based inheritance by enforcing compatible "shapes" in recursive type definitions, such as separating material types (e.g., concrete classes) from abstract shapes (e.g., interfaces like Comparable), thereby modeling composable behaviors without unbounded recursion. F-bounded quantification further bolsters self-referential expressiveness in these scenarios.[21]
Despite these capabilities, bounded quantification has notable limitations in expressiveness. It typically adheres to prenex (rank-1) polymorphism, where quantifiers appear only at the outermost level of types, preventing the direct encoding of higher-rank types that nest quantifiers arbitrarily—such as functions accepting polymorphic continuations—without extensions like impredicative instantiation. Moreover, it falls short of full dependent types, which permit types to depend on term-level values (e.g., length-indexed lists), requiring separate mechanisms for value-dependent refinements beyond mere subtype bounds.[22][23]
In comparison to unbounded System F, which offers impredicative polymorphism across all types without subtyping, bounded variants like F_{<:} introduce subtype constraints to support hierarchical polymorphism but narrow the scope of instantiation to bounded domains, trading some generality for improved integration with inheritance and object-oriented features. Theoretical analyses confirm that these extensions are conservative, meaning they preserve core properties of the underlying system, such as type safety and consistency, without introducing inconsistencies in basic formulations.[2][21]
Decidability Issues
Bounded quantification, particularly when combined with subtyping in calculi like F≤, leads to undecidability of the subtyping relation due to the ability to encode recursive constraints that simulate Turing-complete computations. In F≤, which extends System F with bounded universal quantification and subtyping, the subtyping problem becomes undecidable because bounded types allow "re-bounding" behaviors that can express arbitrary recursive definitions, enabling a reduction from the halting problem. This result builds on earlier work by Tiuryn and Urzyczyn showing undecidability of subtyping for second-order types without bounds, based on Mitchell's containment relation for polymorphic types.[24][25]
To address this undecidability, researchers have identified decidable fragments of bounded quantification by imposing syntactic restrictions that limit recursive interactions. For instance, prohibiting recursive bounds—where the bound of a quantified type variable does not depend on that variable itself—ensures decidable subtyping by avoiding infinite descent in type comparisons. Similarly, stratified subtyping hierarchies, which organize types into levels where quantification only occurs within or across fixed strata without cyclic dependencies, yield decidable checking algorithms, often via automata-based methods or constraint simplification. These fragments maintain much of the expressiveness of bounded quantification for practical use while guaranteeing termination of type-checking procedures.[5][26]
Specific algorithms for handling F-bounded quantification in decidable settings leverage coinductive techniques to verify recursive constraints without enumerating infinite structures. In extensions like MLF, Rémy and collaborators employ coinduction over coercion constraints to check satisfaction of F-bounds, treating recursive type equalities and subtyping as greatest fixed points that can be approximated finitely through step-indexed or environment-based proofs. This approach, formalized in systems like Fcc (System F with coercion constraints), supports bounded polymorphism while ensuring decidable type soundness for abstraction and instantiation rules, though full principal type inference remains challenging.[27]
These decidability results have significant practical implications for programming language design, where full bounded quantification is often approximated to achieve efficient type inference. Languages must balance expressiveness—such as supporting recursive generics—with computability, leading to trade-offs like restricting F-bounds to non-recursive forms (as in early Java generics) or using partial inference with user annotations (as in Scala). Such compromises enable decidable checking in real-world systems without sacrificing core benefits of bounded polymorphism, though they may require careful hierarchy management to avoid subtle undecidability pitfalls.[28][5]
Applications in Programming Languages
Java Generics
Bounded quantification in Java generics was introduced with Java 5 (J2SE 5.0) in September 2004, marking a significant evolution in the language's support for type-safe parametric polymorphism. This feature drew heavily from the Generic Java (GJ) project, proposed by Gilad Bracha, Martin Odersky, David Stoutamire, and Philip Wadler in 1998, which outlined a translation-based approach to adding generics while preserving backward compatibility with existing Java bytecode.[29] The GJ design emphasized bounded type parameters to constrain generic types, enabling safer and more expressive code without runtime overhead.
In Java, upper bounds on type parameters are specified using the syntax <T extends U>, where T is the type variable and U is its upper bound, restricting T to U or any subtype thereof. This allows methods and classes to operate on a bounded set of types, such as ensuring elements in a collection implement a specific interface. For more flexible subtyping in method parameters and return types, wildcards provide lower bounds via <? super T>, permitting arguments of type T or its supertypes, which is particularly useful for producer-consumer patterns like adding elements to a list without knowing the exact type. These constructs enforce compile-time checks for type safety, preventing errors like inserting incompatible types into generic containers.[3]
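A minimal sketch of the lower-bound (consumer) pattern; the addDefaults helper is an illustrative assumption:
java
import java.util.ArrayList;
import java.util.List;

final class Wildcards {
    // Lower bound: the destination only consumes Integers, so any list
    // typed at Integer or a supertype (Number, Object) is acceptable.
    static void addDefaults(List<? super Integer> dst) {
        dst.add(0);
        dst.add(1);
    }

    public static void main(String[] args) {
        List<Number> numbers = new ArrayList<>();
        addDefaults(numbers); // allowed because Number is a supertype of Integer
        System.out.println(numbers); // [0, 1]
    }
}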
F-bounded patterns in Java enable recursive self-referential bounds, as seen in the Comparable<T extends Comparable<T>> interface, which ensures that comparable objects can be compared to instances of their own subtype. This design realizes F-bounded polymorphism, allowing methods like compareTo to type-check against the declaring class's type parameter. F-bounded quantification provides the theoretical basis for such recursive bounds in Java generics.[2]
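A standard illustration of the recursive bound at work (not drawn from the cited papers): a max method whose element type must be comparable to itself.
java
import java.util.List;

final class FBound {
    // The bound T extends Comparable<T> guarantees compareTo accepts exactly T.
    static <T extends Comparable<T>> T max(List<T> xs) {
        T best = xs.get(0);
        for (T x : xs) {
            if (x.compareTo(best) > 0) {
                best = x;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(List.of(3, 1, 4)));   // Integer <: Comparable<Integer>
        System.out.println(max(List.of("b", "a"))); // String <: Comparable<String>
    }
}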
Java's generics rely on type erasure for implementation, where generic type information is stripped at compile-time, replacing type parameters with their bounds (or Object if unbounded) in the bytecode. Bounds are rigorously verified during compilation to ensure adherence to constraints, but runtime type checks cannot leverage erased information, leading to the need for explicit casts in certain scenarios. Type inference, introduced in Java 5 and enhanced in later versions like Java 7 with diamond operators (<>) for constructor calls, automates much of the bound resolution, reducing verbosity while maintaining safety. This erasure model balances expressiveness with compatibility but imposes limitations, such as the inability to instantiate generic arrays or use instanceof with parameterized types.
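A short sketch of these erasure consequences (class names illustrative):
java
import java.util.ArrayList;
import java.util.List;

final class Erasure {
    public static void main(String[] args) {
        // Diamond operator (Java 7+): the type argument is inferred.
        List<String> names = new ArrayList<>();
        names.add("ada");

        // After erasure, both lists share the same runtime class.
        List<Integer> numbers = new ArrayList<>();
        System.out.println(names.getClass() == numbers.getClass()); // true

        // Illegal under erasure (compile-time errors if uncommented):
        // Object o = names;
        // if (o instanceof List<String>) { }        // non-reifiable type
        // List<String>[] arr = new List<String>[10]; // generic array creation
    }
}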
Scala and Other Languages
In Scala, bounded quantification is extended through F-bounded traits, which use upper type bounds to reference the type parameter itself, enabling recursive constraints for self-referential polymorphism. For instance, a trait for ordered types can be defined as trait Ord[A <: Ord[A]] { def compare(that: A): Int }, allowing subtypes to maintain covariance while ensuring method return types align with the specific subclass.[30] This approach supports higher-kinded types, where type constructors can be parameterized over other types, facilitating abstractions like functors or monads beyond simple value polymorphism. Additionally, Scala introduces context bounds (or implicit bounds) as a shorthand for requiring implicit evidence of type class instances, such as def maximum[T: Ord](xs: List[T]): T, which expands to an implicit parameter using Ord[T]. This enhances expressiveness for ad-hoc polymorphism without explicit bound syntax.[31]
Other languages implement bounded quantification with variations emphasizing runtime support and constraints. In C#, the where clause enforces bounds on generic type parameters, as in class Comparer<T> where T : IComparable<T>, restricting T to types implementing the specified interface or inheriting from a base class, thereby ensuring access to required members at compile time.[32] Ceylon took a different approach with reified generics, preserving full type information at runtime to avoid Java's type-erasure pitfalls; bounded declarations such as given T satisfies UpperBound therefore retained their constraints during execution, enabling reflective operations on generic arguments without loss of specificity (the language was actively developed until the project was archived in 2023).[33][34]
Examples
Basic Bounded Quantification
Bounded quantification restricts type parameters in polymorphic constructs to subtypes (or supertypes) of a specified bound, enabling type systems to guarantee access to certain methods or properties without runtime checks or casts. This basic form combines parametric polymorphism with subtyping, allowing generic code to operate safely on a constrained set of types.[35]
A straightforward example is a generic Stack class parameterized by T where T must extend a Collection interface, permitting the stack to invoke Collection methods like size() on its elements. This design assumes a basic object-oriented setting with interfaces defining common behaviors.
pseudocode
interface Collection {
    int size();
    // Other methods omitted for brevity
}

class Stack<T extends Collection> {
    private List<T> elements = new ArrayList<>();

    public void push(T item) {
        // Safe access to bound methods without casting
        if (item.size() > 0) {
            System.out.println("Pushing non-empty collection of size " + item.size());
        }
        elements.add(item);
    }

    public T pop() {
        if (!elements.isEmpty()) {
            return elements.remove(elements.size() - 1);
        }
        throw new EmptyStackException();
    }
}
The upper bound T extends Collection informs the type checker that any instantiation of Stack—such as Stack<List> or Stack<Set>—provides elements supporting Collection operations, ensuring type-safe calls like item.size() within push without explicit casts or potential ClassCastExceptions at runtime.[35]
This contrasts with unbounded parametric polymorphism, where T could be any type and method calls on T would require unsafe casts.[35]
Upper bounds (extends) suit producer patterns, where the generic type yields values readable as supertypes, while lower bounds (super) fit consumer patterns, where values from subtypes are accepted. The PECS principle—Producer Extends for reading, Consumer Super for writing—helps select bounds to preserve subtyping flexibility in method signatures.[36]
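A compact PECS illustration; the copyInto helper is hypothetical, though java.util.Collections.copy has a similar shape:
java
import java.util.ArrayList;
import java.util.List;

final class Pecs {
    // src produces Ts (extends); dst consumes Ts (super).
    static <T> void copyInto(List<? super T> dst, List<? extends T> src) {
        for (T item : src) {
            dst.add(item);
        }
    }

    public static void main(String[] args) {
        List<Integer> src = List.of(1, 2, 3);
        List<Number> dst = new ArrayList<>();
        copyInto(dst, src); // T is inferred as Integer
        System.out.println(dst); // [1, 2, 3]
    }
}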
A frequent pitfall is bound mismatch: attempting Stack<String> fails compilation since String does not extend Collection, preventing erroneous usage and enforcing the intended type constraints at compile time.[35]
F-Bounded Polymorphism Example
F-bounded polymorphism enables the implementation of type-safe binary methods in comparable containers, where operations like finding a minimum require arguments of the exact same subtype as the receiver. Consider a SortedList class parameterized over T extends Comparable<T>, allowing methods such as minimum to compare and return elements of type T without type casts or widening to a supertype. This recursive bound T <: Comparable<T> ensures that subtypes like Integer or custom classes can participate in operations that preserve the specific type, as introduced in the foundational work on F-bounded quantification for object-oriented programming.[1]
In Java, this can be exemplified as follows:
java
// Note: Comparable and Integer here are simplified stand-ins that shadow
// the java.lang types of the same names.
interface Comparable<T extends Comparable<T>> {
    boolean lessThan(T other);
}

class Integer implements Comparable<Integer> {
    private int value;

    public Integer(int value) { this.value = value; }

    public boolean lessThan(Integer other) {
        return this.value < other.value;
    }
}

class SortedList<T extends Comparable<T>> {
    private T element;

    public SortedList(T element) { this.element = element; }

    public T minimum(T other) {
        if (this.element.lessThan(other)) {
            return this.element;
        } else {
            return other;
        }
    }
}
Here, SortedList<Integer> instantiates correctly because Integer satisfies the bound Integer <: Comparable<Integer>, enabling minimum to return Integer precisely.[1]
In Scala, F-bounded polymorphism often combines with self-types in traits to support operations like cloning or equality that return the concrete subtype. For instance, a trait for cloneable entities can use a self-referential bound with a self-type annotation to enforce that implementations provide a clone method returning their own type:
scala
trait Cloneable[A <: Cloneable[A]] { self: A =>
  def clone: A
}

class MyClass extends Cloneable[MyClass] {
  override def clone: MyClass = new MyClass
}
This setup ensures MyClass.clone returns MyClass, not a supertype, leveraging the F-bound A <: Cloneable[A] alongside the self-type self: A => for precise typing within the trait.[1]
Type checking for such F-bounded instantiations proceeds recursively: for a type S to instantiate F<T> where T <: F<T>, the system verifies S <: F<S>, unfolding the bound as needed. In the SortedList<Integer> case, it checks Integer <: Comparable<Integer>, confirming Integer implements the interface with the correct parameter type; failure occurs if the bound does not hold, preventing unsafe polymorphism. This resolution supports recursive scenarios by ensuring subtype compatibility without infinite expansion, as formalized in the semantics of F-bounded quantification.[1]
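A brief usage sketch reusing the classes from the Java listing above; the Demo class itself is an illustrative addition:
java
// Reuses the article's SortedList and custom Integer (which shadows
// java.lang.Integer in this example).
final class Demo {
    public static void main(String[] args) {
        // Type-checks because Integer <: Comparable<Integer>.
        SortedList<Integer> list = new SortedList<>(new Integer(3));
        Integer min = list.minimum(new Integer(5)); // holds the value 3
        System.out.println(min.lessThan(new Integer(4))); // prints true
    }
}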
In real-world applications, F-bounded polymorphism facilitates encoding builder patterns through inheritance, where an abstract builder class uses T extends Builder<T> to allow fluent methods like setValue to return the specific child builder type, enabling type-safe chaining in subclasses. Similarly, it supports state machines via hierarchical inheritance, where state classes extend a bounded base to define transitions that return the exact next-state subtype, maintaining type precision across machine evolutions. These patterns extend basic bounded quantification by handling self-referential recursion in practical object-oriented designs.[1]
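A hedged sketch of the F-bounded builder pattern just described; all class and method names are illustrative assumptions:
java
// The bound T extends Builder<T> lets each fluent method return the
// concrete subclass type, so chaining never loses precision.
abstract class Builder<T extends Builder<T>> {
    protected String value;

    @SuppressWarnings("unchecked")
    T setValue(String v) {
        this.value = v;
        return (T) this; // safe by convention: T is the subclass itself
    }
}

final class HttpRequestBuilder extends Builder<HttpRequestBuilder> {
    HttpRequestBuilder setUrl(String url) {
        return this;
    }
}

final class BuilderDemo {
    public static void main(String[] args) {
        // Chaining type-checks because setValue returns HttpRequestBuilder,
        // not the raw Builder supertype.
        new HttpRequestBuilder().setValue("x").setUrl("https://example.org");
    }
}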