
Global Descriptor Table

The Global Descriptor Table (GDT) is a data structure in the x86 architecture used in protected mode to define and manage memory segments through a table of 8-byte segment descriptors that specify base addresses, limits, access rights, and other attributes for code, data, and system segments. It serves as a core component for enabling segmentation, which translates logical addresses to linear addresses while enforcing protection mechanisms such as limit checking, type validation, and privilege-level restrictions across rings 0 through 3.

The GDT is stored in system memory as a linear array of descriptors, with the first entry reserved as a null descriptor to prevent unintended memory accesses through uninitialized segment registers, and it can hold up to 8192 entries, though the actual size is defined by a 16-bit limit in the GDTR (Global Descriptor Table Register). Each descriptor includes fields for the 32-bit segment base address, a 20-bit limit (scalable via a granularity flag to 4 KB units), access rights (encompassing the present bit, descriptor privilege level or DPL, and type indicators for code, data, or system usage), and mode-specific flags such as the default operation size (D/B) bit or the long-mode (L) bit for IA-32e compatibility. The GDTR itself holds the GDT's base address (32 bits in protected mode, expanded to 64 bits in IA-32e mode) and is loaded using the privileged LGDT instruction, typically by the operating system in ring 0, with the table aligned on an 8-byte boundary for optimal performance.

In protected-mode operation, the GDT supports essential system functions including multitasking via Task State Segment (TSS) descriptors, inter-privilege-level transfers through call gates, and integration with the Interrupt Descriptor Table (IDT) for handling exceptions and interrupts. The GDT contains segment descriptors, including system descriptors for LDTs and TSSs, while the IDT is a separate table for interrupt and exception handling. Although segmentation via the GDT enables both flat memory models (with a single large segment) and multi-segment configurations for fine-grained control, its role diminishes in 64-bit IA-32e mode, where most segmentation is disabled except for the FS and GS segments used for thread-local and per-CPU data. Upon processor reset, the GDTR initializes to a base of 0 and a limit of 0xFFFF, necessitating explicit OS initialization to enable protected-mode functionality.

Background and Purpose

Definition and Role in x86 Architecture

The Global Descriptor Table (GDT) is a fundamental data structure in the x86 architecture that contains an array of 8-byte descriptor entries. Each entry defines the attributes of a memory segment, including its base address, size limit, access rights, privilege level, and granularity, enabling the processor to manage distinct regions of memory for code, data, stacks, and system resources. The GDT can be located anywhere in the linear address space and is referenced by the processor through the Global Descriptor Table Register (GDTR), which holds the table's base address and limit.

In operation, the primary role of the GDT is to facilitate segmentation, where logical addresses, composed of a segment selector and an offset, are translated into linear addresses by the CPU. Segment selectors, loaded into the six segment registers (CS for code, DS/ES/FS/GS for data, and SS for the stack), index into the GDT to retrieve the corresponding descriptor and apply its attributes during address resolution. This mechanism enforces memory protection by checking access permissions, segment limits, and privilege levels (such as ring 0 for kernel mode and ring 3 for user mode), preventing unauthorized access and ensuring isolation between different execution environments. The GDT thus supports essential features like multitasking, where multiple processes can share memory safely, and privilege separation to mitigate risks from faulty or malicious software.

Key benefits of the GDT include enabling code and data separation, which allows executable segments to be distinguished from writable segments, and facilitating dynamic memory management in operating systems. For instance, in a typical OS configuration, the GDT might define a code segment with DPL 0 for kernel execution and a separate code segment with DPL 3 for user-space applications, ensuring that user code cannot directly access kernel memory. This structure underpins the x86's segmented memory model, providing a foundational layer for protection and privilege handling without relying on flat addressing alone.

Memory Segmentation Fundamentals

Memory segmentation in the x86 architecture divides the address space into variable-sized, independent segments to enable memory protection, isolation of program components, and efficient addressing, in contrast to flat models that treat memory as a single continuous region. This approach allows programs to operate within designated regions, such as code, data, or stack segments, preventing unauthorized accesses and supporting multitasking by logically separating processes. Protected-mode segmentation was introduced with the 80286 processor to overcome the limitations of real-mode addressing in earlier processors like the 8086, which restricted programs to a 1 MB address space without protection, thereby enabling protected operation for reliable multitasking and enhanced security.

In x86 addressing, memory references use logical addresses composed of a segment selector and an offset, which the processor translates into a linear address by adding the offset to the segment's base address; this linear address is then mapped to a physical address via paging if paging is enabled. Specifically, the translation follows the formula linear address = segment base + offset, ensuring that all accesses are resolved relative to segment boundaries before any further translation to physical memory. This two-stage process, segmentation followed by paging, provides layered abstraction, allowing flexible virtual-to-physical mapping while enforcing segment-level protections.

Each segment is defined by key attributes that control its location, size, and access permissions, including a 32-bit base address specifying the segment's starting location in the linear address space, and a 20-bit limit field that defines the segment's size, which can be scaled by a granularity bit to units of 1 byte or 4 KB for extents up to 4 GB. Access rights attributes govern operations such as read, write, and execute permissions, while the descriptor privilege level (DPL), ranging from 0 (most privileged) to 3 (least privileged), enforces ring-based protection relative to the current privilege level. Additional flags, such as the conforming bit for code segments (allowing access from less privileged levels under certain conditions) and the expand-down bit for data segments (enabling growth downward from the limit), provide flexibility for specific use cases like stacks. These segment definitions are stored in descriptor tables, such as the Global Descriptor Table.
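The base-plus-offset translation and limit check described above can be illustrated with a short sketch. The following C fragment models a segment with illustrative names (segment_t and logical_to_linear are not architectural terms) and shows the arithmetic the processor performs before paging:

```c
#include <stdbool.h>
#include <stdint.h>

/* Software model of the descriptor fields relevant to translation. */
typedef struct {
    uint32_t base;        /* 32-bit segment base address             */
    uint32_t limit;       /* 20-bit limit field from the descriptor  */
    bool     granularity; /* G flag: 0 = byte units, 1 = 4 KB units  */
} segment_t;

/* Translate a logical (segment:offset) reference to a linear address.
 * Returns false when the offset exceeds the segment limit, the case in
 * which a real CPU would raise #GP or #SS instead of completing the access. */
bool logical_to_linear(const segment_t *seg, uint32_t offset, uint32_t *linear)
{
    uint32_t effective_limit = seg->granularity
        ? (seg->limit << 12) | 0xFFF   /* limit scaled to 4 KB units */
        : seg->limit;                  /* limit in bytes             */

    if (offset > effective_limit)
        return false;                  /* limit check fails          */

    *linear = seg->base + offset;      /* linear = base + offset     */
    return true;
}
```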

Structure and Format

Organization of the GDT

The Global Descriptor Table (GDT) is structured as an array of 8-byte descriptors stored in system memory, forming the basis for segmentation in protected mode. The table always begins with a null descriptor at index 0, which is defined as all zeros and is treated as invalid for any selector, functioning as a safeguard against the use of uninitialized segment registers by causing a general-protection exception if referenced. Following this, the table accommodates up to 8191 additional descriptors starting at index 1, with the total maximum of 8192 entries limited by the 64 KB capacity derived from the GDTR's 16-bit limit field.

The GDT resides at a location in linear address space specified by the Global Descriptor Table Register (GDTR), a 48-bit register in the IA-32 architecture consisting of a 32-bit base address pointing to the table's starting byte and a 16-bit limit field indicating the table's size in bytes minus one, with a maximum value of 0xFFFF for 65536 bytes. For efficiency, the base address should be aligned on an 8-byte boundary, and the table length (limit + 1) should be a multiple of 8 bytes so that it covers only complete descriptor entries.

Entries within the GDT are numbered sequentially from 0, enabling straightforward indexing: the processor computes an entry's address by multiplying the selector's 13-bit index by 8 and adding the result to the GDTR base. Operating systems commonly reserve the initial low indices for essential fixed-purpose segments, such as index 1 for a flat code segment and index 2 for a flat data segment in minimalist configurations, thereby standardizing access patterns across implementations. This array organization relies on the uniform 8-byte descriptor entries as its core building blocks for defining segment attributes.
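The sizing and indexing arithmetic above can be stated concretely. The helper names in the following C sketch are illustrative only; the constants (8 bytes per descriptor, limit + 1 total bytes) follow the description in this section:

```c
#include <stdint.h>

/* Number of complete 8-byte descriptors covered by a GDTR limit value. */
static inline uint32_t gdt_entry_count(uint16_t gdtr_limit)
{
    return ((uint32_t)gdtr_limit + 1) / 8;
}

/* Linear address of descriptor 'index': GDTR base + index * 8.
 * A real CPU additionally raises #GP if index * 8 + 7 exceeds the limit. */
static inline uint32_t gdt_entry_address(uint32_t gdtr_base, uint16_t index)
{
    return gdtr_base + (uint32_t)index * 8u;
}
```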

Descriptor Entry Components

Each entry in the Global Descriptor Table (GDT) is an 8-byte structure that defines the attributes of a memory segment or system resource in the architecture's protected mode. This format allows the processor to enforce memory protection, privilege levels, and access controls by specifying the segment's base address, size (limit), and operational characteristics. The layout divides the 64 bits into fields for the base address (32 bits total), segment limit (20 bits total), access rights, and flags, enabling flexible segment definitions up to 4 GB in size when 4 KB granularity is used. The byte-level organization of a GDT entry is as follows:
Byte Offset | Bits | Field Description
0-1 | 0-15 | Segment Limit (bits 15:0)
2-3 | 0-15 of base | Base Address (bits 15:0)
4 | 16-23 of base | Base Address (bits 23:16)
5 | Access rights | Present (P, bit 7), Descriptor Privilege Level (DPL, bits 5-6), Descriptor type (S, bit 4), Type (bits 0-3)
6 | Flags and upper limit | Granularity (G, bit 7), Default/Big (D/B, bit 6), 64-bit code (L, bit 5; reserved outside IA-32e mode), Available (AVL, bit 4), Segment Limit bits 19:16 (bits 0-3)
7 | 24-31 of base | Base Address (bits 31:24)
This structure concatenates the base address from three non-contiguous fields: the lower 16 bits (bytes 2-3), middle 8 bits (byte 4), and upper 8 bits (byte 7), forming a 32-bit linear address that specifies the segment's starting location in the linear address space. The effective base address is thus the 32-bit value assembled as: base = (base[24-31] << 24) | (base[16-23] << 16) | base[0-15].

Key fields within the entry control access and behavior. The Present bit (P, bit 47) indicates whether the segment is present in memory (1) or not (0, triggering a #NP exception on access). The Descriptor Privilege Level (DPL, bits 45-46) specifies a 2-bit privilege value (0-3, with 0 being the highest) to enforce ring-based protection against unauthorized access from less privileged code. The descriptor type bit (S, bit 44) distinguishes code and data segments (1) from system segments such as task state segments or local descriptor tables (0).

For code and data segments (S=1), the Type field (bits 40-43) defines the segment's category and permissions. Data segments (Type 0-7) support read-only (0-1, 4-5), read/write (2-3, 6-7), and expand-down variants (4-7, where the segment grows downward from the limit); the accessed bit (bit 40) tracks hardware-detected usage. Code segments (Type 8-15) enable execute-only (8-9, 12-13), execute/read (10-11, 14-15), and conforming modes (12-15, allowing execution from less privileged code without a privilege change); again, the accessed bit monitors usage.

The Granularity bit (G, bit 55) scales the 20-bit limit field: if G=1, the limit is expressed in 4 KB units (allowing segments up to 4 GB); if G=0, it is expressed in bytes (up to 1 MB). The Default/Big bit (D/B, bit 54) sets the default operand size: 1 for 32-bit operations (or a 32-bit stack pointer for stack segments), 0 for 16-bit. The Available bit (AVL, bit 52) is reserved for use by system software, such as operating systems implementing custom flags.

Special cases include the null descriptor, placed at index 0 of the GDT, where all 64 bits are zero; it represents an invalid segment and causes general-protection exceptions (#GP) when used for memory access or when loaded into segment registers such as CS or SS. System descriptors (S=0) use the Type field for specialized entries, such as Type 2 for a Local Descriptor Table (LDT) descriptor, which includes base and limit fields locating an LDT for task-specific segments, or Types 1 and 3 (16-bit available and busy TSS) and Types 9 and 11 (32-bit available and busy TSS) for Task State Segments, which store processor state for task switching; a 32-bit TSS descriptor requires a minimum limit of 0x67, corresponding to a 104-byte segment.
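The packing of these fields can be made concrete with a short helper. The following C sketch (the function and parameter names are illustrative, not part of any standard API) assembles an 8-byte descriptor from a base, limit, access byte, and flags nibble according to the byte layout in the table above:

```c
#include <stdint.h>

/* Pack a protected-mode segment descriptor from its logical fields.
 * Bit positions follow the IA-32 descriptor format described above. */
uint64_t make_descriptor(uint32_t base, uint32_t limit,
                         uint8_t access, uint8_t flags)
{
    uint64_t d = 0;

    d  = (uint64_t)(limit & 0xFFFFu);              /* limit 15:0      -> bytes 0-1   */
    d |= (uint64_t)(base & 0xFFFFu)        << 16;  /* base 15:0       -> bytes 2-3   */
    d |= (uint64_t)((base >> 16) & 0xFFu)  << 32;  /* base 23:16      -> byte 4      */
    d |= (uint64_t)access                  << 40;  /* P, DPL, S, type -> byte 5      */
    d |= (uint64_t)((limit >> 16) & 0xFu)  << 48;  /* limit 19:16     -> byte 6 low  */
    d |= (uint64_t)(flags & 0xFu)          << 52;  /* G, D/B, L, AVL  -> byte 6 high */
    d |= (uint64_t)((base >> 24) & 0xFFu)  << 56;  /* base 31:24      -> byte 7      */

    return d;
}

/* Example: a flat 4 GB ring-0 code segment (base 0, limit 0xFFFFF,
 * access 0x9A = present, DPL 0, code, execute/read; flags 0xC = G=1, D=1)
 * encodes to 0x00CF9A000000FFFF. */
```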

Loading and Access Mechanisms

GDTR Register and Initialization

The Global Descriptor Table Register (GDTR) is a processor register that holds the base address and limit of the Global Descriptor Table (GDT) in memory, enabling the x86 processor to locate and access segment descriptors during protected-mode operation. In IA-32 protected mode, the GDTR is a 48-bit register comprising a 32-bit base address field, which specifies the linear starting address of the GDT, and a 16-bit limit field, which defines the size of the table in bytes minus one (for a maximum table size of 64 KB). In x86-64 long mode (IA-32e mode), the GDTR expands to an 80-bit structure with a 64-bit base address that must be in canonical (sign-extended) form, while the limit remains 16 bits. The base address should be aligned on an 8-byte boundary for optimal performance, and the table must reside in accessible, readable memory, such as writeback-cacheable regions.

The GDTR is loaded using the LGDT (Load Global Descriptor Table Register) instruction, which is a privileged operation requiring execution at current privilege level (CPL) 0 and is unavailable in virtual-8086 mode. LGDT reads a 6-byte operand from memory in protected mode, consisting of the 16-bit limit followed by the 32-bit base, or a 10-byte operand in IA-32e mode, directly updating the GDTR fields without affecting other registers. This instruction is serializing, guaranteeing that all prior instructions complete before the load, and it can be executed in both real and protected modes, though its utility is primarily in protected-mode initialization. After loading the GDTR, operating systems typically load segment registers (such as CS, DS, ES, FS, GS, and SS) using instructions like LSS (Load Stack Segment) or far jumps to establish initial segment contexts based on GDT entries.

Initialization of the GDTR occurs during operating system boot, and occasionally during context switches, to ensure the GDT is properly referenced for segmentation. The process begins with the OS allocating and populating the GDT in memory with at least a null descriptor as the first entry, followed by issuing LGDT to set the base and limit in the GDTR. This setup is essential before enabling protected mode, as the processor references the GDTR for all segment loads and descriptor validations thereafter. In multi-tasking environments, the GDTR contents may be reloaded during context switches if the GDT needs relocation, though modern systems often maintain a fixed GDT to minimize overhead. The GDT should reside in memory that remains mapped and accessible so that descriptor fetches do not fault unexpectedly.

Invalid GDTR configurations trigger exceptions to enforce memory safety. If the LGDT operand references a non-canonical address in long mode or lies outside its segment limit, a general-protection fault (#GP) is raised; executing LGDT at CPL greater than 0 likewise causes #GP. Page faults (#PF) may occur if the memory holding the operand or the GDT itself is not present, and under the virtual machine extensions (VMX), invalid GDTR state during VM exits can lead to VMX aborts. These faults ensure the GDT remains a reliable segmentation foundation, with control transferred to the operating system's fault handler for correction.

In real-address mode, the GDT and GDTR are not used for segmentation, as the processor operates within the flat 1 MB real-mode address space; following reset, the GDTR defaults to a base of 0x00000000 and a limit of 0xFFFF, but these values are ignored until protected mode is enabled.
Transition to protected mode requires first initializing the GDT and loading the GDTR via LGDT in real mode, then setting the protection enable (PE) bit in CR0, for example with the LMSW (Load Machine Status Word) instruction or a MOV to CR0, followed by a far jump to reload the code segment register (CS) from a GDT selector. This sequence activates segmentation, making the GDTR active and enabling descriptor-based address translation upon entry to protected mode.
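As a sketch of the loading step, the following C fragment targets a 32-bit freestanding kernel built with a GCC-compatible toolchain; the gdt array and its size are assumptions supplied elsewhere in such a kernel. It builds the 6-byte pseudo-descriptor that LGDT expects and loads it with inline assembly:

```c
#include <stdint.h>

/* The 6-byte operand LGDT reads in protected mode: a 16-bit limit
 * followed by a 32-bit base.  The packed attribute prevents the
 * compiler from inserting padding between the two fields. */
struct gdtr32 {
    uint16_t limit;  /* size of the GDT in bytes, minus one      */
    uint32_t base;   /* linear address of the first descriptor   */
} __attribute__((packed));

/* Assumed GDT: three 8-byte entries (null, code, data) defined elsewhere. */
extern uint64_t gdt[3];

/* Load the GDTR; must run at CPL 0.  This is a sketch for a 32-bit
 * freestanding environment, not portable userspace code. */
static inline void load_gdt(void)
{
    struct gdtr32 gdtr = {
        .limit = sizeof(uint64_t) * 3 - 1,        /* table length - 1 */
        .base  = (uint32_t)(uintptr_t)gdt,
    };
    __asm__ volatile("lgdt %0" : : "m"(gdtr));
}
```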

Segment Selectors and Resolution

A segment selector is a 16-bit value that serves as an identifier for a segment descriptor within the Global Descriptor Table (GDT) or Local Descriptor Table (LDT). It consists of three fields: a 13-bit index (bits 3 through 15), which specifies the entry number in the descriptor table; a table indicator (TI) bit (bit 2), where 0 selects the GDT and 1 selects the LDT; and a 2-bit requested privilege level (RPL, bits 0 and 1), which indicates the privilege level requested for the access and is used in protection checks. The index field determines the position of the descriptor in the table, while the RPL provides an additional check against the current privilege level (CPL), which the processor takes from the selector currently loaded in the CS register.

The resolution process begins when the CPU loads a segment selector into a segment register, such as CS, SS, DS, ES, FS, or GS. The processor uses the TI bit to determine whether to access the GDT or LDT; for the GDT, it takes the base address from the GDTR register and computes the byte offset by multiplying the index by 8 (shifting left by 3 bits, as each descriptor is 8 bytes long) and adding it to the base. This offset locates the corresponding descriptor entry, which is then fetched and validated. The present (P) bit in the descriptor must be set to 1; otherwise, a #NP fault occurs. Additional checks verify the descriptor type (e.g., code or data), segment limit, and access rights to ensure the offset and access are within bounds; if any check fails, a #GP fault is raised. When a descriptor is loaded successfully, the processor may also set its accessed bit.

Privilege checks are integral to the resolution process to enforce protection rings. The CPL is compared against the descriptor's descriptor privilege level (DPL) and the selector's RPL. For data segments, access is permitted only if DPL ≥ CPL and DPL ≥ RPL; a violation results in a #GP fault. For nonconforming code segments, the CPL must equal the DPL. In contrast, conforming code segments allow access if CPL ≥ DPL, enabling less privileged code (numerically higher CPL) to execute more privileged code (numerically lower DPL) without a privilege change. Stack segment (SS) selectors require the DPL to exactly match the CPL, and inter-segment control transfers must satisfy both RPL and CPL checks relative to the target DPL to prevent privilege escalation. These checks occur at load time and during address translation.

Once resolved and validated, the descriptor's contents, including base address, segment limit, and access rights, are cached in the hidden portion of the segment register for efficient subsequent access. This caching avoids repeated GDT lookups during memory operations within the segment, improving performance. The Load Segment Limit (LSL) instruction loads the segment limit associated with a selector into a general-purpose register and sets the zero flag if the selector is valid and accessible; otherwise, it clears the zero flag. Similarly, the Load Access Rights Byte (LAR) instruction checks and loads the access rights from the referenced descriptor. When segment registers are loaded via instructions like MOV or far jumps, the cache is updated only after successful resolution and checks. The selector fields are summarized in the following table, with a minimal software decoding shown in the sketch after it.
Field | Bits | Description
Index | 3-15 | Entry number in the GDT or LDT (13 bits, multiplied by 8 to form the byte offset)
TI | 2 | Table indicator: 0 = GDT, 1 = LDT
RPL | 0-1 | Requested privilege level (0 most privileged, 3 least)
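The following C sketch decodes the three selector fields and expresses the data-segment privilege rule described above; the type and function names are illustrative rather than architectural:

```c
#include <stdint.h>

/* Split a 16-bit segment selector into its three fields. */
typedef struct {
    uint16_t index; /* bits 3-15: entry number in the GDT or LDT */
    uint8_t  ti;    /* bit 2: 0 = GDT, 1 = LDT                   */
    uint8_t  rpl;   /* bits 0-1: requested privilege level       */
} selector_fields_t;

selector_fields_t decode_selector(uint16_t sel)
{
    selector_fields_t f = {
        .index = (uint16_t)(sel >> 3),
        .ti    = (uint8_t)((sel >> 2) & 1u),
        .rpl   = (uint8_t)(sel & 3u),
    };
    return f;
}

/* Data-segment privilege check: access is allowed only when
 * DPL >= CPL and DPL >= RPL (numerically larger = less privileged). */
int data_access_allowed(uint8_t cpl, uint8_t rpl, uint8_t dpl)
{
    return dpl >= cpl && dpl >= rpl;
}
```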

Usage Across Processor Modes

In IA-32 Protected Mode

In IA-32 protected mode, the Global Descriptor Table (GDT) becomes fully operational following the transition from real-address mode, a capability introduced with the 80286 and required for all subsequent x86 processors operating in this environment. Protected mode is activated by setting the protection enable (PE) bit in the CR0 control register, which mandates the use of the GDT for segmentation to enforce hardware-based protection mechanisms such as privilege levels and access controls, while paging remains an optional extension for virtual memory management. Once enabled, the processor relies on segment descriptors within the GDT to define the boundaries and attributes of memory segments, ensuring isolation between kernel and user address spaces or between different privilege rings.

Common configurations of the GDT in protected mode often employ a flat memory model, where a single code segment and a single data segment each span the entire 4 GB linear address space with a base of 0 and a limit of 4 GB (granularity bit set), simplifying address translation and reducing overhead for the operating system. Alternatively, a multi-segment model is used for legacy applications or environments requiring granular protection, defining separate segments for code, data, and stacks with distinct base addresses, limits, and access rights to prevent unauthorized overlaps or executions. These setups leverage descriptor types (e.g., executable code or writable data) and segment selectors to enforce access rules based on the current privilege level.

Inter-segment operations in protected mode involve loading new segment selectors into registers like CS (code segment) or DS (data segment) during far jumps or calls, which trigger descriptor resolution from the GDT and privilege checks to ensure the target segment's descriptor privilege level (DPL) is compatible with the selector's requested privilege level (RPL) and the current privilege level (CPL). For privilege-level changes, such as entering a more privileged ring, the stack segment register (SS) must be updated to point to a stack segment whose DPL matches the new privilege level, often facilitated through call gates or task state segments to maintain stack integrity across rings.

Fault handling in protected mode detects violations related to GDT-defined segments, generating a stack-segment fault (#SS) when operations exceed the stack segment's limit or reference a non-present stack segment (present bit P=0 in the descriptor). Similarly, a segment-not-present fault (#NP) occurs upon attempting to load a selector for a segment whose present bit is clear, preventing access to unloaded or invalid descriptors and allowing the operating system to handle loading or swapping as needed.
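For illustration, a minimal flat-model GDT of the kind described above can be expressed as three 8-byte constants. The encoded values below are the commonly used encodings for base 0, limit 0xFFFFF, 4 KB granularity, and ring-0 code and data segments; the array name is an assumption for this sketch:

```c
#include <stdint.h>

/* A minimal flat-model protected-mode GDT: a null descriptor, a 4 GB
 * ring-0 code segment, and a 4 GB ring-0 data segment.  The constants
 * follow the descriptor layout shown earlier (base 0, limit 0xFFFFF,
 * G=1, D=1). */
static const uint64_t flat_gdt[] = {
    0x0000000000000000ull,  /* 0x00: null descriptor                           */
    0x00CF9A000000FFFFull,  /* 0x08: code, P=1 DPL=0 S=1 type=0xA (exec/read)  */
    0x00CF92000000FFFFull,  /* 0x10: data, P=1 DPL=0 S=1 type=0x2 (read/write) */
};
```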

In x86-64 Long Mode

x86-64 long mode, also known as IA-32e mode, was introduced by AMD in 2003 with the Opteron processor family, marking the first implementation of a 64-bit extension to the x86 architecture. This mode encompasses two sub-modes: a 64-bit mode for native 64-bit operation and a compatibility sub-mode that supports 32-bit x86 applications, enabling backward compatibility while expanding virtual addresses to 64 bits. In long mode, the Global Descriptor Table (GDT) persists from earlier x86 modes but undergoes substantial simplification, as segmentation is largely supplanted by paging for memory protection and addressing. The GDT's role shifts from defining complex segment hierarchies to primarily supporting system-level constructs and compatibility features.

In 64-bit mode, the GDT effectively enforces a flat memory model: the code segment (CS) base is treated as 0 and its limit is not enforced, and the data segments (DS, ES, SS) are likewise treated as having base 0 with limits and most attributes ignored. This flattening eliminates the need for multiple data segment descriptors, as paging handles virtual-to-physical address translation and protection. However, the FS and GS segments retain utility for thread-local storage, with their base addresses configurable not only via GDT descriptors but also through Model-Specific Registers (MSRs) such as IA32_FS_BASE and IA32_GS_BASE, allowing full 64-bit linear base addresses without GDT involvement.

A minimal GDT suffices for long-mode operation, typically comprising a null descriptor as the first entry (selector 0, unused), a 64-bit code descriptor (with the L bit set and the D bit clear), and optionally a 32-bit code descriptor for compatibility sub-mode support. System descriptors, such as the 64-bit Task State Segment (TSS) descriptor, remain required for interrupt stack tables, even though hardware task switching is disabled. Segment selectors continue to be used in instructions and registers, but their interpretation is simplified: privilege levels (DPL) and type fields are still checked, yet expand-down data segments are not supported, and attempts to use 16- or 32-bit gates trigger general-protection faults (#GP). Overall, these adaptations reduce the GDT's complexity, prioritizing paging for security while retaining selectors for legacy and system purposes.

Limitations in 64-bit mode further constrain GDT functionality to ensure compatibility and performance: segment limits are ignored for all segments, including FS and GS, with no limit checking beyond canonical-address validation. Task gates are unavailable and only 64-bit call gates are permitted; attempts to use legacy gates or hardware task switches result in #GP exceptions. The GDTR register expands to a 64-bit base, but the GDT size remains capped at 64 KB, and the table must initially be loaded from an address reachable in 32-bit mode before the switch to long mode, after which it can be relocated. These constraints underscore the de-emphasis on segmentation, making the GDT a largely vestigial structure in modern 64-bit systems.
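A minimal long-mode GDT of the kind described above might look as follows; since base and limit are ignored for CS and the data segments in 64-bit mode, only the access byte and the L bit matter, and the encoded constants shown here are one common choice rather than a required layout:

```c
#include <stdint.h>

/* Sketch of a minimal long-mode GDT: a null descriptor, a 64-bit code
 * descriptor with the L bit set, and a writable data descriptor. */
static const uint64_t long_mode_gdt[] = {
    0x0000000000000000ull,  /* 0x00: null descriptor                              */
    0x00209A0000000000ull,  /* 0x08: 64-bit code, P=1 DPL=0 type=0xA, L=1, D=0    */
    0x0000920000000000ull,  /* 0x10: data, P=1 DPL=0 type=0x2 (writable)          */
};
```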

Local Descriptor Table

The Local Descriptor Table (LDT) serves as a task-specific counterpart to the Global Descriptor Table (GDT) in the x86 architecture, storing segment descriptors that define memory segments unique to an individual task or process. Managed by the operating system, the LDT enables private segmentation for code, data, and stack areas, facilitating memory isolation and selective sharing among tasks in a multitasking environment. Unlike the system-wide GDT, the LDT is optional and per-task, allowing each task to maintain its own set of segment definitions without interfering with others. The LDT itself is referenced through a dedicated entry in the GDT, which acts as a system descriptor pointing to the LDT's location and size.

Structurally, the LDT mirrors the GDT in format, consisting of an array of 8-byte segment descriptor entries in IA-32 protected mode (system descriptors, including the LDT's own descriptor in the GDT, expand to 16 bytes in IA-32e mode to accommodate 64-bit base addresses). Its size is variable, defined by a limit field that supports up to 8192 entries (64 KB total) with byte granularity or larger extents when the granularity flag is set. The Local Descriptor Table Register (LDTR) holds the necessary addressing information: a 16-bit segment selector that identifies the LDT descriptor within the GDT, along with a cached base address pointing to the LDT's linear starting location (64 bits in IA-32e mode) and a limit specifying the table's extent. The LDTR is loaded using the privileged LLDT instruction, which requires current privilege level (CPL) 0 and serializes instruction execution to ensure consistency. During task switches in protected mode, the LDTR can be updated automatically from the incoming task's TSS if one is configured.

In operation, the LDT integrates with segment selection via the Table Indicator (TI) bit in segment selectors: when TI=1, the processor routes descriptor lookups to the LDT instead of the GDT (TI=0), enabling seamless use of task-private segments in memory accesses. This mechanism is fully supported in IA-32 protected mode, where it underpins multitasking by providing per-task segmentation without relying on the GDT for all operations. However, the LDT differs from the GDT in scope: it is smaller, process-specific rather than system-wide, and not mandatory for basic operation, reflecting its role in fine-grained task isolation rather than global resource management. In x86-64 long mode, while the LDT remains available for compatibility (with expanded system descriptors and limit checking disabled in the 64-bit sub-mode), it is largely deprecated in favor of a flat memory model enforced by paging; segment registers like FS and GS instead use model-specific registers (MSRs) to set per-thread bases, bypassing traditional LDT-based segmentation for thread-local storage. Hardware task switching, which could leverage the LDT, is unsupported in long mode, shifting task management to software.

Task State Segment and System Descriptors

In the x86 architecture, system descriptors within the Global Descriptor Table (GDT) are distinguished by the descriptor type bit (S) being 0 in the access rights byte, indicating that they define system resources rather than ordinary code or data segments. These descriptors facilitate advanced features such as task management and controlled procedure calls, with common types including Task State Segment (TSS) descriptors, Local Descriptor Table (LDT) descriptors, and call gates. TSS descriptors, in particular, support hardware task switching by pointing to a dedicated memory segment that holds the complete state of a task.

The TSS descriptor itself is an 8-byte entry in protected mode, expanding to 16 bytes in IA-32e mode to accommodate 64-bit base addresses. It includes fields for the base address of the TSS segment (32 bits in protected mode, 64 bits in IA-32e mode), a segment limit (typically 0x67 for a standard 104-byte 32-bit TSS), type codes (9 for an available TSS, 11 for a busy TSS), a descriptor privilege level (DPL), and a present bit (P). The underlying TSS segment, starting with the 80386 processor, is at least 104 bytes long in 32-bit mode and stores critical task context, including general-purpose registers (e.g., EAX, EBX), segment selectors (CS, DS, SS, and others), the instruction pointer (EIP), the flags register (EFLAGS), the page directory base (CR3), stack pointers per privilege level (SS0:ESP0 for ring 0, and so on), the LDT selector, and an optional I/O permission bitmap for controlling port access. In IA-32e mode, the TSS uses a different 104-byte format that holds 64-bit RSP stack pointers per privilege level and up to seven interrupt stack table (IST) entries for stack switching during interrupt handling.

Task switching using a TSS descriptor is initiated in IA-32 protected mode via a far jump (JMP) or call (CALL) to the TSS selector in the GDT, or through a task gate, interrupt, or exception. Upon switching, the processor hardware automatically saves the current task's state into its TSS, including registers, segment selectors, EIP, EFLAGS, and the nested task (NT) flag, and loads the new task's state from the target TSS, updating the task register (TR) with the new selector. This mechanism enables hardware-supported multitasking, where the busy type (11) in the TSS descriptor prevents re-entrant switching into the same task, and the LTR instruction loads the initial TR with a TSS selector during system initialization. The I/O bitmap in the TSS allows fine-grained control over IN/OUT instructions based on the current privilege level.

LDT descriptors are another system descriptor type (Type 2), simply referencing the base and limit of a task-specific Local Descriptor Table. Call gates, also system descriptors, enable secure inter-privilege-level transfers for procedure calls with stack switching. In IA-32e mode, hardware task switching via the TSS is unsupported, and the TSS is repurposed primarily for stack management through its RSP and IST entries. Modern operating systems favor software-based task switching, manually saving and restoring context, for greater efficiency and flexibility than hardware TSS operations provide. The layout of the 32-bit TSS is summarized in the following table, with a C-style sketch after it.
TSS Segment Component (IA-32 32-bit Mode) | Offset (Bytes) | Description
Previous Task Link | 0-3 | Selector of the previous task's TSS (upper 16 bits reserved)
ESP0, SS0 | 4-11 | Ring 0 stack pointer and selector
ESP1, SS1 | 12-19 | Ring 1 stack pointer and selector
ESP2, SS2 | 20-27 | Ring 2 stack pointer and selector
CR3 (PDBR) | 28-31 | Page directory base register
EIP | 32-35 | Instruction pointer
EFLAGS | 36-39 | Flags register
EAX, ECX, EDX, EBX, ESP, EBP, ESI, EDI | 40-71 | General-purpose registers
ES, CS, SS, DS, FS, GS | 72-95 | Segment selectors (upper 16 bits of each reserved)
LDT Selector | 96-99 | Local descriptor table selector (upper 16 bits reserved)
T Flag, I/O Map Base | 100-103 | Debug trap flag (bit 0) and offset of the optional I/O permission bitmap
This table illustrates the layout of the standard 104-byte 32-bit TSS; the upper 16 bits of each selector field are reserved, and the optional I/O permission bitmap, when present, begins at the offset stored in the I/O map base field and extends the segment beyond 104 bytes.
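The same layout can be written as a C structure; this is a sketch of the table above with illustrative field names, using explicit padding members for the reserved upper halves:

```c
#include <stdint.h>

/* 104-byte 32-bit TSS layout.  The packed attribute keeps the offsets
 * exactly as listed in the table; _resN members are the reserved upper
 * halves of the 16-bit selector fields. */
struct tss32 {
    uint16_t backlink, _res0;
    uint32_t esp0; uint16_t ss0, _res1;
    uint32_t esp1; uint16_t ss1, _res2;
    uint32_t esp2; uint16_t ss2, _res3;
    uint32_t cr3;
    uint32_t eip, eflags;
    uint32_t eax, ecx, edx, ebx, esp, ebp, esi, edi;
    uint16_t es, _res4, cs, _res5, ss, _res6;
    uint16_t ds, _res7, fs, _res8, gs, _res9;
    uint16_t ldt_selector, _res10;
    uint16_t trap;        /* bit 0: debug trap on task switch          */
    uint16_t iomap_base;  /* offset of the I/O permission bitmap       */
} __attribute__((packed));

_Static_assert(sizeof(struct tss32) == 104, "32-bit TSS is 104 bytes");
```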

History and Contemporary Applications

Development and Evolution

The Global Descriptor Table (GDT) originated with the Intel 80286 microprocessor, released on February 1, 1982, as a key element of its protected-mode architecture, which introduced hardware-enforced segmentation and protection mechanisms not present in the real-mode-only design of the preceding 8086 processor. In this architecture, the GDT served as a shared table of segment descriptors accessible to all tasks, defining segment attributes such as base addresses (24-bit physical), limits (up to 64 KB per segment), and privilege levels to enable multitasking and isolation. This innovation marked a shift from the flat, unprotected 1 MB address space of real mode to a protected addressing scheme supporting up to 16 MB of physical memory, laying the foundation for modern operating system designs.

The 80386, launched in October 1985, built upon the 80286's GDT by extending descriptors to accommodate 32-bit addressing, thereby expanding both segment limits and the physical address space to 4 GB. Key enhancements included support for a flat memory model, where a single segment could span the entire linear address space, and integration with a new paging unit that provided 4 KB pages for demand-paged virtual memory, combining segmentation with paging for greater flexibility and efficiency. These developments enabled more sophisticated memory management, such as larger segments and hardware-assisted protection, while maintaining backward compatibility with 80286 software.

A major evolution occurred with AMD's announcement of the AMD64 architecture in October 1999, with the first implementation appearing in the Opteron processor in 2003, which simplified the GDT's role in 64-bit long mode by enforcing a primarily flat 64-bit linear address space and minimizing segmentation usage. In this mode, most segment registers (except FS and GS, which are retained for special addressing) default to a base of 0 and an effectively unlimited segment, with the GDT retaining compatibility for legacy 32-bit and 16-bit modes but largely ignored for bounds and base checks in native 64-bit execution. This design reduced complexity while supporting a 48-bit virtual address space (256 TB) through paging, with canonical addresses sign-extended from bit 47. Intel adopted a compatible approach with its IA-32e mode, announced in February 2004, first implemented in the Nocona Xeon processor family in June 2004 and in desktop processors in February 2005, mirroring AMD64's GDT handling to ensure software compatibility in 64-bit environments.

Key milestones in this era include the 80286's establishment of protected-mode segmentation in 1982, the 80386's paging integration in 1985, and the introduction of 64-bit long mode around 2003-2004, which preserved GDT functionality for mixed-mode operation while prioritizing flat addressing for performance.

Role in Modern Operating Systems

In modern 64-bit operating systems, the Global Descriptor Table (GDT) plays a diminished role due to the adoption of a flat memory model, in which segmentation is largely superseded by paging for protection and addressing. In this model, most GDT entries define segments with a base address of 0 and a limit spanning the full address space, effectively disabling traditional segmentation while retaining the structure for compatibility and specific hardware requirements.

Linux on x86-64 employs a minimal GDT configuration as part of its flat memory model, with entries primarily for kernel code and data segments (e.g., selectors 0x10 for kernel code and 0x18 for kernel data), user-mode code and data segments, a Task State Segment (TSS) descriptor, per-CPU data, and reserved slots for thread-local storage (TLS). The GDT is initialized early during the boot process in the kernel's setup code, through functions such as setup_gdt() and the per-CPU GDT initialization in arch/x86/kernel/cpu/common.c, building on an initial GDT provided by the bootloader. For TLS access, Linux configures the %fs and %gs segment registers not via GDT base addresses but through Model-Specific Registers (MSRs) such as MSR_FS_BASE and MSR_GS_BASE, set via the arch_prctl() system call or the FSGSBASE instructions on supported hardware; this allows %fs to point to per-thread data structures managed by the C runtime or threading libraries.

Windows on x86-64 similarly adopts a flat model, where the GDT defines broad segments for kernel-mode code and data (e.g., selectors 0x10 and 0x18) and user-mode 32-bit compatibility segments, but with minimal segmentation enforcement as paging handles protection. In 64-bit Windows, 32-bit applications may indirectly rely on more segmented addressing through the WOW64 subsystem, which uses additional GDT entries for compatibility, whereas native 64-bit processes rely on flat segments. The kernel initializes the GDT during early boot in its initialization routines, establishing the necessary descriptors for privilege separation and interrupt handling before transitioning to full paging.

Despite these simplifications, the GDT remains largely vestigial in 64-bit operating systems, as paging provides the dominant protection mechanism through page tables, virtual address spaces, and access controls, reducing the need for segment-based limits and bases. However, it is still required for segment selectors like %cs and %ss to enforce ring levels (e.g., kernel versus user mode) and for the TSS descriptor that manages privilege-level and interrupt stacks. As of 2025, the GDT persists in x86 operating systems for compatibility with existing software ecosystems and hardware features, as ongoing x86 evolution prioritizes maintaining legacy support alongside new extensions for security and performance.
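As a small userspace illustration of the MSR-backed FS base mentioned above, the following hedged sketch (assuming a Linux x86-64 system with glibc) reads the current FS base through the raw arch_prctl() syscall interface rather than through any GDT descriptor; it only queries the value and does not modify it:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/prctl.h>     /* ARCH_GET_FS and related constants */

int main(void)
{
    unsigned long fs_base = 0;

    /* Read the FS base, normally set by the threading runtime to point
     * at the thread control block used for thread-local storage. */
    if (syscall(SYS_arch_prctl, ARCH_GET_FS, &fs_base) != 0) {
        perror("arch_prctl(ARCH_GET_FS)");
        return 1;
    }
    printf("FS base: %#lx\n", fs_base);
    return 0;
}
```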
