
Code page

A code page is a coded character set that defines the mapping of code points—nonnegative integer values—to abstract characters, enabling the representation of text in computing systems through specific byte sequences. Primarily associated with IBM's Character Data Representation Architecture (CDRA), the term denotes a particular encoding scheme, often single-byte, tailored to support a given language, script, or application. The concept originated in the mid-20th century as part of IBM's evolution of character encoding standards, building on early codes like Hollerith's 12-position punched card system from the late 1800s and progressing through 6-bit BCDIC in the 1950s to 8-bit EBCDIC introduced with the System/360 in 1964. Code page numbers initially referred to literal page numbers in IBM's standard character set manual, which documented various encodings for mainframe and terminal systems, ensuring compatibility across hardware like the IBM 1050 data communication terminal using PTTC (Paper Tape Transmission Code). This numbering system facilitated identification and interchange of data, with CDRA later formalizing it to maintain character identity across diverse code pages via coded character set identifiers (CCSIDs). Microsoft adopted and expanded the term in the 1980s for its operating systems, starting with DOS code pages like 437 (the original IBM PC encoding, supporting English and extended graphics) and evolving into Windows code pages such as 1252 for ANSI Latin-1 Western European text. These encodings addressed limitations of 7-bit ASCII by providing 8-bit extensions for accented characters, symbols, and regional scripts, though they often conflicted with international standards like ISO 8859. While code pages enabled early globalization efforts—such as supporting European diacritics or East Asian double-byte sets—they suffered from fragmentation, with hundreds of variants leading to data interchange issues. The rise of Unicode in the 1990s, with its universal repertoire and encodings like UTF-8, has largely supplanted code pages in modern applications, though they persist in legacy IBM iSeries/AS400 systems, Windows APIs, and certain file formats for backward compatibility.

Fundamentals

Definition and Purpose

A code page is a mapping that associates specific byte values, typically expressed in hexadecimal notation, with individual characters, symbols, or control codes within a defined character set. This structured table enables computers to interpret and display text by translating binary data into readable or functional elements. The underlying encoding schemes for what became known as code pages emerged in the 1960s with the development of mainframe systems, such as IBM's System/360 released in 1964, to accommodate the need for character representations beyond the limitations of 7-bit standards like ASCII, particularly for international languages and specialized symbols. The term "code page" itself was adopted by IBM, originally referring to page numbers in their standard character set manuals that documented these encodings. Their primary purpose was to facilitate text processing and data interchange in early computing environments, including mainframes and peripherals, by providing a consistent encoding scheme tailored to specific linguistic or operational requirements. Key characteristics of single-byte code pages include their fixed-width nature, where each character is represented by a single byte (8 bits); however, multi-byte code pages exist for languages requiring larger character repertoires, such as East Asian scripts. Code pages feature vendor-specific implementations (such as those from IBM or Microsoft) and see ongoing use in legacy systems for compatibility during data exchange. These encodings prioritize simplicity in single-language contexts but require careful handling for multilingual applications. In basic structure, a single-byte code page consists of a 256-entry table, with each index from 0 to 255 corresponding to a unique byte value that maps to a printable character, control code, or symbol; for instance, the first 128 entries often align with standard ASCII for basic Latin characters, while the remaining 128 support extended symbols.
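To make the table-driven structure concrete, here is a minimal Python sketch (using the standard library's cp437 codec as a stand-in for any single-byte code page) that builds the 256-entry table and decodes a byte sequence through it:

```python
# Build the full 256-entry table of a single-byte code page (CP437 here,
# which Python ships as the "cp437" codec), then decode bytes through it.
table = {i: bytes([i]).decode("cp437") for i in range(256)}

data = bytes([0x48, 0x69, 0x20, 0xE0])  # 'H', 'i', space, extended byte 0xE0
print("".join(table[b] for b in data))  # -> 'Hi α' (0xE0 is Greek alpha in CP437)
print(data.decode("cp437"))             # same result via the codec directly
```

The first 128 entries of the table match ASCII exactly; only the upper half is code page specific.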

Numbering System

IBM's code page numbering system originated in the 1960s alongside the development of EBCDIC for its System/360 mainframes, with CP037 introduced as the standard code page for English-speaking regions such as the United States and Canada. This scheme employs three-digit numeric identifiers, typically assigning lower numbers (such as 037 and 500) to EBCDIC-based variants for mainframe environments, while higher numbers (starting from around 300, like 437 for the original IBM PC OEM code page) denote ASCII-based variants for personal computers and other systems. As code pages proliferated across vendors, the system evolved to accommodate extensions, notably by Microsoft, which adopted four-digit identifiers for Windows-specific encodings, such as 1252 for Windows Latin-1 (Western European). This led to overlaps and aliases, where the same encoding might be referenced by multiple numbers or names across platforms—for instance, IBM's CCSID 037 aligns with Microsoft's code page 037. Specific numbering rules emerged for certain categories, including the range 850–859 reserved for multilingual code pages, with 850 serving as the standard Latin-1 variant supporting multiple Western European languages. To resolve conflicts and standardize references, the Internet Assigned Numbers Authority (IANA) maintains a registry of character set names and aliases, such as "ibm-1047" for IBM's Open Systems Latin-1 code page (CCSID 1047). In application programming interfaces (APIs), these numbers map directly to encoding names; for example, Windows APIs use numeric identifiers like 1252 to load the corresponding code page, while IBM systems often reference them via CCSIDs in functions for character conversion.
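Many of these numeric identifiers surface in software simply as encoding names. As a hedged illustration, Python exposes most code pages as codecs named cpNNN; the helper below (a convenience for this example, not a standard API) probes which identifiers a given runtime supports:

```python
import codecs

def codec_for_codepage(number: int):
    """Return the codec for a numeric code page identifier, or None.
    Python names most code pages 'cpNNN' (e.g., cp037, cp437, cp1252)."""
    try:
        return codecs.lookup(f"cp{number:03d}")
    except LookupError:
        return None  # not every vendor code page ships with every runtime

for n in (37, 437, 500, 850, 1252):
    info = codec_for_codepage(n)
    print(n, "->", info.name if info else "unavailable")
```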

Relationship to ASCII

ASCII-based code pages function as 8-bit supersets of the 7-bit ASCII standard, preserving the original ASCII character repertoire in the range of code values 0 through 127 while utilizing the additional 128 values (128 through 255) to encode extended characters. This design ensures backward compatibility, allowing systems and applications to interpret ASCII data without alteration when the high bit (bit 7) is set to zero. EBCDIC-based code pages, however, use a distinct encoding scheme incompatible with ASCII positions. The American Standard Code for Information Interchange (ASCII), standardized in 1963, defines 128 characters primarily for English-language text, controls, and basic symbols, forming the foundational layer for most subsequent 8-bit encodings. A key aspect of this compatibility is evident in code pages like IBM's Code Page 437 (CP437), the original OEM code page for English-language IBM PCs and MS-DOS systems, which retains all ASCII characters in their standard positions while incorporating extended glyphs such as box-drawing elements (e.g., ─, ┌, └) and accented Latin letters for limited international support. These additions enabled early personal computers to display graphical user interfaces and simple multilingual text without disrupting ASCII-based data interchange. Similarly, the structure supports "high-ASCII" usage, where the extended range facilitates region-specific adaptations while maintaining interoperability with pure ASCII environments. The ISO/IEC 8859 series represents standardized international extensions of ASCII, with each variant defining an 8-bit code set that includes the full 7-bit ASCII subset and adds characters for specific linguistic needs, such as Western European languages in ISO/IEC 8859-1 (Latin-1). This approach influenced proprietary code page designs, including IBM's Code Page 850 (CP850), a multilingual extension supporting Western European languages such as Danish, French, German, Spanish, and Portuguese by mapping accented characters and symbols into the upper code range. Such variants promoted broader adoption of ASCII-compatible encodings in diverse computing ecosystems. In practice, this relationship introduces challenges for data portability, particularly in mixed environments where applications assume a pure 7-bit ASCII subset but encounter 8-bit code page variations across systems. Differences in code page assignments—such as varying ANSI code pages on different computers—can lead to mojibake, where extended characters are misinterpreted or replaced with incorrect glyphs during transfer or display. For instance, assuming universal ASCII compatibility in file exchanges between systems using distinct code pages may result in garbled text, underscoring the need for explicit encoding declarations to mitigate interoperability issues.
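The superset relationship is easy to demonstrate: the low 128 byte values decode identically under any ASCII-based code page, while a single high byte diverges. A brief Python sketch using stock codecs:

```python
# Bytes below 0x80 decode identically under every ASCII-based code page...
low = b"plain ASCII survives any ASCII-based code page"
assert low.decode("cp437") == low.decode("cp850") == low.decode("cp1252")

# ...but the high range diverges: byte 0x9B is a cent sign in CP437
# and a slashed 'o' in CP850 -- a classic source of mojibake.
high = bytes([0x9B])
print(high.decode("cp437"))  # '¢'
print(high.decode("cp850"))  # 'ø'
```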

Relationship to Unicode

Code pages serve as predefined mappings of byte values to characters, many of which align with subsets of the Unicode standard, where each code page corresponds to specific ranges of Unicode code points. For instance, Windows Code Page 1252 (CP1252) maps most of its extended characters to the Latin-1 Supplement block, covering code points from U+0080 to U+00FF, enabling direct representation of Western European scripts within Unicode's universal character set. Similarly, other code pages, such as those based on the ISO 8859 series, fit into Unicode's Basic Latin and extension blocks, facilitating interoperability by treating code page characters as aliases for Unicode scalar values. Conversion between code pages and Unicode relies on standardized mapping tables and utilities to transform legacy encoded data into Unicode formats like UTF-8 or UTF-16. IBM's Coded Character Set Identifier (CCSID) system provides official mappings from EBCDIC-based code pages to Unicode, allowing dynamic or predefined conversions for multilingual data processing on its mainframe and midrange platforms. Tools such as the libiconv library implement the iconv function for batch conversions, supporting a wide array of code pages to Unicode by referencing internal tables that handle character-by-character substitution. These processes ensure that text in single-byte code pages can be migrated to Unicode while preserving semantic meaning where possible. Despite these mappings, conversions from certain code pages to Unicode can be lossy due to characters without direct equivalents in the Unicode repertoire. For example, Code Page 437 (CP437), used in early IBM PC systems, includes unique symbols like box-drawing elements and legacy graphics that map to Unicode approximations, such as U+2500 for horizontal lines, but may result in visual discrepancies or substitution errors during round-trip conversions. Such limitations arise because code pages were designed for specific hardware and locales, often prioritizing display glyphs over universal semantics, leading to potential data fidelity issues in modern Unicode-based applications. Legacy support for code pages persists through platform-specific APIs, even as Unicode has become the dominant standard since the early 2000s. On Windows, the MultiByteToWideChar function converts strings from a specified code page to UTF-16, handling flags for error detection and substitution of unmappable characters. IBM z/OS systems maintain CCSID-to-Unicode conversions in their Unicode Services for character conversion in enterprise environments. However, migration trends have accelerated since the 2000s, with operating systems and software favoring native Unicode implementations to reduce complexity, though code pages remain available for legacy file handling and international data exchange.
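The round-trip behavior described above can be observed directly with standard codecs; a short Python sketch:

```python
# CP437 box-drawing characters have exact Unicode equivalents and
# round-trip cleanly...
box = "─┌└"
assert box.encode("cp437").decode("cp437") == box

# ...but characters outside the 256-entry repertoire cannot make the
# reverse trip: CP437 predates the euro sign, so encoding is lossy.
try:
    "€".encode("cp437")
except UnicodeEncodeError:
    print("no CP437 mapping for U+20AC")
print("€".encode("cp437", errors="replace"))  # b'?' -- lossy substitution
```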

IBM Code Pages

EBCDIC-Based Code Pages

EBCDIC, or Extended Binary Coded Decimal Interchange Code, originated as an 8-bit character encoding standard developed by IBM in 1963, primarily to support data processing on mainframe systems and complement the punched card technology prevalent at the time. This encoding was introduced with the IBM System/360 in the mid-1960s, establishing it as the default for IBM's mainframe environments, where it replaced earlier BCDIC formats used in punch card systems. Unlike ASCII, EBCDIC features non-contiguous ordering of characters, with gaps in the alphanumeric sequence—for instance, the uppercase letters A through Z are not sequentially encoded, resulting in a collating sequence where lowercase letters precede uppercase and digits follow letters. IBM assigns numbers to its EBCDIC-based code pages typically in the range 000–199, reflecting their foundational role in mainframe character handling. Key variants within the EBCDIC family address specific linguistic and regional needs while maintaining core compatibility. Code page 037 (CCSID 37), for example, serves as the standard for U.S. English and related locales such as Canada, encoding the full Latin-1 character set in an EBCDIC framework. Code page 500 (CCSID 500) provides multilingual support, particularly common in Europe, incorporating a broad Latin-1 charset for international data interchange on mainframes. For enhanced compatibility with open systems, code page 1047 (CCSID 1047) extends Latin-1 coverage and aligns more closely with ISO 8859-1 standards. National variants, such as code page 870 (CCSID 870), target Latin-2 multilingual needs for Central and Eastern European languages, supporting characters essential for Polish, Czech, and similar scripts. Structurally, EBCDIC diverges from ASCII in ways that reflect its mainframe heritage, including support for zoned decimal formats where numeric data uses a high-order nibble of 0xF for digits, enabling efficient arithmetic operations in legacy COBOL applications. Additionally, EBCDIC environments incorporate graphics escape sequences, such as the Graphic Escape (GE) order, to invoke special symbols and control mainframe display and printing devices like 3270 terminals. These features prioritize hardware compatibility over the contiguous, byte-optimized design of ASCII, leading to invariant characters (like digits and basic punctuation) that remain consistent across variants, while variant characters adapt to regional scripts. In modern contexts, EBCDIC-based code pages remain integral to IBM z/OS operating systems and CICS transaction processing environments, where they encode vast legacy datasets in banking, insurance, and government applications. Conversion to ASCII or Unicode poses significant challenges due to the incompatible ordering and encoding schemes; for instance, direct byte-for-byte mapping can produce invalid ASCII characters exceeding the 7-bit range or misinterpret zoned decimals as text, necessitating specialized tools like the iconv utility or CCSID-aware translators to preserve data integrity during migrations. Such conversions are routine in hybrid environments but require careful handling of variant characters to avoid corruption, especially when interfacing with Unicode-based systems that support far more code points than EBCDIC's 256.
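Python's standard library includes a CP037 codec, which makes EBCDIC's structural quirks easy to observe; a small sketch:

```python
# EBCDIC CP037: letters sit in the high range, the alphabet is
# non-contiguous ('I' = 0xC9 but 'J' jumps to 0xD1), and digits occupy
# 0xF0-0xF9, matching the zoned-decimal convention described above.
print("ABC123".encode("cp037").hex())        # 'c1c2c3f1f2f3'
print(hex("I".encode("cp037")[0]),           # 0xc9
      hex("J".encode("cp037")[0]))           # 0xd1
print("a".encode("cp037") < "A".encode("cp037"))  # True: lowercase collates first
```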

DOS and Early PC Code Pages

In 1981, with the release of PC-DOS 1.0 alongside the original IBM Personal Computer, Code Page 437 (CP437) was established as the default original equipment manufacturer (OEM) code page for the IBM PC, extending the ASCII standard with an additional 128 characters primarily dedicated to block graphics for creating borders, tables, and simple diagrams in text-based interfaces. This set, also known as OEM-US or PC-8, maintained full compatibility with ASCII in the 0–127 range while allocating positions 128–255 for line-drawing elements, mathematical symbols, and a limited set of accented Latin characters to support basic graphical applications on early PC displays like the MDA and CGA. To accommodate international markets, IBM developed multilingual variants of the OEM code pages starting in the late 1980s. Code Page 850 (CP850), introduced with MS-DOS/PC-DOS 3.3 in 1987, targeted Western European languages and Latin American Spanish by replacing many block graphics with additional accented characters such as ñ, ç, and ä, while retaining some line-drawing capabilities for compatibility. Similarly, Code Page 852 (CP852) emerged in 1991 with MS-DOS/PC-DOS 5.0 to support Central and Eastern European Slavic languages, incorporating characters for Polish, Czech, Hungarian, and others like ł, ś, and ň. Code Page 855 (CP855), added in 1994 with MS-DOS/PC-DOS 6.22, focused on Cyrillic scripts for Russian and related languages, prioritizing letters such as я, щ, and ё over extensive graphics. These code pages evolved to support advanced displays like the Enhanced Graphics Adapter (EGA) and Video Graphics Array (VGA), which allowed loading multiple font sets into the adapter's font memory for selectable character rendering. Users could switch code pages at boot time using the COUNTRY command in CONFIG.SYS, specifying a country code and corresponding OEM code page number from the range 437 to 449, enabling runtime adaptation for different locales without hardware changes. This flexibility was crucial for VGA systems, where up to 16 font blocks could be stored, with CP437 typically as block 0 and variants like CP850 in subsequent blocks. The legacy of these early PC code pages persists in file systems such as FAT, where filenames and directory entries were encoded using the active OEM code page, leading to compatibility challenges in international deployments—such as garbled characters when files created under one code page (e.g., CP850) were accessed under another (e.g., CP437) without proper conversion, as illustrated below. This encoding mismatch often required manual code page switching or conversion tools, highlighting the limitations of single-byte encodings in global software distribution.
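The FAT filename mismatch is easy to reproduce with stock codecs; a hedged Python sketch of the scenario described above:

```python
# A Danish-style filename written on a system whose OEM code page is CP850...
stored = "SØEN.TXT".encode("cp850")  # 'Ø' becomes byte 0x9D under CP850

# ...then listed on a machine running CP437 instead:
print(stored.decode("cp437"))        # 'S¥EN.TXT' -- 0x9D is '¥' in CP437
```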

Emulation and Platform-Specific Code Pages

IBM code pages in the range 800 to 999 were primarily designed for emulation purposes and platform-specific adaptations, enabling compatibility with various character encodings across different systems such as Unix environments and legacy operating systems. These code pages support cross-platform data exchange by mapping characters from one encoding scheme to another, often integrating with standards like ISO 8859 for Unix compatibility or providing double-byte character set (DBCS) support for Asian languages. For instance, code pages like CP864 emulate PC Arabic encoding for Latin-based systems, facilitating the representation of Arabic script in environments traditionally using Latin alphabets. In the AIX operating system, a Unix-like platform developed by IBM, specific code pages address regional needs while adhering to standards. CP921, for example, provides support for Latvian and Lithuanian languages, allowing seamless integration of Baltic characters in AIX applications and ensuring compliance with Unix localization requirements. Similarly, CP964 is tailored for Traditional Chinese (Taiwan) on AIX, extending support for EUC-encoded text in Unix-based workflows. For the OS/2 operating system, IBM created code pages that emphasize multilingual capabilities, including DBCS for Asian languages to handle complex scripts. CP942 serves as a superset of Microsoft's CP932 for Japanese on OS/2, incorporating katakana and kanji characters essential for Japanese text processing. CP943 further enhances this by supporting both CP932 and Shift-JIS encodings, enabling robust DBCS operations in OS/2 environments for East Asian data interchange. These adaptations were crucial for OS/2's role in enterprise computing, where multilingual support was key to global deployments. Emulation code pages within this numeric range focus on bridging legacy EBCDIC-based systems with ASCII-derived encodings for cross-platform compatibility. CP423 covers Greek in an EBCDIC context, mapping characters to facilitate data exchange from mainframe environments to PC-oriented Latin setups such as IBM-850. This was particularly useful in terminal emulations and printer drivers requiring Greek script support in mixed-system architectures. Likewise, CP864 supports Arabic emulation by providing PC-compatible mappings to Latin structures, aiding in the transition of Arabic data across diverse platforms. These emulation sets, assigned numbers 800–999, were employed in later IBM operating systems and early web servers to handle international content and ensure reliable data portability. IBM's approach to integrating Unicode with its legacy code page system relies on the Coded Character Set Identifier (CCSID), a numbering scheme that extends traditional code pages to encompass modern encodings like UTF-8 and UTF-16. This system allows IBM platforms, such as z/OS and IBM i, to map characters from EBCDIC-based code pages to Unicode, facilitating conversions between legacy data and contemporary applications without full system overhauls. By assigning specific CCSIDs to Unicode variants, IBM ensures compatibility across its ecosystem, where CCSID 1208 designates UTF-8 as a growing character set that incorporates new additions over time. Among these, code page 1200 (CP1200) represents UTF-16 in little-endian byte order, serving as a bridge for applications requiring wide-character support in IBM environments. Similarly, CCSID 1390 (associated with IBM-1390) provides an EBCDIC-based encoding for Japanese text, featuring an alternative Unicode conversion table that maps double-byte characters to their Unicode equivalents, particularly useful for mixed-script data in East Asian contexts.
These mappings prioritize round-trip fidelity, ensuring that characters from legacy code pages, such as those in CCSID 5026, can be accurately transformed to and from Unicode without loss. The development of Unicode-related CCSIDs gained momentum in the post-1990s era, particularly with the iSeries (formerly AS/400) platforms, where OS/400 version V5R2 introduced explicit Unicode data support to handle globalized applications. This evolution addressed the limitations of earlier EBCDIC-centric systems by incorporating Unicode as a core encoding option, enabling features like GB18030 for Simplified Chinese and broader internationalization. IBM leveraged the International Components for Unicode (ICU) library to implement robust conversion tools, which map between CCSIDs and Unicode scalars, supporting operations in products like Db2 and IBM Integration Bus. ICU's converters handle the nuances of stateful encodings, such as those in CP1390, by using predefined tables for efficient bidirectional transformations. As of 2025, while IBM promotes UTF-8 (CCSID 1208) as the preferred encoding for new developments due to its simplicity and universal compatibility, Unicode-related CCSIDs like 1200 and 1390 remain integral for maintaining legacy applications in banking, insurance, and mainframe environments. This partial shift reflects a strategic balance, with ongoing support in Db2 ensuring seamless data migration, though full reliance on proprietary CCSIDs is discouraged in favor of standard UTF-8 to reduce conversion overhead. Retention of these code pages underscores their role in hybrid systems where EBCDIC data persists alongside Unicode workflows.

Microsoft Code Pages

Windows Code Pages

Windows code pages, often referred to as ANSI code pages in the Windows environment, are single-byte character encodings designed to support text display and input in graphical user interfaces and applications, extending the capabilities of earlier MS-DOS code pages for broader international use. These code pages map byte values from 128 to 255 to characters specific to various scripts and languages, while preserving the ASCII range (0–127) for compatibility. Microsoft developed them to handle regional linguistic needs in Windows operating systems, with the active code page determined by system locale settings. The primary Windows code page for Western European languages is CP1252, also known as Windows-1252 or ANSI Latin 1, which has served as the default for English and most Western applications since Windows 3.1 in 1992. This code page supports 256 characters, including Latin letters with diacritics; it is commonly referred to as an ANSI code page (though this is a misnomer, as the underlying draft was never ratified by ANSI) and is based on an early draft of ISO/IEC 8859-1 to ensure compatibility across Windows platforms. Unlike the related ISO-8859-1 (Latin-1), CP1252 populates the 0x80–0x9F range with printable characters such as curly quotes (“ ” ‘ ’), the em-dash (—), and the trademark sign (™), filling gaps left undefined in ISO-8859-1 for better typographic support in applications like word processors. In 1999, Microsoft updated CP1252 and several related code pages to include the Euro symbol (€) at code point 0x80, aligning with the introduction of the Euro currency on January 1, 1999. This update ensured seamless support for financial and business applications in Eurozone countries without requiring a full encoding overhaul. The code page also incorporates other symbols like the en-dash (–) and dagger (†), enhancing document formatting in Windows GUI environments. Microsoft provides regional variants of these code pages in the 1250–1258 range to accommodate non-Latin scripts and languages, allowing users to select appropriate encodings via regional settings in the Control Panel. These variants maintain the ASCII base but extend the upper byte range for script-specific characters. For example:
Code Page | Name | Primary Language(s)
1250 | Windows Central European | Central European (e.g., Polish, Czech)
1251 | Windows Cyrillic | Cyrillic (e.g., Russian, Bulgarian)
1255 | Windows Hebrew | Hebrew
1256 | Windows Arabic | Arabic
Each includes the euro symbol where relevant and supports right-to-left rendering for bidirectional scripts like Hebrew and Arabic. These code pages evolved from ISO 8859 ancestry but are optimized for Windows' graphical interfaces. Integration with Windows APIs facilitates programmatic handling of these code pages; for instance, the Win32 GetACP() function retrieves the system's current ANSI code page identifier, enabling applications to convert text dynamically based on locale. Developers can query or set code pages using functions like GetCPInfoExA for detailed information, ensuring correct conversions in multilingual software. Regional settings, configurable through the operating system's internationalization features, control which code page is active, promoting consistent text rendering across user environments.
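A minimal, Windows-only sketch of querying the active code pages through the Win32 functions mentioned above, here called from Python via ctypes (the printed values depend on the system locale):

```python
import ctypes

kernel32 = ctypes.windll.kernel32  # Windows-only; ctypes.windll is absent elsewhere
print("ANSI code page:", kernel32.GetACP())    # e.g., 1252 on Western European systems
print("OEM code page: ", kernel32.GetOEMCP())  # e.g., 437 on US-English systems
```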

MS-DOS and DBCS Code Pages

Microsoft's code pages for MS-DOS extended the original Code Page 437 (CP437), which was based on ASCII with added block graphics and Latin characters, to support various international languages through variants introduced in the 1980s. These variants included single-byte character sets (SBCS) tailored for non-Latin scripts, such as CP720 for Arabic (Transparent ASMO), which retained box-drawing characters while accommodating right-to-left text and diacritics, and was added in MS-DOS 6.22 in 1994. For Asian languages requiring more than 256 characters, Microsoft implemented double-byte character sets (DBCS) starting with version 4.0 in 1988, using code pages in the 900 range to distinguish them from SBCS. Notable examples include CP932 for Japanese, an extension of Shift-JIS that maps over 16,000 characters via lead bytes (0x81–0x9F, 0xE0–0xEF) followed by trail bytes, and CP936 for Simplified Chinese, based on GBK with similar lead/trail byte mechanisms for encoding hanzi and other symbols. In DBCS mode, the system interprets certain byte ranges as lead bytes signaling a following trail byte, enabling dense representation of large character sets while maintaining ASCII compatibility for the first 128 code points. Code page switching in MS-DOS was managed through the MODE command, introduced in version 3.3 and enhanced in later releases, allowing users to prepare, select, and refresh code pages for devices like displays and printers. For instance, "MODE CON CP PREPARE=((850) EGA.CPI)" loaded Code Page 850 for the console, followed by "MODE CON CP SELECT=850" to activate it, with NLSFUNC.EXE providing the necessary national language support. DBCS code pages required special loading via DISPLAY.SYS or similar drivers in international versions. Support for Far East markets, including robust DBCS handling, was significantly improved in MS-DOS 5.0 released in 1991, enabling better integration of Japanese, Chinese, and Korean locales through dedicated language editions. However, filename handling on the FAT file system imposed limitations: the 8.3 format (8 characters for the name, 3 for the extension) was enforced, and while DBCS characters were permitted starting in DBCS-enabled versions, each double-byte character consumed two bytes in the fixed 11-byte directory entry field, effectively reducing the maximum number of characters in a name. This byte-level constraint often led to truncated or incompatible filenames when mixing SBCS and DBCS elements across systems.
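The lead/trail byte mechanism can be inspected with Python's cp932 codec; a brief sketch:

```python
# CP932 mixes single-byte ASCII with two-byte kanji: the lead byte of a
# double-byte sequence falls in 0x81-0x9F or 0xE0-0xEF.
enc = "A漢".encode("cp932")
print(enc.hex())        # '41' followed by a two-byte sequence (8a bf)
lead = enc[1]
print(0x81 <= lead <= 0x9F or 0xE0 <= lead <= 0xEF)  # True
```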

Emulation Code Pages

Emulation code pages in Windows provide mappings for character sets developed by other vendors, enabling interoperability and data portability across platforms without native support. These code pages are particularly valuable for handling legacy files and applications from systems like the Apple Macintosh or PostScript environments, where direct byte interpretation might otherwise lead to garbled text. By emulating external encodings, Windows allows developers and users to import and process foreign data through standard APIs, such as the Win32 conversion functions. Code Page 10000 (CP10000) specifically emulates Apple's Mac Roman encoding, an 8-bit character set designed for Western European languages on Macintosh systems. Mac Roman extends ASCII with 128 additional characters, including accented letters, currency symbols, and typographic marks unique to Apple's early font libraries. This emulation maps these Apple-specific glyphs to equivalent Windows representations, facilitating the exchange of text files, such as documents created in Mac applications like Microsoft Word for Mac or Adobe tools on older systems. For instance, symbols like the Apple logo or fraction characters are preserved during conversion, preventing display issues in Windows environments. Similarly, CP57002 emulates the Indian Script Code for Information Interchange (ISCII) standard for the Devanagari script, supporting languages such as Hindi, Marathi, and Sanskrit. ISCII, established in the late 1980s as an 8-bit encoding for Indian scripts, unifies multiple regional writing systems under a single framework. In Windows, this code page enables the processing of ISCII-encoded data from non-Microsoft Indian software, ensuring accurate rendering of conjunct consonants and vowel signs in cross-vendor scenarios, like importing legacy government or educational documents. For PostScript systems, emulation occurs through alignments with Adobe character sets, notably Adobe Standard Encoding, referenced via code page 1276 or PostScript-compatible mappings. This encoding supports Latin text with additional printing symbols for PostScript compatibility, allowing Windows applications to interpret Adobe-generated files without loss of glyphs like mathematical operators or diacritics. It is crucial for legacy workflows in desktop publishing, where output from PostScript tools must integrate with Windows print drivers. Since the early 2000s, Microsoft has reserved the 10xxx numbering range for such emulations, primarily targeting Macintosh variants to broaden cross-platform support. Examples include CP10001 for x-mac-japanese and CP10002 for x-mac-chinesetrad, used in Windows APIs for data import. This systematic assignment aids in maintaining compatibility as cross-platform data exchange grew, prioritizing portability over exhaustive native implementations. Microsoft introduced Unicode-related code pages to facilitate direct integration with the Unicode standard within Windows environments, enabling applications to handle international text without relying solely on legacy single-byte encodings. Code page 1200 (CP1200) represents UTF-16 in little-endian byte order, providing a 16-bit encoding for the Basic Multilingual Plane of Unicode characters and serving as the primary internal representation for Unicode strings in Windows APIs. Similarly, code page 65001 (CP65001) implements UTF-8, an 8-bit variable-length encoding that supports the full Unicode repertoire while maintaining compatibility with ASCII for English text. These UTF-based code pages have been available since 1996, allowing developers to use Unicode transformations alongside traditional code pages for backward compatibility.
For double-byte character set (DBCS) environments, particularly in East Asian locales, Microsoft extended support through code page 54936 (CP54936), which corresponds to the GB18030 standard for Simplified Chinese. This code page builds on legacy DBCS encodings like CP936 (GBK) by incorporating four-byte sequences to achieve complete coverage of the Unicode standard, including rare and historical characters not representable in earlier GB standards. Introduced alongside the GB18030 standard in the early 2000s, CP54936 ensures that applications handling Chinese text can convert seamlessly to and from Unicode without data loss, addressing limitations in prior DBCS implementations. Windows provides APIs such as WideCharToMultiByte for converting between wide-character (UTF-16) strings and multi-byte representations in specified code pages, including UTF variants like CP1200 and CP65001. This function maps wide-character strings to byte sequences, supporting flags for error handling and default character substitution to maintain data integrity during transformations. Complementing these are system functions like GetACP for retrieving the active code page identifier, which helps applications dynamically identify and adapt to the current encoding context, though Microsoft recommends direct Unicode usage over code page dependencies. In Windows 10 and subsequent versions, Microsoft has deprecated reliance on non-Unicode (ANSI) code pages in favor of UTF-8 and UTF-16, with version 1903 introducing beta support for setting UTF-8 as the system locale via administrative settings. UTF-8 can also be enabled for new applications through manifest properties like activeCodePage, promoting consistent rendering of international text in GDI and console output as of 2025. However, code pages remain available for legacy support, particularly in SQL Server environments where older collations and data imports may require them for compatibility with pre-Unicode databases.
Code Page | Encoding | Introduction | Key Use
1200 | UTF-16LE | 1996 | Internal Unicode string handling in Windows APIs
65001 | UTF-8 | 1996 | Cross-platform text interchange and web content
54936 | GB18030 | 2001 | Full Unicode coverage for Simplified Chinese DBCS
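The variable-length structure of GB18030 can be verified with Python's gb18030 codec, which mirrors CP54936; a short sketch:

```python
# GB18030 (Windows code page 54936) uses 1-, 2-, and 4-byte sequences to
# reach the full Unicode repertoire, unlike the 2-byte-limited GBK (CP936).
for ch in ("A", "中", "𝄞"):  # ASCII, BMP han, and a supplementary-plane character
    print(ch, "->", len(ch.encode("gb18030")), "byte(s)")
```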

Code Pages from Other Vendors

HP Symbol Sets

HP Symbol Sets refer to a family of proprietary 8-bit character encodings developed by Hewlett-Packard (HP) in the 1980s for use in their printer control languages, particularly the Printer Command Language (PCL), and operating systems like HP-UX. These sets function similarly to standard code pages by mapping byte values to glyphs, enabling the printing and display of extended characters beyond ASCII, including Western European accents, currency symbols, and mathematical notations. Originating with the introduction of the LaserJet printer in 1984, they were designed to support multinational text in printing environments while maintaining compatibility with early hardware. The foundational set, HP Roman-8 (PCL identifier 8U), extends the 7-bit US ASCII standard into an 8-bit encoding, with the lower 128 code points (0x00–0x7F) matching ASCII and the upper 128 (0x80–0xFF) providing additional symbols such as accented letters (e.g., à, é, ñ), line-drawing characters, and mathematical symbols like ± (plus-minus), ° (degree), and µ (micro). This set, equivalent to IBM code page 1051, was specifically tailored for HP's early LaserJet series and PCL 5 implementations, supporting up to 218 printable glyphs in bound fonts. A variant, HP Turkish-8 (PCL identifier 8T), modifies Roman-8 to include Turkish-specific characters like ğ, ı, and ş, facilitating localization for that language while retaining core ASCII compatibility. These sets prioritize printer output, with structures modeled after ISO 8859 standards but customized for HP hardware constraints. In terms of structure, HP Symbol Sets divide the 256 possible code points into areas: areas 0 and 2 for control codes or non-printing functions, and areas 1 and 3 for printable glyphs, allowing flexible binding to scalable fonts like Intellifonts. They are closely tied to HP's font cartridges, such as the 92286Z cartridge, which preloads glyphs mapped to Roman-8 for consistent rendering in early LaserJet models without requiring full font downloads. Mathematical symbol support in Roman-8 includes essential operators (e.g., × for multiplication and ÷ for division) and relational symbols, making it suitable for technical printing but limited compared to dedicated math encodings. Integration occurs via PCL escape sequences, such as Esc(8U to select Roman-8 as the primary symbol set or Esc)8U for the secondary, enabling dynamic switching during print jobs without resetting the printer, as sketched below. This allows applications to embed multinational text in documents processed by PCL printers. In HP-UX, Roman-8 serves as the default codeset for terminals and printers, ensuring compatibility with legacy Unix applications and proper handling of extended characters in system locales. While primarily HP-native, these sets have been emulated in other vendors' printer and software environments for cross-platform printing compatibility.
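As an illustration, the symbol-set selection sequences above are plain byte strings that a driver writes to the printer; a minimal Python sketch composing them (sending them to a device is omitted):

```python
# PCL symbol-set selection: ESC ( <id> for primary, ESC ) <id> for secondary.
ESC = b"\x1b"
select_primary_roman8   = ESC + b"(8U"  # Roman-8 as the primary symbol set
select_secondary_roman8 = ESC + b")8U"  # Roman-8 as the secondary symbol set
print(select_primary_roman8)            # b'\x1b(8U'
```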

Adobe and Other Emulation Sets

Adobe Standard Encoding, introduced in 1985 as part of Adobe's PostScript LanguageLevel 1, serves as the foundational character encoding for text representation in PostScript documents and fonts. This 256-character set extends ASCII with additional diacritics, symbols, and typographic elements, enabling consistent mapping across printers and software in early desktop publishing workflows. It forms the basis for PDF text handling, where character codes are indexed to glyph names in font dictionaries like Type 1 formats. IBM later assigned code page 1276 to this encoding in 1995 to facilitate compatibility in multi-platform environments. Variants of Adobe Standard Encoding address non-Latin scripts, such as Adobe Standard Cyrillic, specified in 1998 to support Russian and related languages by mapping the upper 128 code points to ISO 8859-5 equivalents while using alphanumeric glyph names compatible with PostScript. Similarly, Adobe encodings for Greek incorporate polytonic characters and diacritics for classical and modern Greek typography in PostScript fonts. These variants emulate regional standards while maintaining PostScript's device-independent rendering. Other vendors developed code pages to emulate Adobe sets for cross-system compatibility. For instance, IBM code page 1038, which maps Adobe's Symbol encoding, ensures that mathematical and special symbols render consistently in PostScript-derived outputs. In desktop publishing, Adobe encodings enabled precise typographic control during the 1980s and 1990s, powering tools like PageMaker and Illustrator for high-quality output to PostScript devices. However, modern tools face challenges with glyph substitution, where legacy Adobe mappings may trigger incorrect fallbacks or missing characters in Unicode-based workflows, requiring manual overrides in applications like InDesign to preserve fidelity.

DEC and Additional Vendor Sets

Digital Equipment Corporation (DEC) developed the National Replacement Character Sets (NRCS) as a feature for its VT series of computer terminals, beginning with the VT200 series in the early 1980s. These sets consist of 7-bit character encodings that modify the standard ASCII set by substituting a small number of graphic characters with equivalents tailored to specific national languages or dialects, enabling localized text display without requiring full 8-bit support. For instance, a national NRCS variant typically replaces symbols like the number sign, brackets, and curly braces with locally required accented letters, facilitating text input and display on terminals like the VT220 and VT320. A key component of DEC's ecosystem was the DEC Multinational Character Set (MCS), introduced in 1983 for the VT220 terminal and registered by IBM as code page 1100 (CCSID 1100). This 8-bit set supports Western European accented characters and symbols, such as accented vowels and currency signs, while maintaining compatibility with ASCII in the 32–126 range; terminals could operate in both 7-bit and 8-bit modes for extended use in systems requiring broader coverage. The MCS was integral to DEC's operating environments, including the VMS operating system for VAX computers and earlier PDP-11 systems, where it handled multinational text processing in applications and assemblers like MACRO-11. Beyond DEC, other vendors introduced specialized code pages during the same era. NeXT Computer, Inc., utilized the NeXT character set (often referred to as NS Roman in documentation) in its NeXTSTEP operating system starting in 1988; this 8-bit encoding, based on Adobe's Standard Encoding, supplements ASCII with symbols, accented Latin characters, and typographic elements for desktop publishing and user interfaces on NeXT workstations. Sun Microsystems extended character support in Solaris through custom locale definitions and code page mappings, incorporating extensions for text handling, such as multi-byte support for Asian scripts and supplementary mappings for European languages beyond ISO 8859 standards. These DEC and vendor-specific sets were predominantly used from the 1980s through the 1990s in terminal-based and early workstation environments, but their legacy persists in modern emulations; for example, the xterm terminal emulator in Unix-like systems supports DEC NRCS and MCS selections via escape sequences, allowing compatibility with legacy applications.

Code Page Assignments and Lists

Numbering Assignments by Vendor

IBM maintains a structured numbering system for its code pages, referred to as Coded Character Set Identifiers (CCSIDs). The range 000–199 is primarily allocated to EBCDIC-based encodings, supporting legacy mainframe environments and international variants such as CCSID 037 for U.S. English EBCDIC. Numbers 300–499 are assigned to ASCII-compatible code pages, including CCSID 437 for the original IBM PC OEM character set. Extensions and additional encodings, such as those for double-byte character sets, occupy numbers 500 and higher, with the full registry documented in IBM's official resources for system configuration and data conversion. Microsoft employs a distinct assignment scheme: three-digit numbers such as 437 and 850 denote OEM code pages for console and legacy applications, while the 1250–1258 range designates ANSI code pages, such as 1252 for Western European languages, which extend the basic ASCII set for graphical user interfaces. For Unicode-related mappings, Microsoft utilizes identifiers such as 1200 for UTF-16 little-endian and 65001 for UTF-8, aligning with broader IANA standardization to ensure interoperability across Windows systems and international software. Other vendors follow proprietary numbering conventions tailored to their hardware and software ecosystems. HP assigns identifiers to symbol sets within its Printer Command Language (PCL), enabling precise character mapping for printing tasks, such as Roman-8 under set 8U. Digital Equipment Corporation (DEC) used its own designations for the National Replacement Character Sets (NRCS) in VT-series terminals, distinguishing the U.S. ASCII variant from European national variants. To mitigate conflicts arising from overlapping assignments across vendors, standardized aliases are employed, such as windows-1252 equating to cp1252 in cross-platform applications. Significant gaps exist in code page coverage, particularly with unassigned numbers beyond 2000, reflecting the shift away from proprietary extensions toward universal encodings. The IANA character set registry, last updated June 6, 2024, registers these names with aliases for interoperability.

Common Code Page Charts and Mappings

Code page charts provide tabular representations of byte values mapped to characters, typically in hexadecimal (hex) or decimal formats, facilitating conversion between legacy encodings and modern standards like Unicode. These charts are essential for developers and system administrators handling text in older software or files. For instance, the US OEM code page (CP437), originally designed for IBM PCs and MS-DOS, extends ASCII with graphics characters for box-drawing and international symbols. Its mapping table, hosted by the Unicode Consortium, lists 256 entries from byte 0x00 to 0xFF, where the first 128 bytes (0x00–0x7F) align with ASCII control and printable characters, while the upper range (0x80–0xFF) includes line-drawing elements (e.g., 0xB0 mapping to light shade U+2591), Greek letters (e.g., 0xE0 to alpha U+03B1), and mathematical symbols (e.g., 0xF6 to the division sign U+00F7). To interpret such mappings, locate the byte value in hex (base-16, e.g., 0x41) or decimal (base-10, e.g., 65), which corresponds to a Unicode code point (e.g., U+0041 for 'A'). Tools like BabelMap, a free Windows application developed by Unicode expert Andrew West, allow users to visualize these mappings by selecting a code page from installed system encodings and displaying glyphs alongside Unicode equivalents, supporting searches by byte value or character name for accurate conversions. A representative partial table for CP437 illustrates its graphics focus:
Hex Byte | Decimal | Unicode | Character/Description
0x00 | 0 | U+0000 | NULL
0x01 | 1 | U+0001 | START OF HEADING
0x20 | 32 | U+0020 | SPACE
0x41 | 65 | U+0041 | LATIN CAPITAL LETTER A
0xB0 | 176 | U+2591 | LIGHT SHADE
0xB1 | 177 | U+2592 | MEDIUM SHADE
0xDA | 218 | U+250C | BOX DRAWINGS LIGHT DOWN AND RIGHT
0xE0 | 224 | U+03B1 | GREEK SMALL LETTER ALPHA
0xF6 | 246 | U+00F7 | DIVISION SIGN
0xFF | 255 | U+00A0 | NO-BREAK SPACE
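Charts like the one above can be regenerated from a codec's own tables rather than transcribed by hand; a small Python sketch using the standard cp437 codec and unicodedata:

```python
import unicodedata

for b in (0x00, 0x41, 0xB0, 0xE0, 0xF6, 0xFF):
    ch = bytes([b]).decode("cp437")
    name = unicodedata.name(ch, "<control>")  # control codes have no formal name
    print(f"0x{b:02X}  {b:3d}  U+{ord(ch):04X}  {name}")
```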
Another key example is CP1252, the Windows Latin-1 code page for Western European languages, which differs from ISO-8859-1 primarily in the 0x80–0x9F range: ISO-8859-1 reserves these for control characters, but CP1252 assigns printable symbols, enabling broader text display in early Windows applications. For instance, byte 0x80 maps to the euro sign (U+20AC), 0x82 to the single low-9 quotation mark (U+201A), and 0x85 to the horizontal ellipsis (U+2026), while bytes like 0x81 and 0x8D remain undefined. The full mapping confirms 256 entries, with 0x00–0x7F matching ASCII and 0xA0–0xFF largely aligning with ISO-8859-1's layout. Sample mappings highlighting CP1252's differences:
Hex Byte | Decimal | Unicode | Character/Description (vs. ISO-8859-1 Control)
0x80 | 128 | U+20AC | EURO SIGN
0x82 | 130 | U+201A | SINGLE LOW-9 QUOTATION MARK
0x85 | 133 | U+2026 | HORIZONTAL ELLIPSIS
0x91 | 145 | U+2018 | LEFT SINGLE QUOTATION MARK
0x9F | 159 | U+0178 | LATIN CAPITAL LETTER Y WITH DIAERESIS
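The divergence in the 0x80–0x9F range is directly observable with stock codecs; a brief Python sketch:

```python
# The same three bytes: typographic characters under CP1252, but invisible
# C1 control codes under ISO-8859-1 (Python's "latin-1").
b = bytes([0x80, 0x85, 0x91])
print(b.decode("cp1252"))                          # '€…‘'
print([hex(ord(c)) for c in b.decode("latin-1")])  # ['0x80', '0x85', '0x91']
```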
Common code pages by usage include legacy encodings still encountered in files, databases, and embedded systems, with web statistics showing ISO-8859-1 at 1.0%, Windows-1252 (CP1252) at 0.3%, and Shift_JIS (related to CP932) at 0.1% as of November 2025. Other frequently referenced ones are CP437 (OEM US), CP850 (DOS Multilingual Latin-1), CP932 (Japanese), and CP1251 (Cyrillic), often prioritized in migration tools due to their prevalence in regional software. The IANA registry, last updated June 6, 2024, registers these with aliases for interoperability: e.g., windows-1252 (MIBenum 2252), ibm437 or cp437 (MIBenum 2011), and Windows-31J for CP932 variants (MIBenum 2024). Other registered entries include windows-1250 (Central European, MIBenum 2250), windows-1251 (Cyrillic, MIBenum 2251), and windows-1253 (Greek, MIBenum 2253), emphasizing vendor-specific mappings.

Limitations and Criticism

Historical Limitations

Code pages, particularly single-byte variants, were inherently limited to 256 characters due to their reliance on 8-bit encoding schemes, which proved insufficient for representing the vast repertoires required by many non-Latin scripts. For instance, languages such as Chinese, Japanese, and Korean (collectively known as CJK) demand tens of thousands of characters, necessitating double-byte character sets (DBCS) to extend beyond the 256-code-point barrier and accommodate up to 65,536 possibilities with two bytes. This constraint forced developers to implement complex shift mechanisms or separate encodings, complicating software development and data interchange across systems. Vendor-specific implementations exacerbated these issues through fragmentation, where incompatible code pages led to widespread mojibake—garbled text resulting from mismatched interpretations during data transfer. A notable example is the divergence between IBM's Code Page 850 (designed for multilingual Latin support in DOS environments) and Microsoft's Windows-1252 (an extension of ISO 8859-1 for Western European languages), which differ in the assignment of characters in the 0x80–0x9F range, causing accented letters or symbols to appear as unrelated glyphs when files were opened in the wrong environment. Such incompatibilities were common in cross-platform exchanges during the 1980s and 1990s, as vendors like IBM, Microsoft, and DEC prioritized proprietary optimizations over interoperability, resulting in frequent corruption without explicit encoding declarations. Through the 1980s and 1990s, code pages lacked standardized mechanisms for sorting and collation, leading to inconsistent ordering of text across applications and locales, as there were no uniform rules for precedence among characters beyond basic ASCII. Additionally, heavy dependencies on hardware configurations plagued deployment; printers and terminals, such as IBM's 3270 series, required specific code page support embedded in firmware or drivers, rendering output unpredictable when mismatched with the host system's encoding and often necessitating custom mappings for reliable rendering. Specific events underscored these vulnerabilities, including the 1999 introduction of the euro sign (€), which prompted urgent code page revisions akin to Y2K preparations, as vendors like IBM created new "Euro Country Extended Code Pages" (ECECPs) by reassigning existing code points—such as replacing the international currency symbol (¤) in positions like 0x9F—potentially disrupting legacy applications reliant on prior mappings. Similarly, the early web in the 1990s suffered from display errors due to unstandardized code page handling in HTTP transfers, where text conversion between disparate systems frequently produced mojibake, as browsers and servers defaulted to local encodings without robust negotiation protocols.

Transition to Unicode and Modern Critique

The development of Unicode emerged as a direct response to the fragmentation caused by numerous proprietary code pages, which complicated multilingual text handling across systems. In October 1991, the Unicode Consortium released version 1.0 of the standard, establishing a universal character encoding scheme to unify representations of scripts from diverse languages and resolve the incompatibilities inherent in code page proliferation. This standard assigned unique code points to characters, enabling a single encoding to support multiple scripts including Latin, Greek, Cyrillic, and others, with thousands of characters initially, far surpassing the limitations of single-byte code pages like those from IBM or Microsoft. The transition accelerated in operating systems, with Windows 2000, released in February 2000, marking a pivotal shift by upgrading its internal character handling from the fixed-width UCS-2 to the variable-width UTF-16 encoding, thereby prioritizing Unicode over legacy code pages for new applications. Subsequent Windows versions further emphasized UTF-8 and UTF-16 as preferred encodings, with Microsoft recommending their use in new applications to avoid code page dependencies. By the early 2000s, major platforms like Windows had internalized Unicode support, reducing reliance on code pages for core text processing while retaining backward compatibility through conversion layers. Despite this shift, legacy code page debt persists in modern databases, particularly in systems like SQL Server, where older collations tied to specific code pages—such as SQL_Latin1_General_CP1_CI_AS (based on code page 1252)—cause sorting and comparison inconsistencies for multilingual data. These legacy collations perform incomplete code-point comparisons, leading to issues in mixed-language environments and requiring manual conversions between varchar (code page-bound) and nvarchar (Unicode) columns during migrations. Such remnants complicate upgrades, as seen in SQL Server 2022 and later, where deprecated binary collations exacerbate performance overhead in globalized applications. Security vulnerabilities arise from code page misinterpretation, where improper handling of character encodings allows attackers to bypass filters by exploiting discrepancies between assumed and actual encodings. For instance, Unicode normalization differences can enable injection attacks, as malicious input disguised via alternative encodings evades validation in legacy systems still parsing code page data. The OWASP Foundation highlights how Unicode encoding variants can conceal payloads in web inputs, leading to cross-site scripting (XSS) or SQL injection when code pages interpret bytes differently from UTF-8 expectations. These risks are amplified in transitional environments, where mixed code page and Unicode usage creates parsing ambiguities exploitable for command insertion. Critics argue that maintaining code page emulators imposes unnecessary environmental costs, as legacy systems consume more energy due to inefficient processing and outdated hardware dependencies compared to streamlined Unicode implementations. For example, emulating code page conversions in virtualized environments increases computational overhead, contributing to higher carbon emissions from data centers. This ongoing support for obsolete encodings diverts resources from sustainable computing practices, perpetuating e-waste through prolonged use of incompatible hardware. Code pages also create accessibility barriers for non-Latin scripts, as their limited character sets fail to represent writing systems like Devanagari or Arabic properly in assistive technologies.
Screen readers often mispronounce or garble text when encountering code page-mapped glyphs for bidirectional or combining characters, hindering comprehension for visually impaired users in multilingual contexts. Without Unicode's comprehensive repertoire, these scripts suffer from incomplete phonetic mapping, exacerbating exclusion for the billions of people worldwide who use non-Latin scripts. In 2025, regulatory pressures underscore the transition's incompleteness, with the European Accessibility Act mandating accessible digital formats for products and services, including government documents, by June 28, 2025, to ensure compatibility across member states' diverse languages. Compliance gaps persist, particularly for code pages in embedded and IoT devices, where integration challenges arise from undocumented serial protocols relying on vendor-specific encodings, leading to failures in industrial settings. These omissions highlight ongoing risks in legacy systems, where incomplete mappings contribute to data silos and security blind spots.

Private and Custom Code Pages

Definition and Usage

Private code pages, also referred to as custom code pages, are non-standard encodings that establish unique, user-defined mappings between byte values (typically in the 8-bit range) and characters or symbols, without registration in official registries such as the IANA character sets list or vendor-specific assignments from IBM or Microsoft. These mappings are frequently developed for proprietary data representation or niche solutions, such as 8-bit font extensions that accommodate custom symbols, icons, or graphical elements not covered by standard encodings. In practice, private code pages find application in resource-constrained environments like embedded systems, where tailored encodings minimize storage and processing overhead for displaying text on microcontrollers or low-memory devices. They also appear in legacy software that supported user-created code pages through editable mapping files (such as .WCP files) to handle characters from unsupported regional sets, for example adding the euro symbol to Code Page 1252 via custom byte assignments. A key drawback is their inherent non-interoperability, as data encoded this way remains opaque or garbled on systems lacking the proprietary mapping, complicating data exchange across diverse platforms. Creating private code pages generally involves specialized tools, such as custom font editors that allow designers to assign glyphs to byte values, or plain-text files defining the mappings for integration into software or firmware. While the Internet Assigned Numbers Authority (IANA) actively discourages the creation and deployment of unregistered encodings to foster global compatibility in internet protocols, limited private customization remains possible within established standards like Unicode's designated areas. Specifically, Unicode reserves the Private Use Area (PUA) code points from U+E000 to U+F8FF for such non-standard assignments, enabling applications to define and interpret characters privately without conflicting with the core standard. Historically, private code pages gained traction in early bulletin board systems (BBSs) during the 1980s and 1990s, where custom variations of encodings like IBM Code Page 437 were adapted to optimize the rendering of ANSI art—text-based graphics incorporating colors and extended characters for visually engaging interfaces. As of 2025, reliance on private code pages has significantly diminished, with most custom needs now addressed through Unicode's Private Use Area to maintain broader compatibility in contemporary systems.
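A minimal sketch of such a private mapping in Python, registering a hypothetical 8-bit code page (the name x_custom and its layout are invented for illustration) whose upper half lands in the Private Use Area:

```python
import codecs

# Lower half stays ASCII; bytes 0x80-0xFF map into the PUA at U+E000 onward.
DECODING_TABLE = ("".join(chr(i) for i in range(0x80)) +
                  "".join(chr(0xE000 + i) for i in range(0x80)))
ENCODING_MAP = {ord(c): i for i, c in enumerate(DECODING_TABLE)}

def _search(name):
    if name != "x_custom":  # hypothetical, unregistered encoding name
        return None
    return codecs.CodecInfo(
        name="x_custom",
        encode=lambda s, errors="strict": codecs.charmap_encode(s, errors, ENCODING_MAP),
        decode=lambda b, errors="strict": codecs.charmap_decode(b, errors, DECODING_TABLE),
    )

codecs.register(_search)
print(b"hi\x80".decode("x_custom"))  # 'hi\ue000' -- byte 0x80 lands in the PUA
```

Data produced this way remains readable only where the same search function is registered, which is precisely the interoperability hazard noted above.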

Examples and Risks

In the gaming domain, early role-playing games (RPGs) often employed custom encodings derived from standard code pages to handle non-standard text, such as Japanese characters, ensuring compatibility with limited hardware resources. Private code pages introduce significant risks during data migrations, where mismatches between source and target encodings can result in character mojibake, rendering text unreadable or substituting incorrect symbols, as seen in database transfers involving non-Unicode columns. To mitigate these issues, developers should maintain documented private mappings within technical specifications to facilitate maintenance and auditing. A recommended transition involves adopting Unicode's Private Use Area (PUA), which reserves code points like U+E000 to U+F8FF for custom assignments without conflicting with standard characters, enabling safer round-trip conversions in modern applications. As of 2025, private code pages remain rare but can persist in legacy systems for backward compatibility.

  18. [18]
    A brief introduction to code pages and Unicode
    MBCSs are often compatible with ASCII; that is, the Latin letters are represented in such encodings with the same bytes that ASCII uses. Some less often used ...
  19. [19]
    ISO8859 family - IBM
    The ISO8859 encoding defines a family of code sets with each member containing its own unique character sets. The 7-bit ASCII code set is a proper subset of ...
  20. [20]
    UTR #22: Character Mapping Tables - Unicode
    Typically, all unassigned Latin-1 characters (Unicode<=U+00ff) have subchar1 mappings, but also some other code points do. This means that. when one converts ...
  21. [21]
    Set up latin-1 character encoding - IBM
    Latin-1 encoding uses ISO 8859-1, with CP1252 as a superset. Basic Latin and Latin-1 Supplement are also related. See Unicode 7.0 charts for more.
  22. [22]
    CCSID reference information - IBM
    Character Data Representation Architecture (CDRA) has the concept of a growing CCSID. This CCSID is one where the code page is not full and new characters are ...<|control11|><|separator|>
  23. [23]
    [PDF] Unicode Services User's Guide and Reference - IBM
    Feb 16, 2019 · ... CCSID” on page 275 shows how you can define a user defined CCSID in the Unicode services knowledge base. • “Encoding Scheme” on page 301 ...
  24. [24]
    ICONV - GNU.org
    The iconv program converts text from one encoding to another encoding. More precisely, it converts from the encoding given for the −f option to the encoding ...
  25. [25]
    CP437 to Unicode Map
    This is a CP437 to Unicode map. For example, CP437 code 0 is 0000, 1 is 263A, 2 is 263B, and 3 is 2665.
  26. [26]
    MultiByteToWideChar function (stringapiset.h) - Win32 apps
    Feb 5, 2024 · Maps a character string to a UTF-16 (wide character) string. The character string is not necessarily from a multibyte character set.Syntax · Parameters
  27. [27]
    Updated EBCDIC Unicode maps for CCSIDs 1377 and 1388 - IBM
    The IBM i CCSID conversion support to and from Unicode has been updated for data stored in EBCDIC CCSIDs 1377 (Traditional Chinese) and 1388 (Simplified ...Missing: documentation | Show results with:documentation
  28. [28]
    What is Extended Binary Coded Decimal Interchange Code (EBCDIC)
    Aug 23, 2023 · EBCDIC was developed by IBM in 1963 to complement the punched cards used for storage and data processing in the early days of computing.<|control11|><|separator|>
  29. [29]
    The EBCDIC character set - IBM
    EBCDIC is a character set developed before ASCII, used to encode z/OS data. It is an 8-bit set, but has different bit assignments than ASCII.Missing: history CP037
  30. [30]
    EBCDIC character set - Rocket Software Documentation
    Refer to this table for parameters related to the EBCDIC device control codes and character sets ... Graphic Escape. Ctrl-I, 9, 09, SPS, Superscript. Ctrl-J, 10 ...
  31. [31]
    Converting data to ASCII, EBCDIC and UTF-8 - IBM
    The z/OS® CIM server executes in the Enhanced ASCII mode. This means that all string data within the CIM server's address space is represented in ASCII rather ...
  32. [32]
    The history of Alt+number sequences, and why Alt+9731 sometimes ...
    Jul 2, 2024 · In the IBM PC BIOS, you could enter characters that weren't present on the keyboard by holding the Alt key and typing the decimal value on the ...
  33. [33]
    [PDF] IBM PC Technical Reference - Bitsavers.org
    The International Business Machines Corporation warrants this IBM Personal. Computer Product to be in good working order for a period of 90 days from the.
  34. [34]
    [PDF] Microsoft® MS-DOS® User's Guide - Bitsavers.org
    Jul 15, 1987 · MS-DOS 3.3 supports five different code pages: • 437 - United States code page. • 850 - Multilingual code page. This code page includes all.
  35. [35]
    Where Did CP852 Come From? | OS/2 Museum
    Feb 15, 2022 · Those encodings originated in the mid-1980s and tended to preserve most of the CP437 semi-graphic characters; code page 852 did not, on the ...
  36. [36]
    DOS codepages (and their history) - Aivosto
    The DOS operating system originally supported just one character set, or code page. That was the 437 codepage, also known as PC-ASCII. Later on, several ...<|control11|><|separator|>
  37. [37]
    Code pages - IBM
    Code pages have a canonical name, description, and CCSID. Examples include Windows Latin-1 (Cp1252), ISO8859_1 (ISO8859_1), and UTF8 (UTF8).Missing: numbering system history
  38. [38]
    ICU supported code pages - IBM
    ICU supports code pages like ASCII, LATIN1, ISO8859-2, UTF_8, EUC_CN, EUC_KR, EUC_JP, and many others, including various IBM EBCDIC code pages.Missing: compatibility | Show results with:compatibility
  39. [39]
    Code page and Coded Character Set Identifier (CCSID ... - IBM
    Within IBM, UTF-8 has been registered as CCSID 1208 with growing character set (sometimes also referred to as code page 1208). As new characters are added to ...
  40. [40]
    Unicode character encoding - IBM
    Within IBM®, the UTF-16 code page has been registered as code page 1200, with a growing character set. When new characters are added to a code page, the ...
  41. [41]
    Alternative Unicode conversion table for CCSID 5026 and ... - IBM
    There are several IBM® coded character set identifiers (CCSIDs) for Japanese code pages. CCSID 5026 and CCSID 1390 is registered as a Japanese EBCDIC code ...
  42. [42]
    Replacing the Unicode conversion table for CCSID 1390 with ... - IBM
    When you convert from coded character set identifier (CCSID) 1390 to Unicode, the Db2 database manager default code page conversion table is used.
  43. [43]
    [PDF] iSeries: Globalization - IBM
    v New support for Unicode data. v Support for GB18030, the new Chinese ... OS/400 supports many languages; you can work in the language of your choice.
  44. [44]
    [PDF] for One and One for All - Introducing Unicode (Part 1) - IBM
    For example, the EBCDIC encoding scheme is used on z/OS and iSeries (AS/400) systems. The ASCII encoding scheme is used on Intel-based (Windows) systems and ...<|separator|>
  45. [45]
    Generating a new code page converter - IBM
    Generate a code page converter to handle conversions of data that belongs to a code page that is not in the default set of code pages that are provided by IBM® ...
  46. [46]
    Converter | ICU Documentation
    ICU provides comprehensive character set conversion services, mapping tables, and implementations for many encodings.
  47. [47]
  48. [48]
    Db2 12 - Internationalization - Unicode CCSIDs - IBM
    Db2 uses CCSID 1200 for Unicode UTF-16 data, which is double-byte data (DBCS). This CCSID applies to GRAPHIC and VARGRAPHIC Unicode data.Missing: mappings documentation
  49. [49]
    Using extended code pages - IBM
    Traditional code pages include EBCDIC or ASCII encoding only, but extended code pages can contain EBCDIC or ASCII encodings along with the Unicode equivalent ...<|separator|>
  50. [50]
    [MS-UCODEREF]: Supported Codepage in Windows - Microsoft Learn
    Jun 24, 2021 · The following table shows all the supported codepages by Windows. The Codepage ID lists the integer number assigned to a codepage.
  51. [51]
    The euro currency symbol (OpenType 1.8.1) - Typography
    Feb 11, 2022 · The Unicode assignment of the euro symbol is 20AC. The symbol will be added to the following codepages at position '0x80'; 1250 Eastern European, 1252 Western, ...
  52. [52]
    GetACP function (winnls.h) - Win32 apps - Microsoft Learn
    Feb 22, 2024 · Returns the current Windows ANSI code page (ACP) identifier for the operating system. See Code Page Identifiers for a list of identifiers for Windows ANSI code ...
  53. [53]
    GetCPInfoExA function (winnls.h) - Win32 apps | Microsoft Learn
    Nov 19, 2024 · Retrieves information about any valid installed or available code page. (GetCPInfoExA)
  54. [54]
    [PDF] User's Guide and Reference - Meulie
    Page 1. User's Guide and Reference. An important note to MS· DOS users ... mode cp prep command, you specify the following: • The device for which you ...
  55. [55]
    Appendix A: MS-DOS Version 3.3 - PCjs Machines
    Code page 850 replaces two of the four box-drawing sets and some of the ... If code-page is omitted, CHCP displays the current MS-DOS code page.
  56. [56]
    Naming Files, Paths, and Namespaces - Win32 apps | Microsoft Learn
    Aug 28, 2024 · Files using long file names can be copied between NTFS file system partitions and Windows FAT file system partitions without losing any file ...Missing: DBCS | Show results with:DBCS
  57. [57]
    Filenames on FAT Volumes - NTFS.com
    Windows creates an 8+3 name and secondary folder entries for long filenames on FAT, using attribute bits. MS-DOS/OS/2 ignore these secondary entries.Missing: limitations DBCS
  58. [58]
    Codepages: Comprehensive list - Aivosto
    Nov 26, 2022 · EBCDIC Publishing Austria, Germany F.R. Original IBM "PC-ASCII" codepage. Also known as OEM United States. HP PCL symbol set 10U "PC-8".Missing: origin | Show results with:origin<|control11|><|separator|>
  59. [59]
    WideCharToMultiByte function (stringapiset.h) - Win32 apps
    Feb 5, 2024 · Maps a UTF-16 (wide character) string to a new character string. The new character string is not necessarily from a multibyte character set.
  60. [60]
    Collation and Unicode Support - SQL Server - Microsoft Learn
    Jul 29, 2025 · The data that you move between non-Unicode columns must be converted from the source code page to the destination code page. Transact-SQL ...
  61. [61]
    Printer Command Language Symbol Sets
    Character codes in the range 0 through to 255 can be specified in a single byte. Many fonts contain more than 255 glyphs. To support foreign character sets and ...Missing: documentation | Show results with:documentation
  62. [62]
    None
    ### Summary of HP Symbol Sets in PCL (from Chapter 5 of PCL Implementors Guide)
  63. [63]
  64. [64]
    [PDF] PostScript Language Reference, third edition - Adobe
    All instances of the name PostScript in the text are references to the PostScript language as defined by Adobe. Systems Incorporated unless otherwise stated.
  65. [65]
    Supported code pages - IBM
    To find a code page for a specific CCSID, search for an internal converter name in the form ibm- ccsid , where ccsid is the CCSID for which you are looking.
  66. [66]
    [PDF] Adobe Standard Cyrillic Font Specification - GitHub Pages
    Feb 18, 1998 · The Windows Cyrillic encoding is registered as IBM code page 1251. 3.3 Macintosh Encoding. Apple has similarly based their Cyrillic encoding on ...
  67. [67]
    Supported code pages - IBM
    IBM App Connect Enterprise supports the code pages that are given in the following tables. To find a code page for a specific CCSID, search for an internal ...
  68. [68]
  69. [69]
    Glyphs and special characters - Adobe Help Center
    Sep 25, 2023 · Edit custom glyph sets · Choose Edit Glyph Set from the Glyph panel menu, and then choose the custom glyph set. · Select the glyph you want to ...
  70. [70]
    VTTEST – National Replacement Character Sets - invisible-island.net
    NRC sets are similar to the ASCII set, but replace a few ASCII characters with characters used in that language or dialect.VT100 · VT220
  71. [71]
  72. [72]
  73. [73]
    Codepages / Ascii Table NEXTSTEP Encoding Vector
    Dec, Hex, Char, Name. 32, 20, SPACE. 33, 21 ! EXCLAMATION MARK. 34, 22, ", QUOTATION MARK. 35, 23, #, NUMBER SIGN. 36, 24, $, DOLLAR SIGN.
  74. [74]
    [PDF] XTerm Control Sequences - invisible-island.net
    Jun 22, 2025 · However, xterm is most useful as a DEC VT102 or VT220 emulator. Set the sunKeyboard resource to true to force a Sun/PC keyboard to act like ...
  75. [75]
  76. [76]
    cp437_DOSLatinUS to Unicode table
    # # Name: cp437_DOSLatinUS to Unicode table # Unicode version: 2.0 # Table ... code (in hex) # Column #2 is the Unicode (in hex as 0xXXXX) # Column #3 is ...
  77. [77]
    BabelStone : BabelMap (Unicode Character Map for Windows)
    ### Summary of BabelMap and Code Page Mapping Support
  78. [78]
    None
    ### CP1252 to Unicode Mapping Summary
  79. [79]
    Usage statistics of character encodings for websites - W3Techs
    How to read the diagram: UTF-8 is used by 98.8% of all the websites whose character encoding we know.
  80. [80]
    [PDF] Fundamentals of CJK Encoding - Unicode
    ... 256 characters, at least 2 bytes (i.e. 216=65,536 code points) are necessary. dA codeset may not use all the code points in a code space, i.e. some are assigned ...
  81. [81]
    Character Sets and Code Pages - IBM
    To assist in that effort, IBM® has assigned a unique number to many of the EBCDIC and ASCII code pages you will use. The specific code page translations ...
  82. [82]
  83. [83]
    A program to detect mojibake that results from a UTF-8-encoded file ...
    Jul 1, 2019 · So let's write a program to detect that specific kind of mojibake. The first observation is that code page 1252 agrees with Unicode's first 256 ...
  84. [84]
    [PDF] Specification method for cultural conventions - Unicode
    10 collation: The logical ordering of strings according to defined precedence rules. 221. 222. 3.1.11 collating element: The smallest entity used to determine ...
  85. [85]
    [PDF] CICS Transaction Server from Start to Finish - IBM Redbooks
    In the 1980s, hardware advances made powerful systems accessible to smaller ... 򐂰 3270 terminals and printers. 򐂰 SCS printers. 򐂰 3600/4700 banking ...
  86. [86]
    [PDF] A Guide to Understanding AFP Fonts - IBM
    Dec 30, 1999 · The table below shows the base code pages, new code pages, character replaced by the euro, and the code point to which the euro was assigned.
  87. [87]
    Chronology of Unicode Version 1.0
    Earliest documented use of the term "Unicode" coined by Becker; from unique, universal, and uniform character encoding. February 1988. Collins begins work at ...
  88. [88]
    Early Years of Unicode
    Mar 26, 2015 · Ground work for the Unicode project began in late 1987 with initial discussions between three software engineers -- Joe Becker of Xerox Corporation, Lee ...<|separator|>
  89. [89]
    Windows codepages (and their history) - Aivosto
    The first version of Microsoft Windows, released in 1985, came with a single character set. ... Code Page Support in Microsoft Windows). Microsoft Press, 1995.
  90. [90]
    CWE-116: Improper Encoding or Escaping of Output
    Improper encoding or escaping can allow attackers to change the commands that are sent to another component, inserting malicious commands instead.
  91. [91]
    The Hidden Risks and Costs of Retaining Legacy Technology - ERI
    Oct 22, 2025 · 1. Data Loss · 2. Equipment Downtime · 3. Higher Energy Consumption · 4. Increased Difficulty Finding Workers and Vendors · 5. Inefficient ...
  92. [92]
    Sunsetting Legacy Software - SphereGen
    Mar 28, 2024 · Legacy systems contribute to IT related environmental impacts. Older technology can drain energy resources resulting in higher costs and energy ...The Challenges Of Legacy... · Environmental Impact · What To Evaluate When...
  93. [93]
    The troubled state of screen readers in multilingual situations
    Jun 7, 2020 · What happens when blind and visually-impaired internet users speak multiple languages? I find out, and also give you some tips!Missing: Latin | Show results with:Latin
  94. [94]
    Digital Language Access : Scripts, Transliteration, and Computer ...
    Except for the languages of Western Europe, Africa, America and Australia, many of them are written in non-Latin scripts, which poses an even greater barrier.2. Only Three Types Of... · 3. How These Scripts Work · 5. Standards: What Are They...Missing: pages | Show results with:pages
  95. [95]
    New EU-regulations are coming at end of June - InterForm
    May 2, 2025 · However, starting from June 28, 2025, businesses that provide digital documents to consumers must ensure those documents are accessible.
  96. [96]
    Why Legacy Systems Struggles with IOT Integration and How to Fix It
    Nov 8, 2024 · This mismatch creates a gap where IoT devices and legacy systems can't communicate directly, leading to data silos and lost opportunities. Data ...
  97. [97]
    [PDF] IoT Device Cybersecurity Guidance for the Federal Government
    Legacy devices may also have gaps in device cybersecurity requirements that cannot be remedied through adding those capabilities for these reasons, but also ...
  98. [98]
    How to create a character encoding from scratch - Stack Overflow
    Dec 29, 2014 · The result I want to achieve is to have a normal ttf file (ie myfont.ttf), install it, and then use my new character set in software like notepad!UTF-8 to unicode converter for embeded system displayHow to make my own character encoding in C++? - Stack OverflowMore results from stackoverflow.com
  99. [99]
    Creating Custom Character Encoding to Save Space - Medium
    Aug 12, 2020 · In this article, we'll look at how we can encode (and decode) certain strings to save storage (and transmission) space, and the mathematics behind it.
  100. [100]
    Anatomy of WPDOS: support for updated codepages and foreign ...
    Aug 5, 2020 · Here I will be going over how the excellent design of WPDOS regarding support of code pages and different character sets made it quite future proof.
  101. [101]
    Developing a Custom Font for the Embedded Systems - Instructables
    Step 1: Writing Down the Characters · Step 2: Getting the Characters to Your Workstation · Step 3: Compiling a Set of Characters · Step 4: Adjusting the Size · Step ...
  102. [102]
    Choosing & applying a character encoding - W3C
    The specification also strongly discourages the use of UTF-16, and the use of UTF-32 is 'especially discouraged'. Other character encodings listed in the ...Missing: custom | Show results with:custom
  103. [103]
    How to render ANSI properly - Help - 16colo.rs
    Jul 5, 2021 · Viewing artwork created in one codepage would display differently in another codepage, although when talking about ANSI art, the effects will be ...
  104. [104]
    Supported territory codes and code pages - IBM
    The following map to English (US):. English (Jamaica); English (Caribbean); English (Singapore). Code page 5488 is equivalent to 1392. Starting with ...
  105. [105]
    Quirks in RPG Maker: Unicode and having fun with codepages
    Mar 13, 2015 · libiconv is a library to convert common legacy encodings to Unicode, supporting most non-Unicode Windows encodings. In order to play any non- ...
  106. [106]
    Character data is incorrect when the code page differs - SQL Server
    Jul 7, 2023 · This problem occurs because the character data of code page X is stored in a non-Unicode column of code page Y, which is unsupported.Missing: mismatch | Show results with:mismatch
  107. [107]
    Top 5 Injection Attacks in Application Security - Invicti
    Oct 17, 2024 · Injection attacks work by including a payload in unvalidated user input and getting a vulnerable web application to execute it.
  108. [108]
    UTS #39: Unicode Security Mechanisms
    The implementation shall provide a precise list of character mappings that are added to or removed from those provided, but otherwise be in accordance with the ...
  109. [109]
    Private use area (PUA) characters and End-user-defined characters ...
    Feb 2, 2024 · PUA characters are custom characters defined by individuals/organizations, enabling users to form names not in standard fonts. Shared use ...
  110. [110]
    Overcoming custom code costs in SCADA systems - Control Global
    Apr 29, 2024 · How systems integrators and in-house developers can create advanced SCADA solutions that aren't limited by rapidly aging code.Missing: private | Show results with:private
  111. [111]
    ISO/IEEE 11073 - Wikipedia
    ISO/IEEE 11073 Health informatics - Medical / health device communication standards enable communication between medical, health care and wellness devices