Wide character

A wide character is a fixed-width representation of a single character in computing, designed to handle text from extended character sets beyond the traditional 8-bit ASCII limit, typically using 16 or 32 bits to encode characters from standards like Unicode or ISO/IEC 10646. Introduced in the ISO C90 standard, wide characters are primarily represented by the wchar_t type, which provides a fixed-width alternative to variable-length multibyte encodings for internal program processing and internationalization support. The standard's Amendment 1 further expanded capabilities with additional types and macros, such as wint_t for handling wide characters in input/output operations and constants like WCHAR_MIN and WCHAR_MAX to define the representable range. The size of wchar_t is implementation-defined: on Windows platforms, it is typically 16 bits to align with UTF-16 encoding, enabling representation of Basic Multilingual Plane characters in a single code unit, while on many Unix-like systems, including those using the GNU C Library, it is 32 bits for full UCS-4/UTF-32 support of all code points up to 0x10FFFF. This variation allows wide characters to process global text efficiently, though external storage often favors compact encodings like UTF-8 to minimize space.

In practice, wide character support is provided through the <wchar.h> header in C and C++, offering functions for string manipulation (e.g., wcslen for length, wcscpy for copying), conversion from multibyte (narrow) characters (e.g., mbtowc), and input/output (e.g., fgetws, wprintf), ensuring portability across locales while avoiding the inefficiencies of multibyte sequences in performance-critical code. Later standards like C++11 and C11 introduced fixed-width alternatives, char16_t and char32_t, to address portability issues with wchar_t's variable size.

Fundamentals

Definition and Purpose

A wide character is a fixed-width data type in computing that employs more bits—typically 16 or 32—compared to the standard narrow character, which usually occupies 8 bits, to encode characters from extensive or international repertoires such as Unicode. This representation, often implemented as the wchar_t type in languages like C and C++, allows for the direct storage of a broader range of symbols, including those from diverse scripts and technical notations, in a uniform format. The primary purpose of wide characters is to facilitate efficient handling of global text data in applications requiring internationalization, by enabling straightforward storage and manipulation without relying on variable-length encodings like multibyte sequences. This approach supports the processing of Unicode or similar large character sets in a code set-independent manner, promoting performance gains in text operations for multilingual software. Key benefits include the simplicity of string indexing due to predictable character boundaries, consistent memory allocation per character regardless of content, and reduced decoding complexity during text processing. These advantages make wide characters particularly valuable in scenarios demanding rapid text manipulation, such as graphical user interfaces (GUIs) where uniform width aids in layout computations, and rendering engines that benefit from fixed-size access for displaying international content. For instance, wide-character strings like L"Hello" can seamlessly incorporate characters from multiple locales without encoding overhead, as shown in the sketch below.
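
The following minimal C sketch (an illustration, not drawn from any particular codebase) shows a wide string literal and direct per-character indexing via <wchar.h>; the setlocale call is assumed to select a locale capable of wide output:
c
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void) {
    /* Each element of a wide string literal is one wchar_t, so the nth
       character can be reached by indexing, without decoding byte sequences. */
    const wchar_t greeting[] = L"Hello";

    setlocale(LC_ALL, "");  /* enable the environment's locale for wide output */
    wprintf(L"length: %zu\n", wcslen(greeting));             /* 5 */
    wprintf(L"third character: %lc\n", (wint_t)greeting[2]); /* 'l' */
    return 0;
}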

Narrow vs. Wide Characters

Narrow characters, typically represented using 8-bit data types such as the C/C++ char, are limited to encoding at most 256 symbols, as seen in standards like ASCII (7-bit, 128 characters) or ISO/IEC 8859-1 (8-bit, 191 graphic characters for Latin scripts). These encodings suffice for single-language applications, such as English text in legacy systems, where the restricted repertoire covers basic alphabetic, numeric, and punctuation needs without exceeding memory constraints. However, they prove inadequate for multilingual support, as they cannot represent characters from non-Latin scripts like Cyrillic, Arabic, or CJK ideographs beyond rudimentary extensions.

In contrast, wide characters employ larger fixed-width encodings, such as 16-bit UCS-2 (supporting 65,536 code points in the Basic Multilingual Plane) or 32-bit UTF-32 (accommodating the full range of over 1.1 million code points up to U+10FFFF). These are commonly implemented via types like wchar_t (16 bits on Windows, 32 bits on most Unix-like platforms) or char32_t, enabling comprehensive coverage of global scripts, including emoji and historic notations. Wide characters are particularly suited for modern applications requiring random access to international text, such as user interfaces handling diverse languages or content with non-Latin elements.

The primary trade-offs between narrow and wide characters revolve around efficiency and capability. Wide characters demand two to four times more storage per symbol than narrow ones, potentially increasing memory usage in large datasets, but they permit direct indexing without variable-length decoding. Narrow characters, while compact and faster for simple operations, often necessitate multibyte extensions (e.g., UTF-8 or Shift-JIS) for broader character sets, complicating parsing and indexing due to varying byte lengths; the sketch below illustrates the difference. For instance, legacy English text files remain efficient with narrow encodings like ASCII, whereas contemporary software for global users, including emoji-rich messaging or multilingual databases, favors wide characters for seamless integration.
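
A small C comparison (illustrative only) of the same text stored as UTF-8 multibyte narrow characters and as wide characters: strlen counts bytes, while wcslen counts fixed-width units, so the two lengths diverge once a non-ASCII character appears:
c
#include <stdio.h>
#include <string.h>
#include <wchar.h>

int main(void) {
    /* "héllo": the 'é' occupies two bytes in UTF-8 but one wide character. */
    const char    narrow[] = "h\xc3\xa9llo";  /* UTF-8 byte sequence */
    const wchar_t wide[]   = L"h\u00E9llo";   /* fixed-width units   */

    printf("narrow length (bytes): %zu\n", strlen(narrow)); /* 6 */
    printf("wide length (units):   %zu\n", wcslen(wide));   /* 5 */
    return 0;
}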

Historical Development

Origins in Character Encoding Challenges

The American Standard Code for Information Interchange (ASCII), introduced in 1963, employed a 7-bit encoding scheme that supported only 128 characters, primarily accommodating the English alphabet, digits, punctuation, and control codes, which proved insufficient for non-Latin scripts such as the Chinese, Japanese, and Korean (CJK) ideographs requiring thousands of distinct glyphs. For instance, basic literacy in Japanese demanded over 2,000 characters, far exceeding ASCII's capacity and rendering it incapable of handling ideographic writing systems without extensive modifications. Similarly, the 8-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), developed by IBM in the 1960s, offered 256 code points but suffered from non-collation-friendly ordering and regional variants, limiting its effectiveness for global text processing.

By the 1980s, the inadequacies of these single-byte approaches became increasingly apparent, particularly in EBCDIC-based mainframe systems where over 57 national variants complicated data interchange and led to frequent encoding mismatches during file transfers. Initial academic and industry proposals for 16-bit encodings emerged around this time; for example, in 1987, Xerox researcher Joe Becker outlined a uniform 16-bit scheme as part of early discussions, building on his prior work with multilingual systems like the Xerox Star. These challenges were driven by the rapid globalization of software, where applications required seamless support for multiple languages in single documents without constant encoding switches, resulting in processing overheads and errors in internationalized workflows, as exemplified by communication challenges during the development of Apple's KanjiTalk project in 1985. A key milestone came in the mid-1980s when the International Organization for Standardization (ISO) initiated work on 16-bit universal character sets; in 1984, ISO/TC97/SC2 document N1436 proposed a two-byte graphic character repertoire to unify diverse scripts, laying groundwork for broader efforts.

Evolution with Internationalization Standards

The development of wide characters gained momentum in the 1990s through key internationalization standards that addressed the limitations of 8-bit encodings for global text processing. In 1991, the Unicode Consortium released Unicode 1.0, establishing a 16-bit character encoding scheme aligned with emerging international efforts to support multilingual software. This standard was synchronized with the International Organization for Standardization's (ISO) work, culminating in the 1993 publication of ISO/IEC 10646, which defined the Universal Coded Character Set (UCS) as a 31-bit repertoire capable of encoding over 2 billion characters, thereby promoting the use of 16-bit and 32-bit wide character formats for efficient representation of diverse scripts. These milestones, driven by collaboration between the Unicode Consortium and ISO/IEC Joint Technical Committee 1 (JTC1), laid the foundation for wide character adoption in software internationalization (i18n), enabling developers to handle characters from multiple languages without relying on locale-specific code pages. A significant programming milestone was the inclusion of wide character support in the ISO C90 standard (1990), which introduced the wchar_t type, with Amendment 1 (1995) adding the <wchar.h> header along with types like wint_t and macros such as WCHAR_MIN and WCHAR_MAX.

As these standards evolved, the initial UCS-2 encoding, which used fixed 16-bit units, proved insufficient for the full UCS repertoire because it could not represent characters beyond the Basic Multilingual Plane without extensions. To address this, the Unicode Consortium introduced UTF-16 in Unicode 2.0 (1996), extending UCS-2 with surrogate pairs so that code points up to U+10FFFF can be encoded in one or two 16-bit units, while the fixed-width 32-bit UTF-32 form was later standardized for simplicity in processing. Concurrently, the X/Open Portability Guide Issue 4 (XPG4) in 1992 and subsequent POSIX standards, such as IEEE Std 1003.1-1996, integrated wide character support through headers like <wchar.h>, standardizing functions for wide character handling in Unix-like systems to facilitate portable i18n applications. These refinements ensured that wide characters could be manipulated consistently across platforms, shifting the industry from fragmented multibyte schemes to unified, extensible encodings.

By the 2000s, wide character support had achieved widespread adoption in major operating systems and libraries, solidifying their role in global software ecosystems. Microsoft Windows, starting with Windows 2000, transitioned from UCS-2 to UTF-16 for its wchar_t type, integrating it into core APIs for file systems and user interfaces to support international text rendering. Similarly, Linux distributions via the GNU C Library (glibc) enhanced wide character functionality around 2000, with full conformance to wide character interfaces by the mid-2000s, enabling robust handling of UTF-8 and UCS-4 text in applications. Supplementary planes were introduced in Unicode 2.0 (1996), with expansions continuing in later versions such as Unicode 5.0 (2006), relying on surrogate pairs to overcome 16-bit limitations and to accommodate growing demands for historic scripts and additional CJK ideographs. This evolution profoundly impacted operating system internationalization, allowing seamless integration of diverse languages without code reconfiguration. For instance, Java, released in 1995, adopted a 16-bit char type based on Unicode to natively support i18n from inception, facilitating cross-platform applications that process global text efficiently.
Overall, these standards transformed wide characters from an experimental solution into a cornerstone of modern computing, promoting interoperability and reducing the complexity of multilingual development.

Relation to Encoding Standards

UCS and Unicode

The Universal Coded Character Set (UCS), defined by ISO/IEC 10646, establishes a 31-bit code space encompassing up to 2^31 code points to support a vast repertoire of characters from diverse scripts and symbol systems. Within this framework, wide characters serve as native fixed-width representations, primarily through UCS-2, which maps code points using 16-bit units, and UCS-4 (equivalent in practice to UTF-32), which employs 32-bit units for direct encoding of the full range. These encodings enable straightforward storage and processing of Unicode-compatible text without variable byte lengths, aligning with the standard's goal of universal character representation.

The Unicode standard closely aligns with UCS, sharing its character repertoire and synchronizing definitions and assignments to ensure interoperability; specifically, Unicode limits its code space to U+0000 through U+10FFFF (1,114,112 code points) across 17 planes. For the Basic Multilingual Plane (BMP), which covers code points U+0000 to U+FFFF and includes most commonly used characters, wide characters in 16-bit form (such as UCS-2 or the BMP portion of UTF-16) provide sufficient coverage without additional mechanisms. Full single-unit support for characters in the astral planes (U+10000 to U+10FFFF), encompassing rarer scripts and symbols, requires the 32-bit wide character encoding UTF-32, which represents each code point in one unit.

In terms of mapping, wide characters in UTF-32 store code points directly, where each scalar value corresponds to exactly one 32-bit unit, simplifying indexing and random access in applications. For 16-bit compatibility, UTF-16 extends UCS-2 by using surrogate pairs—two 16-bit units (a high surrogate from U+D800 to U+DBFF and a low surrogate from U+DC00 to U+DFFF)—to encode supplementary characters, allowing wide character systems originally designed for UCS-2 to handle the extended repertoire with minimal disruption; the arithmetic is sketched below. These fixed-width encodings in UCS and Unicode facilitate portability by permitting explicit specification of endianness, either big-endian (most significant byte first) or little-endian (least significant byte first), through variants like UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. Additionally, the byte order mark (BOM), represented by the Unicode character U+FEFF at the start of a text stream, signals the endianness and encoding form, enhancing cross-platform compatibility by allowing systems to detect and interpret the byte order without prior agreement. This mechanism is particularly valuable for wide character data exchanged between architectures with differing native byte orders.
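
A brief C sketch (illustrative, not tied to any particular library) of the surrogate-pair arithmetic described above: a supplementary code point is offset by 0x10000 and its 20 bits are split between a high and a low surrogate:
c
#include <stdint.h>
#include <stdio.h>

/* Split a supplementary code point (U+10000..U+10FFFF) into a UTF-16
   surrogate pair: top 10 bits -> high surrogate, bottom 10 -> low surrogate. */
static void to_surrogates(uint32_t cp, uint16_t *high, uint16_t *low) {
    cp -= 0x10000;                             /* 20-bit offset into the astral planes */
    *high = (uint16_t)(0xD800 + (cp >> 10));
    *low  = (uint16_t)(0xDC00 + (cp & 0x3FF));
}

int main(void) {
    uint16_t hi, lo;
    to_surrogates(0x1F60A, &hi, &lo);          /* U+1F60A, an emoji outside the BMP */
    printf("U+1F60A -> 0x%04X 0x%04X\n", (unsigned)hi, (unsigned)lo); /* 0xD83D 0xDE0A */
    return 0;
}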

Multibyte Characters

Multibyte character encodings represent text using a variable number of bytes per character, typically ranging from one to four bytes in modern standards like UTF-8, allowing for efficient storage of diverse scripts while maintaining compatibility with legacy single-byte systems. For instance, UTF-8 encodes basic Latin characters in a single byte identical to ASCII, while more complex characters from scripts like CJK (Chinese, Japanese, Korean) may require up to four bytes. Similarly, older encodings such as Shift-JIS, developed for Japanese text, use one byte for half-width katakana and Roman characters but two bytes for full-width characters such as kanji. This variable-length approach optimizes space by allocating fewer bytes to frequently used characters but introduces complexity in parsing, as decoders must examine byte sequences to determine character boundaries.

In contrast to wide characters, which employ a fixed-width format for uniform and predictable access to individual characters, multibyte encodings demand stateful decoding processes that track context across bytes, potentially leading to errors during slicing or truncation operations without full re-parsing. Wide characters, often based on fixed-size units like 16-bit or 32-bit representations in UCS or Unicode, enable direct indexing by character position, simplifying operations in applications requiring frequent manipulation. Multibyte systems, however, require incremental decoding to avoid misalignment, which can complicate implementations in performance-critical scenarios; a conversion sketch follows below.

Multibyte encodings emerged in the 1980s to address limitations of 8-bit systems in handling non-Latin scripts, particularly in East Asian contexts where thousands of characters exceeded single-byte capacities. Standards like Extended Unix Code (EUC), introduced for Unix environments to support Japanese, Chinese, and Korean, built on national character sets such as the 1978 JIS C 6226 standard by allowing multiple code sets within a variable-byte framework. Shift-JIS, popularized in the mid-1980s by Microsoft for Japanese personal computing, extended this approach for broader compatibility. Wide characters later gained prominence in the 1990s as a performance-oriented alternative, offering fixed-width simplicity for globalized applications without the overhead of variable decoding.

The primary trade-offs between multibyte and wide character approaches center on memory efficiency versus processing simplicity. Multibyte encodings like UTF-8 conserve storage by using just one byte for ASCII characters, which comprise much of Western text, reducing overall footprint compared to fixed-width alternatives that allocate uniform space regardless of character complexity. However, this efficiency comes at the cost of increased computational overhead for parsing and indexing, whereas wide characters provide straightforward, error-resistant handling but waste memory on simpler scripts by padding with unused bits.
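
The following C sketch (assuming the environment provides a UTF-8 locale) uses the standard mbstowcs conversion to turn a variable-length multibyte string into fixed-width wide characters, making the byte-count versus character-count distinction explicit:
c
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_ALL, "");                    /* use the environment's locale (assumed UTF-8) */

    const char *multibyte = "caf\xc3\xa9";    /* "café": 5 bytes, 4 characters in UTF-8 */
    wchar_t wide[16];

    size_t count = mbstowcs(wide, multibyte, 16);  /* decode into fixed-width units */
    if (count == (size_t)-1) {
        perror("mbstowcs");                   /* fails if the locale cannot decode the bytes */
        return 1;
    }
    printf("multibyte bytes: %zu\n", strlen(multibyte)); /* 5 */
    printf("wide characters: %zu\n", count);             /* 4 */
    return 0;
}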

Technical Specifications

Size Variations

Wide characters are commonly implemented using 16-bit or 32-bit widths to accommodate Unicode code points. The 16-bit format, as in UCS-2 and UTF-16, directly represents characters in the Basic Multilingual Plane (BMP, covering code points U+0000 to U+FFFF) using a single code unit, while employing surrogate pairs (two 16-bit units) for supplementary characters in higher planes. In contrast, the 32-bit format, as in UTF-32, uses a single fixed code unit for every code point, eliminating the need for surrogates and enabling straightforward indexing and processing across all planes.

Platform implementations exhibit variations in these sizes. On Windows, wide characters are typically 16 bits wide, aligning with UTF-16 encoding for compatibility with the system's native Unicode support. In contrast, many Unix-like systems, including Linux and AIX, adopt a 32-bit width for wide characters to match UTF-32 and provide sufficient capacity for full Unicode coverage without variable-length complications. The choice of 16 bits balances memory efficiency with practical coverage, as the BMP encompasses the vast majority of commonly used characters from modern scripts, allowing most text to be stored compactly at 2 bytes per character. Characters assigned to the astral planes beyond the BMP add a modest overhead in UTF-16 through surrogate pairs, but this is acceptable given the rarity of such characters in typical usage. The 32-bit option prioritizes simplicity in processing astral characters, avoiding surrogate handling at the cost of doubled memory usage for BMP-only text.

Implementation considerations include alignment to the host system's word size for optimal access; for instance, 32-bit wide characters on 64-bit Unix systems align naturally to 4-byte boundaries, minimizing access penalties. Memory overhead is directly proportional to the chosen width, with a string of n characters occupying 2n bytes in 16-bit formats (or up to 4n with surrogates) and exactly 4n bytes in 32-bit formats, influencing decisions in resource-constrained environments. A sketch of the storage calculation follows.
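
A short C illustration of how the per-character width determines storage: sizeof(wchar_t) is implementation-defined, so the same program reports different buffer sizes on, say, Windows (2-byte wchar_t) and Linux (4-byte wchar_t):
c
#include <stdio.h>
#include <wchar.h>

int main(void) {
    const wchar_t text[] = L"wide text";   /* 9 characters plus the terminating L'\0' */
    size_t units = wcslen(text);

    /* Storage scales with the implementation-defined width of wchar_t:
       (9 + 1) * 2 = 20 bytes where wchar_t is 16-bit, 40 bytes where it is 32-bit. */
    printf("sizeof(wchar_t): %zu bytes\n", sizeof(wchar_t));
    printf("string storage:  %zu bytes\n", (units + 1) * sizeof(wchar_t));
    return 0;
}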

Data Types and Storage

In programming languages like C and C++, the primary data type for wide characters is wchar_t, an implementation-defined type designed to hold the largest extended character set supported by the implementation's locales. Its size varies by platform: typically 16 bits (2 bytes) on Windows systems to align with UTF-16 encoding, and 32 bits (4 bytes) on most Unix-like systems such as Linux to support UTF-32. For explicit fixed-width alternatives, developers often use uint16_t or uint32_t from the <stdint.h> header to ensure consistent size across platforms, avoiding reliance on the variable sizeof(wchar_t). Wide character strings are stored as arrays of wchar_t, with memory allocation scaling by the type's size; for example, a fixed-size array like wchar_t str[100]; consumes 200 bytes on 16-bit platforms or 400 bytes on 32-bit platforms, excluding any additional overhead. These strings are null-terminated using the wide null character L'\0', which is a single wchar_t value of zero, analogous to the narrow '\0' but occupying the full width of the type to mark the end of the sequence.

When serializing wide characters to files or streams, byte order becomes critical, as multi-byte code units (e.g., 16-bit values) can be stored in big-endian (most significant byte first) or little-endian (least significant byte first) order depending on the host architecture. To resolve ambiguity during deserialization, the byte order mark (BOM), encoded as the character U+FEFF (zero-width no-break space), is prefixed to indicate the order: FE FF for big-endian UTF-16 or FF FE for little-endian UTF-16. Portability challenges arise from the platform-dependent size of wchar_t, which can lead to incompatible binary layouts or incorrect string lengths when code is compiled across systems; for instance, a string buffer sized assuming 16 bits may overflow on 32-bit platforms. To mitigate this, cross-platform code should prefer explicit fixed-width types like uint16_t for UTF-16-like storage or uint32_t for UTF-32, combined with a standardized serialization format that includes a BOM where byte-order detection is needed; a sketch of such serialization appears below.
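
A minimal C sketch (the file name and string are hypothetical) that serializes 16-bit code units in an explicit little-endian layout with a leading BOM, independent of the host machine's native byte order:
c
#include <stdint.h>
#include <stdio.h>

/* Write one 16-bit code unit in little-endian order regardless of host endianness. */
static void put_u16le(uint16_t unit, FILE *out) {
    fputc(unit & 0xFF, out);          /* least significant byte first */
    fputc((unit >> 8) & 0xFF, out);
}

int main(void) {
    FILE *out = fopen("wide.bin", "wb");   /* hypothetical output file */
    if (!out) return 1;

    /* BOM (U+FEFF) followed by two BMP code units for "Hi". */
    const uint16_t units[] = { 0xFEFF, 0x0048, 0x0069 };
    for (size_t i = 0; i < sizeof units / sizeof units[0]; i++)
        put_u16le(units[i], out);

    fclose(out);                           /* file bytes: FF FE 48 00 69 00 */
    return 0;
}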

Programming Implementations

C and C++

In C, wide character support is provided through the <wchar.h> header, which defines the wchar_t type as an integer type capable of representing wide characters, along with types such as wint_t for extended integer values and functions for manipulation. Amendment 1 to the ISO C90 standard (1995) introduced this header, and ISO C99 consolidated and extended its wide character utilities, enabling operations on wide-character strings and streams. Key functions include wcslen() for determining the length of a wide-character string by counting non-null wide characters, wprintf() for formatted output of wide-character data to streams, and mbstowcs() for converting multibyte character sequences to wide-character arrays while respecting the current locale. To handle wide characters effectively, programs must configure the locale using setlocale(LC_ALL, "") to enable locale-specific behavior for conversions and classification, ensuring compatibility with international character sets. For example, a wide-character string literal is written with the L prefix, such as wchar_t str[] = L"Hello, world!";, which creates a null-terminated array of wchar_t values. Common pitfalls include assuming wchar_t is always 16 bits wide, as its size is implementation-defined (typically 32 bits on systems like Linux and macOS for full UTF-32/UCS-4 support, and 16 bits on Windows using UTF-16 with surrogate pairs for higher code points), leading to portability issues if not verified with sizeof(wchar_t). Best practices recommend using wide-character functions consistently for Unicode-aware code and avoiding mixing with narrow-character operations without explicit conversions via functions like wcstombs().

In C++, wide character support extends C's facilities through the <cwchar> header, which provides C-style wide functions in a namespace-clean manner, while the standard library introduces higher-level abstractions like std::wstring, a specialization of std::basic_string<wchar_t> for dynamic wide-character string management. std::wstring supports iterators for traversal and standard algorithms, such as std::find() to locate a wide character within a std::wstring, enabling efficient searching without manual pointer arithmetic. The C++11 standard enhanced Unicode handling by introducing codecvt facets like std::codecvt_utf8<wchar_t> for conversions between UTF-8 multibyte strings and wide-character representations, along with fixed-width types char16_t and char32_t (though std::wstring remains tied to wchar_t). For instance, converting a narrow UTF-8 string to a wide string can use std::wstring_convert<std::codecvt_utf8<wchar_t>> (deprecated in C++17 but illustrative of these improvements), ensuring proper encoding transformations. C++ best practices emphasize using std::wstring for locale-sensitive text processing, combining it with std::locale for facet-based operations, and verifying platform-specific wchar_t sizing to avoid truncation in cross-compilation scenarios. Developers should prefer standard algorithms over raw wide functions for readability and safety, while always including locale setup as in C to support internationalization.
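
An illustrative C program (not taken from any particular source) combining the facilities described above: locale setup with setlocale, a wide string literal, wcslen, wprintf, and conversion back to the locale's multibyte encoding with wcstombs; the byte count shown assumes a UTF-8 locale:
c
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_ALL, "");                      /* enable locale-aware conversions and output */

    wchar_t wide[] = L"Gr\u00FC\u00DFe";        /* "Grüße": 5 wide characters */
    char narrow[32];

    wprintf(L"wide length: %zu\n", wcslen(wide));          /* 5 */

    size_t bytes = wcstombs(narrow, wide, sizeof narrow);  /* wide -> multibyte */
    if (bytes == (size_t)-1) {
        fputs("conversion not representable in current locale\n", stderr);
        return 1;
    }
    wprintf(L"multibyte bytes: %zu\n", bytes);              /* 7 in a UTF-8 locale */
    return 0;
}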

Python

In Python 3, the str type natively represents Unicode text, encompassing the full range of code points from U+0000 to U+10FFFF, thereby functioning as a wide character abstraction without requiring explicit fixed-width types. Internally, since Python 3.3, Unicode strings employ a flexible representation that optimizes memory usage by allocating 1 byte per character for code points 0–255, 2 bytes for code points U+0100–U+FFFF, or 4 bytes for code points U+10000–U+10FFFF, adapting based on the string's content. This variable-width internal storage ensures efficient handling of wide characters while remaining transparent to the user, as the str object always operates on logical code points rather than raw bytes or surrogates.

Key functions for manipulating wide characters in Python 3 include len(), which returns the number of Unicode code points in a string (e.g., len("café") yields 4, treating the accented 'é' as one unit); ord() and chr(), which convert between single code points and their integer values (e.g., ord('é') returns 233, and chr(233) reconstructs 'é'); and encode('utf-32'), which explicitly outputs a fixed-width 32-bit encoding for applications requiring uniform character size, such as certain legacy systems or precise byte-level access (e.g., "wide".encode('utf-32-le') produces a bytes object with 4 bytes per character, while the plain 'utf-32' codec additionally prepends a 4-byte byte order mark). Surrogate pairs for code points beyond U+FFFF (e.g., emojis like '😊' at U+1F60A) are handled transparently within str objects; when encoding to UTF-16, Python generates the necessary high and low surrogates (U+D800–U+DBFF and U+DC00–U+DFFF), but the string itself abstracts away these details to present a single code point.

For legacy support, Python 2 distinguished between the str type for byte sequences (8-bit, narrow characters) and a separate unicode type for wide Unicode strings, which required the u prefix for literals (e.g., u"wide"). Migration to Python 3 often involves converting unicode usages to plain str, addressing issues like implicit decoding errors or mixed-type operations, with tools like the 2to3 converter automating much of the process; however, explicit encode() and decode() calls became essential to distinguish text from bytes. In practice, for input/output operations, UTF-8 encoding is typically preferred over UTF-16 or UTF-32 due to its compactness for ASCII-heavy text and its backward compatibility with ASCII, though fixed-width encodings like UTF-32 may be used for scenarios demanding consistent character alignment, at the cost of higher memory overhead. The following example demonstrates transparent surrogate handling and fixed-width UTF-32 encoding:
python
# Emoji with surrogate pair in UTF-16
emoji = "😊"  # U+1F60A, len() treats as 1 code point
print(len(emoji))  # Output: 1
print(ord(emoji))  # Output: 128522

# Encode to fixed-width UTF-32 (little-endian codec, no byte order mark)
wide_bytes = emoji.encode('utf-32-le')
print(len(wide_bytes))  # Output: 4 (one 32-bit unit)
This approach contrasts with lower-level languages by prioritizing Unicode abstraction over manual wide character management.

Other Languages

In Java, the char type is a 16-bit unsigned integer that represents a UTF-16 code unit, allowing it to store characters from the Basic Multilingual Plane directly while using surrogate pairs for supplementary characters. The String class internally uses this UTF-16 encoding, where strings are sequences of these 16-bit code units and supplementary characters are handled via surrogate pairs. To access full code points beyond the BMP, methods like codePointAt() return the complete code point at a specified index, accounting for surrogate pairs if present. JavaScript represents strings as sequences of 16-bit unsigned integers, encoded in UTF-16, where most characters fit in a single unit but those outside the BMP require two units via surrogate pairs. There is no explicit wide character type; instead, the language's handling is implicit, with properties like length counting UTF-16 code units rather than full code points, which can lead to discrepancies for supplementary characters.

Other modern languages adopt varied approaches to wide character support. In Go, the rune type is an alias for int32, a 32-bit signed integer that holds a single Unicode code point, facilitating straightforward iteration over the characters of a string. Rust's char type is a four-byte type that encodes a Unicode scalar value, excluding surrogates to ensure it always represents a complete, valid character without pairing. In .NET languages like C#, the char type is a 16-bit unsigned integer serving as a UTF-16 code unit, similar to Java, with strings composed of these units and methods available for code point access.

Most contemporary programming languages prioritize implicit wide character handling through UTF-16 or UTF-32 internal representations, balancing memory efficiency with compatibility while avoiding the explicit narrow/wide distinctions common in older systems. This design simplifies implementation compared to Python's variable-width internal representation, though it introduces nuances like surrogate handling in UTF-16-based systems.

    Feb 28, 2025 · The char type keyword is an alias for the .NET System.Char structure type that represents a Unicode UTF-16 code unit, typically a UTF-16 character.