
Collation

Collation is the process of comparing and arranging elements, such as written texts, documents, or characters, into a specific order according to defined rules. This assembly can occur in various contexts, including the sorting of text in lists and indexes, the gathering of printed sheets in bookbinding, and the comparison of manuscripts in textual criticism. Historically, the term also refers to a light meal served during religious fasts, derived from monastic gatherings for reading and refreshment. In modern computing, collation primarily denotes the set of rules governing how character strings are compared and sorted, ensuring consistency across languages and scripts. The Unicode Collation Algorithm (UCA), a key standard in this domain, provides a framework for tailoring sort orders to cultural and linguistic preferences by assigning weights to characters at multiple levels: primary for base letters, secondary for accents and other diacritics, and tertiary for case. Database systems such as SQL Server and MySQL implement collations to handle Unicode data, specifying bit patterns for characters and comparison behaviors such as case sensitivity or accent insensitivity. Beyond digital applications, collation plays a crucial role in printing and photocopying, where it involves sequencing multiple copies of multi-page documents to maintain proper page order rather than producing stacks of identical pages. In bibliography, the term describes the structural formula of a book, detailing the number and arrangement of signatures (folded sheets) to verify completeness and detect missing leaves. In scholarly editing, collation entails aligning versions of a text to identify differences and reconstruct an authoritative edition, a practice essential to classical studies and other text-critical fields. These diverse applications underscore collation's foundational role in organizing information for accessibility and accuracy.

Principles of Ordering

Numerical and Chronological Ordering

Numerical ordering in collation refers to the process of comparing strings by interpreting them as mathematical values rather than as lexical character sequences, so that the relative magnitude of numbers determines their position in the sorted output. For instance, in a list containing "−4", "2.5", and "10", numerical collation would place "−4" first, followed by "2.5", and then "10", reflecting their actual numeric values rather than a codepoint-based comparison in which "10" would precede "2.5". This approach is a customization of the Unicode Collation Algorithm (UCA), which by default sorts digits lexicographically but allows tailoring to parse and compare numeric substrings for intuitive human-readable results. Chronological ordering extends this principle to time-based sequences, where dates or timestamps are sorted according to their temporal progression, often leveraging standardized formats to align lexical and chronological order. The ISO 8601 standard, for example, represents dates in the YYYY-MM-DD format, so that "2023-01-15" precedes "2025-11-10" both numerically and as a string, facilitating efficient organization in databases and archives. This format was developed to promote unambiguous international exchange and machine-readable chronological consistency, avoiding the ambiguities of regional date conventions like MM/DD/YYYY. A key challenge in numerical and chronological ordering arises from partial ordering, where different string representations of equivalent values must be distinguished to preserve semantic intent. For example, in contexts like version numbering or precise data logging, "2" and "2.0" may represent the same value numerically but differ in precision or formatting, requiring collation rules to treat them as distinct to avoid unintended merging in sorted outputs.
This necessitates hybrid approaches in systems like ICU collation, where numeric comparison is combined with a secondary string comparison to keep distinct representations apart. Historically, numerical and chronological ordering emerged in early filing systems and calendars to manage records efficiently amid growing administrative demands. In the late 19th and early 20th centuries, U.S. government offices, including the State Department around 1910, adopted numerical filing to standardize recordkeeping, replacing haphazard arrangements with sequential number assignments for faster access. Similarly, ancient calendars dating back to circa 3000 BCE imposed chronological ordering on events for agricultural and ritual purposes, laying foundational principles for temporal sequencing that influenced modern standards.
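The contrast between lexical and numeric comparison, the self-sorting property of ISO 8601 date strings, and the "2" versus "2.0" distinction can be sketched in a few lines of plain Python (the sample values are illustrative; no collation library is involved, and an ASCII hyphen stands in for the minus sign so they parse as floats):

```python
# Lexical (codepoint) sorting compares characters, so "10" lands
# before "2.5" because '1' < '2'; key=float compares magnitudes.
values = ["10", "2.5", "-4"]
print(sorted(values))             # lexical order: ['-4', '10', '2.5']
print(sorted(values, key=float))  # numeric order: ['-4', '2.5', '10']

# ISO 8601 dates (YYYY-MM-DD) sort chronologically even as strings.
dates = ["2025-11-10", "2023-01-15"]
print(sorted(dates))              # ['2023-01-15', '2025-11-10']

# "2" and "2.0" are numerically equal; a hybrid key compares the
# numeric value first and falls back to the spelling as a tiebreaker,
# keeping the two representations distinct instead of merging them.
versions = ["2.0", "10", "2"]
print(sorted(versions, key=lambda s: (float(s), s)))  # ['2', '2.0', '10']
```

The tuple-valued key in the last line is the simplest form of the hybrid approach described above: the first component decides the numeric order and the second preserves representational differences.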

Alphabetical Ordering

Alphabetical ordering involves comparing strings letter by letter based on the positions of characters within an alphabet, such as placing A before B in the Latin alphabet. This approach determines the sequence by assigning weights to each character, starting at the primary level for base letters and proceeding to secondary levels for diacritics if needed, ensuring that "role" precedes "roles" owing to the additional 's'. Case handling varies by system, but many dictionary conventions treat uppercase and lowercase letters as equivalent in filing value, allowing "Apple" to file next to "apple" without strict precedence. In some computational and filing systems, however, uppercase letters precede lowercase ones based on ASCII values, so that "Apple" sorts before "banana" because 'A' (code 65) comes before 'b' (code 98). Language-specific rules adapt alphabetical ordering to account for digraphs, ligatures, and modified letters unique to each script. In Spanish, the letter ñ is treated as a distinct character positioned after 'n' but before 'o', while the digraphs "ch" and "ll"—considered separate letters until their 1994 reclassification and 2010 exclusion from the alphabet—are now sorted letter by letter as 'c' followed by 'h' (after "ce" but before "ci") and 'l' followed by 'l' (after "li" but before "lo"), respectively. In French, accented letters like é follow their base form 'e' in primary ordering, with diacritics evaluated at the secondary level; French dictionary convention additionally weights accents from the end of the word backward, which affects the relative order of words differing only in their accents. Ligatures such as œ are typically expanded to "oe" for collation, so that "cœur" interleaves with "coeur" rather than sorting after all "co" words. Abbreviations and punctuation are frequently ignored or treated as separators to simplify ordering, preventing disruptions from non-letter characters.
Spaces and hyphens serve as element dividers, while periods in abbreviations like "St." are often disregarded, so "St. Louis" files under "St" (some older library conventions instead file it as if spelled out "Saint Louis"). In English word lists, this results in "U.S.A." sorting under "U" once the periods are ignored. Examples from major languages illustrate these principles: in English dictionaries, "cat" precedes "dog" via primary letter comparison, with case-insensitive filing placing "Cat" adjacent to "cat"; French lists order "cote" before "côte" (unaccented before accented) and "été" after "ete" but before "fête", reflecting secondary diacritic weighting; Spanish dictionaries place "ñoño" after "nube" but before "oasis" due to ñ's position, and "chico" after "cebra" but before "cima" under letter-by-letter digraph treatment.
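A simplified version of this two-level scheme (primary weight = accent-stripped base letters, secondary weight = the original spelling) can be sketched with Python's standard unicodedata module. This uses a plain forward comparison, not the French backward accent rule described above, and the helper name dictionary_key is invented for illustration:

```python
import unicodedata

def dictionary_key(word: str):
    # Primary weight: case-folded string with diacritics stripped, so
    # accented and uppercase variants group with their base letters.
    # Secondary weight: the original spelling, used only to break ties.
    decomposed = unicodedata.normalize("NFD", word.casefold())
    primary = "".join(c for c in decomposed if not unicodedata.combining(c))
    return (primary, word)

words = ["côte", "cat", "dog", "cote", "Cat"]
print(sorted(words, key=dictionary_key))
# ['Cat', 'cat', 'cote', 'côte', 'dog']
```

Because the primary weight decides the order first, "cote" and "côte" stay adjacent and "Cat" files next to "cat", mirroring the dictionary conventions discussed above.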

Specialized Sorting Methods

Root-Based Sorting

Root-based sorting is a collation method used primarily in dictionaries of Semitic languages, where entries are organized by shared consonantal roots—typically triliteral sequences of consonants that form the core semantic unit—rather than by the full spelled-out words. For instance, in Arabic, words derived from the root k-t-b (كتب), such as kitāb (book) and kataba (he wrote), are grouped together under the root entry, with subentries arranged by vowel patterns or affixes. This approach prioritizes morphological structure over linear alphabetical sequence, in contrast with standard alphabetical ordering in non-Semitic scripts. A prominent example is Hans Wehr's A Dictionary of Modern Written Arabic, first published in German in 1952 with English editions appearing by 1961, which employs consonantal roots as the primary sort keys, followed by derived form patterns. In this dictionary, roots are listed in a modified alphabetical order based on their consonants, enabling users to locate related derivations systematically. The method extends to other Semitic languages such as Hebrew, where standard lexicons like the Brown-Driver-Briggs Hebrew and English Lexicon arrange entries by triliteral roots to reflect etymological families. Similarly, in Amharic, dictionaries including Grover Hudson's A Student's Amharic-English, English-Amharic Dictionary (1994) and the Kane Amharic-English Dictionary organize words by root when applicable, following the order of the Ethiopic syllabary for root sequencing. The advantages of root-based sorting lie in its ability to reveal etymological and semantic connections among words, facilitating lookup for language learners and researchers by grouping morphologically related terms. This organization highlights the templatic morphology of Semitic languages, where a single root can generate dozens of forms across grammatical categories.
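The grouping behavior of a root-ordered dictionary can be sketched in Python; the transliterated mini-lexicon and its root assignments below are illustrative stand-ins, since a real dictionary derives the root from the word's morphology rather than from a lookup table:

```python
from collections import defaultdict

# Hypothetical mini-lexicon: transliterated Arabic words mapped to
# their triliteral roots (invented for illustration).
ROOTS = {
    "kitab": "k-t-b",    # "book"
    "kataba": "k-t-b",   # "he wrote"
    "maktab": "k-t-b",   # "office"
    "darasa": "d-r-s",   # "he studied"
    "madrasa": "d-r-s",  # "school"
}

# Group entries under their root, then list roots and subentries in
# sorted order, mimicking the layout of a root-ordered dictionary.
groups = defaultdict(list)
for word, root in ROOTS.items():
    groups[root].append(word)

for root in sorted(groups):
    print(root, sorted(groups[root]))
# d-r-s ['darasa', 'madrasa']
# k-t-b ['kataba', 'kitab', 'maktab']
```

The two-stage key (root first, then word form) is what keeps morphologically related terms adjacent even when their spelled forms would scatter under plain alphabetical order.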

Radical-and-Stroke Sorting

Radical-and-stroke sorting is a hierarchical collation method employed in East Asian writing systems to order logographic characters, primarily by identifying a semantic or graphic component known as the radical, followed by the number of strokes in the remaining portion of the character. This approach facilitates dictionary lookup and collation for scripts like Chinese hanzi, Japanese kanji, and Korean hanja, whose characters do not follow phonetic alphabets. The radical serves as the primary key, often hinting at the character's meaning, while the residual stroke count provides the secondary sorting criterion, ensuring a systematic arrangement without reliance on linear phonetic sequences. In the Chinese system, characters are decomposed such that the radical—typically the leftmost, topmost, or bottommost component—determines the main grouping, with the total stroke count minus the radical's strokes used for subordering. For instance, the character 妈 ("mother") is sorted under the radical 女 (nǚ, meaning "woman", 3 strokes), with 3 additional strokes in the remainder 马 (mǎ, "horse"). Similarly, 好 ("good") falls under 女, followed by 3 strokes in 子 (zǐ, "child"). This method relies on a standardized set of 214 radicals, ordered by their own stroke counts from 1 to 17. The foundational framework emerged from the Kangxi Dictionary (康熙字典, Kāngxī Zìdiǎn), commissioned by the Qing emperor Kangxi and completed in 1716, which formalized the 214-radical system drawing from earlier Ming-era works like the Zhengzitong. This dictionary organized approximately 47,000 characters under these radicals, with subentries by residual strokes, establishing an enduring standard for traditional collation that persists in modern print and digital references. Its influence extends to adaptations in simplified-character contexts, where variant radicals are mapped to maintain compatibility. Japanese kanji dictionaries adopt a parallel structure, classifying characters under one of the 214 Kangxi radicals before sorting by additional strokes, often supplemented by total stroke indices for cross-verification.
For example, the character 読 ("read") is indexed under the radical 言 ("speech", 7 strokes), with 7 further strokes in the remaining component 売. This method supports efficient lookup in kanji dictionaries, aligning closely with Chinese tradition while accommodating Japanese-specific readings and usages. In contemporary digital environments, radical-and-stroke sorting has been integrated into font systems and collation algorithms through the Unicode Standard, which encodes the 214 Kangxi radicals in the range U+2F00–U+2FDF and provides radical-stroke indices in the Unihan Database for CJK characters. This enables consistent machine-readable ordering across Chinese, Japanese, and Korean texts, preserving the method's utility in search engines, databases, and typesetting software. Korean hanja collation mirrors the radical-and-stroke approach using the same 214 Kangxi radicals, with characters sorted first by radical and then by residual strokes, though practical dictionaries often integrate this with phonetic indices for Sino-Korean compounds. This adaptation supports hanja's role in formal and scholarly writing, where it coexists with the alphabetic hangul script without disrupting the logographic ordering.
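A minimal sketch of radical-and-stroke ordering in Python, using a hand-built table for four characters; real systems would read the radical and residual stroke counts from the Unihan Database's kRSUnicode field rather than hard-coding them:

```python
# (radical, radical stroke count, residual strokes), per the Kangxi scheme.
RADICAL_STROKE = {
    "好": ("女", 3, 3),  # "good": radical "woman" + 子 (3 strokes)
    "妈": ("女", 3, 3),  # "mother": radical "woman" + 马 (3 strokes)
    "妹": ("女", 3, 5),  # "younger sister": radical "woman" + 未 (5 strokes)
    "読": ("言", 7, 7),  # "read": radical "speech" + 売 (7 strokes)
}

def radical_stroke_key(char: str):
    radical, radical_strokes, residual = RADICAL_STROKE[char]
    # Primary: the radical (grouped by its own stroke count);
    # secondary: residual strokes; codepoint as a final tiebreaker.
    return (radical_strokes, radical, residual, ord(char))

print(sorted(RADICAL_STROKE, key=radical_stroke_key))
# ['好', '妈', '妹', '読']
```

All 女-radical characters group together before 言-radical ones, and within the group the residual stroke count decides the order, exactly as in a printed radical index.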

Collation in Computing

Sort Keys and Algorithms

In computing, collation relies on sort keys: binary strings or arrays of integers derived from character codes that enable efficient string comparison. These keys transform code points into weighted sequences reflecting linguistic order rather than raw numerical values, allowing binary operations like memcmp to perform the sort. For example, the character "a" (U+0061) might map to a key such as [0x15EF, 0x0020, 0x0002], representing its primary, secondary, and tertiary weights. The historical evolution of collation mechanisms traces back to the American Standard Code for Information Interchange (ASCII), standardized in 1963 by the American Standards Association to provide a 7-bit encoding for 128 characters, primarily supporting English text. Early ASCII-based sorting used simple codepoint comparisons, ordering characters by their binary values (e.g., 'A' at 65 precedes 'a' at 97), which sufficed for basic English but failed for multilingual needs. The introduction of Unicode in 1991 initiated a major evolution in collation, with the standard growing to encompass 159,801 assigned characters across 172 scripts as of version 17.0 (September 2025), necessitating algorithms beyond codepoint order to ensure culturally appropriate results. The Unicode Collation Algorithm (UCA), specified in Unicode Technical Standard #10, provides a foundational, customizable method for generating these sort keys and performing comparisons. It decomposes strings into collation elements—triplets of weights—and compares them level by level: primary weights for base-letter differences (e.g., "a" < "b"), secondary for diacritics (e.g., "é" > "e"), and tertiary for case and variant forms (e.g., "a" < "A" in the default table). The algorithm is tailorable through modifications to the Default Unicode Collation Element Table (DUCET), allowing reordering of scripts, contractions (e.g., "ch" in traditional Spanish), or level adjustments without altering the core process. The UCA's main algorithm proceeds in four steps:
  1. Normalization: Canonicalize the input strings to Normalization Form D (NFD), decomposing combined characters (e.g., "é" to "e" + combining acute). This ensures consistent element mapping.
  2. Collation Element Generation: Map each character or grapheme cluster to one or more collation elements from the collation element table. Simple mappings use a single triplet [P.S.T]; expansions map one character to multiple elements (e.g., the "ffi" ligature); contractions treat digraphs as single units. Elements with a zero primary weight are ignored at the primary level.
  3. Sort Key Formation: For each level, collect the non-ignorable weights into a key array, optionally processing secondary weights backward for certain languages (such as traditional French accent ordering). The full key concatenates the levels with a zero-value separator between them—L1 || 0 || L2 || 0 || L3—forming a binary string suitable for bytewise comparison.
  4. Comparison: Compare keys level by level, stopping at the first differing weight (L1 first, then L2, and so on). If the keys are equal through the tertiary level, the strings are considered equivalent; higher levels (quaternary and beyond) can resolve remaining ties.
In pseudocode, the core comparison resembles:
function compareStrings(s1, s2):
    normalize s1 and s2 to NFD
    ce1 = getCollationElements(s1)  // array of [P, S, T] triplets
    ce2 = getCollationElements(s2)
    key1 = buildSortKey(ce1)  // levels L1, L2, L3 as concatenated weights
    key2 = buildSortKey(ce2)
    for level in 1 to 3:
        if compareLevel(key1[level], key2[level]) != 0:
            return that result
    return 0  // equal
This outline ensures consistency and conformance, with implementations optimizing for variable-length keys. Other algorithms build on or contrast with the UCA. Simple codepoint sorting, common in pre-Unicode systems, compares raw Unicode scalar values (U+0000 to U+10FFFF), which ignores linguistic rules (e.g., "ä" sorts after "z" rather than near "a", as German readers expect). Tailored rules, as in the UCA, override this via custom weight assignments. The International Components for Unicode (ICU) library implements the UCA with extensions from the Common Locale Data Repository (CLDR), generating sort keys as compact byte arrays for high-performance comparison in applications like databases. ICU supports both default DUCET mappings and rule-based tailoring, enabling binary-safe sorting without full string re-parsing.
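The level-by-level key construction above can be made concrete with a toy collation element table. The weights below are invented for illustration (a conformant implementation reads them from DUCET), but the key layout mirrors the UCA's L1 || 0 || L2 || 0 || L3 structure:

```python
# Toy collation element table: char -> (primary, secondary, tertiary).
# Invented weights; DUCET supplies the real ones. Lowercase gets the
# smaller tertiary weight, matching the default "a" < "A" ordering.
CET = {
    "a": (1, 1, 1), "A": (1, 1, 2),
    "b": (2, 1, 1),
    "e": (3, 1, 1), "é": (3, 2, 1),
}

def sort_key(s: str):
    elements = [CET[c] for c in s]
    key = []
    for level in range(3):
        # Concatenate this level's weights, then a 0 level separator.
        key.extend(element[level] for element in elements)
        key.append(0)
    return tuple(key)

print(sorted(["éa", "ba", "ea"], key=sort_key))  # primary, then secondary
print(sorted(["A", "a"], key=sort_key))          # tertiary resolves case
```

Because the key is a flat sequence of integers, two strings can be compared without re-examining the original text—the property real implementations exploit by storing keys as byte arrays and comparing them with memcmp.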

Handling Numbers and Special Characters

In collation processes within computing, handling mixed alphanumeric strings presents challenges, particularly when numbers are embedded in text. Lexical sorting, the default in many systems, treats digits as characters based on their code points, leading to counterintuitive results such as "file10.txt" sorting before "file2.txt" because the character '1' precedes '2'. This approach prioritizes string comparison over numerical value, which can disrupt user expectations in applications like file explorers and version listings. Natural sorting addresses these issues by treating runs of consecutive digits as numerical entities, ensuring "file2.txt" precedes "file10.txt" because 2 < 10. In version numbering, for instance, natural order places "1.2" before "1.10" to reflect semantic progression, whereas lexical order reverses this because '1' precedes '2' in the character-by-character comparison. Similarly, labels like "Figure 7b" should appear before "Figure 11a" in natural sorting, requiring algorithms to detect and numerically compare digit sequences while preserving the surrounding text. Phone numbers exemplify formatting complications: variants like "(555) 123-4567" and "555-123-4567" may sort inconsistently in lexical mode unless normalized by removing non-digits for numerical comparison. Special characters, including punctuation and symbols, introduce further variability in collation. In the Unicode Collation Algorithm (UCA), punctuation such as apostrophes, hyphens, and ampersands is often classified as variable elements with low primary weights, allowing options like "shifted" handling to ignore them at the first three levels and treat them as ignorables or separators. For example, "O'Brien" typically sorts under "O" with the apostrophe ignored, as if written "OBrien", which aligns with dictionary conventions in English locales. Business names with symbols, like "AT&T" or "3M", may require similar ignoring rules to group them with "AT" or "3" entries, preventing punctuation from pushing them to unexpected positions.
Symbols like "@" or "%" in identifiers can be handled via quaternary-level distinctions in the UCA, breaking ties without altering primary order. To resolve these challenges, systems employ custom collators or preprocessing. Tailoring in ICU allows rules to redefine weights for digits and symbols, such as enabling numeric collation so that "10" sorts after "2". Preprocessing with regular expressions can extract numbers for separate numerical comparison before reintegrating them into the string order, as seen in libraries supporting natural sort. Sort keys generated from these tailored mappings provide the foundation for efficient binary comparisons while accommodating such adjustments.
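The regex-based natural sort described above can be sketched as follows; the helper name natural_key is illustrative, not a standard library function:

```python
import re

def natural_key(s: str):
    # Split the string into digit and non-digit runs; tag each run so
    # number runs ((1, int)) and text runs ((0, str)) never compare
    # directly against each other.
    return [(1, int(run)) if run.isdigit() else (0, run.casefold())
            for run in re.split(r"(\d+)", s) if run]

files = ["file10.txt", "file2.txt", "Figure 11a", "Figure 7b"]
print(sorted(files))                   # lexical: 'file10' before 'file2'
print(sorted(files, key=natural_key))  # natural: digit runs compared as ints
```

The capturing group in re.split keeps the digit runs in the output list, so the key preserves the surrounding text while comparing the embedded numbers by value.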

Advanced Topics in Collation

Multilingual and Locale-Specific Collation

Collation in multilingual contexts requires adaptation to diverse scripts, languages, and cultural conventions to ensure logical and culturally appropriate ordering of text. The Common Locale Data Repository (CLDR), maintained by the Unicode Consortium, plays a central role in defining these locale-specific rules by providing tailored collation data that builds upon the Unicode Collation Algorithm (UCA). The repository includes specifications for hundreds of locales, allowing software to apply rules such as variable weighting for ignorables (e.g., spaces and punctuation) and script reordering to align with local expectations. For instance, CLDR enables distinctions in character equivalence and ordering that reflect linguistic norms, preventing mismatches in sorted output across global applications. Specific examples illustrate CLDR's impact. In German dictionary collation, the umlauted "ä" is treated as a secondary variant of "a", sorting immediately after "a" rather than at the end of the alphabet, as defined in the German tailoring rules. In Turkish, CLDR handles the dotted "i" (U+0069) and dotless "ı" (U+0131) as distinct primary weights, with their uppercase counterparts "İ" (U+0130) and "I" (U+0049) following locale-specific case mappings to preserve the phonemic distinction. French phone book collation, another CLDR-defined variant, ignores hyphens and apostrophes (using the "alternate=shifted" option) so that entries like "D'Artagnan" sort under "D" as if unpunctuated, prioritizing readability in directories. In Japanese, collation often relies on pronunciation-based ordering for the kana scripts, with Latin transliteration (romaji) as a fallback for mixed-script text, supported by prefix matching rules in CLDR to handle common readings efficiently.
When scripts interact, as in mixed Latin, Cyrillic, and Arabic text, CLDR leverages UCA defaults to group code points by script for initial ordering, while allowing parametric reordering to prioritize native scripts (e.g., placing Cyrillic before Latin in Russian locales). This keeps cross-script comparisons consistent yet adaptable, as the UCA assigns primary weights according to a script hierarchy to avoid arbitrary placements. Challenges in multilingual collation include varying requirements across use cases, such as dictionary-style sorting (where accents follow base letters) versus phone book styles (which may suppress them for simplicity), requiring explicit locale variants in CLDR. Additionally, some locales suffer from incomplete coverage, where tailorings are partial or rely on private-use mappings, leaving gaps in support for less common scripts or dialects until community contributions update the repository.
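The contrast between tailorings can be sketched with two toy key functions: one approximating German dictionary order (umlauts as secondary variants of their base vowels) and one approximating Swedish order (å, ä, ö as distinct letters after z). Both are simplifications of the actual CLDR rules and assume lowercase, NFC-normalized input over a small alphabet:

```python
import unicodedata

def german_dict_key(word: str):
    # German dictionary tailoring: "ä" sorts with "a" at the primary
    # level (likewise ö/o, ü/u); the original spelling breaks ties.
    decomposed = unicodedata.normalize("NFD", word.casefold())
    base = "".join(c for c in decomposed if not unicodedata.combining(c))
    return (base, word)

# Swedish tailoring instead appends å, ä, ö after z as primary letters.
SWEDISH_ORDER = "abcdefghijklmnopqrstuvwxyzåäö"

def swedish_key(word: str):
    return [SWEDISH_ORDER.index(c) for c in word.casefold()]

words = ["zug", "äpfel", "apfel"]
print(sorted(words, key=german_dict_key))  # ['apfel', 'äpfel', 'zug']
print(sorted(words, key=swedish_key))      # ['apfel', 'zug', 'äpfel']
```

The same three strings come back in different orders under the two keys, which is exactly why locale identifiers must select the tailoring before any sorting happens.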

Recent Developments and Standards

In September 2025, Unicode 17.0 was released, adding 4,803 new characters for a total of 159,801 encoded characters, including support for four new scripts: Sidetic, Tolong Siki, Beria Erfe, and Tai Yo. These additions necessitate updated collation elements in the Unicode Collation Algorithm (UCA) so that the new characters, including new symbols and emoji, sort consistently across implementations. Unicode Technical Standard #10 (UTS #10), which specifies the UCA, was updated to version 17.0 in September 2025; this version includes enhancements to the well-formedness criteria—such as consistent collation levels and unambiguous weight assignments—and guidance on migrating between UCA versions to maintain interoperability in software. Common Locale Data Repository (CLDR) version 47, released in March 2025, expanded locale coverage with core support for languages such as Coptic and Haitian Creole, alongside updates for 11 English variants and Cantonese as used in Macau, contributing to data for over 168 locales across modern, moderate, and basic coverage levels. CLDR version 48, released on October 29, 2025, added core data for further languages such as Buryat (bua), enhancing collation tailoring for diverse languages and enabling more precise sorting in applications, including AI-assisted database systems that rely on CLDR for locale-aware operations. Emerging trends in collation emphasize AI integration for dynamic processing, such as real-time locale detection in cloud services that adapts sorting rules on the fly to user context. Among databases, PostgreSQL 18, released in September 2025, advanced ICU-based collation support with customizable attributes in language tags and the new pg_unicode_fast provider, offering performance improvements over traditional ICU collations for multilingual queries in SQL and NoSQL environments.
Post-2023 multilingual expansions have addressed gaps by broadening support for underrepresented languages, thereby improving equitable handling of diverse content in global applications.
