
hOCR

hOCR is an open standard for representing the results of optical character recognition (OCR) and document layout analysis as a subset of HTML, enabling the encoding of text, layout, bounding boxes, confidence scores, and style information in a machine-readable and web-compatible format. Developed by Thomas M. Breuel in 2007 for the open-source OCRopus system, hOCR addresses the need for a unified, extensible format to handle intermediate and final OCR outputs across multilingual workflows, leveraging existing HTML, CSS, and XML technologies to support typesetting models, logical markup, and engine-specific details. Key features include hierarchical elements such as ocr_page, ocr_par, ocr_line, and ocrx_word for structuring content, along with properties like bbox for coordinates and x_confs for accuracy metrics, allowing seamless integration with tools for processing, validation, and visualization. Widely adopted in open-source OCR and digitization projects, hOCR is natively supported by the Tesseract OCR engine for outputting structured results and is extensively used by the Internet Archive to generate searchable text layers for millions of scanned documents, including character-level granularity in compressed formats like *_chocr.html.gz.

Introduction

Definition and Purpose

hOCR is an open standard designed for representing the results of optical character recognition (OCR) processes in a formatted, HTML-based structure. It encodes textual content derived from scanned images or photographs, along with associated layout analysis, character recognition outcomes, and metadata such as confidence scores, into HTML or XHTML documents. This format serves as a lightweight, extensible means to capture the spatial and typographic details of recognized text without requiring custom data structures. The primary purpose of hOCR is to facilitate interoperability across diverse OCR workflows by embedding machine-readable OCR data directly within standard web technologies, enabling seamless integration of intermediate and final results. It supports multi-level abstractions, from high-level document structures and typesetting models to engine-specific details, allowing for consistent representation throughout the OCR pipeline. By leveraging existing HTML and CSS standards, hOCR ensures that OCR outputs can be stored, shared, processed, and displayed using widely available tools, promoting reusability and reducing the need for custom parsers. Key benefits of hOCR include its ability to preserve spatial information, such as bounding boxes and stylistic attributes, which is essential for downstream applications like generating searchable PDFs, extracting structured text, and enabling visual verification of recognition accuracy. This preservation of layout and style information without loss supports advanced post-processing tasks, including error correction and multilingual handling, while maintaining a human-viewable format that aligns with broader OCR goals of transforming physical documents into accessible digital forms. In essence, hOCR bridges the gap between raw image inputs and usable digital outputs, enhancing efficiency in document digitization efforts.

History and Development

hOCR originated from the need to standardize the representation of optical character recognition (OCR) results within HTML documents, initially developed by Thomas Breuel around 2007 as part of the open-source OCRopus project. This effort addressed challenges in embedding layout information, confidence scores, and textual outputs from OCR workflows into web-compatible formats, building on microformat principles to enable seamless integration with existing HTML structures. The format was formally introduced in a paper presented at the 2007 International Conference on Document Analysis and Recognition (ICDAR), where Breuel outlined hOCR as a lightweight, extensible solution for both intermediate and final OCR results. Following its inception, hOCR's maintenance shifted to community-driven efforts, with no formal governing body but active collaboration via GitHub. Current stewardship is provided by Konstantin Baierer through the kba/hocr-spec repository, which hosts the official specification and facilitates ongoing contributions from developers. The version timeline reflects incremental refinements: version 1.0, released in May 2010, established the basic English-language specification in a Google Document format. Version 1.1, published in September 2016, introduced minor updates for improved clarity, including a port to Markdown, errata corrections, a table of contents, and a Chinese translation. Version 1.2, launched in February 2020, adopted a WHATWG-style living standard approach using the Bikeshed tool for specification generation; it added support for layout properties such as baseline and x_confs while maintaining backward compatibility, with subsequent commits addressing refinements but no formal major release by 2025. Key milestones include hOCR's integration into major OCR engines, notably Tesseract starting with version 3.0 in 2010, which enabled HTML-based output for layout-aware text extraction. The format also influenced supporting utilities like hocr-tools, a suite developed within the OCRopus ecosystem for manipulating and evaluating hOCR files. Its recognition in academic and technical literature, exemplified by the 2007 ICDAR proceedings, underscored its role as an influential microformat for document analysis. As of 2025, hOCR persists as a living standard, with no successor version announced, yet it remains actively maintained in open-source communities to support multilingual OCR applications across diverse scripts and layouts.

Format Specification

Core Elements

hOCR employs a set of HTML elements to encode the structural and semantic aspects of OCR output, categorized by their role in representing document layout and textual content. These core elements are defined using class attributes prefixed with "ocr_" for specification compliance or "ocrx_" for engine-specific extensions, ensuring interoperability across OCR systems. The typesetting category includes elements such as ocr_page, ocr_carea, ocr_line, ocr_separator, and ocr_noise, which model page-level layout by delineating the overall page, content areas (formerly columns), text lines, separators, and noise regions, respectively. These elements are nested hierarchically to mirror the physical arrangement of text on a page, with ocr_page serving as the top-level container for ocr_carea and subsequent lines. In contrast, the float category comprises ocr_float, ocr_image, ocr_table, ocr_header, ocr_footer, and others, designed for positioned, non-flow content like floating images, tables, headers, or blocks that do not integrate into the primary text stream. Logical elements, including ocr_document, ocr_par, ocr_section, ocr_chapter, and ocr_blockquote, establish a semantic hierarchy that abstracts the document's organizational structure, such as chapters, paragraphs, or blockquotes, independent of visual layout. Inline elements like ocr_glyph, ocr_math, ocr_dropcap, and ocr_chem address character-level details, marking individual glyphs, mathematical expressions, drop caps, or chemical formulas within text lines to preserve fine-grained recognition data. Engine-specific elements, denoted by the "ocrx_" prefix (e.g., ocrx_block for custom blocks and ocrx_word for word-level units), allow OCR engines to incorporate proprietary features without violating the standard. These elements fulfill their representational role by embedding the recognized text directly as HTML content, while attaching essential metadata, such as bounding boxes or confidence scores, via the title attribute (with properties detailed in the subsequent section on attributes). hOCR's design supports multilingual text through Unicode encoding and HTML language attributes, enabling seamless handling of diverse scripts.

Properties and Attributes

hOCR properties are embedded within the title attribute of relevant HTML elements as semicolon-separated key-value pairs, where each pair consists of a lowercase property name followed by its value, separated by whitespace (e.g., title="bbox 0 0 100 200; ppageno 1"). This format allows for structured metadata attachment without altering the document's visual rendering, ensuring compatibility with standard parsers. Properties are grouped into functional categories to describe various aspects of the OCR output. In the general category, ppageno specifies the physical page number as an unsigned integer, starting from 0 for the front cover, to uniquely identify pages within a multi-page document (e.g., ppageno 7). Layout properties include bbox, which defines a rectangular bounding box using four unsigned integers for the top-left (x0 y0) and bottom-right (x1 y1) coordinates in pixels (e.g., bbox 105 66 823 113), and baseline, which describes the text line's baseline as a polynomial approximation, typically a linear form with slope (float) and y-intercept (int) relative to the bounding box's bottom-left corner (e.g., baseline 0.015 -18 for y = 0.015x - 18). Another layout property, textangle, indicates the text rotation angle in degrees (counter-clockwise positive) relative to the page upright position (e.g., textangle 7.32). For confidence, x_confs provides per-character confidence scores as space-separated floats (0-100, approximating posterior probabilities in percent), one per character in the element (e.g., x_confs 37.3 51.23 1 100), while x_conf offers an overall confidence for the substring (e.g., x_conf 97.23). Source properties encompass image, which references the input image file as a delimited string (a UNIX pathname or URL, possibly relative; e.g., image "/foo/bar.png"). Style properties, often engine-specific with the x_ prefix, include x_font for the font family (e.g., x_font "Comic Sans MS"), x_fsize for font size in points (e.g., x_fsize 12), and others like x_style for additional typographic details. The coordinate system for spatial properties like bbox and baseline originates from the document's top-left corner, with x increasing rightward and y increasing downward, measured in pixels; for baseline, coordinates are relative to the element's bounding box bottom-left, enabling precise alignment even for sloped text. Higher-order polynomials for baseline can be used for curved lines by adding more coefficients (e.g., baseline b0 b1 b2 for a quadratic fit), though linear approximations suffice for most cases. For interoperability, all property names must be lowercase alphanumeric or prefixed with x_ for extensions, with unknown properties ignored by parsers; values adhere to specific Augmented Backus-Naur Form (ABNF) grammars defined per property, and required properties like bbox must be present on certain elements such as ocr_page or ocr_line.
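Because every property rides in the title attribute as plain text, consuming code splits first on semicolons and then on whitespace. The following minimal Python sketch illustrates that convention; the helper name parse_hocr_title and the sample title string are illustrative, not part of any hOCR library:
def parse_hocr_title(title):
    """Split an hOCR title attribute into a dict mapping property name to raw value string."""
    props = {}
    for field in title.split(";"):             # properties are semicolon-separated
        field = field.strip()
        if not field:
            continue
        name, _, value = field.partition(" ")  # name, then whitespace-separated values
        props[name] = value.strip()
    return props

# Example: the title attribute of a hypothetical ocr_line element.
title = "bbox 105 66 823 113; baseline 0.015 -18; x_confs 37.3 51.23 1 100"
props = parse_hocr_title(title)
bbox = [int(v) for v in props["bbox"].split()]                  # [105, 66, 823, 113]
slope, offset = (float(v) for v in props["baseline"].split())   # 0.015, -18.0
confidences = [float(v) for v in props["x_confs"].split()]      # per-character scores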

Hierarchical Structure

The hOCR format organizes OCR results into a nested, tree-like hierarchy that mirrors the spatial and logical layout of the original document, using standard HTML elements augmented with specific class attributes. At the root level, the structure typically begins with an <html> or <body> element containing a single <div class="ocr_document"> as the top-level logical container, which encapsulates the entire processed document. This hierarchy descends through a series of specialized elements: <div class="ocr_page"> for individual pages, <div class="ocr_carea"> for content areas (such as columns or blocks), <div class="ocr_par"> for paragraphs, <span class="ocr_line"> for text lines, <span class="ocrx_word"> for words, and optionally <span class="ocr_cinfo"> for character-level confidence annotations. Nesting in hOCR follows standard HTML rules, requiring proper containment where child elements are fully enclosed within parents, ensuring a valid DOM tree that reflects the document's reading order. Text content is placed directly within these elements (e.g., inside <span> tags for words or lines), flowing linearly in reading order to preserve the natural sequence while embedding layout metadata via the title attribute on each element. The format also supports parallel hierarchies for non-flow elements, such as <div class="ocr_float"> for images or sidebars, which exist outside the main text flow and do not nest within the primary chain, allowing representation of complex layouts like figures alongside paragraphs. For multi-page documents, the hierarchy accommodates multiple <div class="ocr_page"> elements directly under the ocr_document, each identified by a unique ppageno property in its title attribute (starting from 0 for the first or cover page) to distinguish physical pages. Every page-level element must include a bbox property specifying its bounding box coordinates (e.g., "bbox 0 0 612 792" for an 8.5x11-inch page at 72 DPI), enabling precise spatial mapping. The format's extensibility stems from its basis in XHTML, ensuring compliance with XML parsers while allowing embedded <style> blocks or external CSS links for visual styling of OCR elements without altering the core data; custom properties prefixed with x_ (e.g., x_font for font details) can be added to the title attribute for engine-specific extensions. Parsing hOCR documents involves traversing the DOM tree to extract hierarchical relationships, where tools recursively process elements to collect text content in order and associate it with layout via bbox properties, thereby reconstructing the document's spatial structure for downstream applications like reflowable text rendering.
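As a rough sketch of such traversal, the following Python fragment uses lxml to walk the DOM, collect word text in reading order, and pull each word's bounding box out of its title attribute; the function name and the input file name page.hocr are placeholders, and real tools such as hocr-tools handle more element classes and error cases:
from lxml import html

def iter_words(hocr_path):
    """Yield (text, bbox) pairs for every ocrx_word element, in document order."""
    tree = html.parse(hocr_path)
    for word in tree.xpath('//*[@class="ocrx_word"]'):
        bbox = None
        for field in (word.get("title") or "").split(";"):
            parts = field.split()
            if parts and parts[0] == "bbox":
                bbox = tuple(int(v) for v in parts[1:5])   # (x0, y0, x1, y1) in pixels
        yield word.text_content().strip(), bbox

# for text, bbox in iter_words("page.hocr"):
#     print(bbox, text)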

Usage and Applications

In OCR Pipelines

hOCR integrates into optical character recognition (OCR) pipelines as a standardized output format, generated after layout analysis and text recognition stages. It takes scanned images, such as TIFF or PNG files, as input and produces an HTML file embedding the recognized text, layout information, and metadata. This placement in the workflow allows hOCR to capture the full results of prior processing steps without altering the upstream operations. The typical OCR pipeline leading to hOCR output follows a sequence of preprocessing, analysis, and assembly. First, input images undergo binarization to convert them to black and white for clearer text separation and deskewing to correct any rotation or tilt from scanning. Next, page segmentation divides the image into hierarchical blocks, such as paragraphs, lines, and words, often using layout analysis algorithms to identify reading order and content regions. Character recognition then applies trained models, like those in Tesseract, to identify glyphs and assign confidence scores to each recognized element. Finally, these results are assembled into the hOCR hierarchy, mapping text to spatial coordinates and embedding attributes for structure and quality. A key advantage of hOCR in pipelines is its preservation of spatial data through bounding boxes and baseline information, which facilitates post-processing tasks such as visualizing errors for manual correction or aligning text with original images. This spatial fidelity also enables seamless chaining with natural language processing (NLP) tools, where the structured output supports entity extraction by providing context-aware text segments. For instance, bounding boxes allow NLP models to consider layout in identifying named entities like locations or persons within document regions. hOCR supports multilingual processing through its reliance on Unicode for character encoding, ensuring compatibility across scripts, and uses the HTML dir attribute to specify directionality (e.g., "ltr" or "rtl") for handling right-to-left (RTL) languages such as Arabic or Hebrew. The lang attribute denotes the language, allowing pipelines to process diverse documents without format reconfiguration. In large-scale adoption, hOCR has been implemented in digital libraries for batch OCR of scanned books, notably by the Internet Archive, which transitioned to Tesseract-generated hOCR files in 2020 for processing millions of pages. Projects like OCR-D further standardize hOCR workflows for multilingual digitization in cultural heritage institutions, with Tesseract 5 enhancing accuracy for diverse scripts as of 2023. This enables efficient indexing, search, and derivative creation, such as searchable PDFs, across vast collections. Engines like Tesseract output hOCR directly, supporting automatic script detection for multilingual batches.
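As an illustration of the output stage, the short Python sketch below asks Tesseract (through the pytesseract wrapper) for hOCR instead of plain text; the file names are placeholders, and the wrapper requires access to a local Tesseract installation:
import pytesseract
from PIL import Image

# Recognize a scanned page and keep the structured hOCR result;
# image_to_pdf_or_hocr returns the hOCR document as bytes.
page = Image.open("scan_page1.png")
hocr_bytes = pytesseract.image_to_pdf_or_hocr(page, extension="hocr", lang="eng")

with open("scan_page1.hocr", "wb") as out:
    out.write(hocr_bytes)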

Integration with Searchable Documents

hOCR plays a central role in generating searchable documents by merging its detailed layout annotations with the original scanned images to create PDFs that include selectable text layers. The bounding box (bbox) coordinates in hOCR files, defined as rectangular areas (x0, y0, x1, y1) in pixel units, enable precise alignment of the recognized text over the corresponding visual elements, ensuring the digital text matches the spatial position of the printed content. This approach preserves the document's original appearance while embedding machine-readable text, facilitating functionalities like copy-paste and keyword searching. The integration process involves rendering the hierarchical structure of hOCR elements, such as ocr_page, ocr_line, and ocrx_word, onto the image layers of the target document. Text positioning relies on the bbox attributes to overlay content accurately, while optional properties like x_confs allow for the inclusion of recognition confidence scores (expressed as percentages), which can flag low-quality areas for review or visual highlighting in the final output. This method supports efficient batch processing for large-scale digitization, transforming static image-based files into interactive, text-enabled formats without altering the underlying visuals. Key benefits include upholding visual fidelity alongside added digital accessibility, which is vital for applications in digital archives, legal repositories, and compliance-driven environments where screen readers can navigate and vocalize the content. For instance, the text layer enables users with disabilities to interact with documents independently, reducing barriers to access. hOCR's XHTML foundation further supports standards like PDF/UA (ISO 14289), which mandates structured, tagged PDFs for universal accessibility, by providing a semantic markup that translates into compliant document tags during conversion. Additionally, hOCR facilitates workflows for producing accessible e-book formats such as EPUB and DAISY, where the layout and text data inform navigation and reading aids. A prominent example is the Internet Archive's digitization effort, where hOCR generated by Tesseract is used to produce searchable PDFs with embedded text layers for millions of scanned books. This enables full-text search across the collection, spanning terabytes of content, without necessitating re-OCR, while also supporting derivative formats like DAISY for print-disabled users, demonstrating hOCR's scalability in large-scale archival integration.
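The sketch below illustrates the overlay idea in simplified form: pixel bounding boxes are scaled to PDF points and each word is written in invisible render mode on top of the page image. It assumes the lxml and reportlab libraries, a 300 dpi scan, and placeholder file and function names; production tools such as hocr-pdf and OCRmyPDF implement the same mapping with far more care (fonts, baselines, rotation, compression):
from lxml import html
from reportlab.pdfgen import canvas

def bbox_of(element):
    """Read the 'bbox x0 y0 x1 y1' property from an element's title attribute."""
    for field in (element.get("title") or "").split(";"):
        parts = field.split()
        if parts and parts[0] == "bbox":
            return [int(v) for v in parts[1:5]]
    raise ValueError("element has no bbox property")

def overlay(hocr_path, image_path, pdf_path, dpi=300):
    page = html.parse(hocr_path).xpath('//*[@class="ocr_page"]')[0]
    x0, y0, x1, y1 = bbox_of(page)
    scale = 72.0 / dpi                                   # pixels -> PDF points
    width, height = (x1 - x0) * scale, (y1 - y0) * scale

    c = canvas.Canvas(pdf_path, pagesize=(width, height))
    c.drawImage(image_path, 0, 0, width=width, height=height)
    for word in page.xpath('.//*[@class="ocrx_word"]'):
        wx0, wy0, wx1, wy1 = bbox_of(word)
        text = c.beginText()
        text.setTextRenderMode(3)                        # invisible but selectable text
        # hOCR y grows downward, PDF y grows upward, so flip against the page height.
        text.setTextOrigin(wx0 * scale, height - wy1 * scale)
        text.textLine(word.text_content().strip())
        c.drawText(text)
    c.showPage()
    c.save()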

Supporting Software

OCR Engines

Tesseract, originally developed at Hewlett-Packard and later sponsored by Google, serves as the primary open-source OCR engine supporting native hOCR output. It generates hOCR files using the 'hocr' configuration option on the command line, such as tesseract input_image output_base hocr, which embeds layout information, bounding boxes, and confidence scores directly into HTML. Since version 4.0, Tesseract incorporates LSTM-based neural networks for improved text recognition accuracy across various scripts and fonts. Additionally, versions 4 and later integrate enhanced layout analysis capabilities, enabling better detection of text blocks, paragraphs, and hierarchical structures in complex documents. OCRopus, an early adopter of hOCR in open-source OCR systems, focuses on modular document analysis with line-level text recognition. It produces hOCR output through dedicated tools like ocropus-hocr, which convert recognition results into the format, embedding layout and segmentation data. Designed for research and large-scale processing, OCRopus supports Python-based pipelines that allow customization for specific document types, emphasizing neural-network-based detection and recognition. Cuneiform provides multilingual OCR support, particularly for Cyrillic and European scripts, and generates hOCR output as one of its standard formats alongside TXT and RTF. This engine performs layout analysis and text recognition, making it suitable for scanned documents in non-Latin languages, and has been integrated into open-source alternatives to commercial tools like ABBYY FineReader for enhanced script handling. Kraken represents a modern OCR engine tailored for historical and non-Latin texts, outputting hOCR with detailed metadata including bounding boxes and confidences. It features advanced trainable segmentation models for layout analysis, enabling precise line and character detection in degraded or varied document materials. Kraken is widely adopted in GLAM (galleries, libraries, archives, and museums) institutions for digitizing collections due to its flexibility in handling diverse scripts and its open-source model repository. Other tools include gImageReader, a graphical user interface frontend for Tesseract that facilitates hOCR export directly from its recognition interface, supporting post-processing like spell-checking before output. ABBYY FineReader offers limited hOCR support through third-party plugins and converters, such as XSLT-based transformations from its XML output, primarily for integration in professional workflows.

Manipulation and Conversion Tools

Several open-source tools and libraries facilitate the manipulation, validation, and conversion of hOCR files, enabling tasks such as merging documents, evaluating accuracy, and converting into searchable formats like PDF. These utilities are essential for post-processing OCR outputs, ensuring compliance with the hOCR specification, and integrating with broader workflows in document digitization projects. The hocr-tools suite, developed as part of the OCRopus project, provides a Python-based collection of command-line utilities for editing and assessing hOCR files. It includes hocr-check for validating file structure and compliance with hOCR standards, hocr-merge for combining multiple page-level hOCR documents into a single cohesive file, and hocr-eval for measuring OCR accuracy by comparing output against ground-truth annotations, such as character error rates. Additional tools like hocr-pdf support conversion to searchable PDFs by overlaying text layers on associated images, while hocr-cut and hocr-extract-images aid in segmenting and extracting elements for further manipulation. This suite is particularly useful in research and digitization environments due to its focus on multilingual OCR evaluation and low-level editing capabilities. hocr2pdf is a dedicated converter that transforms hOCR files, typically generated by engines like Tesseract, into searchable PDFs by aligning bounding box (bbox) coordinates from the hOCR markup with corresponding scanned images. It handles text orientation via attributes like textangle in ocr_line elements and supports bbox-based positioning for words and paragraphs to ensure accurate overlay. Implementations vary: the ExactImage version uses a C++-based PDF codec for efficient rendering with options for resolution (default 300 dpi), compression (e.g., JPEG or Flate), and sloppy text placement to optimize performance; meanwhile, the Node.js variant from eloops leverages libraries such as pdfkit for PDF generation and cheerio for HTML parsing, along with image-manipulation dependencies. These tools preserve layout fidelity while embedding selectable text layers, making them suitable for archival PDF creation. Developed by the Internet Archive, archive-hocr-tools offers a Python library optimized for parsing and processing large-scale hOCR datasets, with streaming capabilities to manage memory usage during operations on extensive collections. Key utilities include hocr-combine-stream for merging multiple hOCR files into one without high memory overhead, hocr-pagenumbers for identifying and annotating page boundaries in multi-page documents, and hocr-fold-chars for aggregating character-level annotations into word-level structures. The suite also supports diffing between OCR versions to track improvements or errors across iterations, as well as extraction tools like pdf-to-hocr for converting existing PDFs back to hOCR format. This toolkit is designed for high-volume workflows, emphasizing efficiency in large-scale digitization. For integrating outputs from cloud-based OCR services, gcv2hocr converts responses from Google Cloud Vision into hOCR format, facilitating downstream compatibility with hOCR ecosystems. It processes bounding polygons and text annotations from the Vision API to generate structured hOCR markup, including bbox coordinates scaled to image dimensions (e.g., via optional width and height parameters). Usage involves piping JSON input to the tool, often followed by PDF generation with hocr-tools, enabling searchable document creation from Vision-processed images. This bridge is valuable for hybrid workflows combining cloud OCR with open hOCR standards. Supporting libraries enhance programmatic handling of hOCR.
The hocr-spec-python package implements the hOCR specification in Python, serving primarily as a validation tool that checks attributes, classes, metadata, and properties against profiles such as "standard" or "relaxed", with options to skip specific validations or output reports in XML or text formats. It aids developers in ensuring hOCR conformance before manipulation. In R applications, the tesseract package provides bindings to the Tesseract OCR engine, supporting hOCR output via an HOCR option in its recognition functions and returning structured data frames with word-level confidences, bounding boxes, and text through ocr_data(). This enables R users to process images or PDFs, preprocess with packages like magick for tasks such as format conversion, and analyze multilingual OCR results (over 100 languages) in statistical workflows.
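In practice, the command-line utilities described above are often chained from scripts. The fragment below sketches one such chain in Python, validating each page with hocr-check and then merging pages with hocr-merge; the exact command-line arguments are assumptions based on the tools' documented purposes, so consult each utility's help output before relying on them:
import subprocess

pages = ["page_0001.hocr", "page_0002.hocr"]   # placeholder per-page hOCR files

# Validate each page against the hOCR format before merging.
for page in pages:
    subprocess.run(["hocr-check", page], check=True)

# Combine the per-page documents into a single multi-page hOCR file.
with open("book.hocr", "wb") as merged:
    subprocess.run(["hocr-merge", *pages], stdout=merged, check=True)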

Examples

Basic hOCR Document

A basic hOCR document represents the simplest form of OCR output, encapsulating a single page of recognized text within a standard HTML structure. This format may use the optional ocr_document class as a top-level container for the OCR results within the HTML body, though basic examples often start directly with ocr_page elements that reference the source image and define page boundaries via bounding box coordinates. Within each page, text is organized into ocr_line spans, which hold the recognized content along with layout properties like position and baseline alignment. The following example illustrates a minimal hOCR file for a single-page document with one line of text:
<!DOCTYPE html>
<html>
<head>
  <title>OCR Output</title>
  <meta charset="UTF-8">
</head>
<body>
  <div class="ocr_document">
    <div class="ocr_page" title="image 'page1.png'; bbox 0 0 2480 3508; ppageno 1">
      <span class="ocr_line" title="bbox 248 543 2326 609; baseline 0.015 -13">
        Sample text here.
      </span>
    </div>
  </div>
</body>
</html>
In this structure, the ocr_document div optionally serves as the container for the OCR results, while the ocr_page div specifies the image file, the overall page dimensions using the bbox property (rectangular pixel coordinates for the left, top, right, and bottom edges), and the physical page number with ppageno. The ocr_line then captures a segment of text, including its bounding box and a baseline slope and offset for alignment, directly embedding the recognized words without additional nesting for paragraphs or regions. Properties such as bbox provide essential layout information, with further details on their format available in the hOCR properties specification. hOCR files are typically saved with a .hocr or .html extension, allowing them to be opened and verified directly in web browsers, where the embedded text and metadata can be inspected or styled via CSS. This basic example assumes a straightforward single-page layout without complex elements like columns or tables, making it suitable for initial testing or simple OCR verification workflows.

Bounding Box and Baseline Example

In hOCR, the bounding box property, denoted as bbox, specifies the rectangular region enclosing an OCR element using pixel coordinates relative to the top-left corner of the document, formatted as x0 y0 x1 y1 where (x0, y0) is the upper-left corner and (x1, y1) is the lower-right corner. For a word-level element, this appears in the title attribute of a <span> tag with class ocrx_word, such as <span class="ocrx_word" title="bbox 105 66 823 113">word</span>, which defines the precise spatial extent of the recognized text for layout reconstruction. The baseline property complements the bounding box by describing the orientation of a text line, particularly for handling slanted or skewed text, and is formatted as baseline slope offset, where slope indicates the incline (pixels of vertical change per horizontal pixel) and offset is the y-intercept relative to the bottom-left corner of the line's bounding box. An example for a line-level element is <span class="ocr_line" title="bbox 105 66 823 113; baseline 0.015 -18">text line</span>, where the slight positive slope of 0.015 accounts for minor skew, and the negative offset of -18 places the baseline 18 pixels above the box's bottom edge (in image coordinates, where y increases downward), leaving room for descenders. Confidence scores can be integrated alongside these layout properties using the x_confs property, which provides per-character recognition reliability values (0-100, higher indicating greater confidence) within the same title attribute. For instance, <span class="ocrx_word" title="bbox 105 66 823 113; x_confs 87 92 81 85">word</span> assigns scores of 87, 92, 81, and 85 to each character (w, o, r, d), enabling tools to highlight low-confidence regions during post-processing. These properties facilitate accurate layout reconstruction and text overlay on original images by allowing rendering engines to position extracted text precisely within the document's coordinate space, including corrections for skew via the baseline to align synthetic text with scanned content.
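A small worked example, assuming the line above, shows how the baseline polynomial is evaluated against the bounding box; the helper name baseline_y is illustrative:
# <span class="ocr_line" title="bbox 105 66 823 113; baseline 0.015 -18">
x0, y0, x1, y1 = 105, 66, 823, 113
slope, offset = 0.015, -18

def baseline_y(x_abs):
    """Absolute page y (pixels, increasing downward) of the baseline at column x_abs."""
    dx = x_abs - x0                    # horizontal distance from the box's left edge
    return y1 + slope * dx + offset    # polynomial is relative to the bottom-left corner

print(baseline_y(105))   # 95.0: 18 px above the box bottom at the left edge
print(baseline_y(823))   # ~105.77 near the right edge of the line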

Challenges and Limitations

Known Issues in Implementations

One common issue in hOCR implementations arises during PDF generation, where bounding box (bbox) coordinates can drift due to mismatches in resolution (DPI) and pixel dimensions. For instance, the hocr2pdf tool has been reported to superimpose multiple pages into a single output, leading to misalignment when processing multi-page documents with varying resolutions. Similarly, tools like OCRmyPDF may produce invalid PDFs with displaced image elements, such as black boxes replacing intended visuals, exacerbating coordinate inaccuracies in downstream workflows. Confidence score handling in hOCR outputs exhibits inconsistencies across OCR engines, particularly in the scaling and presence of attributes like x_conf. Tesseract outputs confidence values on a 0-100 scale, where scores below 95 often indicate unreliable recognition, but some implementations report negative values or omit them entirely in hOCR files, complicating quality assessment. This variability stems from engine-specific outputs, with no standardized normalization, leading to missing or misinterpreted data in integrated pipelines. Multilingual support in hOCR reveals gaps, especially for right-to-left (RTL) languages in older tools, where text directionality is not properly preserved. For example, hocr-pdf reverses Hebrew text direction, rendering it unreadable without additional parameters, a problem tied to inadequate UTF-8 handling for RTL scripts like Arabic or Hebrew. Baseline calculations also fail for vertical scripts, as hOCR outputs may omit baseline information or incorrectly set textangle to 180 degrees when legacy engine components are disabled, disrupting alignment in languages like Japanese or Mongolian. Tool-specific problems further hinder hOCR adoption, including crashes and errors in converters handling large files. The HOCRConverter in pdfminer.six encounters import errors that can halt processing of substantial documents, often due to dependency mismatches. Ghostscript, when used in hOCR-to-PDF conversion workflows, has been observed to corrupt complex documents by ignoring embedded text layers or hierarchical structures, resulting in incomplete outputs. Validation tools fail on non-standard properties, such as Tesseract's use of x_size instead of the specified x_fsize for font metrics, which violates the specification and triggers errors in strict validators. As of 2025, the hOCR specification remains at version 1.2 from 2020, with no major updates to address evolving needs, contributing to fragmentation in modern AI-OCR systems. Transformer-based engines and large language models increasingly favor proprietary or JSON outputs over hOCR, leading to incompatible workflows as tools in open-source OCR libraries diverge from the aging standard.

Solutions and Workarounds

To address resolution issues in hOCR bounding boxes, which are defined in pixels relative to the document's top-left corner, developers can normalize coordinates by scaling them according to the image's DPI. For instance, to convert pixel coordinates to PDF points (72 points per inch), multiply the bounding box values by 72 / DPI; this ensures consistency across different input resolutions during downstream processing like PDF embedding. Tools such as ocropus/hocr-tools facilitate preprocessing by allowing extraction and manipulation of bounding boxes from hOCR elements, enabling scripted adjustments before further conversion. For confidence score inconsistencies, where values may exceed the standard 0-100 range or fall negative in engines like Tesseract, post-processing scripts can enforce normalization by capping scores at 100 and flooring at 0 to align with hOCR specifications. The hocr-spec library provides a foundation for such scripts, including programmatic access to confidence properties (x_confs for characters and engine-specific word-level scores such as x_wconf) for validation and adjustment. Additionally, opting for OCR engines like Kraken ensures more consistent hOCR output, as it generates bounded confidence values alongside detailed bounding boxes down to the character level without requiring extensive post-correction. Right-to-left (RTL) text handling in hOCR can be enhanced by embedding custom CSS rules directly in the HTML structure, such as setting direction: rtl on relevant elements to override default left-to-right rendering. For converters processing hOCR to other formats, integration with bidirectional (bidi) libraries like Python's python-bidi or JavaScript equivalents helps maintain proper text flow during export, particularly for languages like Hebrew or Arabic. As an alternative to less reliable parsers, the archive-hocr-tools library from the Internet Archive offers robust hOCR parsing capabilities, supporting efficient streaming operations like combining multi-page files and folding character-level data into word-level structures with low memory overhead. For issues in conversion tools like hOCR-to-PDF utilities, manual edits can be performed using standard XML editors (e.g., via lxml in Python), targeting specific <span class="ocrx_word"> elements to correct bounding boxes or text without full reprocessing. Best practices for hOCR implementation include validating files against the specification using tools like the hocr-spec CLI before any conversion, which checks conformance to required metadata, element structures, and property names. Adhering strictly to version 1.2 of the hOCR spec ensures compatibility, as it standardizes confidence ranges, bounding box formats, and CSS integration for layout. For persistent bugs, such as rendering quirks in specific engines, community-contributed patches on GitHub repositories like tesseract-ocr or ocropus provide targeted fixes, often addressing edge cases in multilingual or high-resolution outputs.
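The two most common numeric fixes, rescaling pixel boxes to PDF points and clamping confidences into the 0-100 range, amount to a few lines of Python; the function names below are illustrative:
def bbox_to_points(bbox, dpi=300):
    """Scale an hOCR pixel bounding box to PDF points (72 points per inch)."""
    scale = 72.0 / dpi
    return tuple(round(v * scale, 2) for v in bbox)

def clamp_confidence(value):
    """Force an engine-reported confidence into the 0-100 range expected by hOCR."""
    return max(0.0, min(100.0, float(value)))

print(bbox_to_points((105, 66, 823, 113)))            # (25.2, 15.84, 197.52, 27.12)
print(clamp_confidence(-1), clamp_confidence(137.5))  # 0.0 100.0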
