
Ancillary data

Ancillary data, in the context of digital video and television broadcasting, refers to supplementary non-video information embedded within the serial digital interface (SDI) signal during horizontal and vertical blanking intervals, enabling the transmission of metadata such as timecode, closed captions, and audio descriptors alongside the primary video and audio streams without disrupting the active picture area. This data is formatted into standardized packets consisting of a preamble, data identification words, user data, and checksums to ensure integrity and interoperability across professional media systems. The structure and formatting of ancillary data are primarily defined by SMPTE ST 291-1, which specifies the packet and space formatting for 10-bit data streams, originally published in 1998 and revised periodically to accommodate evolving formats such as high-definition (HD) and ultra-high-definition (UHD) television. Complementary international standards, such as ITU-R Recommendation BT.1364, outline the multiplexing of ancillary data in serial digital component interfaces, supporting both horizontal ancillary data (HANC) in the line-blanking intervals and vertical ancillary data (VANC) in the field-blanking intervals. These standards ensure that ancillary data packets can be reliably inserted, extracted, and processed in production workflows, with mechanisms for error detection and deletion marking. Common applications of ancillary data include embedded closed captioning (per CEA-608 for analog-compatible services and CEA-708 for digital), timecode transmission (SMPTE ST 12-2), active format description (AFD) for aspect-ratio signaling (SMPTE ST 2016), and metadata for audio formats like Dolby E (SMPTE RDD 6). In broadcast environments, ancillary data underpins synchronization, accessibility features, and signaling such as VITC (vertical interval timecode) and AFD, helping maintain picture integrity during format conversions. Ancillary data also supports regional subtitling systems such as teletext in Europe (ETSI EN 300 706) and OP-47 in Australia.
In modern IP-based workflows, ancillary data has transitioned from traditional SDI to network transport via standards like SMPTE ST 2110-40, which defines its carriage over managed IP networks using RTP payloads, preserving compatibility with legacy systems while enabling distribution in data centers and live production venues. This evolution underscores its role in enhancing operational efficiency, regulatory compliance (e.g., FCC captioning mandates), and the integration of immersive audio and dynamic metadata in 4K/8K broadcasting.

Overview

Definition and Purpose

Ancillary data refers to additional information multiplexed into the same interface as the primary video signal, typically carried in the horizontal or vertical blanking portions outside the active picture area. This embedding allows supplementary signals, such as audio or control information, to travel alongside the core video without requiring separate transmission paths. In professional digital video formats, it occupies the ancillary data space corresponding to the traditional blanking intervals of analog systems.

The primary purpose of ancillary data is to transport diverse supplementary elements that enhance the functionality and accessibility of the main content, including synchronization aids like timecodes, accessibility features such as closed captions, and operational metadata for production equipment. By integrating these elements directly into the video stream, ancillary data supports critical broadcast and production needs, such as maintaining timing accuracy across devices and enabling real-time captioning for diverse audiences. It also carries signaling such as active format descriptions that ensure proper aspect-ratio handling during playback. Across media applications, ancillary data appears in forms like closed captions in broadcast video or timecode packets in digital files, analogous to metadata tags in audio formats that provide artist details or album information without altering the primary audio transport. In streaming protocols, it includes cue markers and error-checking checksums that preserve content integrity during transmission.

Key benefits include reduced overall bandwidth demands by avoiding dedicated channels for auxiliary information, preservation of picture quality through non-intrusive placement in blanking areas, and streamlined professional workflows in production and broadcast environments via enhanced interoperability. This promotes efficiency in system design while ensuring reliable delivery of essential supplementary content.

Historical Evolution

The origins of ancillary data trace back to the analog television era, when the vertical blanking interval (VBI) of broadcast signals was first used for non-video information in the mid-20th century. In the 1950s and 1960s, broadcasters began inserting test signals, such as vertical interval test signals (VITS), into the VBI to monitor signal quality during transmission without interfering with the visible picture. By the 1970s, this space was adapted for accessibility features, including closed captions on line 21 of the NTSC signal, with the first captioned programs airing on PBS in 1972 and national closed-captioned broadcasts following in 1980. The EIA-608 standard, published in 1994, formalized these line 21 captions for analog television, enabling widespread use of encoded text for the hearing impaired.

The transition to digital video in the 1980s and 1990s marked a significant turning point, driven by the need for higher capacity and reliable transport in professional environments. The Society of Motion Picture and Television Engineers (SMPTE) played a pivotal role, with early standards like SMPTE RP 125 (first published in 1987 and revised into ST 125M in 1995) defining the ancillary data space in bit-parallel component digital video interfaces at 4:2:2 sampling. This was complemented by SMPTE 259M in 1989, which standardized the serial digital interface (SDI) for uncompressed digital video, incorporating provisions for ancillary data embedding. A key milestone came with SMPTE 291M in 1998, revised repeatedly in the 2000s and 2010s, which established the packet and space formatting for ancillary data across horizontal and vertical blanking regions, enabling structured transport of captions, timecode, and other non-video elements.

Further advancements in the 1990s and 2000s were propelled by regulatory drivers and technological shifts toward higher resolutions. The U.S. Federal Communications Commission (FCC) implemented captioning mandates under the Telecommunications Act of 1996, with rules adopted in 1997 phasing in closed captioning requirements that culminated in a 100% requirement for new programming by 2006, enhancing accessibility.
SMPTE 272M, published in 1994, introduced the first standardized embedding of AES/EBU digital audio into the SDI ancillary space, supporting up to 16 channels and revolutionizing synchronized audio-video workflows. The late 1990s and 2000s saw expansion to high-definition formats with SMPTE 292M (1998) for HD-SDI and SMPTE 424M (2006) for 3G-SDI, increasing bandwidth for richer ancillary payloads like multiple audio streams and metadata. The 2010s onward reflected convergence with IP networks, addressing demands for flexible, scalable distribution in broadcast facilities. SMPTE ST 2110-40, published in 2018, extended ancillary data transport over managed IP networks using RTP packets, decoupling it from the video essence for independent routing and synchronizing streams via PTP. This gained traction in live production during the early 2020s, facilitating IP-based workflows in venues like sports arenas and studios, while ongoing revisions ensure compatibility with emerging ultra-high-definition and immersive formats. Overall, these developments were fueled by escalating data needs, accessibility legislation, and the shift from SDI to IP infrastructures after 2010.

Ancillary Data in Video Signals

Analog Systems

In analog video systems, ancillary data was primarily embedded within the vertical blanking interval (VBI), a non-visible portion of the signal that occurs between the active video fields to allow beam retrace in CRT displays. This interval, spanning approximately 19-25 lines depending on the standard, provided a low-bandwidth channel for transmitting auxiliary data without interfering with the picture. In NTSC systems used in North America, lines 14 through 20 were commonly allocated for such data services, while PAL systems in Europe utilized similar non-displayed VBI lines for embedding information.

Key techniques of the analog era included teletext, introduced in the United Kingdom during the 1970s as a broadcast text service developed by the BBC and IBA, and later formalized in the World System Teletext specification for 625-line systems. Teletext data was modulated into the VBI using non-return-to-zero (NRZ) encoding at rates up to 6.9375 Mbit/s, enabling pages of text and simple graphics. Closed captions, mandated for accessibility, were transmitted on line 21 of NTSC signals following an FCC ruling in 1976 that reserved this line for caption data encoded as two 8-bit bytes per field. Vertical Interval Timecode (VITC), defined in SMPTE RP 108 published in 1981 and later incorporated into SMPTE ST 12, encoded timecode information across VBI lines (typically lines 14-20 in 525-line systems) using a biphase mark code for synchronization and frame-accurate identification during editing.

Common data types carried in the analog VBI encompassed low-bandwidth metadata essential for broadcast operations and consumer features. For instance, program ratings were conveyed for the V-chip system, implemented in the 1990s through Extended Data Services (XDS) on line 21, allowing televisions to block content based on parental controls as required by FCC regulations. In PAL systems, Wide Screen Signalling (WSS) on line 23 provided aspect ratio and scan format information (e.g., 16:9 or 4:3) using a 14-bit code to optimize display on widescreen receivers.
The Video Program System (VPS), transmitted on PAL line 16, enabled VCRs to start and stop recordings accurately by embedding program identification and timing codes in a 13-byte packet. These services typically handled textual, timing, or signaling data at modest rates, such as 960 bits per second for captions. Despite their utility, analog VBI ancillary data systems faced significant limitations due to the inherent vulnerabilities of analog transmission. Data was highly susceptible to noise, interference, and signal degradation over cable or aerial paths; even with error protection such as parity bits and Hamming codes, decoding errors were frequent in poor reception conditions. Capacity was constrained to a few hundred bits per field (teletext, for example, delivered around 360 bits per line across a limited VBI allocation), insufficient for high-volume applications and necessitating prioritization of essential services. Additionally, extraction demanded dedicated decoders in receivers or VCRs, increasing costs and complexity for end users. These challenges nonetheless demonstrated the value of embedding non-video information in broadcast signals, laying foundational concepts for more robust digital implementations.
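The line 21 caption format described above is simple enough to verify in a few lines of code: each of the two bytes per field carries seven data bits plus an odd-parity bit in the most significant position. A minimal sketch (the function name is illustrative):

```python
def decode_line21_byte(raw):
    """Strip the odd-parity bit from a CEA-608 line-21 byte.

    Each byte carries 7 data bits (b6-b0) plus an odd-parity bit in b7;
    a parity failure signals a transmission error and the byte is discarded.
    """
    ones = bin(raw & 0xFF).count("1")
    if ones % 2 == 0:            # odd parity: total number of ones must be odd
        return None              # parity error
    return raw & 0x7F            # 7-bit character code

# Example: 'A' (0x41) has two data ones, so the parity bit is set: 0xC1.
```

A decoder applying this check per byte can silently drop corrupted caption data rather than display garbage, which is exactly how consumer line 21 decoders behaved in poor reception conditions.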

Digital SDI Systems

The serial digital interface (SDI) is a point-to-point transport standard for uncompressed professional video, carried over coaxial copper or fiber, that integrates ancillary data within the non-active portions of the signal. Standard-definition SD-SDI operates at 270 Mb/s as defined by SMPTE ST 259:2008, high-definition HD-SDI at 1.485 Gb/s per SMPTE ST 292-1:2012, and 3G-SDI at 2.970 Gb/s according to SMPTE ST 424:2006, all utilizing 75-ohm cabling for reliable transmission over distances up to 100 meters. Ancillary data is embedded exclusively in the horizontal ancillary (HANC) and vertical ancillary (VANC) blanking regions, preserving the integrity of the active video pixels while allowing non-video information to coexist in the serial data stream.

The capacity for ancillary data in SDI systems is substantial, supporting up to several thousand 10-bit words per frame depending on the video format and blanking allocation; in HD-SDI, for instance, the VANC space alone can hold over 10,000 words across multiple lines. This enables the carriage of diverse payloads, including multi-channel audio: SD-SDI accommodates up to 16 channels at 48 kHz sampling via four audio groups per SMPTE ST 272:2008, while HD-SDI and 3G-SDI support the same 16 channels using SMPTE ST 299-1:2009 for 24-bit audio formatting in the HANC space. Higher sampling rates or additional groups in 3G-SDI can extend this to 32 channels in certain configurations, ensuring synchronization with the video timing.

Key operational features of SDI include deterministic timing enforced by End of Active Video (EAV) and Start of Active Video (SAV) timing reference signals, which delineate the active video boundaries and embed line/field identification for precise alignment. These 4-word sequences (0xFF 0x00 0x00 0xXY in 8-bit form) appear at the end and start of the active portion of each line, respectively, facilitating synchronization and data alignment in receivers.
SDI maintains interoperability across SD, HD, and 3G formats through multi-rate transceivers that auto-detect and adapt to the incoming signal rate over the same cabling infrastructure. The carriage of ancillary data is standardized by SMPTE ST 291-1:2011, which specifies packet formatting with data identification words, user data blocks, and checksums for integrity. Additionally, integration with the Serial Data Transport Interface (SDTI) per SMPTE ST 305:2005 allows SDI links to transport compressed video files or arbitrary data packets by repurposing the active video space, supporting applications like high-speed file transfer in production environments. Compared to analog video systems, digital SDI offers superior reliability through reduced noise susceptibility and line-based cyclic redundancy checks (CRC) for error detection, enabling robust ancillary data transport without the degradation inherent in the analog vertical blanking interval (VBI). The digital approach also scales to higher resolutions, such as 4K/UHD, via quad-link configurations using four synchronized 3G-SDI links to divide the image into quadrants, as outlined in SMPTE ST 425-5:2014 for mapping and synchronization. For higher single-link rates, 6G-SDI (SMPTE ST 2081-1) supports 4K/UHD at up to 30 Hz, while 12G-SDI (SMPTE ST 2082-1) enables single-link 4K/UHD transmission at up to 60 Hz.
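The EAV/SAV timing references described above can be illustrated by decoding the final word of the sequence, often called the XYZ word. A sketch in the 8-bit representation used here (the error-protection bits in the low nibble are ignored for simplicity):

```python
def decode_xyz(xy):
    """Decode the 8-bit XYZ word ending an EAV/SAV sequence
    (0xFF 0x00 0x00 0xXY).

    Bit 7 is fixed at 1; F marks the field, V vertical blanking,
    and H distinguishes EAV (1) from SAV (0). Bits 3-0 carry
    protection bits derived from F/V/H, not decoded here.
    """
    assert xy & 0x80, "bit 7 of the XYZ word must be 1"
    return {
        "F": (xy >> 6) & 1,   # 0 = field 1, 1 = field 2
        "V": (xy >> 5) & 1,   # 1 = within vertical blanking
        "H": (xy >> 4) & 1,   # 1 = EAV, 0 = SAV
    }

# Common 8-bit values: 0x9D = EAV in active picture, 0xAB = SAV in blanking.
```

A receiver uses these flags to locate the HANC space (between EAV and SAV) and the VANC lines (where V = 1), which is why the XYZ word is central to ancillary data extraction.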

IP-Based Systems

In IP-based systems, ancillary data is transported over managed IP networks as part of the transition from the traditional serial digital interface (SDI) to more flexible, uncompressed media workflows in broadcast production. The SMPTE ST 2110 suite of standards defines this adaptation, with ST 2110-40 specifically addressing the carriage of SMPTE ST 291-1 ancillary data packets over IP networks using Real-time Transport Protocol (RTP) packets. Complementing this, the more recent SMPTE ST 2110-41 specifies the carriage of additional metadata not covered by ST 291-1, supporting advanced applications like immersive audio descriptors. ST 2110 separates ancillary data into independent streams, distinct from the video and audio essences, enabling granular handling of metadata such as timecode, closed captions, and error detection information without embedding it directly into the primary video signal.

The transport mechanism for ancillary data in ST 2110 relies on RFC 8331, which specifies the RTP payload format for SMPTE ST 291-1 ancillary data, allowing packets to originate from any location within an SDI signal while supporting unicast or multicast routing over IP networks. Synchronization across these streams is achieved through ST 2110-10, which employs the Precision Time Protocol (PTP, IEEE 1588) to ensure precise timing alignment between ancillary data, video, and audio, maintaining lip-sync and frame-accurate delivery in distributed environments. Key advantages of this IP-based approach include breakaway routing, where ancillary data streams can be routed independently of video and audio over optimized network paths, enhancing flexibility in live production setups. It also supports scalability for cloud-based workflows and reduces cabling costs by leveraging commodity Ethernet infrastructure, while gateway tools facilitate extraction of legacy SDI ancillary data and mapping into ST 2110-40 streams.
Practical implementations of ST 2110 ancillary data transport have been prominent in major live events, such as the Olympic Games of the 2020s, where broadcasters like Olympic Broadcasting Services (OBS) utilized it for high-value content distribution, integrating streams for immersive audio, video, and metadata handling. Challenges in these systems include managing packet delay variation to ensure alignment, particularly for time-sensitive ancillary data like captions that must synchronize with video frames, as well as optimizing network bandwidth so that the additional streams do not cause congestion. Recent developments have focused on integrating ST 2110 with the NMOS specifications IS-04 and IS-05 from the Advanced Media Workflow Association (AMWA), providing standardized discovery and connection management for ancillary data flows in multi-vendor IP ecosystems.
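As a sketch of how a receiver might begin interpreting an ST 2110-40 stream, the fixed part of the RFC 8331 payload header (extended sequence number, length, ANC packet count, and field flags) can be unpacked as shown below. This is a simplified illustration, not a complete parser; the per-packet fields that follow (line number, horizontal offset, DID/SDID) are omitted:

```python
import struct

def parse_rfc8331_header(payload: bytes):
    """Parse the fixed header of an RFC 8331 RTP payload carrying
    SMPTE ST 291-1 ancillary data.

    Assumed layout: 16-bit extended sequence number, 16-bit length,
    8-bit ANC packet count, then 2 field-flag bits followed by
    reserved bits, ahead of the ANC packets themselves.
    """
    ext_seq, length, anc_count, flags = struct.unpack_from("!HHBB", payload, 0)
    return {
        "extended_sequence_number": ext_seq,  # extends the 16-bit RTP seq num
        "length": length,                     # payload length in octets
        "anc_count": anc_count,               # number of ANC packets that follow
        "field": flags >> 6,                  # F bits (field/progressive marker)
    }
```

A real receiver would combine the extended sequence number with the RTP header's sequence number to detect loss across long bursts, then walk `anc_count` packets out of the remaining payload.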

Technical Details

Embedding Locations

Ancillary data is embedded in specific non-active portions of video signals to maintain picture integrity and ensure compatibility across systems. Horizontal ancillary data (HANC) occupies the horizontal blanking region of each video line, positioned between the End of Active Video (EAV) and Start of Active Video (SAV) timing reference signals. This location makes HANC suitable for high-bandwidth data that requires frequent, line-by-line updates. Vertical ancillary data (VANC) is inserted during the vertical blanking interval, typically in lines such as 9 through 20 for high-definition formats, strategically avoiding the active picture so as not to overlap with captions or other on-screen elements. VANC placement is favored for metadata and lower-bandwidth information, as it minimizes the risk of visible artifacts in the active image region.

System-specific implementations vary: in analog systems, data is carried in the vertical blanking interval (VBI) across roughly lines 1 to 22; in serial digital interface (SDI) systems, HANC may be inserted in the horizontal blanking of every line, including active picture lines, though the blanking regions remain the only permitted locations; and in IP-based systems under SMPTE ST 2110, ancillary data is transported via dedicated Real-time Transport Protocol (RTP) streams, decoupling it from traditional physical blanking structures. SMPTE ST 291-1 establishes key rules for embedding, defining ancillary data spaces including horizontal (HANC), vertical (VANC), and active line (ALANC) spaces, though ALANC placement within active video pixels is typically avoided to preserve signal compatibility and prevent image corruption. For robustness, packets incorporate parity-protected words, with positioning guidelines that account for even and odd fields in interlaced signals to support field-accurate recovery. Certain audio-related elements follow fixed conventions; in HD-SDI, for example, audio metadata is commonly embedded on VANC line 10 per standards like SMPTE ST 2020-3.
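The placement rules above can be summarized as a small helper. The line numbers used here are illustrative assumptions for a 1080-line format, not normative values; real inserters consult the format-specific blanking tables:

```python
def anc_region(line, in_h_blanking, switch_line=7, vanc_end=20):
    """Classify where an ANC packet would sit, following the placement
    rules above (illustrative 1080-line values; the exact VANC range
    and switching lines are format-specific assumptions)."""
    if in_h_blanking:
        return "HANC"                         # between EAV and SAV, any line
    if line <= vanc_end:
        if line <= switch_line:
            return "VANC (avoid: switching lines)"
        return "VANC"
    return "ALANC (avoid: active picture)"
```

An inserter built around a rule like this keeps packets out of the switching lines (where downstream routers may cut the signal mid-field) and out of the active picture, matching the guidance above.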
Within these locations, packets are internally organized according to the SMPTE ST 291-1 formatting rules.

Packet Structure

Ancillary data packets conform to the formatting defined in SMPTE ST 291-1:2011, which specifies their composition as a sequence of 10-bit words for integration into digital video interfaces such as SDI. The packet begins with a preamble called the Ancillary Data Flag (ADF), comprising three specific 10-bit words: 0x000, 0x3FF, 0x3FF. This fixed sequence distinguishes the packet start from active video or blanking data in the stream. Immediately following the ADF is the Data Identifier (DID), a 10-bit word in which bits b7 through b0 hold an 8-bit identifier value, bit b8 provides even parity over those bits, and bit b9 is the inverse (logical NOT) of b8 for error detection. Because of this parity scheme, valid 10-bit DID encodings always fall in the range 0x100 to 0x3FF; values 0x000 through 0x0FF cannot occur, avoiding conflicts with timing reference and video values. Bit b7 of the DID determines the packet type: b7 = 1 indicates Type 1 (often used for multi-block data with fixed structures), while b7 = 0 indicates Type 2 (for variable-length payloads).

In Type 1 packets, the next word is the Data Block Number (DBN), formatted identically to the DID with an 8-bit block sequence value plus parity bits, enabling reassembly of data fragmented across multiple packets. Type 2 packets instead carry a Secondary Data Identifier (SDID) in this position, also parity-protected, to further specify the payload subtype. Both types then include a Data Count (DC) word, whose 8-bit value (0 to 255, again protected by b8 parity and its b9 inverse) indicates the number of subsequent User Data Words (UDWs). The UDWs follow: up to 255 words of 10-bit application-specific payload, transmitted without additional protection unless the application defines its own. The packet concludes with a Checksum (CS) word that allows receivers to verify integrity: bits b8 through b0 carry the nine least significant bits of the sum (modulo 512) of bits b8-b0 of the DID, DBN/SDID, DC, and all UDWs, and bit b9 is the inverse of b8. A receiver recomputes this sum and compares it against the CS word to detect corruption.
In IP-based systems, SMPTE ST 2110-40:2018 maps ancillary packets (excluding the ADF preamble, which has no meaning outside the SDI raster) into RTP payloads, adding RTP headers for network transport while preserving the DID, SDID/DBN, DC, UDWs, and CS.
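The word-level formatting above can be made concrete with a short sketch that parity-encodes the header words and computes the 9-bit checksum. The payload here is assumed to be 8-bit data that is parity-encoded like the header words; applications may instead define their own 10-bit UDW encodings:

```python
def add_parity(value8):
    """Encode an 8-bit value as a 10-bit ANC word: b8 = even parity
    over b7-b0, b9 = NOT b8 (per SMPTE ST 291-1)."""
    p = bin(value8 & 0xFF).count("1") & 1
    return (value8 & 0xFF) | (p << 8) | (((~p) & 1) << 9)

def build_type2_packet(did, sdid, udw8):
    """Assemble a Type 2 ANC packet: ADF + DID + SDID + DC + UDWs + CS."""
    adf = [0x000, 0x3FF, 0x3FF]                       # Ancillary Data Flag
    body = [add_parity(did), add_parity(sdid), add_parity(len(udw8))]
    body += [add_parity(b) for b in udw8]             # assumption: 8-bit payload
    cs = sum(w & 0x1FF for w in body) & 0x1FF         # 9-bit sum, modulo 512
    cs |= ((~(cs >> 8)) & 1) << 9                     # b9 = NOT b8
    return adf + body + [cs]
```

For example, a DID of 0x41 (two one-bits, so even parity b8 = 0 and b9 = 1) encodes as the 10-bit word 0x241, and a receiver validates the packet by recomputing the same 9-bit sum over everything between the ADF and the CS word.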

Applications and Uses

Embedded Audio

Embedded audio refers to the integration of digital audio signals within the ancillary data space of video signals, enabling synchronized transport of audio and video over a single interface. This mechanism allows audio samples, typically in 20- or 24-bit pulse-code modulation (PCM) format, to be carried within ancillary (ANC) data packets. These packets are inserted into the horizontal ancillary (HANC) space during the blanking interval of each video line, ensuring that audio remains temporally aligned with the video without requiring separate cabling. The audio data packets carry data identification (DID) values that denote the audio group and packet role, with separate audio control packets conveying channel configuration and status.

The primary standards governing embedded audio are SMPTE ST 272 for standard-definition (SD-SDI) systems and SMPTE ST 299 for high-definition (HD-SDI) and 3G-SDI systems. SMPTE ST 272, first published in 1994 and revised in 2004, supports up to 16 channels of embedded audio at a 48 kHz sample rate, organized into four groups of four channels each, with each group corresponding to two AES3 pairs. SMPTE ST 299, initially released in 1997 and updated in 2001, likewise accommodates 16 channels for HD-SDI at 1.485 Gbps, at 48 kHz with 24-bit resolution, and its extension in SMPTE ST 299-2 enables up to 32 channels for 3G-SDI at 2.97 Gbps. Both standards embed audio in the HANC space, with packets distributed across the lines of each field (avoiding the vicinity of the switching lines) to maintain continuity. The structure of an embedded audio packet follows the SMPTE ST 291 ANC format: an ancillary data flag, DID and data block number, up to 255 data words, and a checksum for error detection.
Within each audio data packet, dedicated words carry the audio samples themselves, along with ancillary words for sample count, validity bits (indicating audio presence and mute status), and channel mapping to ensure proper routing of left/right or multi-channel assignments. An audio control packet, transmitted once per field, provides global parameters like sampling frequency and channel status bits derived from AES3. This design supports multi-channel configurations, such as 5.1 surround, and preserves lip-sync by aligning audio samples to video frames (roughly 1,602 samples per frame at 29.97 fps and 48 kHz). In broadcast production environments, embedded audio is widely utilized in video switchers and routers for seamless multi-channel handling without desynchronization.

Despite its benefits, embedded audio in ANC packets has limitations, including fixed sample rates (primarily 48 kHz, with optional 96 kHz in later revisions) and bandwidth constraints that restrict higher rates or additional channels. In IP-based workflows defined by SMPTE ST 2110 (as revised through 2023), audio essence is transported separately via ST 2110-30 using AES67 for uncompressed PCM, decoupling it from video to allow independent routing; ANC packets can still convey audio-related control metadata, such as timing and cue signals, through ST 2110-40, and the newer SMPTE ST 2110-41 further extends metadata transport over IP networks. This shift enhances flexibility in modern facilities while retaining compatibility with legacy SDI embedded audio in transitional systems.
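The frame-alignment arithmetic can be illustrated with a short sketch. At 29.97 fps there is no integer number of 48 kHz samples per frame (48000 / 29.97 ≈ 1601.6), so embedders follow a repeating distribution totalling 8008 samples over every five frames; the function below is illustrative of the constraint, not of any particular embedder's scheduling:

```python
from fractions import Fraction

def audio_cadence(sample_rate=48000, frame_rate=Fraction(30000, 1001), frames=5):
    """Distribute audio samples over video frames so that the running
    total never drifts from the exact rational sample count."""
    per_frame = Fraction(sample_rate) / frame_rate
    counts, emitted = [], 0
    acc = Fraction(0)
    for _ in range(frames):
        acc += per_frame
        counts.append(int(acc) - emitted)   # integer samples owed this frame
        emitted += counts[-1]
    return counts
```

At integer frame rates (e.g., 25 fps) every frame gets the same count (1,920 samples), while at 29.97 fps the counts alternate between 1,601 and 1,602, which is why embedders and de-embedders must track the sample-count words carried in the audio packets.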

Metadata and Captions

Ancillary data in video signals plays a crucial role in embedding textual metadata and captions to enhance accessibility and the viewing experience. For high-definition (HD) video, CEA-708 serves as the primary standard for closed captions, utilizing a Data Identifier (DID) of 0x61 in the ancillary data space to encapsulate caption packets that support multiple languages simultaneously, along with display styles such as roll-up, pop-on, and paint-on captions. This format allows far greater flexibility than earlier systems, enabling up to eight independent caption services per stream, each with customizable fonts, colors, and positioning to accommodate diverse viewer needs. In standard-definition (SD) contexts, CEA-608 captions are carried on line 21 of the analog signal or equivalently embedded in the digital signal as ancillary data, providing basic captioning with limited character sets and styles, primarily in English. These captions are often bridged into HD workflows by encapsulating CEA-608 data within CEA-708 packets to maintain compatibility during format transitions.

For timing synchronization, ancillary data carries timecode information, such as Longitudinal Timecode (LTC) or Vertical Interval Timecode (VITC), formatted according to SMPTE 12M in the HH:MM:SS:FF structure and embedded per SMPTE RP 188 with a DID of 0x60 in the vertical ancillary (VANC) space for HD signals. This embedding ensures precise frame-accurate referencing across production and post-production processes. Additional metadata types include the Active Format Description (AFD) per SMPTE ST 2016-1, which conveys aspect ratio and active picture information via a 4-bit code in the VANC to guide display formatting without altering the video signal. Similarly, SCTE 104 messages embed content advisories, such as parental ratings and program descriptors, as ancillary data to automate cueing for ad insertion and compliance signaling in broadcast workflows.
These elements are typically placed in VANC lines 9 through 20 of HD-SDI signals, where the available space supports hundreds of characters of caption data per field, depending on packet efficiency and service count. Regulatory frameworks, particularly in the United States, have driven the adoption of these ancillary features. The Federal Communications Commission (FCC) mandated caption decoding through the Television Decoder Circuitry Act of 1990, effective from 1993, requiring new televisions to include caption decoders, while subsequent rules under the Telecommunications Act of 1996 required broadcasters to caption a growing percentage of programming, reaching 100% of new programming by 2006. This support extends to modern IP-based transport via SMPTE ST 2110 (as revised through 2023), which maps ancillary data, including captions and related metadata, into separate essence streams over IP networks, preserving accessibility while enabling flexible routing and processing; the newer SMPTE ST 2110-41 standard further supports advanced IP transport of such metadata.
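The HH:MM:SS:FF structure carried in timecode packets lends itself to a simple conversion from a running frame count. A non-drop-frame sketch (drop-frame counting at 29.97 fps, which skips two frame numbers each minute except every tenth, is deliberately omitted):

```python
def to_timecode(frame_count, fps=25):
    """Convert a running frame count to the HH:MM:SS:FF form used by
    SMPTE ST 12 timecode (non-drop-frame)."""
    ff = frame_count % fps
    ss = (frame_count // fps) % 60
    mm = (frame_count // (fps * 60)) % 60
    hh = (frame_count // (fps * 3600)) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

For example, frame 90,000 at 25 fps is exactly one hour of material, which is the kind of frame-accurate bookkeeping the embedded timecode packets make possible across a production chain.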

Identification and Error Handling

Ancillary data packets play a crucial role in identifying video formats and managing errors within video signals, ensuring reliable transmission and compatibility across broadcast workflows. The Video Payload Identifier (VPID), defined in SMPTE ST 352, provides a standardized method to encode key characteristics of the video payload, such as frame rate, scanning method, and color space. This four-byte identifier is embedded as an ancillary data packet with a Data Identifier (DID) of 0x41 and Secondary Data Identifier (SDID) of 0x01, followed by user data words (UDWs) that specify the format details. For instance, a 1080p signal at 23.98 frames per second is represented by a specific byte sequence in UDWs 0-3, where byte 1 identifies the payload type and interface, byte 2 conveys the picture rate and scanning method, byte 3 the sampling structure and colorimetry, and byte 4 transport parameters such as bit depth.

VPID packets are typically inserted in the vertical ancillary (VANC) space to facilitate quick detection by downstream equipment. In high-definition (HD) formats such as 1080i, insertion occurs early in the vertical blanking interval, with repetition rates aligned to the video frame structure so the identifier remains consistently available across fields. This placement adheres to SMPTE ST 291-1 for ancillary data formatting and ensures the identifier is accessible without interfering with active video content. By conveying precise format information, VPID enables automatic configuration in production chains, reducing manual setup and potential mismatches in multi-format environments. Error Detection and Handling (EDH), specified in SMPTE RP 165, complements identification by monitoring signal integrity through cyclic redundancy check (CRC) values computed over the active picture and full field regions.
The EDH packet, structured as a Type 1 ancillary data packet, carries the CRC values for each field, along with status flags reporting detected anomalies such as bit errors, format inconsistencies, or ancillary data failures. These flags (error detected here, EDH; error detected already, EDA; internal error detected here, IDH; internal error detected already, IDA; and unknown error status, UES) allow receivers to assess and respond to transmission issues, such as bit errors introduced by cabling or interference. In standard-definition (SD-SDI) systems, EDH packets are inserted into every field, providing frequent integrity checks at intervals corresponding to the field duration (approximately 20 ms for 50 Hz or 16.7 ms for 59.94 Hz systems).

Beyond the core VPID and EDH mechanisms, other ancillary structures support identification and error management, particularly for advanced applications. The Key-Length-Value (KLV) format, outlined in SMPTE ST 336, enables encoding of dynamic metadata within ancillary packets, using a 16-byte universal label as the key to identify data types, including those related to format verification or error status. In IP-based systems under SMPTE ST 2110-40, ancillary data is transported in separate RTP streams, so RTP-level sequence numbering and error detection operate alongside the embedded ANC checksums and flags to diagnose issues such as packet loss or timing discrepancies; the newer SMPTE ST 2110-41 standard extends this for more robust IP handling. This integration preserves SDI-era diagnostics while adapting to network environments.

The evolution of these protocols reflects advancements in video technology. VPID support was extended in revisions of SMPTE ST 352 during the 2010s to accommodate ultra-high-definition (UHD) formats, including 4K resolutions and higher frame rates, with updated byte assignments for new sampling structures and wider color gamuts.
Similarly, EDH principles have been incorporated into higher-rate SDI standards, while ST 2110-40 enhances error handling for IP workflows by enabling breakaway routing of ancillary data and leveraging RTP's robust diagnostics. These developments ensure ongoing compatibility and reliability in increasingly complex production and distribution systems.
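Identification by DID/SDID can be sketched as a scan over received 10-bit VANC words. The example below looks for the ST 352 VPID packet (DID 0x41, SDID 0x01) and strips the parity bits from its user data; it assumes a parity-encoded payload and, as a sketch, performs no checksum verification:

```python
def find_anc_packet(words, did=0x41, sdid=0x01):
    """Scan a list of 10-bit VANC words for a Type 2 ANC packet with the
    given DID/SDID (defaults: the SMPTE ST 352 payload identifier) and
    return its user data with the top two (parity) bits stripped."""
    i = 0
    while i + 6 < len(words):
        if words[i:i + 3] == [0x000, 0x3FF, 0x3FF]:           # ADF preamble
            if (words[i + 3] & 0xFF) == did and (words[i + 4] & 0xFF) == sdid:
                count = words[i + 5] & 0xFF                   # Data Count
                return [w & 0xFF for w in words[i + 6:i + 6 + count]]
            i += 3                                            # skip this ADF
        else:
            i += 1
    return None
```

A monitoring probe built this way can pull the four VPID bytes out of each frame and raise an alarm when the advertised format disagrees with what the router or multiviewer expects.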

    Dec 5, 2018 · • SMPTE RP 165: EDH for SD-SDI. • SMPTE ST 292: HD-SDI at 1.485 Gb/s ... received EDH packet. See Table 2-5 in the SMPTE. UHD-SDI Product ...