
OPC Data Access

OPC Data Access (OPC DA) is a group of client-server standards developed by the OPC Foundation that enables the exchange of real-time process data, including values, timestamps, and quality information, from industrial devices to applications, primarily using Microsoft's Component Object Model (COM) and Distributed Component Object Model (DCOM) technologies on Windows platforms. Originally released as version 1.0 in 1996 as part of the foundational OPC specifications, OPC DA evolved through several versions to address growing needs in industrial automation, with key releases including version 2.0 in 1998, version 2.05a in 2002, and version 3.0 in 2003, each incorporating enhancements for better interoperability. The specification was designed to facilitate the development of OPC servers in languages like C and C++, while allowing clients to be built in various programming languages, thereby promoting vendor interoperability in process control and manufacturing environments. By the early 2000s, OPC DA had become a cornerstone of OPC Classic, alongside related standards for alarms and events (OPC AE) and historical data access (OPC HDA), enabling widespread adoption in SCADA systems, PLCs, and other industrial hardware. At its core, OPC DA provides mechanisms for defining an address space of data items, browsing that space, reading and writing current values, and subscribing to asynchronous updates for monitored items, all while handling data quality indicators such as "good," "bad," or "uncertain" to ensure reliable information flow. The standard includes detailed interfaces, methods, parameters, data types, structures, and error codes, supported by sample code to guide implementation. This architecture allows clients to connect to multiple servers seamlessly, abstracting the underlying device protocols and fostering unified data access in heterogeneous industrial settings. Although OPC DA remains in use for legacy systems due to its proven reliability and extensive deployment, it has been largely superseded by the more secure, platform-independent OPC Unified Architecture (OPC UA) introduced in 2008, which incorporates and extends OPC DA functionality into a service-oriented model. The OPC Foundation declared OPC DA complete and no longer under active maintenance, directing new developments toward OPC UA to meet modern requirements for cross-platform compatibility, enhanced security, and complex information modeling.

Overview and History

Introduction

OPC Data Access (OPC DA), originally known as OLE for Process Control (OPC) Data Access, is the foundational specification developed by the OPC Foundation in 1996 for enabling real-time data exchange in industrial automation systems. It serves as a client-server standard that allows software clients, such as human-machine interfaces (HMIs) or supervisory control and data acquisition (SCADA) systems, to access current process data from devices like programmable logic controllers (PLCs) and sensors. This specification defines a structured approach to data communication using Component Object Model/Distributed Component Object Model (COM/DCOM) on Microsoft Windows platforms, ensuring compatibility across diverse hardware and software vendors. The core purpose of OPC DA is to provide standardized, interoperable access to real-time process data, focusing exclusively on current values rather than historical archiving or event notifications, which are addressed by separate OPC specifications. It facilitates operations such as reading and writing data items, along with associated metadata, to support monitoring and control in manufacturing and process industries. By establishing a common set of interfaces, OPC DA promotes seamless integration between multi-vendor systems, thereby minimizing proprietary dependencies. Key benefits of OPC DA include reduced vendor lock-in, as it allows clients from one provider to connect to servers from another without custom middleware, and simplified system integration that accelerates deployment in complex automation environments. Each data point delivered via OPC DA includes not only the value but also quality indicators (e.g., good, bad, or uncertain) and timestamps, enabling reliable decision-making based on the data's validity and recency. As part of the broader OPC Classic suite, OPC DA laid the groundwork for subsequent evolutions, including its distinction from the platform-independent OPC Unified Architecture (OPC UA).

Development and Versions

The OPC Foundation was officially incorporated on April 22, 1996, by a group of leading suppliers, including but not limited to Fisher-Rosemount, Rockwell Software, Opto 22, Intellution, and Intuitive Technology, to create and maintain open, multi-vendor standards for interoperable data exchange in industrial automation. The foundation's inaugural effort focused on real-time data access, culminating in the release of the initial OPC Data Access (DA) specification version 1.0 on August 29, 1996, which provided a basic framework for synchronous client-server communication using Component Object Model (COM) technology. This version emphasized simple read and write operations for process data, addressing the limitations of proprietary protocols and Dynamic Data Exchange (DDE) in enabling vendor-neutral connectivity. Shortly after, version 1.0a was issued in 1997 as a minor update incorporating corrections and clarifications to the original specification, without introducing major functional changes. The specification evolved significantly with version 2.0, released in late 1998, which separated the custom C++ interfaces from the automation wrappers to better support languages like Visual Basic, while introducing asynchronous operations, group-based subscriptions, and item properties for more efficient real-time data handling. These additions shifted the paradigm from the predominantly synchronous access of 1.0, improving performance by allowing clients to receive notifications on data changes rather than polling, and enhancing error handling through structured callbacks. An intermediate update, version 2.05, followed on December 17, 2001, with enhancements to data type conversions using VARIANT structures, refined percent deadband filtering for subscriptions (0.0-100.0% based on engineering units to reduce unnecessary updates), and improved asynchronous interfaces like IOPCAsyncIO2 for better transaction management and resource efficiency. Version 3.0, released on March 4, 2003, built on these foundations by adding clarifications to interface behaviors, such as group state management and quality status transitions (e.g., WAITING_FOR_INITIAL_DATA), and introducing new optional interfaces like IOPCItemDeadbandMgt for per-item filtering and IOPCSyncIO2/IOPCAsyncIO3 for advanced synchronous/asynchronous access with maximum age parameters. It also aligned more closely with emerging standards by incorporating support for OPC XML-DA through new browse and item I/O methods, facilitating broader interoperability without altering the core COM/DCOM model. These evolutions addressed growing demands for scalability in larger systems, more robust error reporting (e.g., new codes like OPC_E_INVALID_PID), and wider language support via automation wrappers. Version 3.0 marked the final major release of OPC DA, which the OPC Foundation has declared complete and no longer under active maintenance, with focus shifting to successor technologies like OPC UA to meet modern requirements.

Architecture

Client-Server Model

The OPC Data Access (OPC DA) specification defines a client-server architecture that enables standardized communication for real-time data exchange in industrial automation environments. Clients, such as supervisory control and data acquisition (SCADA) systems or human-machine interfaces (HMIs), act as consumers of process data by establishing connections to servers, requesting reads or writes to specific data points, and optionally subscribing to receive updates on changes. Servers function as providers, aggregating raw data from underlying sources like programmable logic controllers (PLCs), sensors, or other field devices, and exposing this information through a cohesive interface that abstracts the complexities of diverse hardware protocols. This division of responsibilities promotes interoperability, allowing clients from different vendors to access data uniformly without custom integrations. Data flow in the OPC DA model follows a request-response pattern initiated by the client. Clients first discover available servers via system-level registration and then connect to initiate sessions, where they define logical groupings of items for efficient management. From there, clients issue requests for immediate data retrieval (synchronous operations) or queued transactions (asynchronous operations), including direct writes to devices when supported. Servers handle these by sourcing values from an internal cache for speed or from devices for freshness, evaluating subscriptions against configured criteria, and delivering notifications back to clients through established channels. This bidirectional flow optimizes bandwidth by limiting updates to meaningful changes, supporting high-performance applications in time-sensitive control systems. The server's address space forms the core repository in this model, representing all accessible data items in a structured manner that clients can query and navigate. This space may be organized flatly, listing items as a simple array, or hierarchically, mimicking a tree-like structure with branches for categories like equipment or processes and leaves for individual variables, facilitating intuitive browsing and selection by item identifiers. Clients interact with the address space to resolve paths to data, enabling scalable access in environments with thousands of points without overwhelming the interface. To accommodate distributed systems, OPC DA includes multi-server support, where a server can internally function as a client to remote servers, proxying and aggregating their data into its own address space for seamless presentation to end clients. This aggregation reduces the need for clients to manage multiple direct connections, enhancing system efficiency and scalability in setups spanning networks or facilities.
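To make the discovery-then-connect flow concrete, the following minimal C++ sketch enumerates locally registered OPC DA servers through COM's standard component-categories manager. It assumes the CATID_OPCDAServer20 category constant from the OPC Foundation's opcda.h header; the printed ProgIDs are illustrative, not from any specific product.

```cpp
// Sketch: discover locally registered OPC DA 2.0 servers via COM component
// categories. Assumes opcda.h (generated from opcda.idl) supplies
// CATID_OPCDAServer20; COM must already be initialized on this thread.
#include <windows.h>
#include <comcat.h>
#include <cstdio>
#include <opcda.h>

void ListLocalDaServers() {
    ICatInformation* pCat = nullptr;
    if (FAILED(CoCreateInstance(CLSID_StdComponentCategoriesMgr, nullptr,
                                CLSCTX_INPROC_SERVER, IID_ICatInformation,
                                reinterpret_cast<void**>(&pCat))))
        return;

    CATID catid = CATID_OPCDAServer20;   // category implemented by DA 2.0 servers
    IEnumGUID* pEnum = nullptr;
    if (SUCCEEDED(pCat->EnumClassesOfCategories(1, &catid, 0, nullptr, &pEnum))) {
        GUID clsid;
        ULONG fetched = 0;
        while (pEnum->Next(1, &clsid, &fetched) == S_OK) {
            LPOLESTR progId = nullptr;
            if (SUCCEEDED(ProgIDFromCLSID(clsid, &progId))) {
                wprintf(L"%s\n", progId);   // e.g. "Vendor.OPCServer.1"
                CoTaskMemFree(progId);
            }
        }
        pEnum->Release();
    }
    pCat->Release();
}
```

Remote hosts would instead be queried through the OPC Foundation's OPCEnum service, which exposes the same category information over DCOM.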

COM/DCOM Foundation

OPC Data Access (OPC DA) is fundamentally built upon Microsoft's Component Object Model (COM), which provides a framework for defining object-oriented interfaces that enable standardized communication between software components. COM allows OPC DA to expose server functionality through well-defined interfaces described in Interface Definition Language (IDL) files, such as opcda.idl, which are compiled using the Microsoft Interface Definition Language (MIDL) compiler to generate necessary headers like OPCDA.H for implementation in languages like C/C++. This foundation extends to Distributed COM (DCOM) for remote access, facilitating networked interactions between OPC clients and servers across machines while maintaining the same interface semantics as local COM calls. Implementation of OPC DA requires a Microsoft Windows operating system—Windows NT 4.0, Windows 95 with DCOM support, or later—to support the underlying COM/DCOM runtime environment, including proper marshaling of parameters and registry-based object activation. Developers use the MIDL compiler along with the Windows SDK to process IDL files and produce compatible binaries and headers. One key advantage of this COM/DCOM basis is language independence: custom interfaces support low-level access in C/C++, while Automation wrappers (via OPCAuto.dll) enable easier integration with higher-level languages like Visual Basic or scripting environments, promoting interoperability across diverse development tools. Additionally, DCOM inherently supports both local and remote access over networks, allowing OPC DA servers to serve clients on the same machine or across distributed systems without altering the core interface design. Despite these strengths, the reliance on COM/DCOM introduces notable limitations, as OPC DA is inherently Windows-only, restricting deployment to Microsoft ecosystems and excluding cross-platform use. Security is managed through DCOM settings, such as authentication levels and launch/access permissions defined via tools like dcomcnfg.exe, which can be complex to administer in distributed environments. Remote access via DCOM often encounters challenges, requiring the opening of dynamic ports (typically in the range of 1024-65535) or specific RPC endpoints, which complicates secure network traversal and increases vulnerability exposure if not properly hardened. Recent security updates to DCOM as of 2025 have further impacted deployments on modern Windows versions like 10, 11, and Server 2022+, often necessitating additional mitigations such as tunneling or hardened authentication settings. The COM/DCOM foundation of OPC DA persisted through its evolution, but version 3.0 (released in 2003) introduced OPC XML-DA as a companion specification to address some of these constraints by providing a web-based alternative using SOAP over HTTP for data access, enabling firewall-friendly communication without DCOM dependencies. However, the core OPC DA interfaces and client-server model remain rooted in COM for legacy and primary implementations.
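Building on the registry-based activation described above, this hedged C++ sketch shows a client creating a server object and obtaining its IOPCServer interface; the ProgID "MyVendor.OPCServer.1" is a placeholder, and error handling is reduced to the essentials.

```cpp
// Sketch: COM activation of an OPC DA server. Assumes headers generated
// from opcda.idl (opcda.h) and a registered server with the hypothetical
// ProgID "MyVendor.OPCServer.1".
#include <windows.h>
#include <opcda.h>

int main() {
    // Initialize COM on this thread before any OPC calls.
    if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED))) return 1;

    // Resolve the server's CLSID from its registered ProgID.
    CLSID clsid;
    HRESULT hr = CLSIDFromProgID(L"MyVendor.OPCServer.1", &clsid);

    // Create the server object and request its IOPCServer interface;
    // CLSCTX_ALL also allows DCOM activation when the CLSID maps to a
    // remote AppID.
    IOPCServer* pServer = nullptr;
    if (SUCCEEDED(hr))
        hr = CoCreateInstance(clsid, nullptr, CLSCTX_ALL, IID_IOPCServer,
                              reinterpret_cast<void**>(&pServer));

    if (SUCCEEDED(hr)) {
        // ... AddGroup, AddItems, reads/writes would follow here ...
        pServer->Release();
    }
    CoUninitialize();
    return SUCCEEDED(hr) ? 0 : 1;
}
```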

Data Model

Groups and Items

In OPC Data Access (OPC DA), groups serve as logical containers for organizing and managing collections of items, enabling clients to structure data access efficiently within the server's address space. Groups are created through the IOPCServer::AddGroup method and managed via the OPCGroup interface, allowing clients to group related data points for operations such as monitoring or retrieval. Groups can be configured as either public or private, determining their scope and mutability across clients. Public groups are shared among multiple clients, identified by unique names, and become immutable once established—preventing modifications to their items after creation—via methods like IOPCServerPublicGroups::GetPublicGroupByName. In contrast, private groups are exclusive to a single client, permitting changes such as name updates, and are typically created directly through AddGroup. Additionally, groups support active and inactive states: an active group enables data updates and callback notifications, managed by SetActiveState, while an inactive group halts these processes and returns an out-of-service status without affecting underlying device reads. Items represent individual connections to specific data sources in the server's address space, distinct from the data values themselves, and are defined by the OPCITEMDEF structure. Each item is uniquely identified by an ITEMID string, which follows server-specific syntax (e.g., "FIC101.CV" for a flow indicator control value), serving as the primary reference for accessing the point. An optional AccessPath attribute specifies routing details, such as a communication port (e.g., "COM1"), though servers may disregard it if unnecessary. Items may also include a Blob field—a large object for vendor-specific data—whose size is indicated by dwBlobSize and content by pBlob, potentially updated during item addition or validation to optimize access. Item attributes encompass access rights, active states, and properties that govern their usability. Access rights are bit-masked flags returned in dwAccessRights: OPC_READABLE (value 1) indicates read permission, while OPC_WRITEABLE (value 2) allows writing, determined by hardware capabilities rather than user security. Items maintain an active or inactive state, similar to groups, where only active items participate in cache updates and callbacks; inactive items yield out-of-service quality during reads. Properties are categorized into fixed, recommended, and vendor-specific types, accessible via IOPCItemProperties:
| Property Type | Description | ID Range Examples | Key Attributes/Examples |
| --- | --- | --- | --- |
| Fixed | Standard OPC-defined properties for core item data | 1–6 (e.g., OPC_PROP_VALUE = 2) | Item value, quality, timestamp |
| Recommended | Commonly used attributes for enhanced description | 100–499 (e.g., OPC_PROP_UNIT = 100) | Engineering units, description, data type |
| Vendor-Specific | Custom extensions by server vendors | ≥5000 | Server-defined, such as custom metadata |
These properties include additional details like server and client handles for identification, as well as engineering unit information. Management of items occurs primarily through their containing groups, with validation ensuring proper integration. Clients add items to a group using IOPCItemMgt::AddItems, providing an array of OPCITEMDEF structures; this operation is prohibited for public groups and results in OPCITEMRESULT arrays detailing outcomes, including server handles for active items. Removal is handled via RemoveItems with server handles, again restricted for public groups. Prior to addition, ValidateItems checks item existence, permissions, and validity without committing changes, returning similar result structures to confirm accessibility and rights. Address space browsing facilitates discovery of available items using enumerators defined in IOPCBrowseServerAddressSpace. Clients first query via QueryOrganization to determine if the space is flat (single-level item list) or hierarchical (branched structure with areas and leaves). Hierarchical navigation employs ChangeBrowsePosition for movement, while BrowseOPCItemIDs enumerates item IDs and BrowseAccessPaths lists paths, filterable by access rights. For detailed attribute enumeration, IEnumOPCItemAttributes supports both browsing modes, enabling hierarchical traversal (e.g., from branches like "AREA1" to leaves like "CURRENT_VALUE") or flat listing.
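As a hedged sketch of this workflow, the fragment below adds a single item to an existing group; pItemMgt is assumed to be an IOPCItemMgt* queried from a group created with AddGroup, and the tag "FIC101.CV" reuses the example ItemID above.

```cpp
// Sketch: adding one item via IOPCItemMgt::AddItems. The OPCITEMDEF and
// OPCITEMRESULT structures are defined in opcda.h.
OPCITEMDEF def = {};
def.szAccessPath        = const_cast<LPWSTR>(L"");          // let the server route
def.szItemID            = const_cast<LPWSTR>(L"FIC101.CV"); // server-specific ItemID
def.bActive             = TRUE;      // participate in cache updates and callbacks
def.hClient             = 1;         // client-chosen handle echoed in callbacks
def.vtRequestedDataType = VT_R4;     // VT_EMPTY would request the canonical type

OPCITEMRESULT* pResults = nullptr;
HRESULT* pErrors = nullptr;
HRESULT hr = pItemMgt->AddItems(1, &def, &pResults, &pErrors);
if (SUCCEEDED(hr) && SUCCEEDED(pErrors[0])) {
    OPCHANDLE hServer = pResults[0].hServer;   // needed for later reads/writes
    if (pResults[0].pBlob) CoTaskMemFree(pResults[0].pBlob);
    // ... store hServer for subsequent operations ...
}
// Out-arrays are callee-allocated and must be freed by the caller.
CoTaskMemFree(pResults);
CoTaskMemFree(pErrors);
```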

Value, Quality, and Timestamp

In OPC Data Access, the core data provided by items consists of the Value, Quality, and Timestamp triplet, commonly referred to as VQT. This structure ensures that clients receive not only the raw data but also metadata indicating its reliability and origin, enabling informed decision-making in industrial applications. The VQT is always returned together during read operations and subscription notifications, forming a unified representation of the item's state. The Value represents the actual data associated with an OPC item, capturing the current state as if read directly from the underlying device. It is stored and transmitted as a VARIANT structure, which supports a range of scalar types including 16-bit signed integers (VT_I2), 32-bit signed integers (VT_I4), 32-bit floating-point numbers (VT_R4), 64-bit floating-point numbers (VT_R8), currency values (VT_CY), dates (VT_DATE), strings (VT_BSTR), booleans (VT_BOOL), and 8-bit unsigned integers (VT_UI1). Single-dimensional arrays of these types are also permitted via VT_ARRAY. Servers return values in the item's canonical data type by default, but clients may request conversions to compatible VARIANT types; however, overflow during conversion results in a Bad quality indicator. For instance, a server might provide a floating-point sensor reading as VT_R4, while a client requests it as VT_I4 for integer processing, with the server handling the type coercion. Special handling applies to IEEE floating-point NaN (Not a Number) values, which are paired exclusively with Bad quality to denote invalid computations. Items serve as the primary carriers of this VQT data within groups. Quality provides an 8-bit code that assesses the validity and usability of the accompanying Value, structured in the QQSSSSLL format where QQ denotes the main quality state (2 bits), SSSS the substatus (4 bits), and LL the limit (2 bits). The main states are Good (binary 11, hexadecimal 0xC0), indicating reliable data suitable for use; Uncertain (binary 01, hexadecimal 0x40), signaling questionable data that should be handled cautiously; and Bad (binary 00, hexadecimal 0x00), marking data as unusable except in cases like last known values. Substatuses offer further detail, such as Configuration Error (0x04) for setup issues, Out of Service (0x1C) for inactive items, Sensor Failure (0x10), or Local Override (0xD8, paired with Good quality for manually adjusted values). Limit bits indicate range compliance: OK (0x00), Low (0x01), High (0x02), or Constant (0x03). Quality is returned as a WORD (VT_I2) and uses masks for decoding: OPC_QUALITY_MASK (0xC0) for the main state, OPC_STATUS_MASK (0xFC) for the state plus substatus, and OPC_LIMIT_MASK (0x03) for limits. The usability of the Value is directly influenced by Quality; for Bad or Uncertain states, the Value may be ignored or treated as unreliable, preventing erroneous actions—for example, a Bad quality due to device failure would prompt a client to fall back to safe defaults rather than acting on potentially stale data. The Timestamp records the exact moment when the Value was generated or last updated at the source, using the FILETIME structure—a 64-bit integer representing 100-nanosecond intervals since January 1, 1601 (UTC). It reflects the time of data acquisition by the device or update in the server cache, not the transmission time to the client; if the device is unavailable, the server may set it based on its last interaction. Timestamps are not altered by client-side processing like deadbands and can be converted to VT_DATE for compatibility.
In VQT returns, the Timestamp ensures traceability, allowing clients to detect staleness—for instance, a Value from 10 minutes prior might trigger alerts even if the Quality is Good. Error handling in VQT operations relies on HRESULT codes, which report the success or failure of the read or subscription process independently of the data's Quality. Common HRESULTs include S_OK for success, E_FAIL for general failure, OPC_E_INVALIDHANDLE (0xC0040001) for invalid item references, and OPC_E_UNKNOWNITEMID (0xC0040007) for missing items. These codes are returned in arrays (e.g., ppErrors) alongside VQT, distinguishing operational issues—like permission denials (OPC_E_BADRIGHTS, 0xC0040006)—from inherent data problems. Thus, a successful read (S_OK) might still yield a Bad Quality Value due to source issues, ensuring clients can differentiate between operational errors and content validity.
| Quality Main State | Hex Code (QQ) | Description | Example Substatuses (SSSS) |
| --- | --- | --- | --- |
| Good | 0xC0 | Reliable value; can be used directly. | Local Override (0xD8), Non-specific (0xC0) |
| Uncertain | 0x40 | Questionable value; interpret with caution. | Last Usable (0x44), Engineering Units Exceeded (0x54) |
| Bad | 0x00 | Unusable value; ignore unless specified. | Non-Specific (0x00), Device Failure (0x0C), Out of Service (0x1C) |
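The bitmask layout above lends itself to a short hedged decoding sketch in C++, assuming the standard quality constants from opcda.h:

```cpp
// Sketch: decoding an OPC DA quality WORD with the masks from opcda.h.
#include <windows.h>
#include <opcda.h>

const wchar_t* MainQuality(WORD wQuality) {
    switch (wQuality & OPC_QUALITY_MASK) {        // QQ bits (0xC0)
        case OPC_QUALITY_GOOD:      return L"Good";       // 0xC0
        case OPC_QUALITY_UNCERTAIN: return L"Uncertain";  // 0x40
        case OPC_QUALITY_BAD:       return L"Bad";        // 0x00
        default:                    return L"Invalid";    // 0x80 is undefined
    }
}

bool IsOutOfService(WORD wQuality) {
    // OPC_STATUS_MASK (0xFC) keeps the main state plus substatus bits.
    return (wQuality & OPC_STATUS_MASK) == OPC_QUALITY_OUT_OF_SERVICE; // 0x1C
}

WORD LimitBits(WORD wQuality) {
    return wQuality & OPC_LIMIT_MASK;  // LL bits: OK(0)/Low(1)/High(2)/Constant(3)
}
```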

Interfaces and Operations

Core Server Interfaces

The core server interfaces in OPC Data Access (OPC DA) form the foundational COM-based mechanisms for managing server operations, group lifecycles, and item handling within the client-server architecture. These interfaces, defined in the OPC DA specification, enable clients to interact with the server's namespace and organize data access requests efficiently, without directly handling data reads or writes. They are implemented by OPC servers to expose essential management capabilities, ensuring interoperability across compliant applications. The IOPCServer interface serves as the primary entry point for server-level management, allowing clients to query server status, create and remove groups, and enumerate existing groups. Key methods include GetStatus, which retrieves the current server state via the OPCSERVERSTATUS structure, indicating whether the server is running, failed, or in another operational mode. AddGroup enables the creation of a new group by specifying parameters such as name, active state, update rate, and locale ID (LCID), returning a server handle for the group. Conversely, RemoveGroup deletes a group using its server handle, facilitating dynamic resource management. For group exploration, IOPCServer supports enumerators through methods like CreateGroupEnumerator, which returns an IEnumUnknown interface for iterating over existing groups. Address space browsing, which utilizes enumerations to navigate public or private address spaces, is provided by the optional IOPCBrowseServerAddressSpace interface. The IOPCGroup interface handles the lifecycle and configuration of individual groups, which serve as logical containers for related data items. It includes SetActive, which activates or deactivates the group to control whether it processes subscriptions or notifications, and GetState, which returns the group's current configuration, including its update rate, LCID, and activity status. These methods allow clients to adjust group behavior at runtime, optimizing performance by enabling or disabling groups as needed without recreating them. Item management within groups is governed by the IOPCItemMgt interface, which maps client-specified item identifiers to server handles for efficient access. The AddItems method adds one or more items to a group, taking an array of item definitions (including ITEMID strings that identify data sources like tags or variables) and returning corresponding server handles upon success, along with any error codes for invalid items. ValidateItems checks the validity of proposed item definitions without adding them, helping clients verify accessibility before committing to a group. Finally, RemoveItems removes specified items from the group using their server handles, cleaning up resources and preventing unnecessary data polling. This interface ensures that items are properly associated with groups for subsequent operations. Asynchronous communication in OPC DA relies on COM connection points, implemented through the IConnectionPointContainer interface on server objects. This interface allows clients to establish callbacks by querying for specific connection points, such as those for group events or server shutdown notifications, enabling the server to advise clients of changes without synchronous polling. It supports efficient, event-driven interactions while maintaining the COM model's standards for interface discovery and management.
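A hedged C++ sketch of these server-level operations follows; pServer is assumed to be a connected IOPCServer* (see the activation sketch earlier), and the group name, update rate, and locale are illustrative values.

```cpp
// Sketch: querying server status and creating a group through IOPCServer.
OPCSERVERSTATUS* pStatus = nullptr;
if (SUCCEEDED(pServer->GetStatus(&pStatus))) {
    if (pStatus->dwServerState != OPC_STATUS_RUNNING) {
        // ... degrade gracefully: retry, alert, or reconnect ...
    }
    // The status structure and its vendor string are callee-allocated.
    CoTaskMemFree(pStatus->szVendorInfo);
    CoTaskMemFree(pStatus);
}

DWORD dwRevisedRate = 0;
OPCHANDLE hServerGroup = 0;
IUnknown* pGroupUnk = nullptr;
HRESULT hr = pServer->AddGroup(
    L"Group1",         // group name; an empty string lets the server assign one
    TRUE,              // create the group active
    500,               // requested update rate in milliseconds
    1,                 // client handle used to tag callbacks for this group
    nullptr, nullptr,  // default time bias and percent deadband
    0x409,             // locale ID (here: English - United States)
    &hServerGroup,     // server handle used later in RemoveGroup
    &dwRevisedRate,    // server may revise the rate (OPC_S_UNSUPPORTEDRATE)
    IID_IUnknown,
    &pGroupUnk);       // query this for IOPCItemMgt, IOPCSyncIO, IOPCAsyncIO2
```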
Browsing the server's namespace is facilitated by enumerations that define the structure and traversal options. The OPCNAMESPACE enumeration specifies the organization of the address space, with values like OPCNAMESPACE_HIERARCHICAL (1) for tree-like structures and OPCNAMESPACE_FLAT (2) for non-hierarchical lists, allowing clients to adapt queries to the server's topology. Similarly, OPCBROWSETYPE controls browse operations, including OPCBROWSETYPE_BRANCH (1) for directories, OPCBROWSETYPE_LEAF (2) for data items, and OPCBROWSETYPE_FLAT (3) for unfiltered enumeration, providing flexible navigation through the available data points. These enumerations are integral to methods in IOPCBrowseServerAddressSpace for discovering and organizing the server's exposed elements.
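These enumerations can be exercised through the optional browsing interface, as in this hedged sketch; pBrowse is assumed to be an IOPCBrowseServerAddressSpace* queried from the server object.

```cpp
// Sketch: flat enumeration of ItemIDs via IOPCBrowseServerAddressSpace.
OPCNAMESPACETYPE nsType;
if (SUCCEEDED(pBrowse->QueryOrganization(&nsType))) {
    LPENUMSTRING pEnum = nullptr;
    // OPC_FLAT lists everything; hierarchical spaces would instead walk
    // OPC_BRANCH/OPC_LEAF levels using ChangeBrowsePosition.
    HRESULT hr = pBrowse->BrowseOPCItemIDs(OPC_FLAT,
                                           L"",       // no name filter
                                           VT_EMPTY,  // any canonical data type
                                           0,         // any access rights
                                           &pEnum);
    if (SUCCEEDED(hr)) {
        LPWSTR szName = nullptr;
        ULONG fetched = 0;
        while (pEnum->Next(1, &szName, &fetched) == S_OK) {
            // In hierarchical spaces, GetItemID converts a browse name
            // into a fully qualified ItemID usable with AddItems.
            CoTaskMemFree(szName);
        }
        pEnum->Release();
    }
}
```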

Synchronous and Asynchronous Access

In OPC Data Access, synchronous operations are facilitated through the IOPCSyncIO interface, which provides blocking methods for reading and writing data to OPC items. The Read method accepts parameters including the data source (cache or device), the count of items, an array of server handles, and outputs arrays for OPCITEMSTATE structures (containing value, quality, and timestamp) along with HRESULT error codes for each item. Similarly, the Write method takes the item count, server handles, an array of VARIANT values, and returns error codes, ensuring immediate execution and direct device interaction regardless of subscription activity. These operations are ideal for one-off queries or low-volume access in simple client applications, as they return results synchronously without requiring callbacks. Asynchronous access is handled by the IOPCAsyncIO2 interface, which supports non-blocking transactions for Read, Write, and Refresh operations, enabling efficient handling of multiple items or high-frequency requests. For instance, the Read method specifies the item count, server handles, a client-generated transaction ID for tracking, an output cancel ID, and error codes, with completion notified via the IOPCDataCallback interface's OnReadComplete method, delivering value, quality, and timestamp results. The Write method follows a parallel structure, using VARIANT values and triggering OnWriteComplete for confirmation, while Refresh2 allows cache or device updates with results pushed through OnDataChange callbacks for active items. Additional methods like Cancel2 enable termination of pending transactions using the cancel ID and OnCancelComplete notification, and SetEnable/GetEnable control callback delivery for data changes. This interface requires prior registration of the client's IOPCDataCallback via IConnectionPointContainer, ensuring queued processing on the server side. The primary differences between synchronous and asynchronous access lie in their execution model and suitability: IOPCSyncIO blocks the calling thread until completion, making it straightforward for immediate, low-latency needs but less efficient for concurrent or high-volume operations, whereas IOPCAsyncIO2 decouples request submission from result delivery via transaction IDs and callbacks, optimizing throughput in multi-threaded or latency-tolerant scenarios. Error handling in both relies on HRESULT codes, such as S_OK for success, S_FALSE for partial success, E_INVALIDARG for invalid parameters, and OPC-specific errors like OPC_E_BADRIGHTS for access violations, with asynchronous methods additionally supporting CONNECT_E_NOCONNECTION for callback issues. Operations are scoped to groups of items via server handles, and results incorporate value, quality, and timestamp (VQT) data as defined in the OPC DA data model. IOPCAsyncIO2 supersedes the legacy IOPCAsyncIO interface from earlier OPC DA versions (pre-2.0), adding enhanced features like explicit cancellation and callback enabling while maintaining compatibility for core asynchronous behaviors. Servers implementing IOPCAsyncIO2 must support queuing of at least one transaction per operation type to ensure reliable non-blocking access.
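A hedged synchronous read might look like the following sketch, where pSyncIO is assumed to have been queried from the group object and hServer obtained from AddItems:

```cpp
// Sketch: blocking read of one item through IOPCSyncIO::Read.
OPCITEMSTATE* pStates = nullptr;
HRESULT* pErrors = nullptr;
HRESULT hr = pSyncIO->Read(OPC_DS_CACHE,  // or OPC_DS_DEVICE to force a device read
                           1, &hServer, &pStates, &pErrors);
if (SUCCEEDED(hr) && SUCCEEDED(pErrors[0])) {
    // Each OPCITEMSTATE carries the full VQT triplet for one item.
    WORD     quality = pStates[0].wQuality;
    FILETIME stamp   = pStates[0].ftTimeStamp;
    // ... consume pStates[0].vDataValue along with quality and stamp ...
    VariantClear(&pStates[0].vDataValue);  // frees any BSTR/array payload
}
CoTaskMemFree(pStates);
CoTaskMemFree(pErrors);
```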

Subscriptions and Notifications

Group-Based Subscriptions

In OPC Data Access, group-based subscriptions enable clients to establish ongoing monitoring of data items by organizing them into logical groups, which serve as the foundation for efficient data exchange between clients and servers. Clients configure these subscriptions through the IOPCGroup interface, typically initiated via the server's IOPCServer::AddGroup method, allowing specification of key parameters such as the group name, active state, requested update rate, percent deadband, and locale identifier. This setup facilitates subscription-based notifications for changes in item values within the group, optimizing resource usage over polling mechanisms. The update rate defines the frequency at which the server scans and reports data changes to the client, with the client requesting a value in milliseconds (dwRequestedUpdateRate) during group creation or modification. However, the server may revise this rate to a supported value (dwRevisedUpdateRate) if the requested rate is unsupported, returning OPC_S_UNSUPPORTEDRATE to indicate the adjustment. The percent deadband parameter, set as a value between 0.0 and 100.0, acts as a threshold for filtering notifications on analog items, triggering updates only when the value change exceeds (Deadband/100) * (EU High - EU Low), where EU High and EU Low represent the item's engineering units range. Additionally, the locale identifier (dwLCID) specifies the language and cultural settings for returned text, such as string formatting, and can be adjusted via the IOPCCommon::SetLocaleID method. Groups can transition between active and inactive states using the bActive flag in IOPCGroupStateMgt::SetState (with IOPCItemMgt::SetActiveState serving the analogous role for individual items), where an inactive state pauses server-side caching and notification callbacks for the group's items without removing them from the subscription. In the inactive state, items are not actively maintained, and their quality may reflect OPC_QUALITY_OUT_OF_SERVICE, allowing clients to temporarily suspend updates while preserving the group configuration. Reactivating the group resumes normal operations, including callbacks for changes. OPC Data Access distinguishes between private and public groups to manage shared resources effectively. Private groups are client-specific, created directly via AddGroup and isolated to the requesting client, ensuring dedicated handling without interference from others. Public groups, in contrast, are server-managed and can be shared across multiple clients; clients may join existing public groups or request new ones through the IOPCServerPublicGroups interface, promoting efficient resource utilization in multi-client environments. Attempts to perform invalid operations on public groups result in an OPC_E_PUBLIC error. Server resource management imposes limits on the number of groups, items per group, and overall update rates to prevent overload, with errors like CONNECT_E_ADVISELIMIT or E_OUTOFMEMORY returned when limits are exceeded. Clients receive handles (phClientGroup and phServerGroup) upon group creation for referencing and management, while servers must support at least one queued transaction per group for operations like reads or refreshes. The OPCGROUPSTATE structure, accessed and modified via IOPCGroupStateMgt::GetState and ::SetState methods, encapsulates the group's configuration details, including the current update rate (pUpdateRate), active state (pActive), percent deadband (pPercentDeadband), locale ID (pLCID), and time bias (pTimeBias).
| Field | Description | Type/Value Range |
| --- | --- | --- |
| pName | Group name | LPWSTR |
| phClientGroup | Client handle for the group | OPCHANDLE |
| phServerGroup | Server handle for the group | OPCHANDLE |
| pActive | Active state indicator | BOOL (TRUE/FALSE) |
| pUpdateRate | Current server update rate (ms) | DWORD |
| pPercentDeadband | Deadband threshold (%) | FLOAT (0.0–100.0) |
| pLCID | Locale identifier | DWORD |
| pTimeBias | Time bias adjustment (min) | LONG |
The keep-alive feature, introduced in version 3.0, allows clients to receive periodic callbacks at a configurable interval even without changes, helping detect communication issues; it is set via IOPCGroupStateMgt2::SetKeepAlive and revised by the server if necessary.
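The percent-deadband filter configured at group creation can be modeled client-side with a short hedged sketch; the engineering-unit span and sample values below are illustrative assumptions, since real limits come from the item's EU properties.

```cpp
// Sketch: the percent-deadband test a server applies to analog items.
#include <cmath>

bool ExceedsDeadband(double current, double lastNotified,
                     double percentDeadband,        // 0.0 - 100.0
                     double euHigh, double euLow) { // engineering-unit span
    // Notify only when the change exceeds the configured fraction of span.
    const double threshold = (percentDeadband / 100.0) * (euHigh - euLow);
    return std::fabs(current - lastNotified) > threshold;
}

// Example: with a 0-200 span and a 1.0% deadband the threshold is 2.0, so a
// move from 100.0 to 101.5 is suppressed while a move to 102.5 is reported.
```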

Data Change Mechanisms

In OPC Data Access (DA), servers notify subscribed clients of data changes through callback mechanisms implemented via the IOPCDataCallback interface, which enables efficient delivery of updates without constant polling. This interface is provided by the client and registered with the server to receive asynchronous notifications for active items within active groups. The primary method, OnDataChange, is invoked by the server whenever the value or quality of an item changes beyond specified thresholds, delivering a bundle of value, quality, and timestamp (VQT) data for multiple items in a single transaction. This exception-based approach ensures that notifications are sent only when relevant changes occur, optimizing bandwidth and processing in industrial environments. Deadband filtering serves as a key mechanism to control the frequency of these notifications, particularly for analog items, by suppressing updates for minor fluctuations. Configured as a percentage deadband (ranging from 0.0 to 100.0) during group creation, it triggers an OnDataChange callback only if the absolute difference between the current value and the last notified value exceeds the threshold relative to the item's engineering units span: |current value - last value| > (percent deadband / 100) * (EU High - EU Low). A deadband of 0.0 results in notifications for every change, while higher values reduce traffic; this feature was clarified and standardized in version 2.04 of the specification, with further refinements in 2.05. Servers that do not support deadband filtering return an error for non-zero values, allowing clients to adapt accordingly. Quality changes, such as shifts from good to bad status, always bypass the deadband and trigger immediate notifications. To maintain connection integrity, OPC DA employs keep-alive mechanisms through configurable update rates and periodic status inquiries prior to version 3.0, with version 3.0 adding dedicated keep-alive callbacks for periodic notifications even without data changes. The group update rate, specified in milliseconds during subscription (with 0 requesting the fastest possible rate), determines how often the server evaluates items for changes and sends any resulting updates; the server returns the closest supported rate, enabling fine-tuned intervals from seconds to hours based on application needs. Clients can poll the server's availability using the IOPCServer::GetStatus method at configurable intervals to detect issues like disconnections or out-of-service states, ensuring timely recovery. Error notifications are integrated into the callback system to signal operational issues without disrupting the primary data flow. For asynchronous operations, completions or cancellations are reported via methods like OnCancelComplete on the IOPCDataCallback interface (in response to IOPCAsyncIO2::Cancel2), providing error codes for failed requests. Quality codes within VQT updates further indicate problems, using a bitmask format (QQSSSSLL) where values like OPC_QUALITY_BAD (0x00) or OPC_QUALITY_OUT_OF_SERVICE denote unreliable data or server unavailability. These codes, such as OPC_QUALITY_GOOD (0xC0) for valid data, allow clients to interpret and respond to issues like sensor failures or communication losses embedded in every notification. Connection management for these callbacks relies on COM's IConnectionPointContainer interface, through which clients establish and terminate advisory connections.
The client calls Advise on the server's IConnectionPoint to register the IOPCDataCallback sink, initiating notifications for the group; conversely, Unadvise or releasing the interface stops them, ensuring clean disconnection when subscriptions end or errors occur. This mechanism supports multiple clients and handles public group releases automatically when the last client disconnects.
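A hedged skeleton of such a callback sink is sketched below in C++; the ref-counting is minimal and the dispatch logic is left as a comment, since a production sink would also need thread-safe hand-off of the incoming arrays.

```cpp
// Sketch: minimal client-side IOPCDataCallback sink registered through the
// Advise mechanism described above. Unused parameters are left unnamed.
#include <windows.h>
#include <opcda.h>

class DataSink : public IOPCDataCallback {
    LONG m_refs = 1;
public:
    // --- IUnknown ---
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv) override {
        if (riid == IID_IUnknown || riid == IID_IOPCDataCallback) {
            *ppv = static_cast<IOPCDataCallback*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override { return InterlockedIncrement(&m_refs); }
    STDMETHODIMP_(ULONG) Release() override {
        ULONG n = InterlockedDecrement(&m_refs);
        if (n == 0) delete this;
        return n;
    }

    // --- IOPCDataCallback ---
    STDMETHODIMP OnDataChange(DWORD /*transId*/, OPCHANDLE /*hGroup*/,
                              HRESULT /*masterQuality*/, HRESULT /*masterError*/,
                              DWORD count, OPCHANDLE* phClient, VARIANT* pvValues,
                              WORD* pwQualities, FILETIME* pftTimes,
                              HRESULT* pErrors) override {
        for (DWORD i = 0; i < count; ++i) {
            if (SUCCEEDED(pErrors[i])) {
                // One VQT bundle per item, keyed by the client handle.
                // ... dispatch phClient[i], pvValues[i], pwQualities[i],
                //     pftTimes[i] to the application ...
            }
        }
        return S_OK;
    }
    STDMETHODIMP OnReadComplete(DWORD, OPCHANDLE, HRESULT, HRESULT, DWORD,
                                OPCHANDLE*, VARIANT*, WORD*, FILETIME*,
                                HRESULT*) override { return S_OK; }
    STDMETHODIMP OnWriteComplete(DWORD, OPCHANDLE, HRESULT, DWORD, OPCHANDLE*,
                                 HRESULT*) override { return S_OK; }
    STDMETHODIMP OnCancelComplete(DWORD, OPCHANDLE) override { return S_OK; }
};
```

Registration then follows the pattern described above: query the group object for IConnectionPointContainer, call FindConnectionPoint with IID_IOPCDataCallback, and Advise an instance of the sink; Unadvise reverses it when the subscription ends.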

Standards and Implementation

Compliance and Certification

The OPC Foundation oversees the certification process for OPC Data Access (DA) implementations to ensure adherence to the specification, granting "Certified OPC" status upon successful completion. This involves rigorous testing conducted at authorized OPC Foundation Test Labs, covering compliance, interoperability, robustness, and efficiency. Vendors must submit their products for evaluation, which includes using specialized tools such as the OPC Analyzer for initial self-validation before formal lab assessment. Certification is valid for three years and results in a unique serial number, official certificate, and permission to use the OPC Foundation's certified product logo. Testing encompasses interface conformance to verify that servers and clients implement required interfaces correctly, error handling to assess recovery from failures like network interruptions, performance under load through stress tests involving thousands of items over extended periods, and backward compatibility to ensure seamless interaction across versions. For instance, OPC DA 3.0 servers are required to support clients from versions 1.0 and 2.0, with lab tests confirming this using a variety of products. Self-certification is limited and not available to non-members, who must rely on the formal Test Lab process for official validation. The primary benefits of certification include assured interoperability among diverse OPC products, reducing integration risks in multi-vendor environments, and enhanced market credibility through logo usage. As of 2025, OPC DA remains a widely deployed standard, with certification efforts primarily focused on OPC UA; however, OPC Classic support continues to be tested for interoperability in hybrid systems.

Practical Implementation Considerations

Developers implementing OPC Data Access (OPC DA) clients and servers typically rely on the OPC Foundation's official resources, including the Core Components package for foundational interfaces and the .NET NuGet packages for managed code integration, which simplify COM interop through wrappers like OPCRcw.dll for handling registration and marshaling. These tools provide sample client code in C# and VB.NET, demonstrating basic connections, group management, and data reads/writes without requiring deep COM expertise. Third-party toolkits, such as Advosol's OPC .NET Client Components or Softing's OPC Classic SDK, extend these with pre-built assemblies for rapid prototyping, supporting both synchronous and asynchronous operations across Windows platforms. Deployment of OPC DA applications often encounters challenges related to Distributed COM (DCOM) configuration, particularly for remote access across firewalls and networks, where authentication levels must be set to "Packet Integrity" or higher to ensure secure communication, while granting limited permissions to dedicated accounts like "opcuser" for runtime access and "opcadmin" for configuration. Multi-threading is essential for handling asynchronous callbacks, such as those from IOPCDataCallback::OnDataChange, which may arrive concurrently; developers should use thread-safe mechanisms like critical sections to protect shared resources and manage transaction IDs to prevent race conditions during group updates via IOPCGroupStateMgt. For error recovery in network failures, clients must implement reconnection logic by monitoring server status through IOPCServer::GetStatus, which reports states like OPC_STATUS_COMM_FAULT, and re-establish groups using cached ITEMIDs after detecting OPC_E_INVALIDHANDLE or similar HRESULT errors. Best practices emphasize validating ITEMIDs before operations using IOPCItemMgt::ValidateItems to catch OPC_E_INVALIDITEMID or OPC_E_UNKNOWNITEMID early, ensuring robust item discovery and property access via IOPCBrowse or IOPCItemProperties. Ongoing health monitoring via periodic GetStatus calls helps detect issues like OPC_STATUS_FAILED, allowing proactive group removal and reconnection for resilience in high-load scenarios. Asynchronous interfaces like IOPCAsyncIO2 are recommended for better performance in large-scale deployments, queuing transactions per group to avoid blocking and handling partial failures with error arrays (ppErrors), whereas synchronous IOPCSyncIO suits simpler, low-volume access. Safe handling of VARIANT types in data values requires explicit memory management with VariantClear after reads/writes, supporting conversions via VariantChangeTypeEx to canonical types (e.g., VT_R8 for floats) while checking for OPC_E_BADTYPE to prevent crashes from mismatched data. Testing toolkits such as Matrikon's OPC Tunneler or Advosol's OPC Expert Simulator facilitate validation of client-server interactions, simulating various error conditions and load scenarios without production hardware. Sample code from the OPC Foundation, including the .NET API samples, offers starting points for basic server implementations using IOPCServer and client connections with group advising. OPC DA's reliance on COM/DCOM inherently limits it to Windows operating systems, restricting cross-platform deployment and introducing vulnerabilities if not hardened properly. Scalability is constrained by server-specific caps, with best practices recommending no more than 200-800 items per group to maintain performance, as exceeding these can lead to queue overflows (OPC_S_DATAQUEUEOVERFLOW) or connection limits around 32 clients in some implementations.
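The VARIANT hygiene described above can be captured in a small hedged helper; the function name and the choice of VT_R8 are illustrative, not part of any official API.

```cpp
// Sketch: safe VARIANT lifecycle around an OPC DA read result. Coerces the
// incoming value to double and releases all owned resources.
#include <windows.h>
#include <oleauto.h>

HRESULT ToDouble(VARIANT& vValue, double* pOut) {
    VARIANT vDst;
    VariantInit(&vDst);                       // always initialize before use
    // Locale-aware coercion; failures here correspond to the kind of type
    // mismatch OPC DA reports as OPC_E_BADTYPE.
    HRESULT hr = VariantChangeTypeEx(&vDst, &vValue,
                                     LOCALE_SYSTEM_DEFAULT, 0, VT_R8);
    if (SUCCEEDED(hr)) *pOut = vDst.dblVal;
    VariantClear(&vDst);                      // frees BSTRs/arrays if any
    VariantClear(&vValue);                    // caller owns the source too
    return hr;
}
```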

Applications and Evolution

Industrial Use Cases

OPC Data Access (OPC DA) primarily enables real-time monitoring in manufacturing environments by facilitating the reading of tags from programmable logic controllers (PLCs) for display on human-machine interfaces (HMIs) and integration into supervisory control and data acquisition (SCADA) systems. This allows operators to track process variables such as temperatures, pressures, and machine speeds continuously, supporting dynamic decision-making on the factory floor. In terms of integration, OPC DA connects enterprise resource planning (ERP) and manufacturing execution systems (MES) to shop-floor devices, enabling seamless data collection from equipment supplied by multiple vendors. This simplifies data flow from operational controls to higher-level business applications, reducing the need for bespoke interfaces. Representative examples include its application in process control for the oil and gas sector, where OPC DA accesses sensor values for monitoring pipeline integrity and operations. In automotive assembly lines, it provides real-time status updates from machines to optimize production sequencing and detect faults. For pharmaceuticals, OPC DA supports batch tracking by logging production parameters to ensure compliance and traceability during manufacturing runs. In practice, OPC DA's standardized access minimizes the development of custom drivers for diverse hardware, streamlining deployment across vendors. Its value-quality-timestamp (VQT) model further ensures reliable monitoring by providing metadata on data accuracy and timing. As of 2025, OPC DA continues to play a legacy role in brownfield industrial systems, serving as a bridge to modern Industrial Internet of Things (IIoT) architectures through compatibility wrappers and gateways.

Relation to OPC UA

OPC Data Access (DA), built on Microsoft COM/DCOM technology, is inherently limited to Windows platforms and focuses primarily on real-time data exchange for current values, lacking built-in security mechanisms beyond DCOM's basic authentication. In contrast, OPC Unified Architecture (UA) is platform-independent, supporting diverse operating systems like Linux and embedded devices, while incorporating comprehensive security features such as public key infrastructure (PKI) with certificates, encryption, and user authentication. Additionally, OPC UA unifies functionalities from multiple classic OPC specifications—including DA for data access, HDA for historical data, and AE for alarms and events—into a single, extensible information model based on nodes, variables, and object-oriented structures, enabling broader interoperability and complex information modeling. OPC UA Part 8, titled DataAccess, recreates the core DA model within the UA framework by defining an address space of nodes that represent variables, properties, and relationships, allowing UA clients to access real-time data in a manner analogous to DA items and groups. This mapping translates DA concepts—such as branches to UA FolderType objects, items to VariableType nodes, and properties to HasProperty references—while extending them with UA's richer data types, timestamps, and quality status codes derived from DA's 16-bit quality flags. To support legacy DA systems, OPC UA includes wrappers like the COM UA Wrapper, which encapsulates a DA server to expose its data as a UA server, and the COM UA Proxy, which allows DA clients to interact with UA servers by mapping NodeIds to DA ItemIds. Migration from OPC DA to OPC UA typically involves deploying OPC UA servers that incorporate DA tunneling for secure, network-transparent access to existing DA endpoints, thereby minimizing disruptions in legacy environments. Tools such as KEPServerEX provide bridging capabilities by acting as both an OPC UA client to connect to UA sources and a DA server for legacy compatibility, or vice versa, facilitating data aggregation and cross-protocol communication with benefits like enhanced cybersecurity and cross-platform deployment. These paths enable gradual adoption, where UA gateways wrap DA servers to serve modern clients without immediate full replacement. Although OPC DA has been considered deprecated since the early 2010s due to the obsolescence of COM/DCOM and the rise of OPC UA as the industry standard, it continues to operate in numerous legacy industrial installations where stability is prioritized over new features. OPC UA is now recommended for all new projects, offering superior scalability and future-proofing, yet DA's persistence underscores the need for hybrid solutions during transition periods. For interoperability, OPC UA clients can access DA servers indirectly through proxies or tunnels, as there is no direct compatibility without adapters, which map services like read/write operations and subscriptions while handling differences in error codes and data variants. This approach maintains data flow in mixed environments but highlights UA's design for seamless evolution beyond DA's constraints.
