OPC Classic
In 1995, various companies decided to create a working group to define an interoperability standard. These companies were the following:
- Fisher Rosemount
- Intellution
- Intuitive Technology
- Opto22
- Rockwell
- Siemens AG
Microsoft was also invited to provide the necessary technical support. The goal of the working group was to define a standard for accessing information in the Windows environment, based on the technology available at the time.
The technology that was developed was called OLE for Process Control (OPC). In August 1996, the first version of OPC was defined. The following diagram shows the different layers of OPC Classic with the underlying critical communication protocols—COM, DCOM, and Remote Procedure Call (RPC):
COM is a software architecture developed by Microsoft for building component-based applications. It allows programmers to encapsulate reusable pieces of code in such a way that other applications can use them without having to worry about the details of their implementation. A COM object can be replaced by a newer version without the applications that use it having to be rewritten.

DCOM is the network-aware extension of COM. It tries to hide from software developers the differences between COM objects running on the same computer and COM objects running remotely on a different computer. To achieve this, all parameters must be passed by value: when invoking a function exposed by a remote COM object, the caller passes the related parameters by value, and the COM object, in turn, replies by passing the results back by value. The process of converting the parameters into data that can be transferred over the wire is called marshalling. Once marshalling is complete, the data stream is serialized, transmitted, and finally restored to its original form on the other end of the connection.
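The marshalling step described above can be sketched in a few lines of Python. This is only a conceptual illustration: `pickle` stands in for DCOM's actual wire format, and the function names and the `ReadItem` call are invented for the example.

```python
import pickle

def marshal_call(method_name, *args):
    # Convert the method name and its by-value arguments into a byte stream
    # suitable for transmission over the wire.
    return pickle.dumps({"method": method_name, "args": args})

def unmarshal_call(payload):
    # Restore the original call description on the receiving end.
    call = pickle.loads(payload)
    return call["method"], call["args"]

# The caller passes parameters by value; only a serialized copy crosses the wire.
wire_data = marshal_call("ReadItem", "PLC1.Temperature")
method, args = unmarshal_call(wire_data)
print(method, args)  # → ReadItem ('PLC1.Temperature',)
```

The key point the sketch makes is that the callee never sees the caller's memory: it reconstructs a copy of the arguments from the byte stream, which is exactly why everything must be passed by value.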
DCOM uses the RPC mechanism to send and receive information between COM components transparently across the network. Microsoft's RPC mechanism allows system developers to request the execution of remote programs without having to develop specific communication procedures for the server: the client program sends a message to the server with the proper arguments, and the server returns a message containing the results produced by the executed program.
OPC Classic had several limitations:
- The standard was coupled to a specific technology. OPC Classic, in fact, was built around and on top of Microsoft technology.
- DCOM traffic relies on several dynamically assigned network ports being open, so it is often blocked by firewalls.
- DCOM and RPC are heavyweight and complicated mechanisms. DCOM-based applications often suffered from poor performance and were difficult to maintain.
The following diagram shows the implementation of the same data-retrieval scenario using two different architectures and two different kinds of software. We have two PLCs connecting to a computer that is running the PLC vendor's OPC Classic server. In the scenario on the left, named DCOM INTERFACE, the PLCs and the OPC server communicate using the native PLC protocol, while the OPC client running on the SCADA computer accesses the data in the OPC server through DCOM, since the client and server run on different computers.
In the scenario on the right, named COM INTERFACE, the PLCs and the OPC server again communicate using the native PLC protocol, but the OPC client is embedded in a Historian agent running on the same machine as the OPC server, and therefore accesses the data in the OPC server through COM. Because the client and server run on the same Windows box, this avoids the performance and security issues that affect DCOM communication. The data gathered by the Historian agent through the OPC client is then sent to the Historian server through TCP/IP:
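The COM INTERFACE scenario can be sketched conceptually in Python. Everything here is a stand-in: `read_opc_item` plays the role of the in-process COM call (an ordinary function call, with no network marshalling), a small socket server plays the role of the remote Historian server, and JSON over TCP stands in for the Historian's actual protocol. The item names and values are invented.

```python
import json
import socket
import threading

# Stand-in for the in-process COM call: the embedded OPC client calls the
# OPC server directly in the same process, so no network hop is involved.
def read_opc_item(item_id):
    plc_values = {"PLC1.Temperature": 72.5}
    return plc_values[item_id]

# Minimal stand-in for the remote Historian server: accept one connection
# and store whatever JSON sample arrives over TCP/IP.
received = []
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

def historian_server():
    conn, _ = listener.accept()
    with conn:
        received.append(json.loads(conn.recv(4096).decode()))

t = threading.Thread(target=historian_server)
t.start()

# Historian agent: read locally through the embedded OPC client (a plain
# function call), then ship the sample to the Historian server over TCP/IP.
sample = {"item": "PLC1.Temperature", "value": read_opc_item("PLC1.Temperature")}
with socket.create_connection(listener.getsockname()) as sock:
    sock.sendall(json.dumps(sample).encode())

t.join(timeout=5)
listener.close()
print(received)
```

The design point mirrors the diagram: the only network traffic is the agent-to-Historian TCP/IP transfer, while the OPC read itself stays inside one process, which is what sidesteps DCOM entirely.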