Windows Failover Cluster
Cluster Setup Overview
This document describes how a MOPS system may be configured to run on a Windows Failover Cluster. The described configuration is an active-passive configuration with no load balancing.
A Windows Failover Cluster may be used to provide a high-availability setup of MOPS. Availability issues that a Windows Failover Cluster mitigates include:
- Hardware failures — the cluster will move applications to the operational node.
- Software failures — the cluster may move applications to the next available node in response to local software errors, for example accidental deletion of files.
- Software updates — installing software updates may be time-consuming enough to be considered system downtime.
- Windows updates — Windows updates can be scheduled so that the passive node is updated while the active node runs the system software.
- MOPS upgrades — MOPS system software may be upgraded on the passive node. Note that shared system resources such as databases may also need updates and will thus require some system downtime.
Choosing Cluster Configuration
The MOPS system may be divided into the following parts:
- Applications — the server-side applications; client applications such as WinMOPS are excluded in this context.
- Databases — The databases used by the system.
- Interfaces — the software used to receive data from and send data to external systems.
These parts of the system may be organized in a cluster setup as:

On the left-hand side, in blue, we have a system setup with three clusters: one for application services, one for databases, and one for interfaces.
On the right-hand side we have a cluster setup that places application services and databases on the same cluster. Interfaces have especially high availability demands, so these remain on a dedicated cluster.
The choice between these two configurations will depend on the needs of the specific installation. Each cluster mentioned above is assumed to be composed of two servers.
Regardless of the number of selected clusters, the software assumed to be running on the clusters is:
Application Services
- MOPS 4.0
- MOPS 14.1
Database Services
- PostgreSQL — Provides storage for MOPS 4.0 metadata, displays, dashboards, etc.
- MOPS Historian — Process data storage
- MOPS OPC UA Server for MOPS Historian — OPC UA interfaces residing close to MOPS Historian.
- MOPS OPC DA Server for MOPS Historian — OPC DA interfaces residing close to MOPS Historian.
- Oracle database server — MOPS 14.1 production quality data storage.
Interface Services
- MOPS OPC UA Replicator — provides an interface for sending data between OPC UA servers (the replication concept is sketched after this list).
- MOPS OPC DA Client — provides an interface for sending data between OPC DA servers.
- Other interface processes — may be custom file system links, REST APIs, etc.
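To illustrate what such replication amounts to, the sketch below subscribes to a node on a source OPC UA server and writes each change to a target server. This is not the MOPS OPC UA Replicator itself, only a minimal sketch of the concept assuming the open-source asyncua Python library; the endpoint URLs and node identifier are illustrative.

```python
import asyncio
from asyncua import Client

# Illustrative endpoints and node id; not actual MOPS configuration.
SOURCE_URL = "opc.tcp://source-server:4840"
TARGET_URL = "opc.tcp://target-server:4840"
NODE_ID = "ns=2;s=Process.Temperature"

class ReplicationHandler:
    """Forwards each data change on the source node to the target node."""

    def __init__(self, target_node):
        self.target_node = target_node

    async def datachange_notification(self, node, value, data):
        await self.target_node.write_value(value)

async def main():
    async with Client(SOURCE_URL) as source, Client(TARGET_URL) as target:
        source_node = source.get_node(NODE_ID)
        target_node = target.get_node(NODE_ID)
        # Sample the source node every 500 ms and forward changes.
        subscription = await source.create_subscription(
            500, ReplicationHandler(target_node)
        )
        await subscription.subscribe_data_change(source_node)
        # Keep replicating until cancelled (e.g. when the role is moved).
        await asyncio.Event().wait()

if __name__ == "__main__":
    asyncio.run(main())
```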

At some sites, the Oracle database may be provided by a data-center installation.
Cluster Roles
Cluster roles should be defined in such a way that it is possible to run services on the preferred node. This means that some roles may run on one node while others run on the other node. The image below shows a system where application services and databases share the same cluster.

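Which node currently owns each role can be verified from a script. The following is a minimal sketch, assuming Python is available on the nodes and that the FailoverClusters PowerShell module is installed (it ships with the Failover Clustering feature).

```python
import json
import subprocess

# PowerShell pipeline: list cluster roles with owner node and state as strings.
PS_COMMAND = (
    "Get-ClusterGroup | Select-Object Name,"
    " @{n='OwnerNode';e={$_.OwnerNode.Name}},"
    " @{n='State';e={$_.State.ToString()}} |"
    " ConvertTo-Json"
)

def get_cluster_roles():
    """Return a list of dicts describing each cluster role (group)."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", PS_COMMAND],
        capture_output=True, text=True, check=True,
    )
    roles = json.loads(result.stdout)
    # ConvertTo-Json emits a single object, not a list, when only one role exists.
    return roles if isinstance(roles, list) else [roles]

if __name__ == "__main__":
    for role in get_cluster_roles():
        print(f"{role['Name']}: {role['State']} on {role['OwnerNode']}")
```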
Cluster Storage
In order to be able to cluster roles independently of each other, it should be possible to move a cluster role with all its dependent resources between nodes.
For this reason, two cluster roles may not depend on the same cluster storage. Each role should have its own disk for the data it requires.

The above image shows cluster roles together with their related disks. It also shows one disk shared by the cluster roles. This disk is not treated as a cluster resource and is available to all cluster roles at all times.
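This constraint can be checked mechanically against a role-to-disk mapping. Below is a minimal sketch; the mapping and the shared-disk set mirror the recommendations later in this document and should be adjusted per installation.

```python
from collections import defaultdict

# Illustrative mapping of cluster roles to their cluster storage,
# following the recommended roles later in this document.
ROLE_DISKS = {
    "MOPS 4.0": ["F:"],
    "PostgreSQL": ["G:"],
    "MOPS 14.1": ["I:"],
    "MOPS 4.0 Historian and OPC UA Server": ["J:"],
    "MOPS OPC UA Replicator": ["K:"],
}

# Disks deliberately shared by all roles (not cluster resources), e.g. logs.
SHARED_DISKS = {"Y:"}

def check_disk_ownership(role_disks, shared_disks):
    """Report any cluster disk referenced by more than one role."""
    owners = defaultdict(list)
    for role, disks in role_disks.items():
        for disk in disks:
            if disk not in shared_disks:
                owners[disk].append(role)
    return {disk: roles for disk, roles in owners.items() if len(roles) > 1}

if __name__ == "__main__":
    conflicts = check_disk_ownership(ROLE_DISKS, SHARED_DISKS)
    for disk, roles in conflicts.items():
        print(f"Disk {disk} is claimed by multiple roles: {', '.join(roles)}")
    if not conflicts:
        print("OK: no cluster disk is shared between roles.")
```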
Recommended Cluster Roles
MOPS 4.0 Role
- CAP: Network name and IP
- SVC: mops-cluster-compose – manages the uptime of MOPS 4.0 containers (see the sketch after this list)
- SVC: MOPS 4.0 engine (REST API for MOPS Historian)
- (managed by mops-cluster-compose) MOPS 4.0 containers managed by docker compose
- DISK: (F:) Configuration files storage
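mops-cluster-compose is registered with the cluster as a generic service, so the cluster starts and stops the MOPS 4.0 containers together with the role. The sketch below shows what the core of such a wrapper might do; the compose file path and poll interval are assumptions, and the Windows service plumbing around it is omitted.

```python
import subprocess
import time

# Assumed location of the compose file on the role's cluster disk (F:).
COMPOSE_FILE = r"F:\mops\docker-compose.yml"
POLL_SECONDS = 30

def compose(*args):
    """Run a docker compose command against the MOPS 4.0 compose file."""
    return subprocess.run(
        ["docker", "compose", "-f", COMPOSE_FILE, *args],
        capture_output=True, text=True,
    )

def main():
    # Bring the containers up when the cluster role starts on this node...
    compose("up", "-d")
    try:
        # ...and keep re-applying the desired state while the role is online;
        # "up -d" restarts any container that has exited.
        while True:
            time.sleep(POLL_SECONDS)
            compose("up", "-d")
    finally:
        # Stop the containers when the role is taken offline or moved.
        compose("down")

if __name__ == "__main__":
    main()
```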
PostgreSQL Role
- CAP: Network name and IP
- SVC: Postgres database service
- DISK: (G:) Database storage
MOPS 14.1 Role
- CAP: Network name and IP
- SVCs: MOPS 14.1 Services (MOPS Portal, MOPS Tag Broker, MOPS Calc SP, …)
- REG: Consider registry replication for SP config
- DISK: (I:) Configuration files
MOPS 4.0 Historian and OPC UA Server Role
- CAP: Network name and IP
- SVC: MOPS 4.0 Historian
- SVC: MOPS OPC UA Server for MOPS Historian
- DISK: (J:) Historian database and MOPS OPC UA Server configuration
MOPS OPC UA Replicator Role
- CAP: Network name and IP
- SVC: MOPS OPC UA Replicator
- DISK: (K:) MOPS OPC UA Replicator configuration
Other Interface Processes
- CAP: Network name and IP
- SVCs: TBD
- DISK: TBD
Recommended Disk Sizes
Disk Recommendations/Application and Database Servers
| Disk | Description | Size |
|---|---|---|
| C: | System disk | Decided by customer |
| D: | Applications disk | 100 GB |
| E: | (Application server) Container images | 100 GB |
| F: | MOPS 4.0 Role cluster storage. Configuration files | 20 GB |
| G: | PostgreSQL Role cluster storage. Database. | 100 GB |
| I: | MOPS 14.1 Role cluster storage. Configuration and displays. | 20 GB |
| J: | MOPS 4.0 Historian and OPC UA Server Role cluster storage. Database, configuration | 300+ GB |
| L: | Backup disk for databases | 300+ GB |
Disk Recommendations/Interface Servers
| Disk | Description | Size |
|---|---|---|
| C: | System disk | Decided by customer |
| D: | Applications disk | 100 GB |
| K: | MOPS OPC UA Replicator Role cluster storage. Configuration. | 20 GB |
| X: | Project-defined interfaces may add additional cluster storage requirements | TBD (20 GB) |
Other/Shared Disk(s)
| Disk | Description | Size |
|---|---|---|
| Y: | Log files disk (accessed by cluster nodes) | 40 GB |
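Whether provisioned disks meet these recommendations can be confirmed with a short script. The following is a minimal sketch using only the Python standard library; the drive-to-size table mirrors the tables above (C: is customer-decided and therefore omitted) and should be adjusted to the installation.

```python
import shutil

# Minimum sizes in GB, mirroring the recommendations above.
MIN_SIZES_GB = {
    "D:": 100,  # applications
    "E:": 100,  # container images (application server)
    "F:": 20,   # MOPS 4.0 Role cluster storage
    "G:": 100,  # PostgreSQL Role cluster storage
    "I:": 20,   # MOPS 14.1 Role cluster storage
    "J:": 300,  # Historian and OPC UA Server Role cluster storage
    "L:": 300,  # database backups
}

def check_disks(min_sizes_gb):
    """Print the total size of each drive against its recommended minimum."""
    for drive, minimum in min_sizes_gb.items():
        try:
            usage = shutil.disk_usage(drive + "\\")
        except OSError:
            # Cluster disks are only visible on the node that owns the role.
            print(f"{drive} not accessible on this node")
            continue
        total_gb = usage.total / 1024**3
        status = "OK" if total_gb >= minimum else "TOO SMALL"
        print(f"{drive} {total_gb:.0f} GB (recommended {minimum}+ GB): {status}")

if __name__ == "__main__":
    check_disks(MIN_SIZES_GB)
```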