Multi-server Configurations


Caché or Ensemble


A single VC/m instance can manage remote code located on more than one Caché or Ensemble node by using InterSystems' Enterprise Cache Protocol (ECP). VC/m itself can be installed on one of the nodes where managed code is located, or you can set up a dedicated node to host VC/m. Either way, the node hosting VC/m lies at the centre of the ECP network used by VC/m. Outbound ECP connections from the VC/m-hosting node link to each of the other nodes where code is to be managed. In InterSystems terminology the VC/m node is an ECP application server and the other nodes are ECP data servers. Each of those nodes must also have an ECP connection to the VC/m node, i.e. they are also ECP application servers connecting to the VC/m node as their ECP data server. The result is a 'hub-and-spoke' topology, with a pair of links on each spoke providing bidirectional connectivity.
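
For example, with two managed nodes the topology looks like this, where each <===> represents that pair of ECP connections, one in each direction:

    managed node A <===> VC/m node <===> managed node B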

Note: Some InterSystems license keys do not include ECP. Check with your InterSystems account manager if you are unsure about the status of yours.

The following instructions assume all the nodes are hosted on the same type of operating system (i.e. all Windows, or all UNIX/Linux, or all OpenVMS). If you need to deploy VC/m in a heterogeneous environment, consult VC/m Support for additional guidance.

1. Install VC/m on its node by following the instructions in the other sections of this book.

  • When installing program files, if possible place them on a network share that all nodes in your VC/m network can access using the same name. For example, if your nodes run on Windows, share the directory where you place the unpacked software kit, e.g. as \\vcmhost\vcmkit.
  • When entering the installation directory path as you run the installation script, specify the network share name rather than the local directory path. Note that if your host OS is Windows, the Caché or Ensemble service needs access to the share, i.e. it cannot run under the LocalSystem account.

2. Enable the %Service_ECP service on all nodes, so they can all act as ECP data servers. Make sure that the VC/m node will be able to accept enough concurrent incoming ECP connections. If necessary, increase the limit on incoming connections from ECP application servers; see the ECP settings section of your InterSystems documentation for where this is configured.
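
  If you prefer to script this step on each node rather than use the Management Portal, something like the following should work. This is a sketch only: Security.Services is InterSystems' documented security API, but verify the property names against your version's class documentation.

      zn "%SYS"                                            ; security classes live in %SYS
      set sc = ##class(Security.Services).Get("%Service_ECP", .props)
      set props("Enabled") = 1
      set sc = ##class(Security.Services).Modify("%Service_ECP", .props)
      if 'sc do $system.Status.DisplayError(sc)            ; report any failure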

3. On each node apart from the one hosting VC/m, define the following using Management Portal (a scripted alternative is sketched after this list):

  • An ECP connection to the VC/m node. For consistency use the same name on each node. The suggested name is VCM.
  • A local database called VCM-LOCAL.
  • A namespace called VCM-LOCAL that uses the VCM-LOCAL database as its default database for globals and routines.
  • A remote database definition named VCM giving access to the VC/m database on the VC/m node. That database is normally also named VCM.
  • A namespace that uses the VCM remote database as its default database for globals, and uses the VCM-LOCAL database as its default for routines. The namespace name must match the one on the VC/m node, which is normally called VCM.
  • The following mappings on the %ALL pseudo-namespace:
    • Global %gjtask to the VCM-LOCAL database.
    • Globals %vc* to the VCM database (remote).
    • Globals %vc.Stud* to the VCM-LOCAL database.
    • Routines %vc* to the VCM-LOCAL database.
    • Package VCmStudio to the VCM-LOCAL database.
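
  If you have several nodes to configure, the Management Portal steps above can also be scripted using the Config.* classes in the %SYS namespace. The sketch below makes several assumptions: the VCM-LOCAL database and namespace already exist, the address, port and directory values are only examples, and each Config class should be checked against your version's InterSystems class documentation before you rely on it.

      zn "%SYS"                                       ; configuration classes live in %SYS
      ; ECP connection to the VC/m node ("vcmhost" and 1972 are example values)
      set p("Address")="vcmhost", p("Port")=1972
      do ##class(Config.ECPServers).Create("VCM", .p) kill p
      ; remote database VCM, reached over the VCM connection;
      ; the directory is the path of the VCM database on the VC/m node
      set p("Server")="VCM", p("Directory")="/cachesys/mgr/vcm/"
      do ##class(Config.Databases).Create("VCM", .p) kill p
      ; namespace VCM: globals from the remote VCM database,
      ; routines from the local VCM-LOCAL database
      set p("Globals")="VCM", p("Routines")="VCM-LOCAL"
      do ##class(Config.Namespaces).Create("VCM", .p) kill p
      ; the %ALL mappings listed above
      set p("Database")="VCM-LOCAL"
      do ##class(Config.MapGlobals).Create("%ALL", "%gjtask", .p)
      do ##class(Config.MapGlobals).Create("%ALL", "%vc.Stud*", .p)
      do ##class(Config.MapRoutines).Create("%ALL", "%vc*", .p)
      do ##class(Config.MapPackages).Create("%ALL", "VCmStudio", .p) kill p
      set p("Database")="VCM"
      do ##class(Config.MapGlobals).Create("%ALL", "%vc*", .p) kill p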

4. On the VC/m node, export the routines %vc*.INT from the VCM namespace. Then import them into the just-created VCM-LOCAL namespace on each of the other nodes.
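
  One way to move the routines, sketched here using InterSystems' standard $system.OBJ export/import API (the file path is an example; adjust it for your platform):

      ; on the VC/m node, in the VCM namespace
      do $system.OBJ.Export("%vc*.INT", "/tmp/vcm-routines.xml")
      ; then on each of the other nodes, in the VCM-LOCAL namespace
      do $system.OBJ.Load("/tmp/vcm-routines.xml")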

5. On each of those nodes, open a terminal session in the VCM-LOCAL namespace and run:

   d Cache5^%vcins()

   When asked, confirm the loading of the classes, templates and add-ins.

6. In the VCM-LOCAL namespace, start the VC/m task server process using the command:

   d fast1^%vczn

   Also add code to SYSTEM^%ZSTART in the %SYS namespace so that fast1^%vczn is run in VCM-LOCAL whenever the instance is started; a sketch follows.
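
  A minimal sketch of the %ZSTART routine, assuming you have no existing %ZSTART; if you do, merge the code at the SYSTEM label into it. The routine must be saved in the %SYS namespace.

      %ZSTART ; instance startup hooks
      SYSTEM  ; called automatically when the instance starts
       zn "VCM-LOCAL"        ; switch to the namespace where VC/m runs
       do fast1^%vczn        ; start the VC/m task server
       quit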

7. On the VC/m node, define an ECP connection to each of the other nodes. If possible, name these connections to match the hostname of the computer where the target Caché or Ensemble instance runs. If your connections involve multiple instances of Caché or Ensemble running on the same host, you will need to pick alternative names for your ECP connections. In such cases it is recommended that you also set the CliSysName parameter to match. For instance, if your VC/m node needs to connect to two instances called QA and UAT that both run on a host called TESTING, you could name the two ECP connections QA and UAT, setting the CliSysName of the QA instance to 'QA' and that of the UAT instance to 'UAT'. Search your InterSystems documentation for instructions about where to set this parameter. Changes to CliSysName only take effect at restart.
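
  For the QA/UAT example above, the two connections on the VC/m node could be scripted with a sketch like this (Config.ECPServers is the class behind the Management Portal's ECP connection settings; the port numbers are examples and must match each instance's superserver port):

      zn "%SYS"
      ; two instances on host TESTING, distinguished by port
      set p("Address")="TESTING", p("Port")=1972
      do ##class(Config.ECPServers).Create("QA", .p)
      set p("Port")=1973
      do ##class(Config.ECPServers).Create("UAT", .p)
      kill p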

GT.M


Multi-server configurations are not currently supported on GT.M.