

Overview

On this page we provide the detailed architectural and design elements necessary to realize platform agent function and services in accordance with the high-level CI architectural description. We first review the relevant CGSN infrastructure elements, then describe the platform agent architecture and propose how platform agent functionality could be usefully distributed across that infrastructure.

CGSN Surface Mooring Categories and Configurations

CGSN infrastructure elements are drawn from a sophisticated menu of possible configurations, including mooring types, sensor frames, and computational, telemetry and power elements. Of the full range of planned deployments, the CI system is intended to be deployed on two categories of CGSN elements: High Power Surface Moorings (Figure 1) and Standard Power Surface Moorings (Figure 2), which together form 10 mooring configurations. From the cyberinfrastructure perspective, these systems are largely similar, differing primarily in the presence of fuel cells on the high power moorings. Both mooring categories have the same telemetry and computational infrastructure, and both use photovoltaic, wind and battery power systems. Both categories consist of buoy-based CPMs and DCLs, and riser/NSIF-based DCLs. Most of the high power moorings (not including Global Southern), and one of the standard power moorings (Endurance OR Offshore), also have seafloor (MultiFunction Node) based DCLs. It is currently being clarified and confirmed that CI agents will command and control MFN-based instruments just as they do in the riser- and buoy-based cases.

All computational elements, DCLs and CPMs alike, can and must operate independently during network/CPM sleep times. A CI computational element is to be included in this infrastructure, directly connected to all DCLs at all times, by a mechanism TBD (Ethernet or USB likely). One of the high power moorings (Global Southern) and two of the standard power moorings (Global Argentine and Global Irminger) will additionally use inductive modems to communicate with arrays of 12 CTD instruments fixed at depths along the mooring lines. Overall, CI could be responsible for up to 26 instruments and 6 platform elements on a given mooring. The exact configurations that CI will occupy, including the instrument classes that will be deployed at specific sensor frames, are shown in Figures 1 and 2.

KEY

FBB: Satcom Fleet Broadband (High-Speed Satellite)
IRID: Iridium satellite dish (Low-Speed Satellite and/or SBD=Short burst data?)
FW: FreeWave (Medium range RF radio)
WiFi: WiFi base station (Short range RF)
WT: Wind Turbine system
FC: Fuel Cell system
PV: Photovoltaic system
BT: Battery system

Buoy: floating platform
CPM: Communications and Power Manager
DCL: Data Concentrator/Logger
Riser: tether system from anchor to buoy
NSIF: Near Surface Instrument Frame
MFN: Mooring MultiFunction Node
Inductive Modem: underwater low-bandwidth communications system
XXXXX: Instrument class (with page on Confluence)

Figure 1. High Power CGSN Surface Moorings. Dashed boxes indicate seafloor multifunction node DCLs where CI software configuration remains TBD.

Figure 2. Standard Power CGSN Surface Moorings. Dashed boxes indicate seafloor multifunction node DCLs where CI software configuration remains TBD.

CGSN Platform Agent Architecture

The hierarchical nature of, and similarities among, CI-enabled CGSN moorings suggest a straightforward platform architecture with the elements depicted in Figure 3. In this scheme, each mooring is represented in CI by a CGSN surface mooring platform agent (SMPA; either high power or standard power). The SMPA is responsible for overall and CPM-level monitoring and functions, and defers to sub-platform agents responsible for individual DCL monitoring and function. Platform agents run and communicate with one another and with other CI elements as ION processes within the capability container, and utilize a DCL service that acts as a CI-CGSN bridge to each DCL element. Each platform agent has a dedicated greenlet configured to periodically query and update resource status (Figure 3, right) using the access service bridge, and to raise alarms when resource levels violate configurable ranges. These alarms, along with instrument agent alarms and command responses, are used by the Mission Executive Service to continually survey the health and progress of a mission. A class hierarchy descending from the base resource agent class is depicted in Figure 4; it would be extended in analogous fashion for RSN platform agents.
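
As a concrete illustration of the monitoring behavior described above, the following is a minimal sketch of a resource-monitoring greenlet. It assumes a gevent-style greenlet and a hypothetical access-service client; the names AccessServiceClient-style methods (get_resource_status, publish_alarm) are illustrative, not the actual ION interfaces.

    import gevent


    class ResourceMonitor(object):
        """Greenlet-based poller: query resource status periodically and raise
        alarms when a configurable range is violated."""

        def __init__(self, access_client, poll_interval=60):
            self._access = access_client         # bridge to CGSN/DCL services (hypothetical client)
            self._poll_interval = poll_interval  # seconds between status queries
            self._alarm_ranges = {}              # resource name -> (low, high)
            self._greenlet = None

        def set_alarm_range(self, resource, low, high):
            """Dynamically (re)configure the allowed range for a resource."""
            self._alarm_ranges[resource] = (low, high)

        def start(self):
            self._greenlet = gevent.spawn(self._run)

        def stop(self):
            if self._greenlet is not None:
                self._greenlet.kill()

        def _run(self):
            while True:
                # e.g. {'battery_voltage': 11.7, 'cpm_temperature': 28.4}
                status = self._access.get_resource_status()
                for resource, value in status.items():
                    if resource in self._alarm_ranges:
                        low, high = self._alarm_ranges[resource]
                        if not (low <= value <= high):
                            # Alarm events feed the Mission Executive Service.
                            self._access.publish_alarm(resource, value, (low, high))
                gevent.sleep(self._poll_interval)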

Figure 3. Platform agent architecture for CGSN surface moorings. Platform agent software is constructed analogously to instrument agent software, with a common agent base class that dynamically imports a particular platform driver to manage a platform-specific connection and protocol. These platform-specific components may differ somewhat from their instrumentation counterparts. Depending on the detailed architectural scenario that is followed (see next section), CGSN platforms may or may not be connection-oriented. If they are not connection-oriented, as when the platform agent speaks UDP directly to CGSN services, then the connection state machine within the platform driver may be nearly trivial, offering much less in the way of code function than it does in a connection-oriented case or with instruments. Similarly, it is not yet clear what, if any, meaningful state about CGSN services should be managed by the protocol. In the simplest case, platform drivers may have trivial state machines, or bypass them altogether, and simply provide a mechanism for translating agent commands, parsing responses, and enumerating tracked resources.
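
To make the "nearly trivial" driver case concrete, here is a hypothetical sketch of a stateless platform driver protocol that only translates agent commands, parses responses, and enumerates tracked resources. The command names, resource names and wire format are placeholders, not the actual CGSN protocol.

    class TrivialPlatformDriverProtocol(object):
        """Stateless translation layer: no connection state machine at all."""

        # Resources this driver knows how to report on (illustrative names).
        TRACKED_RESOURCES = ('battery_voltage', 'cpm_temperature', 'telemetry_state')

        def build_request(self, command, **kwargs):
            """Translate an agent command into a CGSN request string."""
            if command == 'get_status':
                return 'STATUS?'
            if command == 'wake_cpm':
                return 'WAKE CPM'
            raise ValueError('unsupported command: %s' % command)

        def parse_response(self, raw):
            """Parse a 'key=value,key=value' style reply into a dict."""
            return dict(item.split('=', 1) for item in raw.strip().split(',') if '=' in item)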

Figure 4. Platform agent class hierarchy, showing both CGSN and RSN elements. Platform drivers specialize by connection method, which differs from instrument drivers, which specialize by number of TCP connections. The specific CGSN drivers are here envisioned as UDP, assuming no CI footprint on the DCLs (see discussion below), but could otherwise be TCP driver specializations.
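
The hierarchy of Figure 4 could be skeletonized roughly as follows; the class names here paraphrase the figure and are not the actual ION class names.

    class ResourceAgent(object):
        """Common base class for all taskable resources."""

    class InstrumentAgent(ResourceAgent):
        """Existing instrument agent branch, shown for comparison."""

    class PlatformAgent(ResourceAgent):
        """Adds the common platform state model, resource monitoring and alarms."""

    class CGSNMooringPlatformAgent(PlatformAgent):
        """Surface mooring agent: CPM-level power, telemetry and environment."""

    class CGSNDCLPlatformAgent(PlatformAgent):
        """DCL agent: DCL environment, resources and port configuration."""

    class RSNPlatformAgent(PlatformAgent):
        """Analogous extension point for RSN platforms."""


    class PlatformDriver(object):
        """Drivers specialize by connection method."""

    class UDPPlatformDriver(PlatformDriver):
        """CGSN case assuming no CI footprint on the DCLs (see below)."""

    class TCPPlatformDriver(PlatformDriver):
        """Alternative specialization if a connection-oriented scheme is used."""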

The CGSN Platform Agent Architecture has the following elements and behaviors:

  • The common state model coordinates PA function, particularly idle, observatory, direct access and mission modes.
  • Mooring platform agents monitor CPM related resources, notably power, telemetry, CPM environment and resources.
  • Mooring platform agents provide commands to request telemetry and to wake the CPM and Ethernet from CGSN.
  • DCL platform agents monitor DCL related resources, notably DCL environment, resources, and port status.
  • DCL platform agents provide commands to configure port power, protocol, and serial parameters.
  • All platform agents provide commands to dynamically configure alarm levels for the resources they monitor (see the command-surface sketch after this list).
  • All platform agents publish state changes, configuration changes, errors, and resource alarms.
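
As a sketch of the command surface implied by the list above (method signatures and parameter names are assumptions, not the settled interface):

    class DCLPlatformAgentCommands(object):
        """Illustrative command set for a DCL platform agent."""

        def configure_port(self, port_id, power_on, protocol, baud,
                           data_bits=8, parity='N', stop_bits=1):
            """Set port power, protocol and serial parameters on the DCL."""
            raise NotImplementedError

        def set_alarm_levels(self, resource, low, high):
            """Dynamically configure alarm levels for a monitored resource."""
            raise NotImplementedError


    class MooringPlatformAgentCommands(object):
        """Illustrative command set for a surface mooring platform agent."""

        def request_telemetry(self):
            """Ask CGSN to open a telemetry window."""
            raise NotImplementedError

        def wake_cpm(self):
            """Ask CGSN to wake the CPM and Ethernet."""
            raise NotImplementedError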

Distributed Process Architecture, CI-CGSN Interface and Platform Federation

The inherently distributed hardware architecture of CGSN platforms, together with the fact that both CI and CGSN software elements play roles in observatory function, somewhat complicates the detailed platform agent design and interaction. CI platform and instrument agents running on a CI computational element, along with various support services, must be able to assess and request resources from both CGSN and DCL-Linux, launch port agents and drivers, and command and control instrumentation running on the DCL. A CI Access Process running on the DCL, started by the CGSN startup script, provides an extensible beachhead directly under CI control for accessing both CGSN and DCL-Linux services, as well as for launching onto the DCL the tertiary processes, such as drivers and port agents, required for instrument control. The proposed schemes (three possible scenarios), each essentially a federation of independent platforms coordinated by the CI capability container running on the computational element, are illustrated in Figure 5.
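
A minimal sketch of the CI Access Process role described above, assuming a simple line-oriented TCP control socket; the message formats and launch commands are placeholders for the real CI-CGSN interface.

    import socket
    import subprocess


    def serve(host='0.0.0.0', port=7000):
        """Listen for CI control requests on the DCL and act on them."""
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen(5)
        children = []                      # tertiary processes launched so far
        while True:
            conn, _ = server.accept()
            request = conn.recv(4096).decode().strip()
            if request.startswith('LAUNCH '):
                # e.g. "LAUNCH port_agent --device /dev/ttyS1" (command assumed)
                children.append(subprocess.Popen(request.split()[1:]))
                conn.sendall(b'OK\n')
            elif request == 'STATUS':
                alive = sum(1 for p in children if p.poll() is None)
                conn.sendall(('RUNNING %d\n' % alive).encode())
            else:
                conn.sendall(b'ERR unknown request\n')
            conn.close()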

Figure 5a. CGSN Platform Agent Architecture Detail, Scenario A. In this scenario, the DCL access service acts to broker all communications between the DCL and the CC, serving as an intermediary between agents and their resources.

Figure 5b. CGSN Platform Agent Architecture Detail, Scenario B. In this scenario, instrument and platform agents manage their own connections to the DCL. The DCL access service is primarily a mechanism to service CGSN requests.

Figure 5c. CGSN Platform Agent Architecture Detail, Scenario C. In this scenario, drivers run on the CI computational element and no CI resources whatsoever run on the DCLs. A CI access process fulfills CGSN requests, and platform agents send requests directly to the CGSN access point via UDP.

All three scenarios have the following common elements and behaviors:

  • A mission executive runs in the CC providing command and control of all agents to execute a deployed mission autonomously according to a mission configuration object.
  • A data / publication service runs in the CC which listens to all data and event streams and collects them for episodic transmission to shore.
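
For example, the data/publication service's buffering behavior might look like the following sketch; the subscription callback and transmission hook are assumptions.

    class ShoreBatchPublisher(object):
        """Collects data and event stream messages for episodic transmission."""

        def __init__(self, transmit_fn):
            self._buffer = []
            self._transmit = transmit_fn   # invoked when a telemetry window opens

        def on_message(self, stream_id, message):
            """Callback registered on every data and event stream."""
            self._buffer.append((stream_id, message))

        def on_telemetry_window(self):
            """Flush everything collected since the last window to shore."""
            batch, self._buffer = self._buffer, []
            if batch:
                self._transmit(batch)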

Architecture Scenario A

In the first architecture scenario (Figure 5a), the DCL access service running on the CC is used to centralize all communications with all DCLs and also to service requests from the CG services on the DCL. Scenario A has the following elements and behaviors:

  • A CI access process runs on the remote DCL, started by CG startup scripts, containing the CI access point to communicate with the CGSN services, a CC access point to communicate with the CC, and other elements.
  • A DCL access process runs in the CC. DCL access manages a request-response and asynchronous socket pair for each instrument agent and DCL platform agent, forwarding requests and delivering asynchronous messages between these agents and the DCLs (see the sketch after this list).
  • DCL access manages a request-response and async socket pair between the mooring PA and each DCL. The mooring PA may use any DCL to request its CG services and must maintain links to all DCLs in the event that a particular DCL fails.
  • DCL access listens for and fulfills status requests originating from CGSN that arrive at the CI access point and are forwarded to it by CC access.
  • DCL platform agents launch port agents according to platform agent state lifecycle by forwarding a launch request through DCL access.
  • Instrument agents launch drivers according to instrument agent state lifecycle by forwarding a launch request through DCL access, resulting in driver clients available in the CI access process.
  • DCL platform agents monitor DCL resources by forwarding CGSN requests through DCL access.
  • Instrument agents command and control drivers by forwarding messages through DCL access to the appropriate driver client maintained by the CI access process.
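
The forwarding role played by DCL access in this scenario might be sketched as follows; the socket objects and routing keys are purely illustrative.

    class DCLAccess(object):
        """Broker in the CC: one (request/response, async) socket pair per agent,
        plus one link per DCL's CI access process."""

        def __init__(self):
            self._agent_sockets = {}   # agent_id -> (req_sock, async_sock)
            self._routes = {}          # agent_id -> dcl_id
            self._dcl_links = {}       # dcl_id -> link to that DCL's CI access process

        def add_dcl(self, dcl_id, link):
            self._dcl_links[dcl_id] = link

        def register_agent(self, agent_id, req_sock, async_sock, dcl_id):
            self._agent_sockets[agent_id] = (req_sock, async_sock)
            self._routes[agent_id] = dcl_id

        def forward_request(self, agent_id, payload):
            """Forward an agent request to its DCL and return the reply."""
            link = self._dcl_links[self._routes[agent_id]]
            link.send(payload)
            return link.recv()

        def deliver_async(self, agent_id, message):
            """Push an asynchronous message (data, alarms) back to the owning agent."""
            _, async_sock = self._agent_sockets[agent_id]
            async_sock.send(message)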

Architecture Scenario B

In the second scenario (Figure 5b), the DCL access service running on the CC is used only to provide access to CC services to fulfill CGSN status requests, and all other comms are directly managed by the respective agents. Scenario B has these elements and behaviors:

  • A DCL access process runs in the CC. DCL access listens for and fulfills status requests originating from CGSN that arrive at the CI access point and are forwarded to it by CC access.
  • DCL platform agents manage their own comms to a DPA element that fulfills platform requests to launch ports and drivers, access IOCTL/Linux, or monitor CGSN resources (see the sketch after this list).
  • The mooring agent manages comms to an MPA element that fulfills platform requests, including access to CG services.
  • Instrument agents manage comms to remote DCL drivers directly.
  • Instrument agents request driver launch via their DCL platform agent.
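
In contrast to Scenario A's central broker, each agent here owns its own connection to the remote DCL element (DPA, MPA or driver). A minimal sketch follows; the host, port and request strings are assumptions.

    import socket


    class AgentOwnedDCLConnection(object):
        """Direct request/response link from a CI agent in the CC to a DPA or
        driver element running on the remote DCL."""

        def __init__(self, dcl_host, port, timeout=10.0):
            self._sock = socket.create_connection((dcl_host, port), timeout=timeout)

        def request(self, payload):
            self._sock.sendall(payload.encode())
            return self._sock.recv(4096).decode()

        def close(self):
            self._sock.close()


    # Example (hypothetical address and command): a DCL platform agent asking
    # its DPA element for port status.
    # conn = AgentOwnedDCLConnection('dcl11.local', 7100)
    # print(conn.request('PORT STATUS 3'))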

Architecture Scenario C

A third scenario, with zero CI footprint on the DCLs, is depicted in Figure 5c. In this setup, the only special behavior is that platform agent drivers (connections and protocols) establish comms with the CGSN access point and speak the CGSN UDP protocol to access the CGSN services.
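
A minimal sketch under this zero-footprint assumption: a platform driver that speaks UDP directly to the CGSN access point. The host, port and request strings below are placeholders for the actual (TBD) CGSN UDP protocol.

    import socket


    class CGSNUDPDriver(object):
        """Connectionless platform driver: one datagram out, one datagram back."""

        def __init__(self, cgsn_host, cgsn_port, timeout=5.0):
            self._addr = (cgsn_host, cgsn_port)
            self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self._sock.settimeout(timeout)

        def request(self, command):
            """Send a command string and wait for the single-datagram reply."""
            self._sock.sendto(command.encode(), self._addr)
            data, _ = self._sock.recvfrom(4096)
            return data.decode()


    # Example (address and command string assumed):
    # driver = CGSNUDPDriver('192.168.0.10', 9000)
    # print(driver.request('STATUS?'))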

Mission Execution

In R2, mission schedule control will be implemented as a Mission Execution Service running at the platform site. This service takes a configuration object that represents a sampling mission as a specified set of agent commands paired with "cron"-like scheduling information. Internally, the mission service will decompose into greenlets per taskable resource, or per task unit. The mission executive will request that resources be taken into mission mode, and will then run the mission, consisting of initial, periodic and final tasks with scheduling determined by a cron syntax. Tasks will be composed of collections of atomic resource commands, for example "set parameters, calibrate and then poll" at the top of each hour, or starting at midnight and every 15 seconds. The mission executive will include a facility for responding to error behaviors, such as setting and then witnessing some agent alarm, or observing a sample failure on poll. The mission executive will have an interface that provides the ability to set/clear/run/stop missions, where a UI could cause a new mission to be edited or retrieved from a database and sent to the executive. Capability container configuration may also include default missions to populate the service at platform boot time. Python packages such as croniter, APScheduler, the recipe at http://code.activestate.com/recipes/577466-cron-like-triggers/, and others may be useful for parsing cron strings or scheduling tasks.
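
As a small illustration of the cron-driven task loop described above, using the croniter package mentioned in the text (the task contents and mission shape are assumptions):

    from datetime import datetime
    import time

    from croniter import croniter


    def run_periodic_task(cron_expr, task_fn, n_runs=None):
        """Run task_fn at times matching cron_expr (e.g. '0 * * * *' = hourly)."""
        itr = croniter(cron_expr, datetime.now())
        runs = 0
        while n_runs is None or runs < n_runs:
            next_time = itr.get_next(datetime)
            time.sleep(max(0.0, (next_time - datetime.now()).total_seconds()))
            task_fn()          # e.g. "set parameters, calibrate and then poll"
            runs += 1


    def hourly_sample():
        # Placeholder for a task composed of atomic resource commands.
        print('set_parameters -> calibrate -> poll')


    # run_periodic_task('0 * * * *', hourly_sample)   # top of each hour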
