This page describes the deployment of Release 2.

Release 2 Deployment Overview

Figure 1 shows an overview of the high-level Release 2 deployment architecture.

Figure 1. Release 2 Deployment

Assumptions and Constraints

  • The launched ION system will run only in Portland
  • ION persisted data will be replicated by operations tools to the Seattle CyberPoP
  • In addition, data will be replicated at acquisition time on remote platforms, such as Marine IO data servers
  • Woods Hole Acquisition Point will act only as a network interchange to the secure CI
  • Internet and cloud access occurs via Seattle and Chicago CyberPoPs
    • May host firewalls, load balancers and web servers
  • Monitoring of the ION system will occur from San Diego Engineering Center and from other locations
  • No hardware messaging appliances in R2
  • Portland CyberPoP has Nimbus (KVM) and VMware hypervisors

Core Infrastructure

  • CyberPoP network and hardware infrastructure includes
    • Compellent storage system accessible via NFS and VMware mounts on VMs
    • A10 Load balancer

Statically Operated Dependencies

  • VMware hypervisors will host
    • Apache web server hosting the Flask web application (e.g. via WSGI)
    • RabbitMQ brokers (cluster, federation) - one or more
    • CouchDB cluster or BigCouch cluster - one or more
    • ElasticSearch cluster
    • ZooKeeper cluster
    • ERDDAP data externalization server
    • RSN port agents
    • iRODS server installation (for data acquisition from CG)
  • Nimbus installation on Portland CyberPoP
    • Contains ContextBroker
  • Messaging system maintains persistent state (durable queues), as sketched below
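
A minimal sketch of what the durable-queue requirement looks like in code, assuming a RabbitMQ broker at the hypothetical host rabbit.example.org and the pika Python client; the queue name and payload are illustrative only:

    import pika

    # Hypothetical broker host; actual R2 broker addresses are deployment-specific.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbit.example.org"))
    channel = connection.channel()

    # durable=True lets the queue definition survive a broker restart.
    channel.queue_declare(queue="ion.example_queue", durable=True)

    # delivery_mode=2 marks the message itself as persistent.
    channel.basic_publish(
        exchange="",
        routing_key="ion.example_queue",
        body=b"example payload",
        properties=pika.BasicProperties(delivery_mode=2))
    connection.close()

Both flags are needed: a durable queue alone does not persist messages, and a persistent message in a transient queue is lost when the broker restarts.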

ION System

  • Launched via cloudinit.d using the CEI software stack
  • Services are launched first, then agents, then transform processes
    • Services are bootstrapped by boot level in dependency order (see the sketch after this list)
    • Restart cleans up persistent information and messaging state before system start
  • ION system contains web service gateways (HTTP/HTTPS servers fronting services and agents)
  • ION system contains a pyDAP OPeNDAP server, fronting the internal science data persistence
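
The boot-level ordering can be pictured with the following sketch; the level assignments and process names are hypothetical, and the real launch is performed by cloudinit.d and the CEI stack rather than this loop:

    # Hypothetical boot plan: lower levels start first, matching the
    # services -> agents -> transforms ordering described above.
    BOOT_PLAN = {
        1: ["resource_registry", "directory"],   # core services
        2: ["data_management", "ingestion"],     # dependent services
        3: ["instrument_agents"],                # agents after services
        4: ["transform_processes"],              # transforms last
    }

    def launch(process_name):
        # Placeholder for the real launcher invoked by cloudinit.d/CEI.
        print("launching %s" % process_name)

    for level in sorted(BOOT_PLAN):
        for process in BOOT_PLAN[level]:
            launch(process)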

External Interfaces

  • To CG Marine IO
    • CG will operate 3 OMCs (Operations and Management Centers) in Woods Hole (WHOI), Corvallis (OSU) and San Diego (SIO)
    • OMCs are designed redundantly but each nominally fulfills a dedicated responsibility
    • An iRODS data sync exists with each OMC
    • Each OMC has a CI-accessible software API for core command & control (e.g. establishing telemetry) and status
  • To RSN Marine IO
    • RSN will operate the OMS system out of Portland, OR
    • All ports on the RSN cable are accessible via IP to CI software (see the sketch after this list)
  • To EPE IO
  • To ION operations
    • Scripting tools, such as preload and automatic load
  • To the general public (end users)
    • See System User Access below
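
Because every RSN cable port is IP-reachable, a port agent's connection to an instrument reduces to an ordinary TCP socket. The sketch below uses only the Python standard library; the instrument address and port are hypothetical, as real endpoints are assigned by RSN Marine IO:

    import socket

    # Hypothetical instrument endpoint on the RSN cable.
    INSTRUMENT_HOST = "10.0.0.1"
    INSTRUMENT_PORT = 4001

    # Open a TCP connection and read a chunk of raw instrument output,
    # as an RSN port agent would.
    sock = socket.create_connection((INSTRUMENT_HOST, INSTRUMENT_PORT), timeout=10)
    try:
        data = sock.recv(4096)
        print("received %d bytes" % len(data))
    finally:
        sock.close()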

System User Access

  • Via central ION domain names on ports 80, 443
  • Authentication via users' home identity providers using the CIlogon service
  • CG and RSN marine operators can connect via virtual serial port software for direct instrument access
  • Science end users can access the ION system OPeNDAP servers (ERDDAP, pyDAP), as sketched below
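
As an example of end-user access, a science user could open a dataset on the pyDAP server with the pydap client library; the dataset URL below is hypothetical:

    from pydap.client import open_url

    # Hypothetical dataset URL on the ION pyDAP server.
    dataset = open_url("http://ion.example.org/dap/example_dataset")

    # List the variables the server exposes for this dataset.
    print(list(dataset.keys()))

ERDDAP additionally serves tabular and gridded responses over plain HTTP (e.g. CSV), so no special client is required there.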