
This OOI CyberInfrastructure OOINet System Architecture Specification (acronym CIAD), OOI document 2130-00003, specifies the system architecture and design for the science- and education-driven applications of the OOINet system together with its infrastructure services. The system is developed and will be operated by the OOI Cyberinfrastructure (OOI CI) implementing organization.

OOINet application services include:

  • Interfacing with environmental sensors, instrument platforms, and observatory infrastructure, enabling data and command flow,
  • Acquisition of observational sensor data and external data and their ingestion into the OOI integrated observatory,
  • Generation and distribution of qualified science data products in (near) real time,
  • Synthesis of derived data products such as QA/QC'ed data products,
  • Integration of numerical ocean models and their output as derived data products,
  • Access to, syntactic transformation, semantic mediation, analysis, and visualization of science data, data products, and derived data products,
  • Interactive analysis and visualization of OOI integrated observatory data products in a social networking environment,
  • Planning and control of complex, long-running ocean observations, and of event-triggered adaptive observations,
  • Interactive control of observatory infrastructure.

The Integrated Observatory infrastructure services provide the foundation for the application capabilities above, supporting a broad range of integrated observatory user applications. Integrated Observatory infrastructure services include:

  • Management of distributed information repositories, including data stores for science data and derived data products, and general purpose repositories,
  • Management of observatory resources of various types with their metadata and keeping track of their life cycle and internal state,
  • A common operating infrastructure, comprising capabilities for message based communication and service-oriented application integration, with consistent cross-cutting identity management, interaction governance, and policy enforcement,
  • A common execution infrastructure that provides location-independent management of heterogeneous executable resources, including provisioning and control of compute and storage cloud resources,
  • Operation and management of the integrated observatory network.

The contents of this document have been developed and structured in accordance with the Department of Defense Architecture Framework (DoDAF), which provides guidelines for developing architectures for large-scale systems and for presenting relevant views of the architecture data in a number of products. The target audience includes decision makers, subsystem implementers, and end users. This documentation describes architectural principles, terms, and the design intent of system elements, as well as integrated technologies. It contains detailed specification drawings and blueprints for construction, externally available under configuration control in the CI specification repository (see References, CI SPECS).

The system's core functional capabilities are structured into six subsystems. These subsystems provide extensive services that support the user application capabilities and the infrastructure services listed above. The application-supporting subsystems are Sensing and Acquisition, Analysis and Synthesis, and Planning and Prosecution. The infrastructure subsystems are Data Management, the Common Execution Infrastructure, and the Common Operating Infrastructure.

The subsystems will be implemented on an incremental schedule; in particular, Analysis and Synthesis starts in Release 2, and Planning and Prosecution starts in Release 3. Where practical, we have highlighted items not targeted for a release with the phrase '(not in Release 1)' or similar language. Details of implementation schedules are provided in the Overview for each subsystem, and for the whole system in the Transition to Operations.

The Integrated Observatory Network interfaces with multiple target environments through dedicated "implementation subsystems". The Marine Integration subsystem integrates with the OOI Marine Observatories, namely the satellite-connected Coastal and Global Scale Nodes (CGSN) and the cabled Regional Scale Nodes (RSN). The External Observatory Integration subsystem integrates with external observatories, such as the Integrated Ocean Observing System (IOOS), Neptune Canada, and the WMO, and their user audiences. The User Experience subsystem provides consistent, effective user interfaces throughout the system, its deployments, and user groups.

Subsystems and their Services

The following descriptions represent the overall architectural intent for each subsystem. For information on which features are available in a given release, please view the Overview page for the corresponding subsystem or see the CI Construction Plan.

The Common Operating Infrastructure (COI) provides the software integration platform for all functional capabilities within the system, as laid out in the system's integration strategy. COI provides the capability container as a platform for service integration via asynchronous reliable messaging through the Exchange. COI applies consistent identity management and policy enforcement across the distributed system, and provides contract-based governance across multiple domains of authority. COI provides registration capabilities for all resources governed by the integrated observatory, including instruments, data products and executing processes; COI also applies a uniform life-cycle management to all resources. These capabilities enable the management of all OOI resources, their supported activities, and their representation of state in a uniform way. COI provides a framework to manage taskable (controllable) resources through governance-aware agents. Technologies applied in COI include Python and Java for the two core capability container implementations, AMQP messaging with a RabbitMQ message broker infrastructure for the Exchange, CouchDB as the object and metadata persistence layer, and Internet2 security technologies including CILogon for the identity management and governance components.
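
To make the messaging pattern concrete, the following is a minimal sketch of Exchange-style publish/subscribe over AMQP using the Python pika client, assuming a local RabbitMQ broker; the exchange name and topic scheme are illustrative, not the actual COI conventions.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Declare a topic exchange analogous to the COI Exchange.
    channel.exchange_declare(exchange="ooi.exchange", exchange_type="topic")

    # A subscriber binds an anonymous queue to the routing keys it cares about.
    result = channel.queue_declare(queue="", exclusive=True)
    channel.queue_bind(exchange="ooi.exchange", queue=result.method.queue,
                       routing_key="data.ctd.#")

    # A publisher sends a message without knowing who is listening.
    channel.basic_publish(exchange="ooi.exchange",
                          routing_key="data.ctd.temperature",
                          body=b'{"value": 8.31, "units": "degC"}')

    def on_message(ch, method, properties, body):
        print(method.routing_key, body)

    channel.basic_consume(queue=result.method.queue,
                          on_message_callback=on_message, auto_ack=True)
    channel.start_consuming()  # blocks; interrupt to stop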

The Common Execution Infrastructure (CEI) provides the capabilities to schedule, provision and manage any kind of computation in the observatory network, at any location, independent of the characteristics of the executing environment. CEI provides infrastructure covering two primary levels: the Infrastructure as a Service (IaaS) management of compute resources, such as virtual machines, using cloud computing technologies, and the dynamic scheduling and management of system processes. The lower-level IaaS capabilities manage Operational Units (e.g. virtual machines) in elastic computing environments, enabling elastic provisioning of compute resources based on system needs and user demand. CEI applies the Nimbus cloud computing software to OOI CyberPoP deployments, so that these deployments appear as cloud resources similar to external academic and commercial compute providers. In particular, CEI provisions execution engines that support the execution of higher-level processes. On the higher level, CEI provides scheduling and management of executable processes, for instance to deploy system service processes and data stream processing algorithms on execution engines. The Elastic Processing Unit (EPU) provides a framework for highly available (HA) services. In later releases, CEI will support execution of OOI and user processes on Amazon's Elastic Compute Cloud (EC2), as well as on academic compute infrastructure (for instance XSEDE and the Open Science Grid) and other commercial facilities. CEI also provides the tools to launch the system, to monitor its state of health, and to control its execution policies.
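
As an illustration of the EPU concept, the sketch below keeps a fixed number of worker processes alive and restarts any that die. The real CEI provisions virtual machines through Nimbus or EC2; Python's multiprocessing stands in here purely for illustration.

    import multiprocessing as mp
    import time

    def worker():
        # Placeholder for a long-running service process.
        while True:
            time.sleep(1)

    def supervise(n_replicas=3, poll_interval=2.0):
        workers = [mp.Process(target=worker) for _ in range(n_replicas)]
        for w in workers:
            w.start()
        while True:
            time.sleep(poll_interval)
            for i, w in enumerate(workers):
                if not w.is_alive():                  # detect failure...
                    workers[i] = mp.Process(target=worker)
                    workers[i].start()                # ...and re-provision

    if __name__ == "__main__":
        supervise()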

The Data Management (DM) subsystem enables information distribution, persistence and access, making information available across the observatory network over extended periods of time. Information includes observational data and derived data products, as well as descriptive metadata and system-internal information required for the operation of the Integrated Observatory. DM manages an inventory of all information artifacts together with their metadata in the form of the OOI Common Data and Metadata Model. Data distribution is based on the COI Exchange messaging infrastructure and provides a topic-based data publish/subscribe infrastructure. The Data Management subsystem also provides ingestion, transformation and presentation services. Ingestion takes real-time observational and external data and metadata and places them into Integrated Observatory repositories represented in the OOI Common Data and Metadata Model. Transformation services support derivation of data products based on real-time data streams, as well as syntactic data format transformations and ontology-supported semantic mediation. A data access and presentation strategy supports the needs of diverse communities of use, providing flexible search and navigation of data products based on metadata and other search criteria. Technologies applied in DM include iRODS (Integrated Rule-Oriented Data System) and CouchDB for data preservation and replication, and ElasticSearch to support indexing and search. Standard models that apply include the Unidata Common Data Model, GML and OGC SWE as design references for the common data model, the VSTO (Virtual Solar-Terrestrial Observatory) ontology model, and the ESG (Earth System Grid) Faceted Search for data access based on vocabularies from the Marine Metadata Interoperability (MMI) project.
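
As a concrete illustration of the persistence layer, the sketch below stores a data-product metadata record in CouchDB via its HTTP API, assuming a local CouchDB instance that accepts unauthenticated writes; the database name and document fields are illustrative and not the actual OOI Common Data and Metadata Model.

    import requests

    COUCH = "http://localhost:5984"
    DB = COUCH + "/dm_inventory"

    requests.put(DB)  # create the database (returns an error doc if it exists)

    doc = {
        "type": "DataProduct",
        "name": "CTD temperature, mooring CE01",
        "units": "degC",
        "stream": "data.ctd.temperature",
    }
    resp = requests.post(DB, json=doc)
    print(resp.json())  # e.g. {'ok': True, 'id': '...', 'rev': '...'}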

The Sensing and Acquisition (SA) subsystem provides capabilities for instrument and instrument platform management and data acquisition, as well as for observatory management and data product generation. SA services are relevant for the operational management of the CGSN and RSN observatories and their assets, and for PI-provided instruments deployed on the OOI infrastructure. All instrument and platform resources are managed through agents that encapsulate device drivers, providing a consistent observatory interface for controlling a diversity of specialized devices. The drivers directly interface with vendor-provided hardware and software. Technologies include existing instrument driver software environments, such as MBARI's SIAM (Software Infrastructure and Application for MBARI MOOS), the MIT MOOS architecture (Mission Oriented Operating Suite), and the Antelope platform. Device control is based on the IEEE 1451 and SensorML standards. Instrument identification and activation is supported through the MBARI PUCK instrument interface.
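
The agent/driver split can be illustrated with the minimal sketch below: a uniform agent wraps a vendor-specific driver behind a small abstract interface. All class and method names are illustrative, not the actual SA interfaces.

    from abc import ABC, abstractmethod

    class InstrumentDriver(ABC):
        """Vendor-specific protocol logic lives behind this interface."""
        @abstractmethod
        def connect(self): ...
        @abstractmethod
        def sample(self) -> dict: ...

    class CTDDriver(InstrumentDriver):
        """Placeholder driver for a CTD; returns canned data."""
        def connect(self):
            print("opening serial/TCP session to instrument")
        def sample(self):
            return {"temperature": 8.31, "conductivity": 3.29}

    class InstrumentAgent:
        """Presents one consistent observatory interface over any driver."""
        def __init__(self, driver: InstrumentDriver):
            self.driver = driver
        def start(self):
            self.driver.connect()
        def acquire_sample(self) -> dict:
            return self.driver.sample()

    agent = InstrumentAgent(CTDDriver())
    agent.start()
    print(agent.acquire_sample())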

The Analysis and Synthesis (AS) subsystem (starting in Release 2) supports a wide variety of data product analysis, manipulation, generation and presentation capabilities, in particular advanced visualizations. AS provides a flexible workflow framework that orchestrates user interactions and resource access for many forms of advanced activity. An interactive workspace component provides access to standard sets of analysis and visualization tools for interactive applications directly driven by the user. AS provides tools to define virtual collaboration projects leveraging social networking concepts, for virtual observatories, classrooms and laboratories. AS provides the interfaces to integrate tools and applications provided by OOI's science and education users. Other AS capabilities include science event detection, data assimilation, and numerical model integration. Technologies applied include Kepler and Pegasus for distributed workflow execution and resource mapping. AS will provide a framework for the integration and execution of scientist-provided numerical ocean models such as the Regional Ocean Modeling System (ROMS) and the Harvard Ocean Prediction System (HOPS). A suite of integrated applications, including Matlab, Kepler, and other workflow editors, will support process and model specification, simulation, analysis, and visualization.
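
As a toy illustration of the workflow idea, the sketch below declares processing steps and their dependencies as a DAG and runs them in dependency order. Real AS workflows execute on Kepler or Pegasus; this in-process version (Python 3.9+, graphlib) only illustrates the concept.

    from graphlib import TopologicalSorter

    def load(ctx):  ctx["raw"] = [8.1, 8.3, 8.2]
    def qc(ctx):    ctx["clean"] = [v for v in ctx["raw"] if 0 < v < 40]
    def mean(ctx):  ctx["mean"] = sum(ctx["clean"]) / len(ctx["clean"])
    def plot(ctx):  print("mean temperature:", ctx["mean"])

    steps = {"load": load, "qc": qc, "mean": mean, "plot": plot}
    deps = {"qc": {"load"}, "mean": {"qc"}, "plot": {"mean"}}  # step -> prerequisites

    ctx = {}
    for name in TopologicalSorter(deps).static_order():
        steps[name](ctx)  # runs load, qc, mean, plot in order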

The Planning and Prosecution (PP) subsystem (starting in Release 3) provides situational awareness and multi-resource command and control at the level of the entire OOI Integrated Observatory. PP capabilities will orchestrate collections of resources managed by the other subsystems, supporting closed-loop, optimized observe-analyze-act workflows. PP supports the definition of long-term and adaptive observational missions. PP provides generalized resource planning and control activities that will be applied to plan, schedule, and prosecute multi-objective observational programs. PP provides an event-response framework and also autonomous control of mobile platforms, such as intermittently connected, low-bandwidth global mooring controllers, AUVs, and, to the extent possible, gliders and profilers. Technologies applied include ASPEN and CASPER from NASA JPL (Jet Propulsion Laboratory) for resource planning and control, and MIT MOOS (Mission Oriented Operating Suite) as autonomy software for mobile platforms such as gliders and AUVs. In addition, the behavior-based MOOS-IvP software provides autonomous vehicle navigation in the presence of multiple objectives.
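
The event-response concept can be sketched as rules that pair a detection predicate with a response action, as below. Thresholds and actions are illustrative; actual PP planning and autonomy rely on ASPEN/CASPER and MOOS-IvP.

    rules = [
        # (detection predicate over a sample, response action)
        (lambda s: s["temperature"] > 12.0,
         lambda s: print("event: thermal anomaly -> increase sampling rate")),
        (lambda s: s["battery_v"] < 11.0,
         lambda s: print("event: low battery -> shed non-critical loads")),
    ]

    def on_sample(sample):
        for detect, respond in rules:
            if detect(sample):
                respond(sample)

    on_sample({"temperature": 13.2, "battery_v": 12.4})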

Implementation Subsystems

The Marine Integration (MI) subsystem provides the core architecture and capabilities to interface with individual sensors, sensor platforms, platform controllers, observatory infrastructure such as power and communication bandwidth controllers, and other external observatory management systems. These capabilities enable the production of instrument-specific drivers and data product generation algorithms. MI also provides the Instrument Development Kit (IDK).

The External Observatory Integration (EOI) subsystem provides capabilities for the integration of external data sources, data consumers and observatories with the OOI Integrated Observatory. External observatories include IOOS, Neptune Canada and the WMO. This enables the acquisition of external data products and their metadata into the OOI. It will also support the delivery of data products to users of these external observatories in community-specific formats, such as the format suggested for IOOS Regional Association (RA) data providers. EOI will provide Neptune Canada and WMO support in later releases. Technologies include NetCDF as the data import/export format, OPeNDAP/DAP/THREDDS as a data externalization server and catalog, and ERDDAP as a data mediation engine.
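
As an illustration of the NetCDF import/export path, the minimal sketch below writes a small data product with the netCDF4 Python package; the variable names and CF-style attributes are illustrative.

    from netCDF4 import Dataset

    ds = Dataset("ctd_temperature.nc", "w")
    ds.createDimension("time", None)              # unlimited record dimension
    t = ds.createVariable("time", "f8", ("time",))
    temp = ds.createVariable("temperature", "f4", ("time",))
    t.units = "seconds since 2011-01-01 00:00:00"
    temp.units = "degC"
    temp.standard_name = "sea_water_temperature"  # CF-style metadata

    t[:] = [0.0, 60.0, 120.0]
    temp[:] = [8.1, 8.3, 8.2]
    ds.close()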

The User Experience (UX) subsystem provides a uniform experience to all users of the OOI Integrated Observatory, through web-based and mobile user interfaces. UX provides designs, strategies and user workflows targeted at specific user groups. It leverages the COI presentation framework as the platform for UI development and deployment.

Fundamental Strategies

The Integrated Observatory Network software integration strategy is based on two core principles: service orientation and asynchronous reliable messaging. A high-performance message Exchange provides the communication conduit, with dynamic routing and interception capabilities, for all interacting elements of the system. The message interface is defined independently of any implementation technology. The messaging infrastructure provides scalability, reliability and failure tolerance. A service-oriented architecture is key to managing and maintaining numerous and complex applications within a heterogeneous, distributed system of systems. All functional capabilities and resources are represented exclusively through services with precisely defined service interfaces. Services can be accessed independently of location throughout the Integrated Observatory Network through secure messaging. External access capabilities exist, such as a gateway for HTTP service access. Services have defined interfaces, independent of implementation platforms or integrated technologies.
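
The principle that capabilities are reached only through named service interfaces, never through direct object references, can be sketched as follows: a request envelope names a service and an operation, and a dispatcher routes it to whichever implementation is registered. The envelope format and registry shown are illustrative, not the actual COI wire protocol.

    import json

    class ResourceRegistry:
        """Example service; any object exposing named operations would do."""
        def find(self, restype):
            return [restype + "-001", restype + "-002"]  # canned result

    services = {"resource_registry": ResourceRegistry()}  # name -> implementation

    def handle_message(raw):
        """Dispatch one request envelope to the named service operation."""
        msg = json.loads(raw)
        svc = services[msg["service"]]
        result = getattr(svc, msg["op"])(**msg["args"])
        return json.dumps({"status": "ok", "result": result})

    request = json.dumps({"service": "resource_registry", "op": "find",
                          "args": {"restype": "Instrument"}})
    print(handle_message(request))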

The Integrated Observatory Network deployment strategy is based on the virtualization of computing within cloud-computing environments and a high-speed OOI national network infrastructure. It leverages the COI capability container, which provides all essential software infrastructure capabilities wherever system- or user-needed software capabilities are required. Capability containers can be deployed at any cloud execution site across the observatory network. Possible deployment sites include platform controllers on remote, intermittently connected coastal and global moorings, computational elements placed in the payload bays of AUVs and gliders, and the full range of terrestrial CI deployments (CyberPoPs). The deployment of OOI marine assets starts with Release 2.

The network strategy supports scalable, low-latency, high-bandwidth and secure distribution of science data in real time to end users and affiliated organizations across the country and worldwide. It applies global load balancing to route traffic and data access to the CyberPoP deployments most proximate or best suited to satisfy requests at any given time. The network strategy is essential in providing a robust, geographically redundant, highly available Integrated Observatory Network presence.

The OOI multi-facility strategy supports the participation of multiple independent organizations and communities in the OOI Integrated Observatory. Each facility represents its own domain of authority, with its own managers and applicable policies, while supporting the participation of its users and resources in the Integrated Observatory. Consistent governance is applied throughout the system, determined by electronically represented agreements and contracts between the participating facilities. This model neither requires nor enforces a central authority or central policy rules. Instead, the participation of the facilities and their principals is fully subject to the agreements between the facilities, with policy enforced consistently by the integration infrastructure. The integrated observatory network is open and can be joined by user facilities. The multi-facility strategy will be applied starting with Release 2.

Project Organization and Transition to Operations

The capabilities of the integrated observatory are designed modularly and support incremental development and transition to operations. The OOI CI implementing project will deliver three incremental releases that increasingly support user applications and processes, beginning with automated data preservation and distribution and ending with advanced concepts of interactive ocean science, including instrument and observatory interactivity that exploits knowledge gained through observations and analyses.

The three releases will support user applications as follows:

  • Release 1 provides a fully capable automated end-to-end data distribution infrastructure, supporting the needs of data consumers such as data analysts and numerical modelers, with basic marine instrument support elements.
  • Release 2 adds end-to-end control of how data are collected, supporting more advanced processes of instrument providers with managed instrument control. It also provides capabilities for the real-time generation of OOI data products.
  • Release 3 adds end-to-end control of how data are processed, supporting more advanced processes of instrument providers and data product consumers, as well as on-demand measurements supporting event-driven opportunistic observations.

This schedule is presented in more detail in the Transition to Operations. Release 1 scope is defined in Release 1 Scoping, Release 2 scope in Release 2 Scoping, and Release 3 scope in Release 3 Scoping; Release 2 deployment is described on a separate page.
