
General

Input Sources

Integration process

The Release 1 deployment configuration is a working document owned by the ITV team, serving as an interface specification for the following teams:

  • COI:
    • Identification of service boot levels based on dependencies
    • Identification of which CC type (Python, Java) is used for which service/process
  • CEI:
    • Identification of Deployable Types, the service processes they contain, and the app files to start
    • Definition of EPU policy per Deployable Type
  • DM:
    • Identification of topic tree (exchange point) setup when connecting a service as a subscriber
  • Integration:
    • Documentation of service dependencies
    • Management of configuration list
  • Operations:
    • Security and operations concerns (firewall, inbound/outbound connections, etc.)
    • List of technologies
  • All subsystems:
    • Definition of production app file names for service processes
    • Links to service interface architecture pages
    • General awareness of R1

Integration Steps

Plan

  • Define two types of integration tests
    • Trial-based: more than one service in one Python capability container (a sketch follows this list)
    • Integration test framework based: use a launch plan to start a system configuration, run tests, and collect results in log files
  • System tests
    • Manual: bring up the system using the launch plan, connect UI servers, execute manual UX workflows
    • Automated: use an automated UI test tool such as Selenium
  • Manual UI-based integration runs with representatives from all teams
    • Identification of action items (short-term, mid-term)
    • Fix action items and retry the integration run within the same week
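
A minimal sketch of the trial-based style, for orientation only: it assumes the IonTestCase helper base class from ioncore-python (the module path, the _start_container/_spawn_processes/_stop_container helpers, and the service descriptor format are recalled from memory and should be verified against the current code; the two service module paths are placeholders).

    # test_two_services_trial.py -- run with: trial test_two_services_trial
    from twisted.internet import defer
    from ion.test.iontest import IonTestCase   # assumed helper base class

    class TwoServicesInOneContainerTest(IonTestCase):
        """Start two services in a single Python capability container."""

        @defer.inlineCallbacks
        def setUp(self):
            yield self._start_container()       # one CC hosting both services
            services = [
                {'name': 'resource_registry',   # placeholder service descriptors
                 'module': 'ion.services.coi.resource_registry',
                 'class': 'ResourceRegistryService'},
                {'name': 'datastore',
                 'module': 'ion.services.coi.datastore',
                 'class': 'DataStoreService'},
            ]
            yield self._spawn_processes(services)

        @defer.inlineCallbacks
        def tearDown(self):
            yield self._stop_container()

        @defer.inlineCallbacks
        def test_services_respond(self):
            # Exercise a cross-service call here; kept trivial in this sketch.
            yield defer.succeed(None)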

Integration Point 1 (week 3 of R1C3 develop)

Plan:

  • First automatic integration test case: EOI data ingest and download, driven by the launch plan (a harness sketch follows this list)
    • Launch plan provides the system baseline, plus a test-specific variant for this test
    • Contains: launch plan, CentOS image, EC2, AIS, backend services, Cassandra
  • AIS test cases (trial style) with the backend mocked out
    • At least one test case per AIS operation (NOTE: a couple of AIS workflows are still missing tests)
    • May fail initially
  • Integration of Grails UI with AIS (list data sources screen; calls AIS with the backend mocked out)
    • Can be based on the LCA demo
    • Contains: Grails UI with a simple screen (no CIlogon), ioncore-java, AIS with mocked-out backend
  • (optional) Integration of Grails UI with CIlogon and backend
    • Provide a test-only mode to avoid the actual CIlogon call and provide a mock
    • Contains: Grails UI with logon screen, ioncore-java, AIS, identity registry
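
A rough harness sketch for the launch-plan-driven test style described above. The cloudinitd subcommands and flags, the launch plan path, and the test module name are assumptions and should be checked against the installed cloudinit.d version and the actual repository layout.

    # run_eoi_ingest_test.py -- boot the system via the launch plan, run the test,
    # collect run status, then tear the system down.
    import subprocess
    import sys

    LAUNCH_PLAN = "launch-plans/r1-eoi-ingest/main.conf"   # assumed path
    RUN_NAME = "eoi-ingest-ip1"

    def run(*args):
        print("+ " + " ".join(args))
        return subprocess.call(list(args))

    def main():
        # 1. Boot the system configuration described by the launch plan.
        if run("cloudinitd", "boot", LAUNCH_PLAN, "-n", RUN_NAME) != 0:
            sys.exit("launch plan failed to boot")
        try:
            # 2. Run the EOI ingest/download test against the booted system.
            rc = run("trial", "itv.test_eoi_ingest_download")  # hypothetical test module
        finally:
            # 3. Record run status and terminate the run.
            run("cloudinitd", "status", "-n", RUN_NAME)
            run("cloudinitd", "terminate", "-n", RUN_NAME)
        sys.exit(rc)

    if __name__ == "__main__":
        main()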

Risks and Potential Blockers:

  • For the first EOI integration test, the following tasks need to be done:
    • CEI
      • Need to be EPUified.
    • Operations
      • Need a 2nd CentOS image to use with the launch plan.
  • AIS Test Cases
    • Maurice to work with AIS team to identify any missing subsystem functionality
    • All responsible AIS team members to identify GPB for each API and dependent service
  • Integration of Grails UI with AIS
    • Utilize Stephen's list data sources screen to select available data sources.
      • This calls Grails, which calls through ioncore-java to AIS.
      • Use the mocked-out AIS back-end to return one data set (a mock sketch follows this list)
    • Select the data set
      • This calls Grails, which calls through ioncore-java to AIS.
      • Use the mocked-out AIS back-end to return metadata details for the data set.
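
For reference, a minimal sketch of what the mocked-out AIS back-end behavior amounts to for these two steps. It is framework-agnostic and illustrative only: the method names and dictionary payloads stand in for the real GPB request/response messages used by AIS.

    # mock_ais_backend.py -- illustrative stand-in; real AIS operations exchange GPBs.
    CANNED_DATASET = {
        "id": "dataset-0001",            # placeholder identifier
        "title": "Sample dataset",
        "source": "mock",
    }

    class MockAISBackend(object):
        """Return one canned data set regardless of the query."""

        def list_data_sources(self, request=None):
            # Backs the "list data sources" screen (WF 100 style call).
            return [CANNED_DATASET]

        def get_data_set_detail(self, dataset_id):
            # Backs the "select the data set" step (WF 101 style call).
            detail = dict(CANNED_DATASET)
            detail["variables"] = ["time", "lat", "lon", "value"]
            return detail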

Integration Point 2 (week 6/7/8 of R1C3 develop)

Plan: NOTE: Person(s) responsible for driving each integration/test item are identified in red

  • End-to-end system launch plan (Tim F., Jamie) DRIVE THIS ON EC2 FIRST THEN MOVE TO LOCAL VMs
    • Bring up the system
    • Bring up all services on all levels (with init and ready programs for the levels and Deployable Types defined)
    • Usable for manual input and viewing
    • Use manually started scripts/programs to perform actions (such as ingesting data)
    • Contains: launch plan, CentOS image, Nimbus, UCSD hardware, backend services, Cassandra
  • More automatic integration test cases (Jamie, Tim F.)
    • Automate the ingestion test that the EOI/DM team put together
      • NOTE: This is dependent on End-to-end system launch plan test identified above.
      • Full Ingest test that Dave F./Tim L. have been working on
      • NOTE: NO LONGER DOING THIS PART FOR IP2: UI Selenium test tied in with the launch plan. Datasets are populated automatically via David's canned datasets.
        • Have basic Selenium tests working. Almost have a JUnit driver version working that could be kicked off as part of the full end-to-end launch plan
    • Contains: launch plan, CentOS image, Nimbus, UCSD hardware, Grails UI, AIS, backend services, Cassandra
  • AIS test cases with back-end enabled (Tim A., Maurice)
    • Utilize David's provided data sets.
    • Tests with the live backend for the following WFs:
      • 100 (get data sets)
      • 101 (show details)
      • 102 (download)  (Just returns a valid URL, doesn't actually download)
      • 103 (subscribe to dataset)?
      • 104 (list subscriptions)?
      • 105 (publish dataset)?
      • 106 (monitor publications)? (Use Anonymous user to view publications pre-loaded)
      • 107 (register user)
      • 108 (return user roles)? (Contact Bill for test status)
      • 109 (manage ooi resources)
    • Some test cases may fail (report a success percentage)
    • Some of the backend services might themselves return mock responses or provide canned data
  • Integration of Grails UI with AIS and working backend (list data sources screen, calls AIS, which calls DM/COI) (Tim A., Maurice)
    • Show the following WFs with the UI and live backend running statically on Amoeba:
      • 107 (Login + Register) - We already have login capability working, but no landing page that allows a user to click a "login" button - Registration screen added.
      • 108 (return user roles at login time)
      • 100 (get all published datasets) + (geospatial search to limit returned data sets)
      • 101 (show details of dataset)
      • 105 (publish new dataset)?  (use known valid dataset) - Will we have the backend for this yet?
      • 106 (monitor publications - show only your published data sets) + Change state of dataset from private to published
      • 102 (download dataset)? Dependent on Chris for the IOSP for NetCDF-Java (i.e. getting it out of THREDDS) - We'll skip this WF for IP2 testing.
      • 103 (subscribe to dataset)? Logic not quite there yet.
      • 104 (list subscriptions)?  Can we get email on a change? - Likely won't have back end functionality available, we'll skip this WF for IP2 testing.
      • 109 (display registered users) - We won't have the UI for this yet next week.  Skipping this WF for IP2 Testing.
    • AIS might still return mock data for some
  • First automated UI test using Selenium (Tim A., Jamie, Tom) - a sketch follows this list
    • Pick the easiest workflow or a mock UI
    • Don't test the CIlogon redirect etc.; use anonymous or test mode
  • Unit/integration tests on the Grails side (Adam S, Tom)
    • Test from the Java level without the Grails/Tomcat layer (JUnit tests already defined, but more will be added as AIS functionality becomes available)
  • Basic end-to-end performance test case simulating AIS to backend calls (such as getDataResource) (David S., Matt R.) - a timing sketch follows this list
    • For example, register 100 data resources, then call getDataResource() and measure latency
    • Dependent on garbage collection task and apps
  • Added functionality (Adam S., Tom)
    • COI: Add permissions and access control
      • Access control can be demoed as part of WF 108 above
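
A sketch of what the first Selenium test might look like, assuming the Python WebDriver bindings, an anonymous/test mode (no CIlogon redirect), and a placeholder Grails URL; none of these reflect the real page structure.

    # test_ui_smoke.py -- first automated UI test sketch (run against a local Grails server).
    import unittest
    from selenium import webdriver

    GRAILS_URL = "http://localhost:8080/ooici/"   # assumed local Grails URL

    class GrailsUISmokeTest(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Firefox()

        def tearDown(self):
            self.driver.quit()

        def test_landing_page_loads(self):
            # Anonymous/test mode: the page should load without a CIlogon redirect.
            self.driver.get(GRAILS_URL)
            self.assertTrue(len(self.driver.title) > 0)

    if __name__ == "__main__":
        unittest.main()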
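
And a timing sketch for the basic end-to-end performance test case; ais_client and its register_data_resource/get_data_resource methods are hypothetical stand-ins for however the test ends up talking to AIS.

    # perf_get_data_resource.py -- latency measurement sketch only.
    import time

    def run_latency_test(ais_client, n_resources=100):
        """Register n resources, then measure getDataResource-style call latency."""
        ids = [ais_client.register_data_resource({"name": "perf-res-%03d" % i})
               for i in range(n_resources)]

        latencies = []
        for res_id in ids:
            start = time.time()
            ais_client.get_data_resource(res_id)
            latencies.append(time.time() - start)

        latencies.sort()
        n = len(latencies)
        print("min/median/max latency (s): %.3f / %.3f / %.3f"
              % (latencies[0], latencies[n // 2], latencies[-1]))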

Risks and Potential Blockers:

  • TBD

Integration Point 3 (week 11/12 of R1C3 develop)

Plan: NOTE: Person(s) responsible for driving each integration/test item are identified in red

  • More automatic integration test cases (Various owners)
    • Finish automating the end-to-end boot plan from IP2 testing. (Jamie)
    • Add trial integration tests at boot levels verifying that the level is really ready (e.g. at level 4, verify data is loaded). Run from the ready program at selected boot levels. (Jamie) A readiness-check sketch follows this list.
    • Instrument agent fronting an NMEA GPS instrument simulator, for ingestion of acquired measurement packets (Alon)
    • Verification Tests for IOC (Tim A./Alan C.)
      • Show whatever verification tests exist can be executed (even if they fail)
      • Create a spreadsheet that traces automated buildbot integration tests to verification tests (which integration tests serve as verification tests and are approved as such?)
  • ANF demo for CEI scaling (Dave F.)
    • Important sync point for IOC to make it work (Dave F.)
    • TBD: Use more of R1 functionality (such as resource registry, ingest, event notifications) (Dave F. maybe in Transition)
  • Automatic system test with UI: Use Grails Server for this. (Roger)
    • Test 1: Use the R1 deploy script to run against a local Grails server/local CC, kicking off the Selenium tests manually
    • Test 2: Use the deployed system after boot level 10 is reached, use the Grails server, then kick off the Selenium tests manually (note for Transition: automate the tests via buildbot)
    • Have a subset of UX workflows covered (no CIlogon) with automated UI test tool such as Selenium
  • Operational Testing Environment Ready (Mark, Adam)
    • Test the official URL for the Grails application (i.e. what the user would hit).
    • Security and firewall set up, DNS names
    • Error Monitoring of system (software on order. Manual at this point)
    • Log Consolidation (software on order. Manual at this point)
    • Operations procedures defined.
      • who is responsible for starting the system
      • what happens in case of error
      • what happens in case of reconfiguration
      • how do you find an error
      • Others?
  • Manual system acceptance (candidate) UX walk-through scenarios (demo style, practicing the IOC demo) (Tim, Jamie)
    • End-to-end system launch plan
    • Operations brings up the system
    • Walk through all workflows
    • Show error scenarios
      • Internal errors - something fails (kill something on the backend)
      • User error - enter incorrect data and verify the error is reported; enter a wrong URL, etc.
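
For the boot-level readiness tests above, a sketch of the shape such a check could take, assuming cloudinit.d ready programs signal readiness via a zero exit status; the dataset-count check is a placeholder for whatever "data is loaded" means at that level, and query_dataset_count is a hypothetical helper.

    # level4_ready.py -- example ready-program check for a boot level.
    import sys

    EXPECTED_DATASETS = 5          # assumed number of preloaded datasets

    def query_dataset_count():
        # Placeholder: in the real check this would ask the resource registry /
        # data store how many of the canned datasets are present.
        raise NotImplementedError("wire this to a registry/data store client")

    def level_is_ready():
        return query_dataset_count() >= EXPECTED_DATASETS

    if __name__ == "__main__":
        # Exit 0 when the level is ready, non-zero otherwise.
        sys.exit(0 if level_is_ready() else 1)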

Risks and Potential Blockers:

  • UI not complete enough for some of these tests to properly proceed

Work for Transition

Bucket:

  • "Chaos monkey": Randomly kill service workers, VMs, etc?
  • Negative tests
    • Show system in the presence of errors (processes fail)
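
A small sketch of what the "chaos monkey" bucket item could start as: pick one worker process at random and kill it. Process discovery here just pattern-matches with pgrep; in the real system this would more likely go through the EPU controller or a supervisor, and the "ionworker" pattern is only an assumption.

    # chaos_monkey.py -- randomly kill one matching worker process.
    import os
    import random
    import signal
    import subprocess

    def find_worker_pids(pattern="ionworker"):      # assumed process name pattern
        try:
            out = subprocess.check_output(["pgrep", "-f", pattern])
        except subprocess.CalledProcessError:
            return []                               # pgrep exits non-zero if no match
        return [int(pid) for pid in out.split()]

    def kill_random_worker():
        pids = find_worker_pids()
        if not pids:
            print("no worker processes found")
            return
        victim = random.choice(pids)
        print("killing pid %d" % victim)
        os.kill(victim, signal.SIGKILL)

    if __name__ == "__main__":
        kill_random_worker()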

Detailed Planning:

Work Assignment

  • TBD

Steps:

  1. Define the basic launch plan structure with boot levels defined
    1. Describe basic tests for each boot level
  2. Define a launch plan (for cloudinit.d) for a system with the data store, Cassandra, and the lower boot levels
  3. Run a test that exercises the data store and Cassandra (a smoke-test sketch follows this list)
    1. Trial unit tests already exist
  4.  
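
A smoke-test sketch for step 3, written against the pycassa client; the keyspace and column family names are placeholders and would need to match the R1 Cassandra schema (the existing trial unit tests remain the authoritative tests).

    # test_datastore_cassandra_smoke.py -- write/read round trip against Cassandra.
    import unittest
    import pycassa

    class CassandraSmokeTest(unittest.TestCase):
        def setUp(self):
            self.pool = pycassa.ConnectionPool("ooici",                 # assumed keyspace
                                               server_list=["localhost:9160"])
            self.cf = pycassa.ColumnFamily(self.pool, "datastore")      # assumed CF name

        def tearDown(self):
            self.pool.dispose()

        def test_write_then_read(self):
            self.cf.insert("smoke-key", {"value": "hello"})
            row = self.cf.get("smoke-key")
            self.assertEqual(row["value"], "hello")

    if __name__ == "__main__":
        unittest.main()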

Potential Problems and Roadblocks:
