|The lcaarch project is a Git source code repository and integrated Python code base. It provides basic framework code for bootstrapping a CI-wide prototype and for running prototype services, together with the actual service implementations and their unit-level and system-level test cases. The scope of the services is the full functionality of the OOI CI Release 1 deliverable, transcending any subsystem prototypes. Initially, all services start out as stubs with only their message sending and receiving interfaces, and have the potential to grow into a fully functional operational architecture prototype as required for LCA.|
Using Git (install it first if needed), get a copy of the source code tree with:
Run the bootstrap script from the lcaarch directory to see if things work:
Run all test cases recursively from lcaarch directory with:
There are many individual prototype development projects in OOI CI, mostly structured by subsystem. Some of them are bigger, some smaller; some active, some historic; some on UCSD's gitosis server, some on GitHub. This is good.
This project is an effort to integrate the major work going on in the individual development projects into a coherent system. The emphasis is not so much on solving the functional and technical tasks (e.g. how to interface with the Cassandra datastore, or how to provision a contextualized VM on Nimbus and EC2). Instead, the focus is on the services of the integrated system and their dependencies. What messages do the services react to? What messages do they send in response? How are the messages and their structure defined? What are some system level scenarios that exercise all the services?
In order to achieve this goal, it is sufficient if all services are stubbed out in their basic form only. Additional functionality can be integrated as possible and needed over time. The project has the potential to grow into a fully functional operational prototype that can demonstrate the system's core capabilities during LCA.
We start small. Things are expected to change in the beginning. The initial hurdle to using the lcaarch project should be low (meaning very few required dependencies).
The following tells you what you should know about the organization and use of the lcaarch project.
lcaarch runs at least on Macs and in Ubuntu Linux environments. You don't need to run your own instance of a message broker or key/value store. By default the servers on amoeba.ucsd.edu are used.
See above for the quick start instructions. Read the README.txt for installation information about dependencies and for running the core functions and test cases. The test cases in the source code tree are good examples of the functionality and how to use it, and for test driven development. There is a growing number of test cases that demonstrate a wealth of unit level, service level, integration level (multiple services working together) and system level tests.
|| Named "lcaarch". The root of the project source tree. Everything is expected to be started from here (twistd and trial)
|| Python source code for the Integrated Observatory Network (i.e. ION) prototype
|| Anywhere in the source tree: these directories contain trial test cases
|| Agent processes and base classes for agent processes
|| Common base classes for bootstrapping the system or to represent a running, communicating process in the system
|| Capability Container mechanics
|| Information and data related modules that are not subsystem (COI, DM) services, but actual helper classes to access data stores or to manage ION data
|| Core classes for managing interactions, conversations, protocols
|| Base classes for services
|| Actual service implementations (as processes) with their tests, structured by subsystem. Subdirectories may be added where necessary. Extensive functional code may be placed elsewhere.
|| Base classes for ION trial unit tests
|| Utility modules, functions and classes
|| Files in here are generated by the logging mechanism. There are trace level logs (i.e. copies of the console output with debug information), explicit logs that were sent via messages to the logger service, and other logs, such as all messages sent in the system
|| Any kind of other resource files needed for running the system
|| Configuration files for the ION system, such as the central system configuration file (ion.config) and specific config files. Many files are in the form of executable (by eval()) Python statements, such as constant definitions of Python dictionaries and lists.
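For illustration, a config file of that style and one way it might be loaded. The file contents and keys below are made up for this sketch; the real ion.config keys differ:

```python
# A hypothetical config file body: a single Python dict literal
# (constant definitions of dictionaries and lists, evaluated with eval()).
CONFIG_TEXT = """{
    'ion.services.coi.datastore': {'backend': 'memory'},
    'startup_processes': ['logger', 'datastore'],
}"""


def load_config(text):
    # Config files hold executable Python literals; eval() turns the
    # text into the corresponding dict/list objects.
    return eval(text)


config = load_config(CONFIG_TEXT)
```

After loading, `config['startup_processes']` is an ordinary Python list and nested dicts are addressed by key.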
|| Configuration files for the Python logging component
|| Magnet container start scripts
|| Purpose and Explanation
|| Base class for all resource agent processes. Note that they are not services, but they may call services.
|| The core definitions for a process running within a capability container. A process has a unique identifier (container id + local id) and can send and receive messages via the container's AMQP messaging interface. Process instances are spawned by another process (called the supervisor process).
The default implementation is that when a message with the header "op=<performative>" is received by the process, the class method "op_<performative>" is called. Typical service subclasses only have to define the operation methods they support.
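This dispatch convention can be sketched as follows. The class names and the plain-dict message format here are illustrative simplifications, not the actual lcaarch/Twisted API:

```python
class BaseProcess:
    """Sketch: dispatch an incoming message to an op_<performative> method."""

    def receive(self, message):
        # The "op" header selects which op_* method handles the message.
        op = message.get("op")
        handler = getattr(self, "op_" + op, None)
        if handler is None:
            raise ValueError("unsupported operation: %s" % op)
        return handler(message.get("content"))


class EchoService(BaseProcess):
    # Subclasses only define the operations they actually support.
    def op_echo(self, content):
        return content
```

A message with header op=echo is thus routed to `op_echo`; an unknown op raises an error instead of being silently dropped.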
|| Client base class for performing synchronous service calls in RPC style (but non-blocking for other processes)
|| Central class for bootstrapping the ION system in a container. Note, trial test cases have a different way of starting the container.
|| Loaded by the ion package by default. Basic initializations of all code in the system, such as setting up the logging system, or loading the central config file
|| Base class for a service process. On initialization of a service process, the "slc_init()" method is called.
|| Example service process implementation to use as reference.
|| Utility functions for processes
|| Base class for a twisted trial test cases. All such classes are executed if you do a "trial ion" from the lcaarch dir. This base class helps to set up the container's operational environment and to start and initialize the core services, if needed for a test case.
- package names are lower case only, one word no underscores
- module.py names are lower case only, with underscores as needed. Keep them short.
- Class names are CapitalCase (e.g. ServiceClass, MyService)
- Functions and class methods start with a lower case letter and then are either func_name or funcName
- Variables start with a lower case letter and then are either varname, var_name or varName
- Constants may be all upper case (e.g. CONSTANT = 'some')
- test case modules start with test_
- test case class names have the word Test in them (e.g. SomeTest or TestSomething)
- test case methods start with test_ (e.g. test_function1)
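A small module that follows these conventions. All names and the file path here are hypothetical, for illustration only:

```python
# Hypothetical module, e.g. ion/services/coi/test/test_sample.py
# (lower case module name with underscores, inside a test/ directory).

DEFAULT_TIMEOUT = 30  # constants may be all upper case


class SampleService:  # class names are CapitalCase
    def handle_request(self, request_id):  # methods start lower case
        queue_name = "sample.%s" % request_id  # variables start lower case
        return queue_name


class SampleServiceTest:  # test case class names contain "Test"
    def test_handle_request(self):  # test methods start with test_
        service = SampleService()
        assert service.handle_request("42") == "sample.42"
```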
Doxygen automatically creates source code documentation every 30 minutes. This is VERY useful.
The following tells you how to use the project, develop your own code and contribute back.
Know which part of the system you are working on (e.g. a subsystem service, a specific agent, etc.). If you are working on the framework and base classes, you already know it; otherwise you can most likely ignore them for now, as long as you understand them.
Know where to place your source code in the source tree
Know how to write a trial unit test and where to place it
Communicate frequently and do frequent git pulls from the master remote branch.
Make sure you get the source code tree in non-anonymous mode; otherwise you cannot push to the server-based Git repository. This means use the "git clone git@..." style. If you previously checked out in anonymous mode, you can also edit your .git/config file.
Add your new files and commit your work locally with telling commit messages
Once you are confident about the sum of all your changes, merge locally with the remote master branch, and ONLY IF ALL THINGS STILL WORK LOCALLY, push to the server.
1 Set up the container (logging, load config files, check versions, etc.)
2 Walk through an ordered list of processes and start them in that order. Different "installations" may have different bootstrap process sequences.
- Start the root supervisor process on the local container
- The root supervisor spawns all the configured processes (via self.spawn_child)
- Each process is spawned (with spawn arguments) and immediately afterwards receives an init message, which triggers slc_init()
The core set of processes including the CC agent are spawned in sequence first and then any additional services and processes (also in order). Dependency is (currently) built into the bootstrap order.
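The bootstrap sequence above can be sketched roughly like this. Class names, spawn arguments and the synchronous init call are simplifications of the real Twisted-based code:

```python
class DemoProcess:
    """Stand-in for a spawnable process with a service lifecycle."""

    def __init__(self, name):
        self.name = name
        self.initialized = False

    def slc_init(self):
        # Triggered by the init message received right after spawning.
        self.initialized = True


class Supervisor:
    """Sketch: root supervisor spawning configured processes in order."""

    def __init__(self, process_specs):
        # Ordered list of (process_class, spawn_args); dependency is
        # built into the bootstrap order.
        self.process_specs = process_specs
        self.children = []

    def spawn_child(self, process_class, spawn_args):
        child = process_class(**spawn_args)
        child.slc_init()  # init message immediately follows the spawn
        self.children.append(child)
        return child

    def bootstrap(self):
        # Core processes (e.g. the CC agent) come first in the list,
        # followed by additional services and processes, also in order.
        for process_class, spawn_args in self.process_specs:
            self.spawn_child(process_class, spawn_args)
```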
Every service (note: not every process) has a well known default name. This name is defined in the service declaration in the source code module of the service. For instance, the COI datastore service uses "datastore" as its well known name. All clients of this service can use this name to access the service.
AMQP queue names are determined by "scoping" the well known name of a service with the identifier of the running system. The system is the set of processes that were bootstrapped from the one root supervisor process, and all capability containers that share the system name.
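A minimal sketch of such scoping, assuming the system identifier is simply prefixed to the well known name; the actual delimiter and composition used by lcaarch may differ:

```python
def scoped_name(system_id, service_name, delimiter="."):
    """Scope a well known service name with the running system's identifier.

    Every capability container that shares the system name resolves the
    same scoped AMQP queue name, e.g. for the COI "datastore" service.
    """
    return system_id + delimiter + service_name
```

For example, `scoped_name("sys-1a2b", "datastore")` yields `"sys-1a2b.datastore"`, so two independently bootstrapped systems with different identifiers never share queues.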