This page describes the testing of services in ION Release 2. This guide is based on the R2 container use guide and the R2 service implementation guide.

Test Best Practices

  • The nose test runner holds on to each test instance. If you set attributes on self in your tests, dereference them in tearDown (or, better yet, addCleanup). This reduces the overall memory footprint of the suite and speeds up the run.
  • addCleanup was introduced in Python 2.7 and is called even if setUp() fails. Prefer it over tearDown where possible, since tearDown is not called when setUp() fails, a situation we encountered a lot in R1...

Test Coverage

  • All ION code should be tested with both Unit and Integration tests
  • Mock library allows you to test your code without worrying about missing dependent services
  • Unit test suite should be executed by developers before every push/pull
  • Keep unit tests small and fast
  • Keep unit tests independent of each other
  • Name tests appropriately
  • Fix any failures in unit tests immediately!
    • We need to be able to trust the results of the unit test run
    • Don't "push and run" - check the results of the subsequent automated build for success
  • All unit tests run automatically in appropriate environments on every code change
  • All ION code should target code coverage of 80% or greater
    • Review coverage to understand what is/isn't being tested
    • Focus on Execution Coverage first, but note that different inputs may produce different results
    • Achieving 100% Actual Coverage would require testing every possible input - an unreasonable task
    • Cover boundary conditions: For numbers, test negatives, 0, positive, smallest, largest, NaN, infinity, etc. For strings test empty string, single character string, non-ASCII string, multi-MB strings, etc., etc., etc.
  • All ION code should be integration tested with the rest of the system, including the real dependent services
    • Developers need to work with integration team on integration tests
    • Integration tests on service operations that are dependent on incomplete or non-existing operations should be skipped until dependent services are available - but they should still be written!
  • Code reviews should not be approved without accompanying unit/integration tests

Testing Helpful Hints

Interactive shell

When tests are failing it is sometimes useful to be able to introspect local variables. Pyon provides a breakpoint ability using the IPython shell for this. Just add the breakpoint call to your code in the appropriate place (and don't forget to remove it when all is fixed):
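As a sketch (the function below is purely illustrative; pyon's own breakpoint helper wraps the same IPython mechanism):

```python
def exercise(order):
    total = order['amount'] * order['price']
    # Uncomment to drop into an interactive IPython shell right here and
    # inspect local variables such as `total` (requires IPython installed):
    # from IPython import embed; embed()
    return total
```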

When running nosetests, ensure that you pass the -s argument; otherwise stdout will get swallowed and you won't see the prompt.


PyCharm provides an interactive debugger that can be connected to test runs as needed. See TBD.

Unit Testing

All developers should be unit testing their code. The CI project follows a test-first approach. Aim to cover at least 80% of the code under development, with 100% as the ideal. Code coverage is reported by the test framework and can easily be checked and verified. To facilitate unit testing, CI has incorporated a mock framework to eliminate dependencies on code that may be developed in parallel. Note that code reviews should not grant approval if there are no associated unit tests!

Unit testing code in CI is not run within the pyon container. The purpose of unit testing is to verify your code works as you expect it to work with minimal dependencies.

Test Driven Development

  • Write tests first. Make your assertions fail. (This will prevent you from writing a unit test that always passes, which happens more often than you think!) Then write your code. If you code correctly, the failing unit tests should pass after a few refactoring cycles. References here: slide 1. Better reference?
  • How many unit tests should you write per function? As many as it takes to cover the happy paths, the branch conditions, and the exceptions in your function. Ideally, test coverage should be 100%.
  • If an integration test fails due to a condition in your service code, write a unit test that exposes the bug with failing assertions. Fix your service; the failing unit test should now pass. Then fix any other unit tests broken by your service code fixes.
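A minimal sketch of the test-first cycle (add() and TestAdd are illustrative names):

```python
import unittest

def add(a, b):
    # implemented *after* the test below was written and seen to fail
    return a + b

class TestAdd(unittest.TestCase):
    def test_add(self):
        # written first, with a deliberately failing assertion to start
        self.assertEqual(add(2, 3), 5)

# run the test case and confirm it now passes
result = unittest.TestResult()
TestAdd('test_add').run(result)
assert result.wasSuccessful()
```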

When to use mock in your unit test

If your function takes parameters, you need to mock these parameters.

If your function calls other functions, you need to mock those out. Let's call these 'mocked_out_funcs'. This happens in two scenarios: (1) you are calling someone else's function; (2) you are calling your own function, for which you presumably already wrote a separate unit test, so it should not be tested again here.

Mock out the return value of the 'mocked_out_funcs' so your code can keep going during test.
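As a sketch (lookup_value and the registry dependency are illustrative; the mock package here is the Pypi mock library this guide uses, or its stdlib successor unittest.mock, which has the same API):

```python
try:
    from mock import Mock            # Pypi mock package used in this guide
except ImportError:
    from unittest.mock import Mock   # stdlib successor, same API

def lookup_value(registry):
    # calls a dependency we do not want to exercise for real
    return registry.find('bond') * 2

registry = Mock()
registry.find.return_value = 10      # canned return so the code can keep going
assert lookup_value(registry) == 20
```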

How to use mock in your unit test

Each unit test consists of 3 blocks:

  1. Set up mock input parameters to your function under test. Set up 'mocked_out_funcs' as described above.
  2. Call the function under test.
  3. Assert that the function has called the 'mocked_out_funcs' correctly. How many assertions do you need here? At a minimum, assert on the 'mocked_out_funcs' with side effects, such as a resource registry create. Finally, assert that your function returned correctly. It's very important to write an appropriate number of assert statements; otherwise you are just running your function, which always passes and is not a helpful unit test.
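The three blocks can be sketched as follows (create_order and the registry are illustrative names, not real ION code):

```python
try:
    from mock import Mock
except ImportError:
    from unittest.mock import Mock

def create_order(registry, order):
    """Hypothetical function under test: persists an order via the registry."""
    return registry.create(order)

# Block 1: set up mock input parameters and mocked_out_funcs
registry = Mock()
registry.create.return_value = 'order_id_1'
order = {'type': 'buy'}

# Block 2: call the function under test
result = create_order(registry, order)

# Block 3: assert on the side-effecting call, then on the return value
registry.create.assert_called_once_with(order)
assert result == 'order_id_1'
```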

Learn about Pypi mock framework

  • Great video to get started here. It really shows the simplicity and elegance of the framework.
  • If you want to take advantage of the full power of the mock framework, you need to dig into the documentation here.

Nose tests

The CI project plans to use nose as a test loader for R2. Documentation on nose is here.

Use correct naming convention

Nose documentation says:

Any function or class that matches the configured testMatch regular expression ((?:^|[\b_\.-])[Tt]est by default – that is, has test or Test at a word boundary or following a - or _) and lives in a module that also matches that expression will be run as a test.

The important thing here is: put the word Test in each test class and the word test in each test function. Otherwise, don't.
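For example (TestTradeService and the method names are illustrative; the stdlib loader below demonstrates the same prefix-based matching):

```python
import unittest

# module name should also match, e.g. test_trade_service.py
class TestTradeService(unittest.TestCase):   # "Test" at a word boundary
    def test_exercise_buy(self):             # name starts with "test": collected
        pass

    def helper_build_order(self):            # no "test": NOT run as a test
        pass

loaded = unittest.TestLoader().loadTestsFromTestCase(TestTradeService)
assert loaded.countTestCases() == 1   # only test_exercise_buy is collected
```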

Use nose attribute

Nose has a flexible test loading scheme using attribute tags. Read about using nose tags here.

Recommended tags:

  • one attribute tag of value:
    • UNIT (unit test)
    • INT (integration test that needs extra setups, containers, etc.)
    • PFM (performance test that might take a long time to run...)
    • Need full definitions...
  • one keyword attribute of name 'group':
    • group='dm' (This test belongs to dm)
    • group='sa' (This test belongs to sa)
    • Need full definitions...

Code sample:
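A sketch of such a code sample (nose's @attr decorator from nose.plugins.attrib essentially just sets attributes on the class; the simplified attr below mimics it so the example is self-contained, and PyonTestCase is replaced by object):

```python
# In real code:  from nose.plugins.attrib import attr
# nose's attr decorator sets attributes on the decorated class, roughly:
def attr(*args, **kwargs):
    def wrap(cls):
        for name in args:
            setattr(cls, name, 1)          # plain tags get the value 1
        for name, value in kwargs.items():
            setattr(cls, name, value)      # keyword tags keep their value
        return cls
    return wrap

@attr('UNIT', group='dm')
class TestMyService(object):               # in real code: a PyonTestCase subclass
    def test_something(self):
        pass

assert TestMyService.UNIT == 1
assert TestMyService.group == 'dm'
```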

The above attribute tags define the TestMyService class as a UNIT test belonging to the 'dm' group. You can also place @attr tags at the individual test function level instead of the class level.

Now you can do some cool things with nose test loader:

  • Run all tests
  • Run unit tests
  • Run integration tests
  • Run sa unit tests
  • If you want to get fancy and use a python expression: (run dm and sa unit tests)
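The corresponding command lines would look roughly like this (tag names per the recommendations above; consult the nose attrib plugin docs for exact syntax):

```shell
nosetests                                        # run all tests
nosetests -a UNIT                                # run unit tests
nosetests -a INT                                 # run integration tests
nosetests -a UNIT,group=sa                       # run sa unit tests
nosetests -A "UNIT and group in ['dm', 'sa']"    # python expression: dm and sa unit tests
```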

Notice -A instead of -a above. Consult nose documentation for more options.

Note: if you want to run the nosetests under a specific system name, edit pyon.local.yml and set the following two configuration values:

Measuring unit test coverage

Both the pyon and coi-services directories are set up so that you can readily run test coverage with the --with-coverage flag.
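For example (flag names are from nose's coverage plugin; the output directory matches the one mentioned below and is an assumption here):

```shell
nosetests --with-coverage --cover-package=ion --cover-html --cover-html-dir=coverage_results/html
```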

sample coverage output

You can drill down further by looking at the generated html output at coverage_results/html/index.html with a browser.

How to write pyon unit test

Where to write tests?

If you have a module you need to test under ion/services/dm/, write your tests in the corresponding module under ion/services/dm/test/.

Inherit from PyonTestCase

PyonTestCase uses Pypi Mock to provide some convenience features:

  • _create_IonObject_mock, which allows you to create a Mock IonObject() function that
    1. Will patch and un-patch correctly for your test so you don't end up changing this attribute in your service module. You can read about where to patch here if interested.
    2. Will hook into existing IonObject validation code against the yaml definition in pyon and raise an exception if you are trying to create an invalid IonObject.
  • _create_service_mock, which will take a service name parameter:
    1. Find all dependencies and create a mock client object for you. For example, suppose you have implemented a service called GreatService with service name 'great_service', against which you want to write unit tests, and 'great_service' has a dependency on 'resource_registry'. The mock client returned by calling _create_service_mock with 'great_service' as a parameter will carry all the mock attributes/functions you need, such as mock_clients.resource_registry, the mock_clients.resource_registry.create function, etc.
    2. Check that the client code you are invoking is compliant with the base service spec. For example, if the BaseResourceRegistry service has changed and eliminated the 'find' function, but you are still calling mock_clients.resource_registry.find() in your code, you will get an error in your unit test.
    3. Check that you implemented all service functions you promised to implement in your YAML definition. That's done in the test_verify_service test that you will inherit automatically.

Write your setUp function:

Let's start with a simple example service for which we will write a unit test.

Here's a sample setUp function:
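A sketch of it, using plain Mocks in place of the PyonTestCase helpers so it is self-contained (TradeService here is a stand-in for the real service class):

```python
try:
    from mock import Mock
except ImportError:
    from unittest.mock import Mock

class TradeService(object):
    """Stand-in for the real service under test (illustrative)."""
    def __init__(self):
        self.clients = None

class TestTradeService(object):     # in real code: a PyonTestCase subclass
    def setUp(self):
        # 1. mock IonObject early (the real helper patches/unpatches the
        #    service module's IonObject attribute via addCleanup)
        self.mock_ionobj = Mock(name='IonObject')
        # 2. in real code: mock_clients = self._create_service_mock('trade'),
        #    which builds mocks for every dependency in the yaml definition
        mock_clients = Mock(name='clients')
        # 3. instantiate the real service under test
        self.trade_service = TradeService()
        # 4. swap the mocked clients into the real service
        self.trade_service.clients = mock_clients
        # 5. convenience shortcut for later assertions
        self.mock_create = mock_clients.resource_registry.create

# quick self-check of the wiring
t = TestTradeService()
t.setUp()
assert t.mock_create is t.trade_service.clients.resource_registry.create
```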

What is happening here?

  1. It calls _create_IonObject_mock first to create a mock IonObject function. TradeService uses the IonObject() function, so it needs to be mocked out during testing. Notice that you must patch the IonObject attribute in the service module's own namespace, because that is the name the module actually uses, not somewhere else. This is due to the way IonObject is imported. Since this involves patching and unpatching via addCleanup, it's good to do it early, before setUp has a chance to fail.
  2. It calls _create_service_mock on service 'trade' and returns mock_clients. After this call, all the client mocks you need under mock_clients have been created, as mentioned previously. In this case, the 'trade' service has only one dependency in its yaml file, 'resource_registry'. Other services, such as the 'bank' service, have more than one dependency; those would all be created.
  3. It instantiates the real TradeService() under test. You could do this, along with the next step, in each test function, but that gets repetitive...
  4. It mocks out the 'clients' attribute of the real service with the mock_clients created in step 2.
  5. You are done. This last step is purely for convenience: self.mock_create is just short for mock_clients.resource_registry.create. It depends on how much you want to type in each of your test functions later on. (Note that step 2 created mocks for all the functions defined in resource_registry_service.yml; since TradeService only uses the resource registry create function, there's no need to alias the rest.)

tearDown function

The nose test runner holds on to each test instance. If you set attributes on self in your tests, dereference them in tearDown (or, better yet, addCleanup). This reduces the overall memory footprint of the suite and speeds up the run.

addCleanup was introduced in Python 2.7 and is called even if setUp() fails. Prefer it over tearDown where possible, since tearDown is not called when setUp() fails, a situation we encountered a lot in R1...

Let's get to the meat of the unit test code

The trade service has only one function, 'exercise', which either buys or sells some bonds depending on the order type.

To be complete, we need to write at least two test functions for 'exercise', one for order type 'buy' and another for 'sell'.

We will just take a look at the 'test_exercise_buy' function:
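A self-contained sketch of it (exercise here is a stand-in for the real TradeService method, and the proceeds calculation is invented for illustration):

```python
try:
    from mock import Mock
except ImportError:
    from unittest.mock import Mock

def exercise(clients, ion_object, order):
    """Stand-in for TradeService.exercise (illustrative)."""
    clients.resource_registry.create(order)
    proceeds = order.cash_amount * 2      # stand-in price calculation
    return ion_object('Confirmation', proceeds=proceeds)

# Step 1: set up the mocks
order = Mock()
order.type = 'buy'
order.cash_amount = 156
clients = Mock()
clients.resource_registry.create.return_value = ('order_id', 'rev')
mock_ionobj = Mock(name='IonObject')

# Step 2: call the function under test
confirmation_obj = exercise(clients, mock_ionobj, order)

# Step 3: assertions
clients.resource_registry.create.assert_called_once_with(order)
mock_ionobj.assert_called_once_with('Confirmation', proceeds=312)
assert confirmation_obj is mock_ionobj.return_value
```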

What is happening here?

Step 1. Create all the mocks you need.

  • The exercise function takes a parameter 'order', so we need to mock it. This is a 'buy' order we are testing, so set order.type='buy' and set order.cash_amount.
  • The exercise function calls the self.clients.resource_registry.create function, which was already substituted for you in setUp. But you need to provide the return_value of that function so your code can continue.
  • The exercise function calls IonObject(). Nothing to do there, since the mock IonObject was already substituted for the real IonObject function in setUp.

Step 2. Call the function under test, which is 'exercise'

  • pass in the mock order parameter, and retrieve the confirmation_obj return value.

Step 3. How did we do? Time to write assertions

  • exercise should first create an order in the resource_registry. Did that happen? Let's assert that the mock resource registry function was called exactly once with the order parameter.
  • exercise should then calculate the proceeds and call IonObject with the 'Confirmation' object. Did that happen? Let's assert that mock_ionobj was called once with the correct params, including the correct proceeds.
  • exercise should then return the confirmation_obj to the caller. Is the confirmation object the return value of the IonObject() function? Let's assert that it is.

You are done.

Now we just need to write test_exercise_sell in a similar fashion.

More mock examples/tricks

Using mock side effect

You can use mock side effect to simulate exceptions, etc., when your test requires. Below are excerpts from Mock documentation:

Raising exceptions with mocks
A useful attribute is side_effect. If you set this to an exception class or instance then the exception will be raised when the mock is called. If you set it to a callable then it will be called whenever the mock is called. This allows you to do things like return members of a sequence from repeated calls:

>>> mock = Mock()
>>> mock.side_effect = Exception('Boom!')
>>> mock()
Traceback (most recent call last):
Exception: Boom!

>>> results = [1, 2, 3]
>>> def side_effect(*args, **kwargs):
...     return results.pop()
>>> mock.side_effect = side_effect
>>> mock(), mock(), mock()
(3, 2, 1)

Below is an example:

In this case, we are testing the 'new_account' function of the bank service. The particular workflow we want to test is a new customer opening a new account. In that scenario, the resource registry 'create' function must be called twice: first to create the new customer, and then to create a new account based on that customer. So we need to simulate two different return values from the 'create' function. The resource registry 'create_association' function can then use the return values from both 'create' calls as its parameters to make a valid association.

Now on to the test....
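A self-contained sketch of such a test (new_account below is a stand-in for the real bank service operation, and the return tuples are invented ids):

```python
try:
    from mock import Mock
except ImportError:
    from unittest.mock import Mock

def new_account(registry, customer_info, account_info):
    """Stand-in for the bank service's new_account (illustrative)."""
    customer_id, _ = registry.create(customer_info)
    account_id, _ = registry.create(account_info)
    registry.create_association(customer_id, 'hasAccount', account_id)
    return account_id

registry = Mock()
# results listed in reverse: pop() takes from the END of the list, so the
# first tuple is the return value of the *second* create call
results = [('acct_id', 'rev'), ('cust_id', 'rev')]
registry.create.side_effect = lambda *args, **kwargs: results.pop()

new_account(registry, {'name': 'Ann'}, {'type': 'checking'})

assert registry.create.call_count == 2
registry.create_association.assert_called_once_with(
    'cust_id', 'hasAccount', 'acct_id')
```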

  1. Set up mocks
    • We setup the resource registry 'create' results in a list.
    • We create a side_effect function that simply pops the results. Since list.pop() removes from the end (LIFO), we set up the return results in reverse when building the list: the first tuple is actually the return value of the 2nd create call...
    • We set the mock_create's 'side_effect' attribute to our 'side_effect' function. When mock_create gets called, our side_effect function will be called and its return value will be used for the mock_create function's return value.
  2. We test the code by calling 'new_account' function.
  3. We make our assertions:
    • Assert resource registry 'create' function does get called twice.
    • Assert the 2nd resource registry 'create' function is called with the correct parameters.
    • Pop the stack. The pop_last_call is a convenience function in PyonTestCase that you can use to assert a function has been called multiple times, in reverse order.
    • Assert the 1st resource registry 'create' function is called with the correct parameters.
    • Now you see we can assert resource registry 'create_association' function is indeed called with the results from the two 'create' functions.

Using Magic Mock

Mock supports mocking magic methods. This allows mock objects to replace containers or other objects that implement Python protocols.

MagicMock is a subclass of Mock with default implementations of most of the magic methods. You can use MagicMock without having to configure the magic methods yourself.

Most developers will not need either of these features. But should you ever need to mock magic methods, such as mocking a dictionary, look into them. Here is documentation on how to mock magic methods, and here is documentation on using MagicMock.

Using sentinel object

The sentinel object provides a convenient way of providing unique objects for your tests.
You can read about this useful construct here.

We have one example of using it here:
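A self-contained sketch of it (list_accounts is a stand-in for the real service operation):

```python
try:
    from mock import Mock, sentinel
except ImportError:
    from unittest.mock import Mock, sentinel

registry = Mock()
# unique, self-describing stand-ins for the return values
registry.find_resources.return_value = sentinel.customer_obj
registry.find_objects.return_value = sentinel.accounts

def list_accounts(registry, name):
    """Stand-in service operation (illustrative)."""
    customer = registry.find_resources(name)
    return registry.find_objects(customer)

accounts = list_accounts(registry, 'Ann')

# sentinels make the assertions unambiguous
registry.find_objects.assert_called_once_with(sentinel.customer_obj)
assert accounts is sentinel.accounts
```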

In the above example, you use sentinel to:

  • Create a unique customer_obj used as return value from the resource registry 'find_resources' function.
  • Create a unique accounts object used as return value from the resource registry 'find_objects' function.

You can then make assertions using the sentinel objects.

Using patch and patch decorators

patch works by (temporarily) changing the object that a name points to with another one. There can be many names pointing to any individual object, so for patching to work you must ensure that you patch the name used by the system under test.

The patch decorators are used for patching objects only within the scope of the function they decorate. They automatically handle the unpatching for you, even if exceptions are raised. All of these functions can also be used in with statements or as class decorators.

An example of patch is used in the _create_IonObject_mock function of PyonTestCase. Take a look if you are interested. Here's documentation on patch and the patch decorators.

Important: If for some reason you need to alter a class or module attribute, definitely look into patch instead of re-inventing your own context manager functions.
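A minimal sketch (patching os.path.exists is purely illustrative; note the patch is undone automatically when the with block exits, even on exceptions):

```python
try:
    from mock import patch
except ImportError:
    from unittest.mock import patch

import os.path

# patch temporarily replaces the name it is given, then restores it
with patch('os.path.exists', return_value=True):
    assert os.path.exists('/no/such/path') is True   # patched
assert os.path.exists('/no/such/path') is False      # restored
```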

Use class spec and mocksignature

By default, all method access on a Mock creates a new mock. This means that you can’t tell if any methods were called that shouldn’t have been. One way to get around this is by using spec to restrict the methods available on your mock.

A problem with using mock objects to replace real objects in your tests is that Mock can be too flexible. Your code can treat the mock objects in any way and you have to manually check that they were called correctly. If your code calls functions or methods with the wrong number of arguments then mocks don’t complain. The solution to this is mocksignature, which creates functions with the same signature as the original, but delegating to a mock. You can interrogate the mock in the usual way to check it has been called with the right arguments, but if it is called with the wrong number of arguments it will raise a TypeError in the same way your production code would.

Both spec and mocksignature are used in the _create_service_mock function of PyonTestCase. Take a look if you are interested. Here's documentation on mocksignature.
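A minimal sketch of spec (ResourceRegistry here is an illustrative stand-in; note that in current versions of mock, autospec/create_autospec subsumes mocksignature):

```python
try:
    from mock import Mock
except ImportError:
    from unittest.mock import Mock

class ResourceRegistry(object):
    """Illustrative stand-in for a real client class."""
    def create(self, obj):
        pass

rr = Mock(spec=ResourceRegistry)
rr.create({'k': 1})          # fine: create is on the spec

caught = False
try:
    rr.old_create({'k': 1})  # not on the spec -> AttributeError
except AttributeError:
    caught = True
assert caught
```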

Common Mistakes/Errors

TypeError: 'Mock' object does not support indexing

You are providing a mock or mock return value that is not a list when the function expects one.

TypeError: 'Mock' object is not iterable

You are providing a mock object or mock return value that is not an iterable (for i in ...) when the function expects one.

AttributeError: Mock object has no attribute 'old_create'

You are calling a service function that no longer exists in the service interface.

Below, the 'old_create' function of the resource registry was being called in the 'exercise' function, but it no longer exists in the resource registry service interface.

TypeError: <lambda>() takes at most 1 argument (2 given)

You are trying to call a service function with the wrong number of arguments.

Below, the resource registry 'create' function was called with one additional parameter ('one more param') in the 'exercise' function.

test_verify_service fails with AttributeError: 'NoneType'

The service you are mocking was not found. Perhaps you got the name wrong.

Below, the mock of "instrument_management" should be "instrument_management_service"

test_verify_service fails with AssertionError

The service you are implementing is missing one or more function implementations.

Below, the 'exercise' function implementation is missing from the trade service.


IonObjectError

An IonObjectError will be raised when your code calls IonObject() with invalid parameters.

Below, there was a typo in the bank service, trying to create an IonObject of type 'Bankcustomer' instead of 'BankCustomer'. The test code catches this issue.

Other pitfalls

As mentioned earlier, because of the flexibility of the Mock framework, you could end up calling functions that should not have been called without noticing the problem. It's also conceivable to make a typo in your test and have one of your assert statements always pass! We have tried to remedy this by enforcing 'spec' and 'mocksignature' where possible in PyonTestCase, but it's always good to make your assert statements fail first to catch these subtle errors.

To illustrate with a simple example:
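A sketch of the example (m.func is illustrative):

```python
try:
    from mock import Mock
except ImportError:
    from unittest.mock import Mock

m = Mock()
m.func(1)

m.func.assert_called_once_with(1)    # real assertion: fails if args were wrong
m.func.asssert_called_once_with(2)   # typo! no assertion happens at all:
                                     # attribute access created a brand-new
                                     # mock, and calling a mock always "passes"
```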

What's happening here? The last call passed because 'assert' was typed with one too many 's's. Instead of making an assertion as we intended, we ended up calling a function named 'asssert_called_once_with' that the mock framework auto-supplied for us, since all method access on a mock creates a new mock.

Integration Testing

After code is unit tested, it should also be integrated with dependent subsystems. The integration process is a collaboration between the integration team and the development team. The integration tests should be written by the developers of the service/component being tested and issues should be coordinated with the integration team. Integration tests use the pyon CC and utilize dependent services rather than mocked out services. This is a building block to a fully integrated system.

Use of the container (start, stop, restart, load dependent processes)

Start of dependent services

Utility Tests

Luke has compiled a comprehensive directory of utility tests. These tests preload a configuration specific to the item that needs to be tested. They also contain breakpoint(s) that allow developers to interact with a container and a system preloaded to a specific end. This reduces the need to launch multiple containers and run various scripts to preload the system for testing purposes; the tests may also include preloaded data.

A list of tests may be found in ion/services/dm/test/ on GitHub.
To launch a specific test for utility purposes

Interactive Tour of Mock

  • Calls
    • inspecting calls: call_count, called, call_args_list
    • making vs inspecting
  • Asserts to use
    • built in: assert_called_with, assert_called_once_with, assert_any_call
    • via unit test:
      • self.assertTrue(m.func.called)
      • self.assertEquals(m.func.call_count, 1)
      • self.assertIn('your param', m.func.call_args[0])
  • Chaining - each call OR INSPECT returns a mock (they are different)
  • spec versus spec_set
    • spec takes a type name, does not have attributes set in __init__
    • spec_set takes an instance, has attributes set in __init__
  • Use of side_effect
    • Exceptions
    • actual side effects - can be used to provide "meat" to a mocked method
    • list of return values
    • returning DEFAULT
  • Real world uses of side_effect
    • with gevent utility blocking_cb
    • mocking up a recv loop
    • patching log (??)