Model self-testing


When writing ASCEND models, we aim to write them so that they can be used quickly by a person who does not need to understand all the implementation details. This has led us to provide 'specify' and 'values' methods, as well as 'default_self' methods, to ensure that models can readily be placed in a condition where a new user can solve them. The user can then quickly start experimenting with the interaction of variables in the system, without having to worry about its numerical subtleties too early.
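For illustration, here is a minimal sketch of that convention. The model name, variables and equation are hypothetical, not taken from any ASCEND library:

MODEL selftest_sketch;
    x IS_A factor;
    y IS_A factor;
    y = x^2;
METHODS
    METHOD specify;
        (* fix enough variables to leave a square system *)
        FIX x;
    END specify;
    METHOD values;
        (* give the fixed variables sensible starting values *)
        x := 3;
    END values;
    METHOD default_self;
        (* standard entry point: put the model in a solvable state *)
        RUN specify;
        RUN values;
    END default_self;
END selftest_sketch;

Once default_self has been run and the model solved, y should equal 9, which is the kind of fact a self-test can assert.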

A problem with this approach is that we have many apparently easy-to-use models but no information about whether or not they are doing what was originally intended. We need a way to instrument ASCEND models so that they can be automatically tested.

This way we can know when:

  • changes to one library have broken another dependent library
  • changes to solver or compiler code have altered the answers given by existing models
  • changes to code have prevented existing models from converging
  • platform-specific issues result in models giving different answers on different systems


To set up a model so that it can be self-tested, first use the default_self method to put the model into standard solvable form.

Then, add a self_test method to your model, and include assertion statements (ASSERT expr) giving the correct values of your variables, for example:

METHOD self_test;
    ASSERT abs(p - 1 {bar}) < 1 {Pa};
    ASSERT abs(h - 2500 {kJ/kg}) < 1 {kJ/kg};
END self_test;

There is an example of a self-testing module (including both passing and failing tests) in models/johnpye/testlog10.a4c.

(Screenshot: a self-test being run in the PyGTK interface.)


Automated Testing

If you have compiled the PythonWrapper routines, add a couple of lines to the file and your model will be self-tested automatically every time our code changes, or whenever you run

./ TestSuiteName.testcasename
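As an indication of what such an automated test might look like, here is a sketch only: it assumes the ascpy Python bindings are built, and the method names and load path are indicative and may differ in your build.

```python
import unittest

try:
    import ascpy  # ASCEND Python bindings, built with PythonWrapper
except ImportError:
    ascpy = None  # bindings not built; the test below will be skipped

@unittest.skipIf(ascpy is None, "ascpy bindings not available")
class TestLog10(unittest.TestCase):
    # Sketch of a test case wrapping models/johnpye/testlog10.a4c;
    # the ascpy calls below are indicative, not guaranteed.
    def test_self_test(self):
        lib = ascpy.Library()
        lib.load('johnpye/testlog10.a4c')
        t = lib.findType('testlog10')
        sim = t.getSimulation('sim')
        sim.run(t.getMethod('default_self'))  # put model in solvable form
        sim.solve(ascpy.Solver('QRSlv'), ascpy.SolverReporter())
        sim.run(t.getMethod('self_test'))  # raises if an ASSERT fails
```

A test harness can then discover and run such cases by suite and name, which is the shape of the command shown above.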


At present, the use of the 'self_test' method name means that only one operating point can be tested per model. An obvious extension would be to update the syntax with some way of allowing you to write multiple tests per model.
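One direction such an extension might take (purely a suggestion; this is not implemented) is a naming convention like self_test_*, letting the harness discover several test methods per model:

METHOD self_test_nominal;
    (* assertions for the default operating point *)
    ASSERT abs(p - 1 {bar}) < 1 {Pa};
END self_test_nominal;

METHOD self_test_high_pressure;
    (* a second operating point; the harness would need to
       re-run values and re-solve before calling this *)
    ASSERT abs(p - 10 {bar}) < 1 {Pa};
END self_test_high_pressure;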

Another issue is that the self-testing has no way of specifying any options for the solver. It is assumed that the testing library 'just solves' the model once the default_self method has been run.

Many of these limitations are being overcome as the power of the Python bindings expands. See the examples given by the file, and also check the extpy external scripting method extension.

Ideas for expanded syntax?

I'd love to get some suggestions here. Please visit bug 173.

  • How to run multiple tests on a single model
  • How to specify solver options
  • Neater syntax for asserting that real values are within a tolerance range
  • Looping tests?
  • Tests of dynamic systems?
  • Should specifying the 'base case' be a part of the self_test method, or part of default_self, or something else?