Model self-testing
When writing ASCEND models, we aim to write them in such a way that they can be quickly utilised by a person without requiring that person to understand all the implementation details. This has led us to offer <tt>specify</tt> and <tt>values</tt> methods, as well as <tt>default_self</tt> methods, to ensure that models can readily be placed in a condition where they can be solved by the new user, so that the user can quickly start to experiment with the interaction of variables in the system and not have to worry about its numerical subtleties too early in the piece.

A problem with this approach is that we have many apparently easy-to-use models, but no information about whether or not they are doing what was originally intended. We need a way to instrument ASCEND models so that they can be automatically tested.

This way we can know when
* changes to one library have broken another dependent library
* changes to solver or compiler code have altered the answers given by existing models
* changes to code have prevented existing models from converging
* platform-specific issues result in models giving different answers on different systems

== Syntax ==
To set up a model so that it can be self-tested, first use the <tt>default_self</tt> method to put the model into standard solvable form.

Then, add a <tt>self_test</tt> method to your model, and include assertion statements (<tt>ASSERT expr</tt>) for the correct values of your variables, for example:
<source lang="a4c">METHOD self_test; | <source lang="a4c">METHOD self_test; | ||
ASSERT abs(p - 1{bar}) | ASSERT abs(p - 1{bar}) < 1 {Pa}; | ||
ASSERT abs(h - 2500{kJ/kg}) < 1 {kJ/kg}; | |||
ASSERT abs(h - 2500{kJ/kg}) | |||
END self_test;</source> | END self_test;</source> | ||

There is an example of a self-testing module (including both passing and failing tests) in {{src|models/johnpye/testlog10.a4c}}.

Here is a screenshot of a self-test being run in the PyGTK interface:

[[Image:ascend-selftest.png]]

== Automated Testing ==
If you have compiled the [[PythonWrapper]] routines, add a couple of lines to the file <tt>test.py</tt>. This will ensure that your model is self-tested automatically every time our code changes; you can also run a single test case by hand:

 ./test.py TestSuiteName.testcasename
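
For example, a new test case added to <tt>test.py</tt> might look something like the sketch below. The <tt>ascpy</tt> calls shown (loading a model file, building a simulation, solving it, then running its <tt>self_test</tt> method) are modelled on the existing cases in <tt>test.py</tt>, and should be treated as an illustrative assumption rather than a definitive API reference:

<source lang="python">import unittest
import ascpy

class TestLog10(unittest.TestCase):
	def test_selftest(self):
		# load the model library and instantiate the test model
		L = ascpy.Library()
		L.load('johnpye/testlog10.a4c')
		T = L.findType('testlog10')
		M = T.getSimulation('sim')
		# solve the model, then run its 'self_test' method; a
		# failing ASSERT raises an exception, failing this test
		M.solve(ascpy.Solver('QRSlv'), ascpy.SolverReporter())
		M.run(T.getMethod('self_test'))

if __name__ == '__main__':
	unittest.main()</source>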

== Limitations ==
At present, the use of the 'self_test' method name means that only one operating point can be tested per model. An obvious extension would be to update the syntax with some way of allowing you to write multiple tests per model.

Another issue is that the self-testing has no way of specifying any options for the solver. It is assumed that the testing library 'just solves' the model once the <tt>default_self</tt> method has been run.

Many of these limitations are being overcome as the power of the Python bindings expands. See the examples given by the <tt>test.py</tt> file, and also check the <tt>extpy</tt> '''external''' scripting method extension.

== Ideas for expanded syntax? ==
I'd love to get some suggestions here. Please visit {{bug|173}}.

* How to run multiple tests on a single model
* How to specify solver options
* Neater syntax for asserting that real values are within a tolerance range
* Looping tests?
* Tests of dynamic systems?
* Should specifying the 'base case' be a part of the self_test method, or part of default_self, or something else?

[[Category:Documentation]]
[[Category:Development]]