Written by: Vesel Haxha – Senior Developer
The RegressionTest Tool for Dymola has been developed for library developers to help them improve the quality of library releases. It provides automated routines for checking models, generating reference results and running regression tests. It is a standalone application that controls Dymola and calls functions from within a supplied Modelica library.
Regression Test User Interface:
The application supports four main library checks (shown in Figure 1):
- Model Check
- Generate Reference
- Regression Test
- Verify Encrypted
Figure 1: RegressionTest Tool
All of the tests run by the RegressionTest tool are multi-threaded: the tool starts multiple instances of Dymola in the background and allocates each test to the next available thread. Because the tests run in parallel, the time required to test a library is significantly reduced.
In addition, all of the tests can be started using command line arguments, meaning the whole process can be automated. The test results are stored in an XML file so that they can be loaded into the tool later for manual review. A supervisory service is also provided that can monitor your revision control system for changes, automatically trigger tests, and send email reports to the user who committed the change.
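The dispatch-and-record workflow described above can be sketched as follows. The tool's internals are not published, so `check_model` here is only a stand-in for the call into a background Dymola instance, and the XML element names are illustrative:

```python
# Illustrative sketch of multi-threaded test dispatch (not the tool's actual code).
from concurrent.futures import ThreadPoolExecutor
import xml.etree.ElementTree as ET

def check_model(class_name):
    # Placeholder: a real worker would drive a Dymola instance here.
    return (class_name, "passed")

def run_tests(class_names, n_threads=4):
    # Each test is handed to the next available worker thread.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(check_model, class_names))
    # Store the results as XML so they can be reloaded for manual review later.
    root = ET.Element("RegressionResults")
    for name, status in results:
        ET.SubElement(root, "Test", name=name, status=status)
    return ET.tostring(root, encoding="unicode")

print(run_tests(["Lib.ModelA", "Lib.ModelB"]))
```

The same entry point could then be driven from a command line or a post-commit hook to automate the whole run.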
1. Model Check
In the Model Check, every class in the selected library is checked using Dymola and the results are summarised to quickly identify models that are not correct and those that generate warnings. The translation log is captured from Dymola and displayed in the tool. The different icons used in the tree view of the library identify whether a model checked correctly, produced warnings or failed to check.
Figure 2: Translation log in Model Check
As Figure 2 shows, the tool presents the translation log alongside the library tree, and the icons used in the package tree indicate the check status of each class.
Any icon that indicates a failure is propagated up through the library hierarchy, making it easy to locate errors at every level in the library. The same icon scheme is used for Generate Reference and for the Regression Test simulations.
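This upward propagation of failure icons can be sketched as a simple worst-status roll-up over the package hierarchy. This is an illustrative model of the behaviour, not the tool's own code:

```python
# Sketch: propagate the worst child status up through every enclosing package.
SEVERITY = {"passed": 0, "warning": 1, "failed": 2}

def propagate(statuses):
    """statuses maps full class names (e.g. 'Lib.Pkg.Model') to a status.
    Returns a dict that also assigns every parent package its worst child status."""
    tree = dict(statuses)
    for name, status in statuses.items():
        parts = name.split(".")
        # Walk up through every enclosing package of this class.
        for i in range(1, len(parts)):
            parent = ".".join(parts[:i])
            current = tree.get(parent, "passed")
            if SEVERITY[status] > SEVERITY[current]:
                tree[parent] = status
    return tree

tree = propagate({"Lib.Pkg.A": "passed", "Lib.Pkg.B": "failed", "Lib.Other.C": "warning"})
print(tree["Lib.Pkg"], tree["Lib"])  # → failed failed
```

A failure anywhere in `Lib.Pkg` therefore also marks `Lib.Pkg` and `Lib` as failed, which is what lets a user drill down from the top of the tree to the broken model.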
2. Generate Reference
This test is the equivalent of running a simulation for every class in the library that has experiment settings stored in the class. The simulation result and translation log are then captured in the tool and stored in a library- and version-specific directory within the reference data directory. These can then be used later as the reference data set for a full regression test.
The icons within the tree view are used to quickly identify models that didn’t run correctly so that they can be corrected.
Figure 3: Generate Reference
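The library- and version-specific storage can be pictured as a small path-building helper. The tool's actual directory scheme and file names are not documented here, so the layout below is purely an assumption for illustration:

```python
# Assumed reference-data layout: <root>/<library>/<version>/<experiment>.mat
# The tool's real directory scheme may differ.
from pathlib import Path

def reference_path(ref_root, library, version, experiment):
    # e.g. refdata/MyLibrary/2.1.0/MyLibrary.Examples.Pump.mat
    return Path(ref_root) / library / version / (experiment + ".mat")

p = reference_path("refdata", "MyLibrary", "2.1.0", "MyLibrary.Examples.Pump")
print(p.as_posix())  # → refdata/MyLibrary/2.1.0/MyLibrary.Examples.Pump.mat
```

Keying the data by library version is what lets a later regression test pick the correct reference set for the release being compared against.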
3. Regression Test
In this test, a simulation is run for every class in the library that has experiment settings stored in the class and then the new simulation results are compared to a reference result file. The translation statistics are also compared.
By comparing the translation statistics between the new model and reference set, we can quickly identify changes in the model structure such as an increase in the number of state variables or the number and size of nonlinear systems. This helps identify undesirable changes in the model.
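A statistics comparison of this kind amounts to a field-by-field diff of the translation summaries. The field names below are illustrative, not Dymola's exact log keys:

```python
# Sketch: report translation statistics that changed between runs
# (key names are illustrative, not Dymola's actual log fields).
def compare_stats(new, ref):
    """Return {key: (reference_value, new_value)} for every changed statistic,
    e.g. more states or larger nonlinear systems."""
    return {key: (ref[key], new[key])
            for key in ref
            if new.get(key) != ref[key]}

ref = {"states": 12, "nonlinear_systems": [3]}
new = {"states": 14, "nonlinear_systems": [3]}
print(compare_stats(new, ref))  # → {'states': (12, 14)}
```

A non-empty diff such as a jump in the number of states is exactly the kind of structural change the tool flags for review.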
In Regression Test (shown in Figure 4), the comparison of the simulation results is done using the state variables and a user-defined list of variables that can be stored within the model. The variable values are compared at each time step, and any difference outside a specified tolerance is identified as a failure of the regression test. The regression test results can be viewed in the bottom-right of the figure.
Figure 4: Regression Test comparison result
Note the following about the comparison result:
- The selected signal from the new result file is plotted alongside the upper and lower tolerance limits (high tube and low tube) derived from the reference signal, together with a fourth signal showing the difference between the new result and the reference result.
- The plot window supports zooming (this is the default action) as shown in Figure 5.
- To zoom in on a region simply click and drag a rectangle and the plot will zoom on that region.
- To zoom out, right click in the plot window and select “Zoom Out” from the context menu.
- The plot window supports panning, which can be enabled by right clicking in the plot window and selecting “Pan”. To pan around the plot (after it has been zoomed in) simply click and drag. If the new result line stays within the high and low tube (the upper and lower tolerance limits), the comparison passes (an example is shown in Figure 5).
Figure 5: High and low tube or upper and lower tolerance line
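The pass/fail decision against the tube can be sketched as a point-by-point band check. How RegressionTest actually derives its tube is not specified here, so this assumes a simple band of reference ± (absolute tolerance + relative tolerance × |reference|):

```python
# Sketch of a tolerance-tube comparison on a shared time grid
# (assumed tube: reference +/- (atol + rtol * |reference|)).
def within_tube(new, ref, rtol=0.02, atol=1e-6):
    """Compare signals sample by sample; any point outside the tube fails."""
    for x, r in zip(new, ref):
        half_width = atol + rtol * abs(r)
        if not (r - half_width <= x <= r + half_width):
            return False
    return True

ref = [0.0, 1.0, 2.0, 1.0]
print(within_tube([0.0, 1.01, 2.03, 1.0], ref))  # inside the tube → True
print(within_tube([0.0, 1.50, 2.00, 1.0], ref))  # outside at sample 1 → False
```

In practice the new and reference results would first be interpolated onto a common time grid before this comparison is applied.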
4. Verify Encrypted
The Verify Encrypted test (shown in Figure 6) is the equivalent of running a regression test using an encrypted version of the library. The reference result set is scanned to find the experiment names that need to be simulated; these are then run, and the translation logs and results are compared in the same way as for the standard regression test.
Figure 6: Verify Encrypted.
The RegressionTest tool is used to automate the checking and testing of libraries so that undesirable changes can be quickly and easily identified. This is useful in any project where a Modelica library is being developed and shared between users to accelerate the testing of any changes. Because the tests are multi-threaded, using multiple instances of Dymola, it can very quickly run through all the models and experiments in a library to make sure changes don’t create errors.
Please get in touch if you have any questions or have a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion.