Testing Library: A Look Inside

We have written a number of blog posts about regression testing Modelica libraries to ensure model modifications don’t have undesired effects on the simulation results. In this blog post, I’m going to take a look at another library that could help you do this: the Testing Library. See our posts on the MultiRun Tool for localised regression testing and Effective Modelica Library Development for our internal regression test solution.

Figure 1: Example of a test of a lossy gear using the Testing Library for results checks (Testing.Examples.Test_LossyGearDemo)

The Testing Library is a free library from Dassault Systèmes that is supplied with the Dymola installation. It aims to help Modelica library developers detect undesirable effects of model changes by creating test cases in which current results can be compared to previous values. To do this, the library provides:

  • Check blocks for comparing signals during the simulation with a reference result.
  • Log blocks to store signals to create reference results for later comparison.
  • Tools to facilitate creating tests, generating reference results and performing comparison tests.
  • Versions for testing functions via the text editor rather than the diagram.
  • Reporting of comparison outcomes in the simulation log and diagram.

Opening the Testing Library

When opened, the Testing Library adds two new toolbars, as highlighted in the figure below.

Figure 2: Testing Library structure in the Package Browser and toolbars

The Create and Run toolbars give you quick access to all the key functions in the library’s Testing.Runners package, making them easy to find.

Figure 3: The Testing Library toolbars provide easy access to the functions in the Testing.Runners package

Creating a Test Model

It is best to create an accompanying set of test cases when developing your models, to examine their behaviour in a range of scenarios. By extending your test cases with components from the Testing Library, you create test models that can generate reference results and run comparison tests.

To demonstrate this, let’s use an example test case from the Modelica Standard Library, testing a simple gear model with mesh efficiency and friction losses (figure 4). Different torques are applied to inertias on either side of the lossy gear so that the torque-dependent friction loss will pass through the backward rolling, stuck and forward rolling friction modes.

Figure 4: Diagram of the gear with losses test case (Modelica.Mechanics.Rotational.Examples.LossyGearDemo1)

To transform this test case into a test model that works with the Testing Library, we need to extend it and include the additions highlighted in the image below.

Figure 5: Lossy gear test case diagram with Testing Library additions for comparison testing (Testing.Examples.Test_LossyGearDemo)

  1. An inner Setup block to define global testing settings, such as the test mode (generate reference data or perform a comparison check) and the reference data file.
  2. The dashboard provides a graphical report of the outcome of the comparison.
  3. Blocks to compare signals and store reference data. The CheckLog blocks used here combine components for the comparison check with logging and reading of reference data, depending on the Setup settings. Reference data can be read from SDF files or defined in time tables.
  4. Supply the current signal inputs to the comparison check blocks. These signals should be key metrics describing the behaviour of the model being tested.
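In text form, the additions above amount to something like the sketch below. The Setup and CheckLog block names come from the library as described in this post, but the parameter and connector names shown (fileName, tolerance, u) are illustrative assumptions rather than the library’s documented API, and the dashboard block is omitted for brevity; check the Testing Library documentation for the actual modifiers:

```modelica
// Sketch only: Setup/CheckLog are Testing Library blocks named in this post;
// the parameter names used here are assumptions, not the documented API.
model Test_LossyGearDemo "Comparison test of the lossy gear demo"
  // Reuse the original MSL test case unchanged
  extends Modelica.Mechanics.Rotational.Examples.LossyGearDemo1;
  // 1. Global test settings: test mode and reference data file
  inner Testing.Setup setup(fileName = "Test_LossyGearDemo_ref.sdf");
  // 3. Combined check/log block for a key signal of the tested model
  Testing.CheckLog check_w1(tolerance = 1e-3);
equation
  // 4. Supply the current value of the monitored signal to the check block
  check_w1.u = Inertia1.w;
end Test_LossyGearDemo;
```

The mode switch between creating references and checking them is driven by the Setup block, so the same test model serves both purposes.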

Using Tools > Create > New TestCase to create the new test case makes this process straightforward: it extends the selected test case and adds the Setup block and dashboard. It will also:

  1. Extend the TestCase base class, which provides the Dymola selections in the model annotation used by the log blocks to store reference results, plus a distinctive icon.
  2. Add Test_ at the start of the test model name, making it identifiable to the running functions as a test model for comparison.

Figure 6: Text of the lossy gear test case, highlighting additions to create a Testing Library comparison test model (Testing.Examples.Test_LossyGearDemo)

Creating Reference Results

Now we have a test model all set up, we can create the reference results for future comparison tests using the function Tools > Create > Create references. This function sets the test mode in the Setup block to create reference. The selected test model is then simulated, either with the experiment settings stored in the model or with the settings given in the function. The results file is converted to an SDF file, which is stored in the file location specified in the Setup block.

Figure 7: The dialog box for the Create references function

The size of the reference data file can be reduced by opening the SDF file and removing everything except the time and refs groups. The signals in the refs group are defined by the log blocks used.

Running Comparison Tests

The function Tools > Run > Simulate and plot lets us perform a comparison test on our example. It will:

  • Change the test mode to check in the Setup block.
  • Simulate the selected test model, performing the comparison checks.
  • Print the check outcome to the simulation log (figure 8).

Figure 8: Comparison test reporting in the simulation log, including any assert messages from comparison failures

  • Update the dashboard (figure 9).

Figure 9: Dashboard updated with the outcome of the comparison check in the diagram

  • Create plots comparing each signal’s actual, reference and min/max acceptable values, to show when they are out of tolerance.

Figure 10: Comparison plots of signal values

Alternatively, multiple test models can be run and compared in one go using the function Tools > Run > Run tests, as long as their names start with Test_.

Regression Testing with the Testing Library

As well as supporting regression testing of models via the diagram layer, the Testing Library offers text-based options for doing the same with functions.

The Testing Library provides everything needed to perform comparison testing in Dymola, and I think it is well suited to regression testing smaller libraries with only a few test models. For larger numbers of test cases, the options in our posts on the MultiRun Tool and our internal regression test tool are better solutions, as they require significantly less test model creation effort.

The example used in this blog post is included in the Testing Library as Testing.Examples.Test_LossyGearDemo, so why not open up the library and take a look for yourself to see if it can help you test your models?

Written by: Hannah Hammond-Scott – Modelica Project Leader

Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion
