Effective Modelica Library Development

In this blog post I’m going to tell you about the approach we use here at Claytex for our Modelica library development, including some of the tools we use to make our lives easier and our libraries more robust.

Some of the factors that influence our development approach are:

  • Working as a collaborative multi-disciplinary team
  • Efficiency
  • Automation where possible
  • A group of interdependent libraries developed in parallel

Continuous Integration for Library Development

We employ a software engineering approach called Continuous Integration. This means we frequently merge the changes we make into a common shared version of our libraries, where they are automatically tested to validate their effect.

We’ve found that this approach has the benefits of:

  • Fewer bugs in the released libraries, because the automated testing identifies unexpected or undesired simulation behaviour changes early.
  • Easier integration, because our team merges their work into a common repository regularly, so issues are found and solved early.
  • Quicker testing, because it is automated and performed on a dedicated computer which can run numerous simulations in parallel.
  • Less switching between tasks, because you are alerted to any problems with your library changes quickly and can focus on fixing them before moving to the next task.

Our continuous integration follows the cycle shown below.

Figure 1: Continuous Integration cycle

To be able to achieve this continuous integration cycle we employ a small set of key tools, as shown in Figure 2. We use an issue tracker to log, assign and track work that we want to do in our Modelica libraries. We use a version control repository to store and share our Modelica libraries. We use a continuous integration server to control the automated testing of our libraries. Our regression testing tool is used to simulate the experiments in our libraries and compare them to reference results to identify changes. It is important that the tools chosen integrate together to provide the maximum benefit of this approach.

Figure 2: Tools in the continuous integration cycle

In the following sections, I will look at each of these elements.

Issue Tracker

We use an issue tracking tool to:

  • Record ideas for future enhancements and new features
  • Log reported bugs
  • Plan which enhancements, new features and bug fixes we want to include in the next release of our libraries
  • Assign tasks to a specific library developer
  • Collaborate on tasks
  • Record and monitor progress

Version Control

Here at Claytex, we have a team of developers working on a collection of interdependent Modelica libraries simultaneously. So it is critical that we have a common repository where our libraries are stored and through which we can seamlessly share the changes we are making. To do this, we use a version control tool.

Version control tools are common in collaborative software development because without them development becomes extremely inefficient, error prone and frustrating. If each developer has their own independent copy of the libraries that they are working on, then sharing changes requires manual integration of their work with colleagues’ own copies, not knowing whether changes will be compatible. Without version control, sharing and integrating a single model from a colleague can take days if there are any issues, versus a few minutes using revision control.

Our version control repository acts as the backup of our work as we develop our libraries, with synchronised copies locally and on our servers. Each developer has a local working copy of the master libraries on their PC which they work on. When we save our work it is stored in our local copy, but when we want to add our changes to the master copy, we commit all the changes to the master copy using the version control tool. Other developers can then update their local working copies from the master to get all the changes committed.

Figure 3: How version control works

Because it may take a developer a while to fully complete a new development task on the libraries, we employ a trunk and branch system with our version control. Our developers can have their own line of development, called a branch, where they can try out ideas without disturbing the main line of development with errors. Once the new development is complete, it can then be merged from the developer’s branch into the main trunk.

Version control also provides full traceability of the changes we make and the ability to rewind to previous iterations easily. These features are invaluable and provide a safety net to prevent the libraries getting ‘messed up’, because we can all see what we have changed and get back to a good version.

So, you should by now also realise that due to its backup and rewinding capabilities, version control can also be useful for single library developers.

Dymola integrates with version control tools; more details can be found in the previous blog post, Version Control and Dymola.

Continuous Integration Server

Fundamental to our continuous integration process is the automated testing of the changes we make to our libraries to validate their effect. To do this, we use a continuous integration server which works in conjunction with our version control tool to automatically run checks and tests of the libraries when changes are committed.

The continuous integration server controls build agents on our test PCs which perform numerous checks and tests simultaneously to speed up the process. The continuous integration server then reports the outcome of this work.

Regression Testing

As there wasn’t an off-the-shelf interface between a continuous integration server and Dymola to perform the testing, Claytex created an in-house regression testing tool to do this.

When we make a commit to the trunk of our libraries in the version control repository, our continuous integration server triggers our regression test tool to:

  1. Identify what changes have been made and what is affected by these changes in all our libraries.
  2. Simulate every library experiment affected by the changes on a test PC. Numerous simulations can be run in parallel, utilising every available core, to speed up testing.
  3. Compare the new results to previous good results stored in our reference data.
  4. Identify differences between the new and reference results that are above a pre-defined tolerance.
  5. Generate a report.
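As an illustration of steps 3 and 4, a tolerance-based comparison of new results against reference data could be sketched as follows. This is a minimal sketch only; the function, variable names and values are hypothetical and are not taken from Claytex’s actual tool.

```python
# Illustrative sketch of a tolerance-based regression comparison.
# All names and values here are hypothetical, not Claytex's actual tool.

def compare_results(new, reference, tolerance=1e-4):
    """Return the variables whose new value differs from the
    reference value by more than the given tolerance."""
    failures = {}
    for name, ref_value in reference.items():
        new_value = new.get(name)
        if new_value is None:
            failures[name] = ("missing", ref_value)
        elif abs(new_value - ref_value) > tolerance:
            failures[name] = (new_value, ref_value)
    return failures

# Example: one variable drifts beyond the tolerance
reference = {"vehicle.speed": 27.80, "engine.torque": 150.00}
new = {"vehicle.speed": 27.80, "engine.torque": 150.25}
print(compare_results(new, reference))
# → {'engine.torque': (150.25, 150.0)}
```

In a real workflow the dictionaries would be populated from simulation result files, and each flagged variable would then be reviewed in the report.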

When these steps have been completed the continuous integration server notifies the developer that a report is ready for review. We then view the report in our regression test tool GUI, as shown below. Red icons identify experiments in our libraries with differences that need to be reviewed. The tool shows changes in the translation and simulation logs, as well as the variable values. The deltas in the variables are visualised in plots, and from the tool you can then review the results in Dymola to do further investigation.

Figure 4: Viewing a report in the regression test tool GUI

Once the developer has decided whether the differences in the results are acceptable or a problem, they use the GUI to update the reference data in our reference result database. This way we spot any errors caused by our changes to the libraries as soon as they are made, so we can correct them before they cause issues for our development work or our customers. To be effective, this requires that every class in our libraries is included in experiments so it will be regression tested, and we have test libraries containing thousands of experiments for this purpose.

Our in-house regression test tool also enabled the development of our MultiRun tool. The MultiRun tool allows:

  • Multiple experiments to be simulated in Dymola in parallel to take advantage of multiple CPUs
  • Automated comparison of the simulation results with reference results, where the variables compared can be customised
  • Generation of reports to view the impact of changes to Modelica models
  • Creation of a database of results
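The parallel-execution idea behind running many experiments at once can be sketched with Python’s standard library. This is an assumption-laden illustration, not the MultiRun API: `run_experiment` is a hypothetical placeholder for launching one simulation.

```python
# Sketch of running independent experiments in parallel (illustrative only;
# run_experiment is a hypothetical placeholder, not the MultiRun API).
from concurrent.futures import ThreadPoolExecutor

def run_experiment(name):
    # In practice each worker would launch a separate simulator process
    # and wait for it; here we just return a dummy pass result.
    return name, "passed"

experiments = ["Engine.Test1", "Chassis.Test2", "Driveline.Test3"]

# One worker per available core; each worker picks up the next experiment
# as soon as it finishes the previous one.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_experiment, experiments))

print(results)
```

Because each simulation runs in its own external process, lightweight threads that merely wait on those processes are enough to keep all CPU cores busy.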

Can You Improve Your Library Development?

Hopefully this insight into our Modelica library development practices here at Claytex will help you identify ways to improve your own library development work. We’d love to hear about any other practices or tools you use to make your library development easier and more robust.

Written by: Hannah Hammond-Scott – Modelica Project Leader

Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion
