In this blog post we look into whether manual library testing is feasible compared with a continuous integration process for library checking.
Consider a very common scenario where one or more people are involved in model library development. Improvements and changes are made to components in the library each day. Without an automated regression test tool in place, it is down to the library developers to determine exactly which of their component changes impact the simulation performance, model statistics and results for each experiment and test within the library where those components are used.
Therefore, after each component functional change is made, the affected experiments should be identified and run, and the results checked against a previous set of results to make sure that the models and their results are still valid.
Not undertaking these types of checks allows bugs and unintended changes to accumulate undetected, only to be spotted weeks or months down the line. Tracking down what caused those changes then becomes an extremely difficult, if not impossible, task, particularly without any form of revision control.
So we need to at least perform manual checking, but is it even feasible?
A simple test case would take 10 minutes at the very least to simulate and compare against reference data to decide whether the results differ.
Therefore, assuming all is well and no errors are found (a rare case), a library of 100 models, for example, would take between 16 and 17 hours to check manually for each change made to the library, or roughly 40% of a 40-hour working week.
In monetary terms, this equates to circa £800 in company/employer costs (salary, benefits and additional costs) per manual regression check, potentially more. With several changes made every week to components and systems, manual regression checking would soon take up the majority, if not all, of an engineer's time. Basic manual checking quickly becomes infeasible, leading to an undetected build-up of bugs in model libraries and/or slow model development.
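The estimate above can be reproduced with a quick back-of-envelope calculation. This is a minimal sketch using the figures stated in the post (10 minutes per model, 100 models, a 40-hour week); the fully loaded hourly cost of ~£48/hour is a hypothetical value chosen so that one full check lands near the circa £800 figure.

```python
# Back-of-envelope estimate of manual regression-checking effort and cost.
MINUTES_PER_MODEL = 10   # simulate + compare one model against reference data
MODELS = 100             # size of the example library
HOURS_PER_WEEK = 40      # working week
COST_PER_HOUR_GBP = 48   # hypothetical fully loaded employer cost per hour

hours_per_check = MINUTES_PER_MODEL * MODELS / 60
fraction_of_week = hours_per_check / HOURS_PER_WEEK
cost_per_check = hours_per_check * COST_PER_HOUR_GBP

print(f"Hours per full check: {hours_per_check:.1f}")   # ~16.7 hours
print(f"Fraction of the week: {fraction_of_week:.0%}")  # ~42%
print(f"Cost per check:       £{cost_per_check:.0f}")   # £800
```

With several component changes per week, multiplying these figures by the number of changes shows how quickly the checking load exceeds a single engineer's available hours.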
This simple but staggering reasoning brought us to the conclusion, over a decade ago, that a continuous integration process was required for the robust, traceable and efficient development of even a small application library of only a few models (Figure 1). We presented this process in a previous blog article that you can read here: Effective Modelica Library Development
Figure 1. Snapshot of the regression test tool analysing current and previous results for a library experiment displaying the error tube (blue), the new results (yellow) and the error (red).
Please get in touch for more information on how we can replicate our tested and proven continuous integration system within your company.
Written by: Alessandro Picarelli – Engineering Director
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion