Dymola is an extremely powerful tool, and a complex beast at heart. A vast array of features, user-customisable settings, options and flags make up the simulation environment. Sometimes this means that a model runs perfectly well on one person’s machine but fails on another’s, depending on the Dymola settings chosen on each. At the best of times this is an inconvenience: time is spent going back and forth, checking with coworkers in the office, trying to determine whether the original failure is valid. More importantly, whose machine is “correct”? Often, the “incorrect” machine is decided by a best-of-three trial with those unfortunate enough to sit at the desks closest to yours.
When everyone shares an office, such an exercise is merely an inconvenience, but with the global Covid-19 pandemic dispersing the office, working from home is now commonplace. Corralling multiple people to look at something can be difficult; each person is busy with their own list of tasks. Getting three or more people into a conversation means either a meeting or an email chain: a marked loss of efficiency over a tap on the shoulder! So, how can we make debugging differences in model behaviour easier remotely, preferably using only two machines (and people!)?
Start with the obvious!
Any complex problem-solving task requires some form of methodical thinking, and this is no different. Logic (well, the Pareto principle, to be precise) dictates that some settings will have a larger impact on simulation stability than others; some will also simply be quicker to check. So it makes sense to check the major, quick-to-find settings first, before taking a deeper dive into the back-end settings of Dymola’s environment.
The first things to consider are the compiler and solver settings. Various compiler types and versions exist, but thankfully models consisting purely of Modelica code are relatively robust across Dymola’s verified compilers. Still, it is good practice to confirm that the same type of compiler is being used on both machines; in particular, if the problematic model contains a sizeable amount of external code, it can be sensitive to the compiler. Checking the compiler in use is a very quick task, and can be done via script as sketched below.
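A minimal sketch of that check, assuming a Dymola version that provides the GetDymolaCompiler and SetDymolaCompiler built-in functions (the Visual Studio path below is purely illustrative, not a recommendation):

    // Query the current C compiler configuration on this machine
    GetDymolaCompiler();

    // Align both machines by setting the compiler explicitly, e.g. Visual Studio
    // (the MSVCDir path is illustrative; use the path valid on your machine)
    SetDymolaCompiler("vs", {"CCompiler=MSVC",
      "MSVCDir=C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC"});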
Similarly, the solver used to simulate the model will affect the results. Good practice dictates that the simulation settings for a model are stored in the experiment annotation at the top level of the model; this lets models that are sensitive to solver settings default to robust settings chosen by the author. A quick check of the annotation will determine whether the solver settings are included in the model and what they are, and cross-referencing them with the simulate model command generated in the commands window in Simulation mode will tell you whether you actually used them! Dymola can store these settings in the model automatically, or at the touch of a button.
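For illustration, a stored experiment annotation and the corresponding simulate command might look as follows; the model name and numerical values here are hypothetical:

    // Stored at the top level of the model:
    annotation (experiment(
        StartTime=0,
        StopTime=10,
        Tolerance=1e-6,
        Interval=0.02,
        __Dymola_Algorithm="Dassl"));

    // The matching command generated in the commands window when simulating:
    simulateModel("MyLib.MyModel", stopTime=10, tolerance=1e-6,
      method="Dassl", resultFile="MyModel");

If the simulateModel command echoed in your commands window shows different solver settings from those in the annotation, you have already found a discrepancy worth chasing.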
In the belly of the beast…
Having discounted the easy-to-spot causes, it’s time to start looking at the plethora of advanced flags Dymola contains. A methodical approach means comparing the effect of changing each flag individually. Daunting as it might seem to change each flag one at a time and observe the result, there is a swift method which vastly reduces the time this takes. As a bonus, it is a technique which lends itself to the rigours of remote working!
The list() command prints the status of all of the variables and flags in the commands window. As different installations of Dymola can have their own specific setup, this variable list is specific to the Dymola instance the command is run in.
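Running it is a one-liner; the output shown below is a purely illustrative excerpt, as the full list runs to hundreds of entries:

    // Typed into the commands window:
    list();

    // Prints something along the lines of (illustrative excerpt):
    //   Advanced.PedanticModelica = false
    //   Evaluate = false
    //   OutputCPUtime = true
    //   ...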
Having obtained the variable list, simply copy and paste it into an open text file and save it. Various programs exist for comparing text files, highlighting each difference between the two; for the differing variable settings to stand out, we want the two files to be in essentially the same format, otherwise we will find a bunch of false differences! Be sure to save only the pertinent information, trimming any extra lines you may have inadvertently copied.
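As an aside, some Dymola versions allow list to write its output straight to a file via an optional filename argument, which sidesteps the copy-and-paste step entirely; this is an assumption worth verifying against your version’s documentation:

    // Assumed optional argument: write the variable list directly to a file
    // (check your Dymola version; otherwise copy-paste works just as well)
    list(filename="dymola_flags_machine_A.txt");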
Once our colleague has generated their own list of variables, finding the difference maker becomes simple: we only need to investigate the flags which are known to differ. Having curtailed the number of flags to look at, the time spent investigating the issue drops considerably. What’s more, this can be conducted on one machine, by one user! Once your colleague has generated their variable list, there is no need for both users to keep working on the issue.
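Testing a candidate flag is then just a case of setting it to the other machine’s value and re-running the model. A sketch, where the flag and model name are hypothetical stand-ins for whatever your comparison turned up:

    // Suppose the comparison showed this flag differs between the two machines;
    // set it to the colleague's value and re-simulate to test its effect
    Advanced.PedanticModelica = true;
    simulateModel("MyLib.MyModel");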
Final thoughts
Disruption of existing workflows will always pose a challenge, not least when set against the backdrop of a global crisis. However, with a bit of thought, we can use these challenges as impetus to improve our working practices, by questioning how things were done previously. Some of these improvements will come from necessity, some from a desire to recoup the effectiveness lost from previous methods. Either way, when things return to some form of normal, perhaps some of the improved working practices will be here to stay. Stay safe and healthy!
Written by: Theodor Ensbury – Project Engineer
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion