Over the past… erm, many years, I have come across different types of systems modelled in a range of software tools, mainly with the aim of performing cheaper and much faster “what-if” analysis by removing the need to build as many physical prototypes.
Despite the wide range of applications for systems simulation, some common traits emerged that diminished the potential value of systems modelling and, in some cases, gave it a bad name. I have listed below what I believe is essential for effective systems modelling at any level of detail.
Knowledge of the system that is going to be modelled:
This one, for me, is absolutely required. The engineer who is going to perform the modelling task MUST have knowledge of the physical system and its basic operating principles. We must remember that no one person can know everything, and we should be humble about this. If we need to learn more about the system to be modelled, then so be it. Nowadays the information resources available to us are hugely extensive, ranging from online journals and articles to videos, images and diagrams.
If we don’t know the basic operating principles and content of the system we are going to model then how can we even think of building it (or customising it) correctly?
Understanding what we want out of the model:
Once we have sufficient knowledge of the system and its basic operating principles, we need to think about what questions we would like the model to answer for us. This will help us decide what level of detail we need in the component models. For example, if we are running duty-cycle analysis for tailpipe emissions and fuel consumption, there will be little benefit in using detailed suspension bushing models, which will sap valuable CPU resources for little or no improvement in the answers the model gives us to our questions.
Once we have decided the level of detail we require from our model, and we have assembled the system model or selected a suitable one from a library, we need to source the data we require. Data availability and quality is one of the biggest issues a modeller can encounter, for several reasons, among which:
- Supplier component data can be limited due to IP protection
- Map data might not exist for the component in question, although if we have access to a physical component or drawings, the geometry information should be used instead; for example, in the case of heat exchanger modelling.
- The data might or might not be valid, and might have been measured under operating conditions different from the ones we are interested in.
I would always suggest choosing the types of component model that suit the best component data you can get your hands on. Tools such as Dymola will allow you to mix and match levels of detail and types of parameterisation within your model.
Model by model validation approach
The next step is to create or re-use mini test rigs to exercise the components you are going to use in your system model. The tests should cover scenarios within which the component might operate, and scenarios whose outcome the modeller can judge or predict, to check that the components are working as expected.
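To make the idea concrete, here is a minimal sketch of such a component test in Python (the tool-agnostic equivalent of a mini test rig; the orifice-style valve model and its parameter values are purely illustrative, not from the original post):

```python
import math

def valve_mass_flow(dp_pa, cd=0.7, area_m2=1e-4, rho=1000.0):
    """Incompressible orifice equation: m_dot = Cd * A * sqrt(2 * rho * dp)."""
    if dp_pa <= 0.0:
        return 0.0  # no forward pressure drop, no forward flow
    return cd * area_m2 * math.sqrt(2.0 * rho * dp_pa)

# Mini test rig: scenarios whose outcomes we can predict before running them.
assert valve_mass_flow(0.0) == 0.0                  # zero dp gives zero flow
assert valve_mass_flow(2e5) > valve_mass_flow(1e5)  # more dp, more flow
# Quadrupling dp should exactly double the flow (square-root law).
assert abs(valve_mass_flow(4e5) / valve_mass_flow(1e5) - 2.0) < 1e-9
```

The point is not the valve equation itself but the pattern: each component is exercised against hand-predictable outcomes before it ever enters the full system model.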
Build the system model
You should now be in a good position to build the system model with minimal debugging, since most of the debugging should already have been done in the component scenario tests. It is often the case that the impatient/optimistic/ambitious modeller thinks they can throw a large system model together and it will simulate seamlessly when they press the simulate button. I can assure you that following the aforementioned, seemingly longer, component testing process will always give you a more robust and successful approach.
What happens next will largely depend on the success of the previous steps and the philosophy of the software tools you are using.
The simulation ran! Job done!
…erm, definitely not! We now need to use our knowledge of the system's basic operating principles, together with the control and boundary conditions we have applied, to sanity-check the results. If we blindly accept that the results are correct, then we're living and modelling dangerously.
Why? Because the design of the physical (real) system will be largely influenced by the modelling results, and we are trying to reduce the number of expensive physical prototypes in our system development. If our results are not valid and we don't spot it, the few expensive prototypes that we do build might be completely useless. This has the knock-on effect of delaying the development plan, which ultimately translates into more money having to be spent rebuilding the wasted prototypes.
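One concrete form of sanity check is a conservation balance on the results. As a sketch (the result names and numbers below are invented for illustration, not taken from any real run), an energy balance on a duty-cycle simulation should close to within a small residual:

```python
# Hypothetical post-processed results from a duty-cycle run.
results = {
    "fuel_energy_in_MJ": 120.0,
    "work_out_MJ": 42.0,
    "heat_rejected_MJ": 77.5,
}

def energy_balance_residual(res):
    """Fraction of the input energy left unaccounted for; near zero if the balance closes."""
    energy_out = res["work_out_MJ"] + res["heat_rejected_MJ"]
    return (res["fuel_energy_in_MJ"] - energy_out) / res["fuel_energy_in_MJ"]

residual = energy_balance_residual(results)
# A residual well above numerical noise suggests a modelling or boundary-condition error.
assert abs(residual) < 0.01, "Energy balance does not close - inspect the model"
```

A mass balance, or a check that efficiencies stay within physically plausible bounds, follows the same pattern.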
The simulation crashed
“How frustrating!” …well, yes. It is frustrating when this happens; however, the crash might be down to something occurring in the system model that is not physical, and we should be thankful that the simulation tool has actually flagged this up rather than just number-crunching and generating misleading results. It's back to calling on our knowledge of the system's operating basics, and frankly of physics, to judge why the system model has reached this state. It could be down to incorrect component sizing (valves too small for the mass flow of fluid being pumped around the system, for example).
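The valve-sizing example can be checked with back-of-the-envelope physics before re-running anything. A sketch, assuming an orifice-style valve model (all names and numbers below are hypothetical, chosen only to illustrate the diagnosis):

```python
import math

def required_orifice_area(m_dot, dp_max_pa, cd=0.7, rho=1000.0):
    """Invert the orifice equation to find the flow area needed to pass
    m_dot (kg/s) at the largest acceptable pressure drop dp_max_pa."""
    return m_dot / (cd * math.sqrt(2.0 * rho * dp_max_pa))

pump_m_dot = 2.0   # kg/s demanded by the pump (hypothetical)
valve_area = 5e-5  # m^2, the valve actually fitted in the model (hypothetical)

needed = required_orifice_area(pump_m_dot, dp_max_pa=1e5)
if valve_area < needed:
    print(f"Valve undersized: {valve_area:.1e} m^2 fitted, "
          f"about {needed:.1e} m^2 required")
```

Here the fitted valve is roughly a quarter of the required area, so the solver would be forced towards enormous, non-physical pressure drops to push the flow through, which is exactly the kind of state that makes a simulation fail.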
Hopefully this blog post has given you a brief insight into a general systems modelling process that I would recommend adopting, whether to help you succeed in your own modelling jobs or when reviewing the work a team or individual has done for you and your company. I will be posting follow-on blog posts looking in more detail at some areas where we should take particular care to get the most out of systems modelling and simulation.
Please get in touch if you feel our expertise could help you model systems more effectively and efficiently. There are many options we can discuss on how we can help out.
Written by: Alessandro Picarelli – Engineering Director