Case study: Using Dymola to automate testing and post processing by writing a function

Background – Why does steering feel matter?

Creating a realistic steering feel in a vehicle model is very important for Driver-in-the-Loop (DiL) simulations, because the torque feedback through the steering wheel plays a central role in how the driver interprets the dynamic response of the vehicle. A vehicle with a ‘bad’ steering feel is difficult for the driver to control in a realistic fashion, as the driver receives feedback that does not correlate with the vehicle response as they would expect it to; a ‘good’ steering feel is the opposite, where the vehicle response to a driver control action is comparable to real life and occurs in a repeatable, consistent manner. A good steering feel is often described as ‘direct’ or ‘instinctive’, as the vehicle responds how the driver expects.

Recently, Claytex presented four papers at the 2nd Japanese Modelica Conference in Tokyo, Japan. One of these papers, titled “Modelling and Development of a Pseudo-Hydraulic Power Steering Model for use in Real-Time Applications”, presented a pseudo-hydraulic power steering model developed for use in Driver-In-The-Loop (DiL) simulators, along with the experimental procedure conducted within Dymola to quantify the steering feel experienced by the driver. It can be found here.

To generate the data for the paper, several vehicle dynamics experiments had to be run in Dymola, with post-processing of the results required to derive specific metrics about the vehicle. As part of the work was to optimise the steering feel by updating the parameterisation of the steering, a function was created to automate both the running of the experiments and the post-processing; this blog post presents the methodology of that function, showing how to automate the running of multiple experiments and the subsequent post-processing.

Setting up the function: the user selects which metrics to evaluate and which experiments to use

Whilst describing steering feel is a subjective judgement, the paper presented at the conference builds upon other research into how to quantify a good steering feel and what result values are indicative of one. To quantify steering feel, a collection of separate vehicle dynamics tests must be conducted, all of which are detailed in the paper. Naturally, post-processing is required to understand and analyse the steering feel, which is assessed through various metrics and parameters derived from the vehicle response captured in the data generated by the experiments.

Continually re-running experiments manually when changing and updating parameters is a laborious process; wouldn’t it be much easier, not to mention more efficient, to automate it along with the data post-processing? Yes, it would! This is why a function, called EvaluateSteeringFeel, was written which ran the Dymola experiments of interest with all the post-processing automated by Dymola. Users were able to select which metrics they wanted to analyse simply by ticking a Boolean parameter on the first tab of the function call.
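As a minimal sketch of how such a function header might be set up, Boolean inputs with Dialog annotations give each metric its own tick box on a dedicated tab. The metric names and grouping below are illustrative assumptions, not the actual parameterisation of the Claytex function:

```modelica
function EvaluateSteeringFeel
  "Run the selected steering-feel experiments and post-process the results"
  // Illustrative sketch: these metric names are assumptions
  input Boolean evaluateTorqueBuildUp = true "Evaluate torque build-up"
    annotation (Dialog(tab="Metrics", group="Select metrics to evaluate"));
  input Boolean evaluateResponseGain = false "Evaluate response gain"
    annotation (Dialog(tab="Metrics", group="Select metrics to evaluate"));
algorithm
  // Experiment calls and post-processing are added here
end EvaluateSteeringFeel;
```

Each Boolean appears as a checkbox in the function dialog, grouped under the tab named in its Dialog annotation.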

Figure 1. Upon calling the EvaluateSteeringFeel function, the user was prompted to select which metrics they desired to be evaluated.

The test locations tab presents the user with a series of string inputs, where they can enter the path to each experiment required by the function. By deploying a class-selection annotation on each of these inputs, a dialog box was presented to the user showing each of the packages loaded into the package browser; clicking on the experiment model they wanted to use would load its path into the function. This feature was useful if models were moved around and relocated within packages, as it meant the experiment locations weren’t hard coded and the user was spared from writing the paths out manually. Obviously, the user has to select the correct type of experiment (set up correctly), as described by each input’s description, for the function to run correctly and generate valid metrics! The option to select which experiments are used means that the function can be easily deployed for different vehicles, as the user can input the paths to experiments using the vehicle they are interested in evaluating, rather than having to continually update the same experiment.

Figure 2. Enabling the user to navigate to the desired experiments in the dialog box removes the need to maintain hard-coded links to the experiments.

 

Breaking down how the function decides which experiments to run, and which to ignore

Looking at the code layer of the function, beyond the expected declarations of all the input parameters described in the previous figures, each of the experiments was called through its own function. No outputs were declared; the reasons for this will be explained later. On the face of it, calling each experiment through a function seems an unnecessary complication, but it served two purposes. Firstly, by wrapping each function call in an if statement dependent on at least one of the metrics the experiment was designed to evaluate, the experiment could be skipped if it did not need to be run. Understandably, this avoided running superfluous experiments when their results were not required, saving time, especially if experiments took a while to simulate. Secondly, it meant all the advantages of object-oriented programming could be harnessed, namely a simpler, tidier code structure and ease of maintenance.
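A sketch of that guarding logic is shown below; the experiment function name, path variable, and metric flags are all illustrative assumptions:

```modelica
  // Sketch: runWeaveTest and the metric flags are illustrative assumptions.
  // The experiment is only run if at least one of the metrics it
  // produces has been requested by the user.
  if evaluateTorqueBuildUp or evaluateResponseGain then
    runWeaveTest(
      experimentPath=weaveTestPath,
      evaluateTorqueBuildUp=evaluateTorqueBuildUp,
      evaluateResponseGain=evaluateResponseGain);
  end if;
```

Passing the metric flags down into the experiment function also lets that function skip individual metric calculations, as described in the next section.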

Figure 3. Calling each experiment from a function provides usability benefits.

Automating the post-processing within the experiment call functions

Inside the experiment functions, both the code to call the experiment and the code to calculate the metrics are contained. Previous blog posts, ‘How to simulate a model multiple times with different parameter values‘ and ‘Handling trajectory files and utilising simulation results within Dymola‘, detail the specifics of how to call an experiment from a function, with the latter also covering how results can be mined and processed, so these processes won’t be covered in this blog post. Of note within the experiment functions is that each metric calculation is also enclosed in an if statement, like the experiment functions themselves. Once more, this means that if the user doesn’t want to evaluate a certain metric, the calculations are skipped to save computational time. Looking at the image below, post-processing can be split into 3 phases: isolating the data of interest, processing the data, and then executing the calculation to determine the metric.
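Those three phases could look something like the following inside an experiment function. The readTrajectorySize and readTrajectory calls are Dymola built-in functions; the result-file name, signal names, and metric calculation are illustrative assumptions only:

```modelica
  // Sketch only: file name, signal names and metric are assumptions
  if evaluateTorqueBuildUp then
    // Phase 1: isolate the data of interest from the result file
    n := readTrajectorySize("WeaveTest.mat");
    data := readTrajectory(
      "WeaveTest.mat",
      {"Time", "steeringWheel.tau", "chassis.ay"},
      n);
    // Phase 2: process the data, e.g. window it to the steady-state
    // portion of the manoeuvre
    // Phase 3: execute the calculation to determine the metric,
    // e.g. peak steering torque per unit lateral acceleration
    torqueBuildUp := max(data[2, :])/max(data[3, :]);
  end if;
```

readTrajectory returns one row per requested signal, so the matrix can be sliced directly in the metric calculation.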

Figure 4. Each metric is calculated in 3 stages.

Presenting the results to the user

With the metrics calculated and passed back into the main function, the final task for the function is to output the data to the user. This is done by collecting the metrics and passing them to a stream printer. Further details about the stream printer method used can be found in a previous blog post, Displaying results in the simulation log from a function in Dymola. Running the stream printer causes all the results calculated by the function to be presented to the user, as below. The results could be presented in other ways, such as outputting them to .mat files or Excel documents; this is just one possible method of displaying the results.
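Using Modelica.Utilities.Streams.print, the collected metrics can be written to the simulation log; the metric name, variable, and unit below are illustrative assumptions:

```modelica
  // Print the collected metrics to the simulation log
  // (metric name and unit here are illustrative)
  Modelica.Utilities.Streams.print("===== Steering feel metrics =====");
  if evaluateTorqueBuildUp then
    Modelica.Utilities.Streams.print(
      "Torque build-up: " + String(torqueBuildUp) + " Nm/(m/s2)");
  end if;
```

Guarding each print with the same Boolean flag keeps the log consistent with the metrics the user actually requested.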

Figure 5. Results presented to the user in the translation window when the EvaluateSteeringFeel function concludes running.

Written by: Theodor Ensbury – Project Engineer

 

Please get in touch if you have any questions or have a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion.
