The Simulation Manager tool, developed by Claytex, offers a wide range of methods to help engineers set up and test anything from hardware-in-the-loop sensors to full autonomous driving. However, testing is useless unless the relevant data can be extracted from the simulation. This blog covers the feedback stage of the Simulation Cycle, where results from the simulation are analysed in the Test Manager.
Test metrics are variables in the simulation that are recorded, analysed, and made available for assessing the performance of the vehicle. Key performance indicators, commonly used in industry, are selected from the test metrics in the same way. Good performance indicators become increasingly important as the number of tests increases, so that each raw data file does not need to be examined individually. Instead, test metrics show, at a glance, the performance of each test so that the key tests can be extracted and the relevant issues addressed. Multiple test metrics with different conditions can be grouped to provide deeper analysis of each scenario.
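As a minimal sketch of grouping metrics with conditions, the snippet below summarises one scenario with several recorded metrics, each with its own pass condition, so that only failing tests need a closer look. The metric names and thresholds are assumptions for the example, not the Simulation Manager API.

```python
# Recorded test metrics for one scenario (illustrative values).
metrics = {
    "max_speed_mps": 14.2,
    "max_lateral_jerk_mps3": 1.8,
    "min_gap_m": 6.5,
}

# One pass condition per metric (illustrative thresholds).
conditions = {
    "max_speed_mps": lambda v: v <= 13.9,         # speed limit
    "max_lateral_jerk_mps3": lambda v: v <= 2.0,  # passenger comfort
    "min_gap_m": lambda v: v >= 5.0,              # safe following gap
}

# Evaluate every condition; the test passes only if all of them do.
results = {name: conditions[name](value) for name, value in metrics.items()}
print(results, all(results.values()))
```

Here the speed metric fails while the comfort and gap metrics pass, so the scenario as a whole is flagged for review without anyone reading the raw data.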
Results analysis may be conducted in-loop, after each simulation step, or once the simulation has concluded, as post-test analysis. Real-time, in-the-loop test metrics make sense when the metrics are used to help the vehicle make real-time decisions. Computing test metrics in post-processing avoids the overhead of calculating information every time step, which can make the simulation quicker overall. Post-processed metrics are also calculated by the Test Manager and may be easier to write and debug.
Figure 1. Simulation Manager Safe Space Analysis
The purpose of the test metrics is to provide the user with the information they need to judge whether a test has been a success. Perhaps the simplest sensor, a stop sensor, checks whether a vehicle has stopped where it was requested to. Stop sensors can be used in ADAS features such as emergency braking, or in autonomous delivery, where a vehicle may need to be positioned very accurately to offload. A test metric can be any variable or combination of variables available, including passenger comfort metrics such as lateral jerk and vehicle parameters such as maximum speed. Conditions can then be implemented to determine performance criteria, such as comparing the vehicle's speed to the speed limit.
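A stop-sensor style metric can be sketched in a few lines: did the vehicle come to rest within a tolerance of the requested stop position? The function name, tolerances, and data layout below are illustrative assumptions, not the Simulation Manager API.

```python
def stop_metric(final_pos_m, final_speed_mps, target_pos_m,
                pos_tol_m=0.2, speed_tol_mps=0.05):
    """Pass if the vehicle is at rest and close enough to the target."""
    stopped = abs(final_speed_mps) <= speed_tol_mps
    on_target = abs(final_pos_m - target_pos_m) <= pos_tol_m
    return stopped and on_target

# Vehicle stopped 0.15 m past the requested point, essentially at rest.
print(stop_metric(100.15, 0.01, 100.0))   # passes
print(stop_metric(101.00, 0.01, 100.0))   # stopped, but 1 m off target
```

The tolerances would be tightened for a use case like autonomous delivery, where offloading demands very accurate positioning.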
Slightly more complicated is a safe space calculator, designed to show the stopping distance around the vehicle. As with all simulations, configuring the simulated vehicle so that it matches the real vehicle's performance characteristics is important. In the safe space calculator, the longitudinal safe space is determined by the vehicle's maximum guaranteed braking deceleration and its reaction time. Where possible, visual representations of the test metrics, including the safe space, make the results easier to analyse. This can be done in real time, projected onto the simulated road, displayed in the GUI, or shown by other means using post-processing. The image in Figure 1 projects the post-processed safe space as a time series. The blue polygon identifies the space around the vehicle that must be free of other objects and vehicles for this vehicle to be considered safe. If the safe space of another vehicle intersects this blue polygon, then we know the vehicle has entered a situation where there is a risk of collision.
Formally, the Responsibility-Sensitive Safety (RSS) model developed by Mobileye is a rigorous mathematical model formalising an interpretation of the duty-of-care law. RSS is a good candidate for higher-accuracy safe space calculation as it considers the lateral and longitudinal acceleration of both the ego vehicle and nearby traffic. For lateral metrics, the RSS sensor uses an OpenDRIVE map of the scene and considers the heading and lateral velocity of the traffic relative to its lane, not just that of the ego vehicle. If all vehicles follow the RSS model and drive so that other vehicles are never placed within the calculated safe space distances, there should be no accidents.
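The published RSS model gives a closed-form minimum longitudinal distance between a rear (ego) vehicle and the vehicle in front: the rear vehicle is assumed to accelerate at its maximum for the response time, then brake gently, while the front vehicle brakes at its hardest. The sketch below follows that published formula; the parameter values are illustrative assumptions, not a calibrated vehicle.

```python
def rss_longitudinal_distance(v_rear, v_front, rho,
                              a_max_accel, b_min_brake, b_max_brake):
    """Minimum safe gap per the RSS longitudinal rule (clamped at zero)."""
    # Rear vehicle's speed after accelerating for the response time rho.
    v_rear_after = v_rear + rho * a_max_accel
    d = (v_rear * rho                                # travel during rho
         + 0.5 * a_max_accel * rho ** 2              # extra travel while accelerating
         + v_rear_after ** 2 / (2.0 * b_min_brake)   # gentle-braking distance
         - v_front ** 2 / (2.0 * b_max_brake))       # front vehicle's hard-braking distance
    return max(d, 0.0)

# Both vehicles at 20 m/s; 0.5 s response, 3 m/s^2 accel,
# 4 m/s^2 gentle brake, 8 m/s^2 hard brake (assumed values).
print(round(rss_longitudinal_distance(20.0, 20.0, 0.5, 3.0, 4.0, 8.0), 2))
```

Because the gentle-braking term uses the rear vehicle and the hard-braking term uses the front vehicle, the required gap stays positive even when both travel at the same speed.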
Analysis of autonomous vehicles is notoriously difficult to perfect. The RSS model may be compromised if the OpenDRIVE map doesn't match the road, and crashes may still occur because of other assumptions or because other vehicles do not take due care. When a test fails, the Simulation Manager should flag it so that it can be reviewed in more detail to analyse the cause of failure. Tests may fail because traffic misbehaves, the simulation fails, or the vehicle under test behaves incorrectly. Depending on the circumstances, the test could be rerun, ignored, or the ego vehicle controller may need to be changed. Using these metrics to provide quick interpretations of the simulation results is far more efficient than manually analysing the raw data.
A Test Oracle is the term sometimes used for a test manager that automates the simulation and analysis of test cases, passing or failing each test based on the test metrics. The set of all possible combinations of all available parameters is known as the parameter space; a single test, using one specific combination of parameters, occupies a single point in that space. As test cases build up, the coverage of the parameter space can be assessed from the distribution of the points. The simulator could also use the test coverage itself as a test metric.
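One crude way to turn coverage into a number is to discretise each parameter's range into bins and report the fraction of grid cells that at least one test occupies. The bounds, bin count, and test values below are made-up illustrative choices, not a recommended coverage measure.

```python
def coverage(tests, bounds, bins=4):
    """Fraction of grid cells occupied.

    tests: list of parameter dicts; bounds: {name: (lo, hi)}.
    """
    def cell(test):
        idx = []
        for name, (lo, hi) in bounds.items():
            frac = (test[name] - lo) / (hi - lo)      # position in [0, 1]
            idx.append(min(int(frac * bins), bins - 1))  # clamp hi edge
        return tuple(idx)

    visited = {cell(t) for t in tests}
    return len(visited) / bins ** len(bounds)

bounds = {"speed": (0.0, 30.0), "mu": (0.2, 1.0)}   # speed, road friction
tests = [{"speed": 5.0, "mu": 0.9},
         {"speed": 25.0, "mu": 0.3},
         {"speed": 6.0, "mu": 0.95}]
print(coverage(tests, bounds))   # two of sixteen cells occupied
```

The first and third tests land in the same cell, so three tests cover only two of the sixteen cells; that clustering is exactly what a coverage metric should expose.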
The parameter space has as many dimensions as there are parameters, so it cannot be visualised fully. The distance between two tests can still be measured: for each parameter, take the difference between the two scenarios as a percentage of the range between that parameter's minimum and maximum bounds, and repeat for all parameters. The distance between the tests is then the hypotenuse, the Euclidean norm, of these per-parameter distances; the smaller it is, the more similar the tests.
Written by Rob Smith – Project Engineer
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion