I presented on this topic at a number of conferences during 2018 and thought it was about time to bring some of that material to our blog and outline what we are developing.
Background
It is now well accepted that simulation will have to form part of the validation process for any autonomous vehicle, as well as being critical during the development of the system. This is because it is simply not practical to test these systems sufficiently in the real world given the sheer number of scenarios that need to be considered. The increasing complexity of many ADAS features means simulation is probably also a necessity for their development and validation.
The RAND Corporation published a report in 2016 that looked into this problem. They came up with a figure of 5 billion miles as the distance an autonomous vehicle would need to be driven in the real world to have confidence that the vehicle was 20% safer than a human driver. The actual number of miles estimated by different companies does vary, but every published report suggests it will be several billion miles. This is simply not a practical number of miles to test physically, especially when you consider that any change in the software or hardware would require the testing to be restarted.
What seems to be the accepted approach, and the route that Claytex advocates, is a mix of field testing, proving ground testing and simulation. Field trials provide very valuable data about real-world scenarios, but it is impossible to control what happens, so the tests are unrepeatable even when driving the same road multiple times. Proving ground testing offers some control of the scenarios, but there is still a lot that cannot be controlled in a repeatable way, such as the environmental conditions.

Virtual Testing
Simulation overcomes these repeatability issues and offers complete control of the scenario and the weather conditions. With the right tools we can test the full software stack running on the real hardware, with the sensors and vehicle behaviour modelled in the virtual environment.
We also need the simulation tool to be able to recreate scenarios captured from field trials and proving ground tests so that we can compare the system behaviour in simulation against the real world. This allows us to build confidence that the simulation is accurate.
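To illustrate what that comparison might look like in practice, here is a minimal sketch that computes simple deviation metrics between a logged real-world run and the same scenario replayed in simulation. The file names, column layout and acceptance threshold are illustrative assumptions rather than part of any specific toolchain.

```python
# Compare a logged real-world run against the same scenario replayed in
# simulation. File names, column layout and the pass threshold are
# illustrative assumptions only.
import numpy as np

def load_trajectory(path):
    # Expected columns: time [s], x [m], y [m], speed [m/s]
    data = np.loadtxt(path, delimiter=",", skiprows=1)
    return data[:, 0], data[:, 1:3], data[:, 3]

t_real, pos_real, v_real = load_trajectory("field_trial_run.csv")
t_sim, pos_sim, v_sim = load_trajectory("simulation_run.csv")

# Resample the simulated run onto the real-world time base so the two
# trajectories can be compared sample by sample.
pos_sim_i = np.column_stack([np.interp(t_real, t_sim, pos_sim[:, i]) for i in range(2)])
v_sim_i = np.interp(t_real, t_sim, v_sim)

pos_err = np.linalg.norm(pos_real - pos_sim_i, axis=1)
print(f"position error: rms {np.sqrt(np.mean(pos_err**2)):.2f} m, max {pos_err.max():.2f} m")
print(f"speed error:    rms {np.sqrt(np.mean((v_real - v_sim_i)**2)):.2f} m/s")

# An illustrative acceptance criterion for the baseline correlation.
if pos_err.max() < 0.5:
    print("simulation reproduces the logged scenario within 0.5 m")
```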
This approach should be followed to validate that your simulation environment is able to recreate real world scenarios and to make sure that the AI can’t tell the difference between the real and virtual worlds. Once you’ve established this baseline and validated the simulation tool, you can extrapolate from the baseline scenarios and start to consider a much wider set of scenarios including variations in environmental conditions, traffic conditions and the behaviour of other agents (vehicles, pedestrians, animals, etc.) in the environment.
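To give a feel for what extrapolating from a validated baseline can look like, the sketch below expands one baseline scenario into a family of variations by sweeping weather, traffic and pedestrian parameters. The parameter names and value ranges are illustrative assumptions and are not tied to any particular tool.

```python
# Expand one validated baseline scenario into a family of variations by
# sweeping environmental and traffic parameters. The parameter names and
# ranges are illustrative assumptions, not tied to a specific tool.
from itertools import product

baseline = {"location": "Paris_centre", "route": "baseline_route_01"}

variations = {
    "rain_intensity":            [0.0, 0.3, 0.7, 1.0],    # dry through heavy rain
    "fog_visibility_m":          [10000, 500, 100, 50],   # clear through dense fog
    "traffic_density":           [0.0, 0.5, 1.0],         # relative to the baseline level
    "pedestrian_crossing_gap_s": [5.0, 2.0, 0.5],         # how late a pedestrian steps out
}

scenarios = []
for values in product(*variations.values()):
    scenario = dict(baseline)
    scenario.update(zip(variations.keys(), values))
    scenarios.append(scenario)

print(f"{len(scenarios)} scenario variations generated from one baseline")
# Each entry would then be queued for execution in the simulation environment.
```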
To build a simulation environment capable of meeting these requirements, the system needs to offer a number of features. This is by no means an exhaustive list, just a few of the most important points to consider:
- We need accurate models of real-world locations spanning the globe (or at least the locations where the intended vehicle is supposed to operate).
- The graphics need to be photo-realistic, and not just good enough to fool a human: they also need to fool image processing algorithms, and the graphics effects used in gaming are often not good enough for this. We then need to be able to apply lens models so that we can customise the camera views to include all the effects present in the cameras being used on the vehicle (see the sketch after this list).
- We need models of all the vehicle sensors, and these should output messages in exactly the same format as the real devices used on the vehicle. This means they need to be device-specific models, and we need access to radar, LiDAR, ultrasound, GPS and many other sensors.
- We need to be able to include a vehicle dynamics model that is representative of the vehicle.
- We need to be able to introduce traffic, pedestrians and other dynamic objects into the environment. This means not only traffic lights and signs but also litter or balls that could be blowing around the streets.
- We need to be able to control the weather and adjust temperature, humidity, rain, fog, snow, etc. When we do this, the road conditions also need to change accordingly, in both visual appearance and friction levels.
- And all of this needs to happen in a real-time capable system so that we can include the real AI processor (hardware and software) and validate the full control system.
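As an example of the kind of lens modelling mentioned above, the sketch below applies a simple Brown-Conrady radial distortion to an ideal pinhole render so that the virtual camera view picks up the distortion of a real lens. The intrinsics and distortion coefficients are illustrative assumptions; in practice they would come from calibrating the camera fitted to the vehicle.

```python
# Apply a radial (Brown-Conrady) lens distortion to an ideal pinhole render
# so the virtual camera view matches a real, distorting lens. The intrinsics
# and distortion coefficients below are illustrative assumptions.
import cv2
import numpy as np

ideal = cv2.imread("pinhole_render.png")          # distortion-free frame from the renderer
h, w = ideal.shape[:2]

K = np.array([[w * 0.8, 0.0,     w / 2.0],
              [0.0,     w * 0.8, h / 2.0],
              [0.0,     0.0,     1.0]])           # assumed intrinsics
dist = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])     # assumed k1, k2, p1, p2, k3

# For every pixel of the distorted output image, find where it samples the
# ideal image: undistort the pixel grid, then re-project with the intrinsics.
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
pts = np.stack([xs.ravel(), ys.ravel()], axis=-1).reshape(-1, 1, 2)
undist = cv2.undistortPoints(pts, K, dist).reshape(h, w, 2)

map_x = (K[0, 0] * undist[..., 0] + K[0, 2]).astype(np.float32)
map_y = (K[1, 1] * undist[..., 1] + K[1, 2]).astype(np.float32)

# Pixels whose source falls outside the ideal frame are filled with black;
# a wider field-of-view render would be needed to fill them.
distorted = cv2.remap(ideal, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("camera_view_with_lens_model.png", distorted)
```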
There is probably a lot more that needs to come into this, but it is at least a starting point when considering different simulation environments.
Using rFpro for autonomous vehicle simulation
At Claytex, we are building autonomous vehicle simulators using rFpro. For those of you who aren't already familiar with rFpro, it originated as a Driver-in-the-Loop simulation tool first developed for Formula 1 over 10 years ago, when the amount of testing in the series was severely limited and the teams needed to find an alternative way to develop the cars throughout the season. Since then, rFpro has been adopted by most top race series and is also used extensively by automotive companies for all aspects of vehicle development: vehicle dynamics, control systems, ADAS and now autonomous systems.
Some of the key qualities that have enabled rFpro to achieve this success are the high-fidelity graphics, the accurate models of real-world locations and the open APIs that allow external models, control systems and even third-party simulation tools to be interfaced to provide a completely immersive environment.
At Claytex we have helped a number of Motorsport teams implement rFpro on their driving simulators over the past 10 years and, more recently, helped autonomous vehicle developers build full vehicle simulators that include their AI processor.
Our focus is on the development of sensor models, tools that wrap around rFpro to support the development of full vehicle simulators, and the building of complete simulator systems. We have started with the development of LiDAR, radar, ultrasound and GPS sensor models. These are being built on a common framework that will provide the foundation for all the sensor models we are developing. The goal is to build up a library of device-specific sensor models, starting with the sensors being used by our current customers and Innovate UK project partners.
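To illustrate what a common framework for device-specific sensor models can look like, here is a minimal sketch of a base interface: the framework handles scheduling and publishing, while each device model converts the simulated measurement into that device's native message format. The class and method names are hypothetical and are not the Claytex or rFpro API.

```python
# A minimal sketch of a common sensor-model framework: the framework owns
# scheduling and output, each device-specific model turns simulated
# measurements into that device's native message format. Class and method
# names here are hypothetical, not the Claytex or rFpro API.
from abc import ABC, abstractmethod

class SensorModel(ABC):
    """Base class every device-specific sensor model implements."""

    def __init__(self, update_rate_hz: float):
        self.update_rate_hz = update_rate_hz
        self.next_sample_time = 0.0

    @abstractmethod
    def sample(self, sim_time: float, world_state: dict) -> bytes:
        """Return one native-format message for the current simulation state."""

    def step(self, sim_time: float, world_state: dict, publish) -> None:
        # The framework calls step() every simulation frame and only samples
        # the device at its own update rate.
        if sim_time >= self.next_sample_time:
            publish(self.sample(sim_time, world_state))
            self.next_sample_time += 1.0 / self.update_rate_hz

class PerfectGps(SensorModel):
    """A 'perfect' GPS model: no noise or atmospheric effects, it reports the
    true vehicle position as an NMEA GLL-style sentence (checksum omitted)."""

    @staticmethod
    def _deg_min(deg: float, width: int) -> str:
        d = int(abs(deg))
        m = (abs(deg) - d) * 60.0
        return f"{d:0{width}d}{m:07.4f}"

    def sample(self, sim_time: float, world_state: dict) -> bytes:
        lat, lon = world_state["vehicle_lat"], world_state["vehicle_lon"]
        sentence = (f"$GPGLL,{self._deg_min(lat, 2)},{'N' if lat >= 0 else 'S'},"
                    f"{self._deg_min(lon, 3)},{'E' if lon >= 0 else 'W'}")
        return sentence.encode("ascii")
```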
The first versions of these sensor models, which are already available, are perfect sensors. A perfect sensor is not affected by weather and doesn't have any of the noise present in the real device. These effects will be added over the course of the next year. The example below shows our model of the Velodyne VLP-16 "Puck" running in rFpro around the centre of Paris. The LiDAR sensor model is streaming data to VeloView, Velodyne's own software.
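For readers wondering what streaming data in the device's own format involves for this sensor, the sketch below assembles a VLP-16-style data packet (12 blocks of azimuth plus 32 channel returns, 1206 bytes in total, as described in Velodyne's user manual) and sends it over UDP to port 2368, where VeloView listens by default. The range values are a stand-in; in the real sensor model they come from the simulated environment.

```python
# Build VLP-16-style data packets (1206-byte UDP payload: 12 blocks of
# azimuth + 32 channel returns, a 4-byte timestamp and 2 factory bytes, as
# described in Velodyne's VLP-16 user manual) and stream them to VeloView,
# which listens on UDP port 2368 by default. The range generator below is a
# stand-in; the real sensor model takes ranges from the simulated scene.
import math
import socket
import struct
import time

DATA_PORT = 2368
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def make_packet(start_azimuth_deg: float, timestamp_us: int) -> bytes:
    packet = bytearray()
    azimuth = start_azimuth_deg
    for _ in range(12):                         # 12 data blocks per packet
        packet += bytes([0xFF, 0xEE])           # block flag
        packet += struct.pack("<H", int(azimuth * 100) % 36000)
        for channel in range(32):               # two firing sequences of 16 lasers
            distance_m = 20.0 + 5.0 * math.sin(math.radians(azimuth) + channel)
            packet += struct.pack("<H", int(distance_m / 0.002))  # 2 mm resolution
            packet += bytes([100])              # calibrated reflectivity
        azimuth += 0.4                          # nominal azimuth step between blocks
    packet += struct.pack("<I", timestamp_us)   # microseconds past the hour
    packet += bytes([0x37, 0x22])               # strongest return, VLP-16 product ID
    return bytes(packet)

azimuth = 0.0
start = time.time()
while azimuth < 360.0:                          # stream one full revolution
    sock.sendto(make_packet(azimuth, int((time.time() - start) * 1e6)),
                ("127.0.0.1", DATA_PORT))
    azimuth += 12 * 0.4
    time.sleep(0.0013)                          # roughly the real packet rate at 10 Hz
```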
We are currently working on a number of additional sensor models and expect the list of available device-specific models to grow steadily through the year. The features included in these sensors will also improve as we gather more data and improve the correlation between the models and the real devices.
Written by: Mike Dempsey – Managing Director
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion.