AVSandbox’s Advanced LiDAR Simulation for Safe and Effective Testing 

In the ever-evolving landscape of autonomous driving, ensuring the safety and efficacy of self-driving systems is paramount. This is where simulation tools like AVSandbox come into play, offering a revolutionary solution for testing and training autonomous vehicles in a virtual environment. 

Central to AVSandbox is its ability to model and simulate vehicles and sensors in diverse scenarios. One of the key sensor technologies modelled within this tool is LiDAR, chosen for its crucial role in providing precise environmental perception for autonomous vehicles. AVSandbox is continuously engaged in research and development to faithfully emulate the behaviour of real-world LiDAR sensors. Our current sensor models are designed to replicate the precise communication protocols and intricate data stream structures observed in actual LiDAR sensors, ensuring a seamless transition from simulation to real-world scenarios. 

LiDAR models within AVSandbox undergo rigorous validation processes, including exposure to different weather conditions, to ensure their behaviour mirrors that of real sensors accurately. This validation process is key in instilling confidence that the simulated environment closely mimics real-world scenarios, providing a robust testing ground for autonomous driving systems. 
Building a LiDAR Model in AVSandbox 

Building the model for a LiDAR sensor involves mastering two critical components: the scanning pattern and the structure of the frame data. The scanning pattern dictates how the sensor emits laser beams to craft an extensive 3D view of the scanning area. Meanwhile, the frame data structure is essential for transmitting the collected point cloud to the Electronic Control Unit (ECU) of the autonomous vehicle. Where detailed information about the scanning pattern and data structure is lacking, reverse engineering can be applied: we extract the necessary details from sample data collected from real sensors, ensuring the development of a comprehensive and accurate model.  
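On the scanning-pattern side, each emitted beam is defined by an azimuth and elevation angle, and each return by a measured range; converting these to Cartesian coordinates is the core of reconstructing the point cloud. The sketch below uses a common axis convention (x forward, y left, z up); the actual conventions vary between sensors and are an assumption here.

```python
import math

def beam_to_point(azimuth_deg, elevation_deg, range_m):
    """Convert one LiDAR return (azimuth, elevation, range) into a
    Cartesian point. The axis convention (x forward, y left, z up)
    is an assumption; real sensors document their own frames."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A beam fired straight ahead at zero elevation lands on the x axis.
print(beam_to_point(0.0, 0.0, 10.0))  # -> (10.0, 0.0, 0.0)
```

Applying this conversion across every beam in one full sweep of the scanning pattern yields the 3D point cloud for a single frame.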

In reverse engineering, understanding raw data from a sensor depends on how much information we have about the sensor’s structure (Figure 1). The more we know, the easier it is to decode the data. Sensor messages contain headers that describe key details like elevation, azimuth, range, and intensity of returned laser points. To decode this, we look for repeated patterns in the data and use what we know about the sensor’s layout from its documentation. 

Figure 1: Reverse Engineering to build the LiDAR Model 
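To make the decoding step concrete, the sketch below parses a hypothetical fixed-size return record carrying the fields mentioned above (azimuth, elevation, range, intensity). The field layout, byte order, and scale factors are illustrative assumptions, not the format of any specific sensor; in practice the layout is recovered from the sensor's documentation or from the repeated patterns found in sampled data.

```python
import struct

# Hypothetical per-return record layout (an assumption, not a real
# sensor's format): little-endian uint16 azimuth (0.01 deg steps),
# int16 elevation (0.01 deg steps), uint32 range (mm), uint8 intensity.
RECORD = struct.Struct("<HhIB")

def parse_returns(payload: bytes):
    """Yield (azimuth_deg, elevation_deg, range_m, intensity) tuples
    from a buffer of back-to-back fixed-size records."""
    for offset in range(0, len(payload) - RECORD.size + 1, RECORD.size):
        az_raw, el_raw, rng_mm, inten = RECORD.unpack_from(payload, offset)
        yield (az_raw * 0.01, el_raw * 0.01, rng_mm / 1000.0, inten)

# Round-trip one synthetic record: 90.00 deg azimuth, -2.50 deg
# elevation, 12.345 m range, intensity 200.
blob = RECORD.pack(9000, -250, 12345, 200)
print(list(parse_returns(blob)))  # -> [(90.0, -2.5, 12.345, 200)]
```

The fixed record size is exactly what makes the "repeated patterns" visible in a raw capture: the same field boundaries recur at a constant stride throughout the payload.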

AVSandbox is committed to being a transformative force for autonomous driving simulation, particularly in the realm of LiDAR sensor modelling. With its advanced capabilities, AVSandbox enables users to define the material properties of simulated objects, mirroring real-world scenarios. Sensor models provided by AVSandbox offer insights into both the 3D position of objects and their reflectivity, crucial for understanding the nature of the environment surrounding autonomous vehicles. 

Figure 2 illustrates the model of the Innoviz LiDAR sensor within AVSandbox. The top left and right images display a simulation test using AVSandbox, while the lower image showcases the LiDAR points captured during a simulation. In the lower image, the colours of the points and objects vary based on the intensity of the laser points returned from objects in the simulated environment.  

Figure 2: LiDAR modelling tests with AVSandbox 
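The intensity-based colouring shown in Figure 2 can be sketched as a simple normalisation of each return's intensity onto a colour ramp. The 8-bit intensity range and the blue-to-red ramp below are assumptions chosen for illustration, not AVSandbox's actual colour map.

```python
def intensity_to_rgb(intensity, max_intensity=255):
    """Map a raw intensity value onto a blue (weak return) to red
    (strong return) ramp. The 8-bit maximum and the ramp itself are
    illustrative assumptions."""
    t = max(0.0, min(1.0, intensity / max_intensity))
    return (int(round(255 * t)), 0, int(round(255 * (1.0 - t))))

print(intensity_to_rgb(0))    # -> (0, 0, 255): weak return, blue
print(intensity_to_rgb(255))  # -> (255, 0, 0): strong return, red
```

Colouring each point this way makes highly reflective surfaces (road signs, licence plates) stand out immediately from low-reflectivity materials such as dark fabric or asphalt.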

The goal of AVSandbox is to ensure that autonomous vehicles are capable of driving safely in both controlled environments and the unpredictable conditions of the real world. By providing a realistic and reliable simulation platform, AVSandbox empowers companies and researchers to accelerate the development and deployment of autonomous driving technology. 

In our ongoing pursuit of advancing autonomous driving simulation, AVSandbox provides a robust toolset for testing and training self-driving systems. Within AVSandbox, a diverse array of LiDAR sensors—including models from Innovusion, Velodyne, Innoviz, and Livox—enhances our approach to autonomous vehicle testing and training. With our commitment to precise sensor modelling and rigorous validation processes, we actively advance our mission to create safer and more efficient roads.

Written by: Sathya Senadheera, QA Engineer & Ahmed Alsaab, Simulation Engineer

Please get in touch if you have any questions or have a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion




