Lidar sensors are a key component in mapping the environment for autonomous cars, with their ability to measure millions of points each second to very high accuracy. These data points create a point cloud of the environment which can be used to determine where to drive. Compared to radar, lidar creates more data points with higher accuracy, and unlike cameras, it gives the range to every point.
The light emitted by a lidar can be reflected, transmitted, or absorbed by the materials it hits. Light that reflects back to the sensor is received, and receptors in the lidar measure the time of flight to calculate the distance. The light can be emitted all at once, as in flash lidar, or as individual beams. Individual beams can be emitted at slightly different times to help match each light return to the angle it was fired at. Lidar typically has a range of up to 200 m, limited to keep the emitted beams safe for the human eye, and can measure data through 360 degrees in the horizontal plane and approximately 32 degrees of elevation.
The behaviour of light when hitting an object is determined by the optical properties of the object's material. These can be defined using a Bidirectional Scattering Distribution Function (BSDF), which is split into transmitted and reflected components. The optical properties of materials can be defined in different ways, including diffuse, metal, subsurface scattering and transmission. On top of that there are specular, sheen and clearcoat layers. These can be measured using real-world measuring equipment and recreated in 3D modelling programs. Additionally, materials have different properties at different wavelengths, including between visible light (400-700 nm) and lidar (900+ nm).
In simulation, light can be split simply into three components: ambient, diffuse and specular. Where physically based rendering is not required, material properties can be defined by how they reflect each of these components. Materials can also have maps that define specific properties over the surface of the object, such as a bump map, which defines the height of the surface down to fractions of a millimetre.
Modelling lidar precisely requires an environment with physically based materials, a high refresh rate and weather modelling. Simulating precipitation is extremely difficult: the varying size, shape and velocity of droplets, and the sheer number of droplets in the air, make it expensive to compute. For this reason rain is often implemented as a simple filter over the camera or lidar output. However, by measuring real-world weather conditions over long durations, the effects of rain and other sources of noise can be modelled with low computing effort.
Noise in lidar occurs when the beams are distorted by precipitation, dust or insects in the air, returning only a partial beam of light to the sensor. The software inside the lidar can be programmed to output the highest-intensity return from each emission, or the return from the last object hit. Whatever the light hits, the intensity reflected back to the lidar depends on the object's material properties, such as surface roughness, which can also change in different weather conditions.
Another possible source of noise is other lidar sensors. For configurations with multiple lidars mounted on a vehicle, each lidar can be configured to ignore light coming from the direction of the other sensors. This approach is not practical when the lidar is mounted on another vehicle; however, it is rare that another lidar will be mistaken for a returned signal, as the wavelength, intensity and angle all need to match.
A normal day's rainfall is shown below alongside the standard deviation of lidar range values to objects in the environment. By analysing the distance to a calibrated object over the course of this day and others where other variables are kept the same, the effects of rain on the lidar's return rate and intensity can be estimated. The standard deviation of the returned range peaks during and around the times when rainfall is highest.
To match the real-world characteristics of lidar, the simulated lidar sensors rotate around the environment in real time, or at a rate defined by the user. Data packets are created with the same byte layout as the real sensor's, ensuring that data generated in simulation can be used in the same way as data from the real sensor.
Claytex sensor models, including lidar, use noise generated from weather models driven by conditions such as air temperature, humidity and rainfall recorded by a dedicated weather station. The lidar models, written in C++, are designed for speed and accuracy. The models can be implemented in any simulation environment but work natively with rFpro via the Claytex Simulation Manager.
The rotational speed of the simulated sensor is configurable, along with the frequency at which new sensor data is generated and at which data packets are sent over UDP. Real lidars commonly spin at over 360 rpm, which requires a high frame rate from the simulation to send potentially hundreds of data packets each rotation. If desired, the frame rate can be lowered by rendering a larger area of the environment at each frame, reducing the workload on the computer.
Written by Rob Smith – AD Simulation Engineer
Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion