The world may be getting smaller metaphorically, but the distances we need to travel remain as long as ever. In fact, people and goods have never covered as much distance as they do today – it has been estimated that the average person spends four and a half years of their lifetime in a vehicle, and a meal in the United States (US) can travel about 25,000 kilometers from farm to plate. That’s a lot of traveling, and with it come huge costs and risks. Fortunately, highly automated vehicles are revolutionizing the way in which both people and goods travel, making transportation safer, more efficient, and more environmentally friendly.
To achieve this, automated vehicles must be able to perceive and understand their environment. The vehicle needs to gather critical information such as what obstacles, street signs, other vehicles, and pedestrians are present and where they are located. It can then use this information to direct the vehicle’s operations and safely navigate it. Multiple sensors are used to collect this data, and the better the quality of the sensors, the better the data, and the safer and more efficient the vehicle.
Having the right combination of sensors is the key differentiator in performance across autonomous vehicles.
Among the many sensors used, two are especially common in autonomous vehicles: Light Detection and Ranging (LIDAR) and cameras. LIDAR uses laser pulses to measure distances and create a 3D map of the environment, while cameras use visible light to capture images. Both technologies serve the same purpose of assessing the environment around the vehicle, but they have different strengths and weaknesses.
LIDAR: Shedding Light on the Environment
With LIDAR, a laser beam is emitted and the time it takes for the beam to bounce back from objects in its path is measured. Then, this information is used to create a high-resolution 3D map of the environment surrounding the autonomous vehicle.
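The time-of-flight principle described above reduces to a simple formula: the pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the 66.7 ns example value is illustrative, not from the source):

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2
C = 299_792_458  # speed of light in m/s

def lidar_distance(round_trip_s: float) -> float:
    """Distance in meters to the object that reflected the pulse."""
    return C * round_trip_s / 2

# A pulse returning after roughly 66.7 nanoseconds corresponds to
# an object about 10 meters away.
print(round(lidar_distance(66.7e-9), 2))
```

Because light covers those distances in tens of nanoseconds, a LIDAR unit can fire and time hundreds of thousands of pulses per second, which is how it builds a dense 3D point map of the scene.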
LIDAR is particularly useful in providing precise and reliable distance and depth information, critical for object detection and avoidance. The short wavelengths that are used in LIDAR provide information about the size and shape of objects, allowing the system to determine not just that something is present in the environment, but also what it is and how to classify it. This information can be used to create a detailed 3D map, enhancing the capacity of the autonomous system to plan and navigate.
LIDAR works equally well by day and by night because it is an active illumination sensor – one that emits its own source of light. As a result, it is unaffected by changes in ambient lighting.
LIDAR does have some limitations, though. The sensors are expensive, and while they provide high-resolution data, the point clouds they produce are not as dense as camera images. In addition, LIDAR sensors may struggle to detect objects with low reflectivity, such as black cars or asphalt.
Fortunately, cameras are able to pick up some of this slack.
Cameras: An Ideal Partner
Cameras are possibly the most common sensor there is – we have cameras on our phones, our computers, and even on our watches, and we are watched by thousands of cameras in our homes, at work, and in public spaces every day. This is because cameras are exceptionally cost-effective, and they are able to capture high-resolution images and videos, providing a detailed view of the environment.
This makes them particularly useful for autonomous vehicles, where they assist in object recognition, lane detection, and other applications. As cameras can also be used to detect colors and textures, they are helpful in identifying and classifying objects and they are effective in picking up darker objects.
Until relatively recently, most vehicles have been equipped with mono-camera setups, meaning only 2D images of the environment could be generated. With recent developments in neural network technologies, however, depth information can be inferred from a single image: a trained network recognizes objects in the scene, compares them against patterns learned during training, and estimates their distance from how they appear.
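One intuition behind single-camera distance estimation is that an object of known real-world size appears smaller the farther away it is. A minimal sketch of that geometric idea using the pinhole camera model (the focal length and car height below are assumed example values, and a real system would use a trained neural network rather than a fixed lookup):

```python
# Pinhole model: an object's image height shrinks in proportion to its
# distance, so  distance = focal_length_px * real_height_m / image_height_px
def distance_from_size(focal_px: float, real_height_m: float,
                       pixel_height: float) -> float:
    """Estimate distance to an object of known physical height."""
    return focal_px * real_height_m / pixel_height

# A car ~1.5 m tall spanning 105 pixels, seen by a 700 px focal-length
# camera, is estimated to be about 10 m away.
print(distance_from_size(700, 1.5, 105))
```

A learned model generalizes this idea: instead of one hand-coded size prior, it absorbs many such cues (object class, perspective, context) from training data.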
Stereo vision, on the other hand, can do more than just estimate the distance. By using two cameras placed slightly apart from each other to capture two images of the same scene at the same time, a 3D perception is generated. Advanced algorithms analyze the pixel disparity between the two images in order to extract information about the depth of each pixel and distance of the objects from the cameras, creating a 3D map of the scene. 3D stereoscopic cameras have many applications for autonomous vehicles, including visual navigation, object detection, and creating an accurate 3D perception of the environment.
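The disparity-to-depth relationship at the heart of stereo vision is compact: depth is the focal length times the baseline (the distance between the two cameras), divided by the pixel disparity. A minimal sketch, with assumed example values for the camera parameters:

```python
# Stereo triangulation: depth Z = f * B / d, where
#   f = focal length in pixels, B = baseline in meters,
#   d = disparity (horizontal pixel shift between the two images)
def stereo_depth(focal_px: float, baseline_m: float,
                 disparity_px: float) -> float:
    """Depth in meters of a point matched in both camera images."""
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and a 12 cm baseline, a disparity of
# 8.4 pixels places the point about 10 m from the cameras.
print(round(stereo_depth(700, 0.12, 8.4), 2))
```

Note the inverse relationship: disparity shrinks as distance grows, which is why stereo accuracy degrades for small, far-away objects – a limitation discussed below.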
3D stereo vision cameras do have limitations: depending on image contrast, texture, and resolution, they can struggle to accurately identify small objects at long distances, and they do not perform well in low-light environments.
Combining LIDAR and Cameras for a Complete Picture
LIDAR and cameras clearly have complementary strengths and limitations – where cameras can’t operate well in low-light environments, LIDAR is unaffected by lighting; where LIDAR can’t provide high-resolution images, cameras can, and can also detect color and texture; and where mono cameras cannot provide accurate enough information about depth and distance, LIDAR can create a 3D map of the environment.
By combining LIDAR and cameras, automated vehicles can take advantage of the strengths of both technologies, while mitigating the risks of their limitations. Used together, LIDAR and cameras can create a more complete and detailed picture of the environment surrounding an autonomous vehicle, allowing the system to make more accurate and reliable decisions. And where they overlap in functionality they create sensor redundancy, an important safety feature for any autonomous system.
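The fusion idea described above can be sketched very simply: pair each camera-classified object with the LIDAR range measured in the same direction, so each detection carries both a label and a precise distance. This is a deliberately naive illustration (the names and the index-based pairing are hypothetical; real systems associate detections in 3D space and weigh sensor confidence):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # what it is, from the camera's classifier
    range_m: float  # how far away it is, from LIDAR

def fuse(camera_labels: list[str], lidar_ranges: list[float]) -> list[Detection]:
    """Naively pair camera labels with LIDAR ranges by index."""
    return [Detection(label, rng)
            for label, rng in zip(camera_labels, lidar_ranges)]

# Each fused detection now answers both "what" and "where".
for det in fuse(["car", "pedestrian"], [12.5, 4.2]):
    print(det)
```

Even this toy version shows the redundancy benefit: if either sensor misses an object the other detected, the mismatch itself is a signal the system can act on.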