Self-driving cars are no longer just something featured in futuristic movies; they have become a reality. While human drivers are still necessary and involved, at least to an extent, the ultimate goal is for cars to be fully autonomous. For that to happen, cars must be equipped with the right tools to “see” the surrounding environment, identify any potential obstacles, and then act on that information accordingly.
5 Levels of Autonomous Driving
- Driver Assistance – human driver operates the car, but with assistance from an automated system for steering or acceleration (but not both).
- Partial Automation – automated assistance is offered for both steering and acceleration, but the driver is still responsible for safety-critical actions.
- Conditional Automation – sensors monitor the surrounding environment, and additional activities such as braking are automated. The driver must be ready to intervene should the system fail.
- High Automation – the vehicle can operate fully autonomously but the mode can only be activated under specific conditions.
- Full Automation – the driver specifies the destination and the car does the rest on its own. At this level, there is no need for a steering wheel or pedals.
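The five levels above can be captured as a simple data structure. The sketch below is purely illustrative (the class and helper names are my own, not from any standard library); it encodes the key practical distinction that below Conditional Automation the human is still responsible for monitoring the environment:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The five levels of autonomous driving described above (Level 0, no automation, omitted)."""
    DRIVER_ASSISTANCE = 1       # steering OR acceleration assisted, not both
    PARTIAL_AUTOMATION = 2      # steering AND acceleration assisted
    CONDITIONAL_AUTOMATION = 3  # system monitors environment; driver ready to intervene
    HIGH_AUTOMATION = 4         # fully autonomous, but only under specific conditions
    FULL_AUTOMATION = 5         # no steering wheel or pedals needed

def driver_must_monitor(level: AutomationLevel) -> bool:
    """Below Conditional Automation, the human driver still monitors the road."""
    return level < AutomationLevel.CONDITIONAL_AUTOMATION

print(driver_must_monitor(AutomationLevel.PARTIAL_AUTOMATION))  # True
print(driver_must_monitor(AutomationLevel.HIGH_AUTOMATION))     # False
```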
Many cars are already equipped with at least one camera. Since 2018, cars in the USA have been required to be fitted with a reverse camera, and cars that also offer lane-changing warnings have forward-facing cameras as well. While these cameras were put in place in order to assist drivers, they can also provide information directly to autonomous driving systems.
Cameras can easily be mounted at various locations on the car, and stereo vision (i.e. two cameras looking at the same view) can be used to capture images of the entire environment in the car’s vicinity. 3D cameras are widely available, offering detailed and life-like images. Today’s cameras have the ability to detect, classify and measure the distance between objects on the road and the vehicle itself so that an autonomous vehicle would be able to avoid hitting a pedestrian or another car.
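Stereo vision recovers distance because the same object appears slightly shifted between the two camera images, and that shift (the disparity) shrinks with distance. A minimal sketch of the standard pinhole-stereo depth relation, Z = f × B / d; the focal length and baseline figures below are illustrative, not taken from any particular vehicle:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length of the cameras, in pixels
    baseline_m   -- distance between the two cameras, in metres
    disparity_px -- horizontal pixel shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means the point is at infinity)")
    return focal_px * baseline_m / disparity_px

# A pedestrian seen with 50 px of disparity by cameras 0.5 m apart (f = 1000 px)
print(stereo_depth(1000.0, 0.5, 50.0))   # 10.0 metres away
print(stereo_depth(1000.0, 0.5, 100.0))  # 5.0 metres: larger disparity = closer
```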
The downside to the use of cameras as an autonomous vehicle sensor is that, similar to the human eye, a visible-light camera’s ability to capture clear images is impacted by rain, fog, snow, or any other condition that causes poor visibility. If the camera does not provide a clear enough image, the system will not be able to tell the car what to do, which can lead to accidents. While thermal cameras are better able to handle harsh weather and lighting conditions, both types would require 4-6 cameras on the vehicle in order to render the most realistic images. This generates a large amount of data that must be processed, requiring a significant amount of hardware.
Radar sensors use radio waves for object recognition and detection. Radar can detect objects and measure their distance from the car and the speed at which they are moving in real-time.
Both long-range (77 GHz) and short-range (24 GHz) radar sensors are currently available, each serving a different purpose. Short-range radar is used for ADAS tasks like monitoring blind spots, lane-assist, and parking aids. Long-range radar helps cars keep a safe distance from other vehicles and offers brake assistance.
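The two radar measurements rest on simple physics: range comes from the echo’s round-trip time, and relative speed comes from the Doppler shift of the returned wave. A hedged sketch of both relations (real automotive radars use FMCW chirps rather than single pulses, and the 77 GHz carrier below simply matches the long-range band mentioned above):

```python
C = 3.0e8  # speed of light in m/s (approximate)

def radar_range(round_trip_s: float) -> float:
    """Range from echo delay: the wave travels out and back, so halve the path."""
    return C * round_trip_s / 2

def radial_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative radial speed from the Doppler shift: v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2 * carrier_hz)

print(radar_range(1e-6))      # 150.0 m for a 1-microsecond echo
print(radial_speed(15400.0))  # 30.0 m/s closing speed at 77 GHz
```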
While radar works well even in poor weather conditions, it does have some drawbacks. First, it is only 90-95% effective at detecting pedestrians, and while that is a high percentage, it is not enough to ensure the safety of pedestrians crossing the street in the path of an autonomous vehicle. Second, most radar sensors are 2-dimensional and only scan horizontally, making it impossible to accurately determine the height of an object. This creates a potentially dangerous problem for a car that needs to drive under a bridge or a road sign.
Lidar sensors are similar to radar but they use laser beams instead of radio waves. The added value of lidar is that in addition to providing accurate measures of distance between an object and the vehicle, it also creates a 3D point cloud that maps a 360-degree view around the car.
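Each lidar return is a measured range plus the laser beam’s pointing angles, which can be converted into a 3D point; accumulating returns over a full sweep builds the 360-degree point cloud. A minimal sketch of that spherical-to-Cartesian conversion (angle conventions vary between sensors; the one assumed here is labelled in the docstring):

```python
import math

def lidar_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one lidar return (range + beam angles) to an (x, y, z) point.

    Convention assumed here: x forward, y left, z up; azimuth measured
    counter-clockwise from the x axis, elevation up from the horizontal plane.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# One full horizontal sweep at a fixed elevation yields one ring of the point cloud
ring = [lidar_point(10.0, az, 0.0) for az in range(0, 360, 45)]
print(ring[0])  # (10.0, 0.0, 0.0): a return 10 m directly ahead of the car
```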
While lidar has the potential to provide the most accurate and useful picture of what is happening in the surrounding areas, it is an extremely expensive option and thus one that some carmakers are hesitant to adopt.
So Which Option is Best?
The answer depends on several questions:
- What is the desired function of the sensor? A camera may be sufficient for parking assistance, for example, while radar or lidar will be more reliable for emergency braking or collision avoidance.
- What are the specific requirements for the function? If there is a need to detect other vehicles at long range, radar will be better than a camera.
- Are there any extreme conditions that need to be taken into account? Remember that cameras do not operate well in poor visibility, but thermal cameras, radar and lidar are less impacted by adverse weather conditions.
- Is the sensor meant to provide driver assistance or automation? Cameras are generally sufficient when it comes to providing assistance, with radar and lidar providing more advanced capabilities.
- What are the cost implications? Lidar is the most expensive option but provides the highest resolution and most accurate mapping.
- What are the design implications? Will having 4-6 cameras mounted all over the vehicle cause design challenges that could be solved by mounting one lidar sensor on the roof?