Aided by rapid developments in technology, growth in the autonomous vehicle industry has been huge, and there are no signs of it slowing down. In 2021 there were an estimated 20.3 million cars with at least Level 1 autonomy (driver assistance) globally, and this number is predicted to increase to around 62.4 million by 2030.
The primary purpose of autonomous driving technologies is to improve driving safety. This is achieved by gathering accurate information about the environment and analyzing it so that obstacles can be detected and avoided. A range of sensors – including cameras, radar, and LiDAR – work together to identify and provide the needed detailed and precise representation of the environment around the vehicle – ideally in any kind of weather and lighting conditions.
These sensors form the basis of most of the advanced driver-assistance systems (ADAS) on the market today. The more we can optimize these technologies – both from a technological and an implementation perspective – the better and safer autonomous vehicles will be.
The Challenge of 3D Perception
Due to their capabilities and affordability, cameras are one of the most common sensors in use today for capturing high-resolution data about objects in a vehicle’s surroundings. Despite their popularity as ADAS and AD (Autonomous Driving) sensors, cameras are not always used to their full potential. Currently, most ADAS algorithms are based on monocular systems, where video is obtained from a single camera. Monocular vision cannot directly and accurately estimate the distance to objects, which is essential for meaningful 3D representations of obstacles in the environment. This means that highly autonomous vehicles based on monocular systems are not as safe as they could be.
There are other technologies capable of providing 3D perception to ADAS, one of which is LiDAR. However useful this technology is, it is also expensive and demands a complicated and costly integration process for vehicle manufacturers. This poses a hurdle for the high-volume autonomous vehicle industry.
The two biggest challenges for the autonomous vehicle industry are finding affordable ways of obtaining accurate 3D data and scaling up such a sensor suite efficiently. 3D stereo vision addresses both of these challenges.
3D Stereo Vision: Answering the Call?
3D stereo vision technology provides information about the relative depth and precise position of objects in the environment by using two cameras – similar to the way in which humans detect depth through the use of two eyes.
In 3D stereo vision, two cameras are mounted on a plane at a known distance from each other, known as the baseline. The two cameras capture images at the same time, and the overlapping area between these two images is used to create 3D perception. Algorithms match pixels between the overlapping images, then generate and analyze the disparities between them. Because disparity and depth are inversely related (i.e. as the distance from the camera increases, the disparity decreases), these disparities can be used to calculate depth analytically. Epipolar geometry is then used to map these points as coordinates in 3D space. In simple terms: one camera gives you 2D pictures; two cameras give you 3D ones.
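The inverse relationship between disparity and depth can be sketched with the standard pinhole stereo formula, Z = f × B / d (depth = focal length × baseline ÷ disparity). The focal length and baseline values below are hypothetical, chosen only to illustrate the relationship:

```python
def depth_from_disparity(disparity_px: float, focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth in meters from pixel disparity, using Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth.")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, 0.3 m baseline.
f_px, baseline = 1000.0, 0.3
near = depth_from_disparity(30.0, f_px, baseline)  # large disparity -> 10.0 m
far = depth_from_disparity(3.0, f_px, baseline)    # small disparity -> 100.0 m
```

Note how a tenfold drop in disparity corresponds to a tenfold increase in depth, which is why distant objects require high camera resolution to resolve reliably.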
The Benefits of 3D Stereo Vision
3D stereo vision provides highly accurate data about all objects in the targeted area, along with critical information about the depth and relative position of these objects. 3D stereo vision also allows for a number of other critical applications such as semantic segmentation – where each pixel in an image is classified according to a category or label. This means it can not only tell that an object is there, but can also identify what it is.
The field of 3D stereo vision has grown rapidly over the past decade as it answers a critical need of the autonomous vehicle industry. Better cameras offer higher resolution and improved dynamic range, resulting in the ability to detect smaller objects from longer distances even under adverse lighting conditions. In addition, higher computational power is available, meaning more sophisticated algorithms can run at a higher rate with more accurate performance. As it is relatively inexpensive to onboard two cameras instead of one, the solution is simpler and cheaper than many other technologies that provide the same information. What’s more, it is relatively easy to incorporate into any existing ADAS system.
How Does 3D Stereo Work with other Sensors?
Any ADAS will make use of a number of sensors that work together to build the best possible perception of the vehicle’s environment. 3D stereo vision works alongside these other technologies to bring depth perception and accuracy to the ADAS data.
One of the most commonly used technologies in an ADAS for depth perception is LiDAR, which has similar capabilities to 3D stereo vision. 3D stereo vision and LiDAR are both capable of measuring distance and estimating depth, and both can be used to formulate a 3D point cloud – a set of data points in space that represent a 3D perception. Each technology could be used as the only sensor in an ADAS, or they can be used together to create redundancy, where in the event of failure of one of the systems the other compensates.
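As a sketch of how a disparity map could be back-projected into such a 3D point cloud, the following assumes a simple rectified pinhole stereo model; the focal length, baseline, principal point, and the tiny disparity map are all hypothetical values for illustration:

```python
import numpy as np

def disparity_to_point_cloud(disparity, f_px, baseline_m, cx, cy):
    """Back-project a rectified disparity map into 3D points (camera frame).

    Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f
    Pixels with zero or negative disparity are skipped (no valid match).
    """
    h, w = disparity.shape
    v, u = np.indices((h, w))          # pixel row/column coordinates
    valid = disparity > 0
    z = f_px * baseline_m / disparity[valid]
    x = (u[valid] - cx) * z / f_px
    y = (v[valid] - cy) * z / f_px
    return np.stack([x, y, z], axis=1)  # N x 3 array of 3D points

# Hypothetical 2x2 disparity map in pixels; 0 marks an invalid match.
disp = np.array([[30.0, 0.0],
                 [15.0, 10.0]])
cloud = disparity_to_point_cloud(disp, f_px=1000.0, baseline_m=0.3,
                                 cx=0.5, cy=0.5)
# Three valid pixels yield three points, with depths 10, 20, and 30 m.
```

Production systems typically build this from calibrated, rectified image pairs, but the geometry reduces to the same per-pixel back-projection shown here.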
The real difference lies in the cost and the scalability – the exact challenges facing the autonomous vehicle industry. 3D stereo vision is the most cost-effective and easy-to-integrate technology that can provide high-quality, accurate 3D data. It brings all the benefits of 3D imaging at a fraction of the cost of other technologies such as LiDAR because it uses high-resolution cameras which are relatively inexpensive and consume minimal power.
High-volume scalability is possible with 3D stereo vision because it is easy to integrate this technology into existing systems. Advanced technologies such as Foresight’s proprietary software-based Mono2Stereo™ and ScaleCam algorithms enable simple and rapid integration into current systems. This makes Foresight’s solutions ideal for the high-volume upscaling of existing systems.
Foresight: Making 3D Stereo Vision Accessible
The great benefits of 3D Stereo Vision to the autonomous vehicle industry are clear to see. With Foresight’s revolutionary software solutions, the autonomous car industry now has a quick and inexpensive way of adding this technology to any ADAS.