The automotive industry is working to enhance the driver's experience and overall safety. Technological innovations such as ADAS, tire pressure monitoring, automatic emergency braking, and IoT connectivity have improved vehicle performance, efficiency, reliability, and safety.
Autonomous driving is causing major disruption in the automotive industry and demands sophisticated sensing and decision-making capability. We make decisions based on how we sense and perceive the world, and autonomous vehicles are no different; they make decisions the same way. Timing is critical in decision making, especially in challenging driving scenarios. A greater understanding of the environment enables an end-to-end system to make better decisions, faster and more consistently. It is like adding intelligence to safety.
How safe would driving become if the vehicle could sense and perceive everything as we do?
Estimating depth on the go can greatly assist the driving experience, and mature, accurate depth estimation in a traffic scene goes a long way toward ensuring safety on the road. For instance, while driving at night it is difficult to judge the distance of approaching vehicles, which can be unnerving. As we inch toward self-driving cars and autonomous vehicles, the need for precise depth perception grows louder.
Existing techniques cannot resolve details at a distance. How cool would it be if the vehicle could calculate the gap between two parked vehicles and help find a parking space, and do so from far away?
Cadence and Light have collaborated to deploy the Tensilica Vision Q7 DSP inside Light's Clarity Depth Perception Platform. The combined technology powers next-generation advanced driver-assistance systems (ADAS) with 10 times the performance of a quad-core CPU. In this post, I cover the benefits of combining Cadence Vision DSPs with Light's Clarity for depth sensing and perception.
Depth perception is the ability to visually perceive the world and its objects in three dimensions (3D) and to judge the distance of those objects. Measuring depth relative to a camera is enticing and is the key to unlocking exciting applications such as autonomous driving, 3D scene reconstruction, and augmented reality (AR).
Stereopsis, described by Charles Wheatstone in 1838, is the process by which we perceive objects in three dimensions. Wheatstone suggested that the two eyes see the same scene from slightly different angles and horizontal positions, producing a depth cue known as horizontal disparity, or binocular disparity. The phenomenon was initially used mainly for entertainment: anaglyphs created stereoscopic 3D effects when viewed through 2D colored glasses with chromatically opposite lenses (usually red and cyan). More recently, stereopsis has been combined with advanced algorithms to derive depth from two images. A stereo camera is a camera with two or more image sensors, allowing it to simulate human binocular vision and perceive depth.
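To make the disparity-to-depth relationship concrete, here is a minimal sketch (not Light's or Cadence's code) of classical two-camera depth estimation. It assumes a rectified stereo pair, and the file names, focal length, and baseline are hypothetical values chosen for illustration; a real system would calibrate these and use far more sophisticated matching.

```python
# Illustrative sketch: classical two-camera depth from disparity.
# Assumes a rectified stereo pair; file names and calibration values are hypothetical.
import cv2
import numpy as np

FOCAL_LENGTH_PX = 1400.0   # assumed focal length, in pixels
BASELINE_M = 0.12          # assumed distance between the two cameras, in meters

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: for each pixel, find the horizontal shift (disparity)
# of the best-matching block in the other image.
matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth follows from similar triangles: Z = f * B / d.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
```

The key relationship is that depth is inversely proportional to disparity (Z = f·B/d), so small disparity errors at long range translate into large depth errors; larger baselines and additional cameras are common ways to push accuracy further out.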
Accurate depth information leads to a better understanding of the scene and faster decisions that keep the vehicle safe. Existing sensing techniques such as lidar, radar, and monocular cameras each come with their own challenges.
Light’s Clarity outperforms existing sensing systems by seamlessly combining visual information from multiple cameras to accurately estimate depth. The platform helps machines make better decisions by seeing more and seeing farther, bridging the major gap between currently available technologies and the goal of full self-driving. Instead of relying on active scanning to build a representation of the world, or relying solely on machine learning to estimate distances, it sees the way humans see: it creates a highly detailed 3D model of the world using multi-view depth perception. Light’s multi-camera depth perception platform improves upon existing stereo vision systems by using additional cameras, novel calibration, and unique signal processing to provide unprecedented depth quality and distance information for each pixel.
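Light has not published the details of its algorithms, so purely as an illustration of why extra cameras help, the sketch below fuses per-pixel depth maps from several hypothetical camera pairs with different baselines, weighting each estimate by a confidence map. The function name and weighting scheme are assumptions for illustration, not Light's method.

```python
# Illustrative only: confidence-weighted fusion of depth maps from several
# camera pairs. This is NOT Light's algorithm; names and weights are hypothetical.
import numpy as np

def fuse_depth_maps(depth_maps, confidences):
    """Combine per-pixel depth estimates from multiple camera pairs.

    depth_maps  -- list of HxW arrays of depth in meters (0 where invalid)
    confidences -- list of HxW arrays of non-negative weights
    Returns an HxW fused depth map (0 where no pair produced a valid estimate).
    """
    depth = np.stack(depth_maps)            # shape (N, H, W)
    conf = np.stack(confidences)            # shape (N, H, W)
    conf = np.where(depth > 0, conf, 0.0)   # ignore invalid pixels
    total = conf.sum(axis=0)
    fused = (depth * conf).sum(axis=0) / np.maximum(total, 1e-6)
    return np.where(total > 0, fused, 0.0)
```

One intuition behind using more than two cameras is that a longer-baseline pair is more accurate at long range, while a shorter-baseline pair handles nearby objects and occlusions better; fusing their estimates gives better depth across the whole scene than either pair alone.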
The ability to accurately perceive the distance and scale of objects, even something as small as a leaf, helps optimize several driver-assist and autonomous systems, including automatic emergency braking, evasive steering control for a wider variety of objects, adaptive suspension adjustment for road hazards such as potholes, and smoother adaptive cruise control and lane keeping.
Clarity is needed for the next generation of vehicle capability and safety. Clarity creates a 3D map of the world in front of, next to, or behind the vehicle up to 30 times a second, which enables vehicles not just to react but to make safe, proactive decisions. That level of understanding is exactly what is needed to make next-generation vehicles safer and more capable, and to push us toward cars that truly drive themselves. However, such capabilities rely on complex algorithms and ultra-fast computations that must run on purpose-built processors. Fortunately, there is a solution...
Advanced features such as ADAS, automatic emergency braking (AEB), and lane following require cameras and real-time data processing for decision making. Cadence has a whole family of Tensilica vision processors that do not just handle image and computer vision processing; they also have neural network capability for inferencing (using AI to recognize where the lane markings are, for example). The Tensilica Vision Q7 DSP provides Light with real-time data processing, ensuring low-latency, high-bandwidth transmission of high-resolution output. However, to handle the computational requirements of real-world deployment, more than a dozen Vision Q7 DSPs are used inside Clarity.
Tensilica Vision Q7 enables Light's Clarity Depth Perception Platform to power next-generation ADAS systems with 10X greater performance than quad-core CPU
Sixth-Generation Vision DSPs for Imaging, Computer Vision, and AI