Car cameras will transform V2V

The future of V2V will rely heavily on a piece of technology that wasn’t available when the system was first conceived: car cameras. By Eran Shir

Vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication systems have been touted over the years as key drivers of future safety measures and collision avoidance technologies. These systems allow cars and commercial vehicles to exchange real-time information about their position and velocity, supporting traffic and road hazard updates and enhancing future safety assist functions.

Today, V2V has yet to change the world of driving, but it still has the potential to increase safety and become an integral part of future driving. That said, it will do so in a way that was not imagined when the technology was first invented, because the future of V2V will rely heavily on a piece of technology that wasn’t available when the system was first conceived: car cameras.

V2V was originally designed as a system that sends sensor data over a very low-bandwidth connection. Its early designers did not envision vision data, AI, or the connectivity needed to carry vision-based data. Yet, as car cameras become more common, they can and will transform V2V and the systems it offers.

Can car cameras really make a difference?

Car cameras are set to become ubiquitous within the next few years, with each car having several cameras positioned around the vehicle and inside the cabin. Today, they are used mostly as a driver’s second pair of eyes: showing the view behind the car while reversing, supporting ADAS functionality, and providing parking assist and automated driving. Additionally, dash cams offer a strong evidence function: they record events and collisions and are evolving into security devices, much like smart doorbells did. Yet all these cameras still cannot create data that is communicated to other vehicles or to infrastructure. The vision data they create is not transported out of the car, AI is not applied to it, and the data is not shared.

What would happen if car cameras could communicate to other vehicles what they ‘see’ in the world? This is where car cameras do a better job than classic V2V. Vision is the ultimate sensor: because it ‘sees’ more of the world than other sensors, it collects more data. Imagine a pothole. With sensors alone, a car bumps into the pothole, brakes hard, or swerves around it. Based on the car’s sensor data, the system infers that there is a pothole and, without any visual verification, sends this data to adjacent vehicles. It is true that sensor data is everywhere and that there are many fine practices for analysing it, but could you be sure that you have sensed a pothole? Conversely, with vision data, the pothole is ‘seen’ by the camera. AI is applied on top of the vision data to ‘see’ the pothole even if the car didn’t swerve, brake, or bump into it. This means you would detect the pothole sooner and eliminate the current need for many cars to exhibit the same behaviour.
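
The contrast can be made concrete with a short sketch. The snippet below is illustrative only: the event kinds, confidence threshold, and minimum report count are assumptions, not part of any real V2V standard, and the vision confidence would come from a hypothetical on-board detection model.

```python
# A minimal sketch contrasting the two detection paths for a pothole.
# Event kinds, the confidence threshold, and the minimum report count
# are illustrative assumptions, not part of any real V2V standard.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    vehicle_id: str
    kind: str                      # e.g. "hard_brake", "swerve", "impact"
    location: tuple[float, float]  # (lat, lon)

def infer_pothole_from_sensors(events: list[SensorEvent], min_reports: int = 5) -> bool:
    """Sensor-only V2V: several cars must hit or avoid the hazard before
    the system is confident enough to broadcast it."""
    suspicious = [e for e in events if e.kind in {"hard_brake", "swerve", "impact"}]
    return len(suspicious) >= min_reports

def detect_pothole_from_vision(confidence: float, threshold: float = 0.8) -> bool:
    """Vision-based V2V: a single frame scored by a hypothetical on-board
    detector confirms the hazard, even if the car never swerved or braked."""
    return confidence >= threshold

# One car that merely drove past the pothole suffices with vision...
print(detect_pothole_from_vision(0.93))  # True: broadcast immediately
# ...while the sensor path is still waiting for corroborating incidents.
print(infer_pothole_from_sensors([SensorEvent("car-1", "swerve", (45.46, 9.19))]))  # False
```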

Free parking spot detection is another case in which vision trumps sensor data. This type of application focuses on helping drivers find available parking spots on the street; if fully deployed, it could considerably reduce congestion and circling in urban centres. Sensor-based parking spot detection relies on park-in/park-out sensors. We estimate that this method can locate, at most, three to four available spots a day, with the added difficulty of telling whether the finds are legitimate parking spots, because cars can only add data about spots in which they actually parked. Compare this to vision-based parking spot detection, which collects data before the car is parked: a car driving down the street may see many free parking spots, including ones it won’t use. This approach can generate considerably more parking spot data overall. In fact, in a study in Milan, cars using vision could detect 30-40 free parking spots an hour, versus the previously noted three to four for an entire day.
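
A rough back-of-the-envelope calculation shows the gap implied by these figures; the eight-hour driving day below is an assumption made purely for illustration.

```python
# Back-of-the-envelope comparison using the figures quoted above.
# The 8-hour driving day is an assumption for illustration.
sensor_spots_per_day = 4          # park-in/park-out: at most 3-4 per day
vision_spots_per_hour = 35        # Milan study: 30-40 per hour
driving_hours_per_day = 8         # assumed

vision_spots_per_day = vision_spots_per_hour * driving_hours_per_day
print(vision_spots_per_day)                           # 280 spots per day
print(vision_spots_per_day / sensor_spots_per_day)    # 70.0, i.e. ~70x more
```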

This approach extends to many other driving elements: pedestrians, road hazards, and understanding the impact of work zones. When cars begin sharing the data they collect from the world, the options become endless. Serious road hazards such as chain collisions offer a prime example. With standard V2V, the system relies on all the cars involved in a collision having V2V technology; it senses the chain from each of the cars and then infers that there has been a pileup. With vision data, however, just one car ‘seeing’ the chain collision can create the right alerts and send the same, more accurate, message.
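
The difference in required coverage is easy to illustrate. All of the penetration rates and vehicle counts in the sketch below are hypothetical numbers chosen only to make the comparison concrete.

```python
# Hypothetical penetration rates, for illustration only.
p_v2v = 0.3        # assumed share of cars with sensor-based V2V
p_camera = 0.3     # assumed share of cars with connected cameras
n_involved = 4     # cars in the pileup
n_bystanders = 3   # passing cars with a view of the scene

# Sensor-only V2V: detecting the full chain needs the involved cars equipped.
p_chain_detected = p_v2v ** n_involved                  # 0.3^4 ≈ 0.008
# Vision-based V2V: one equipped bystander seeing the scene is enough.
p_witnessed = 1 - (1 - p_camera) ** n_bystanders        # 1 - 0.7^3 ≈ 0.657
print(p_chain_detected, p_witnessed)
```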

Compute and connectivity

Vision superiority comes at a cost. As opposed to sending sensor data between vehicles, as V2V was first envisioned (imagine driving and getting messages such as ‘hard brake ahead’ from cars you can’t see), vision data requires connectivity and computing. Connectivity means bandwidth: vision data is heavy, and current networks cannot transport it easily. This is where 5G networks come into play. But that isn’t all. Computing is also needed to run artificial intelligence models over the vision data, making sense of it to ‘see’ a pothole, recognise a parking spot, or sense a work zone or hazard. The goal is to devise methods that use the connectivity budget sparingly to extract the data that matters. This filtering will need to be done wisely and at the edge, to avoid cost overruns and latency.
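
One way to picture this edge-filtering idea is sketched below. The message schema, confidence threshold, and frame size are assumptions for illustration; the point is that the heavy vision data stays in the car and only a compact, AI-distilled event record is transmitted.

```python
# A minimal sketch of edge filtering: run the AI model in the car and
# upload a compact event record instead of raw video. The schema, the
# confidence threshold, and the frame size are illustrative assumptions.
import json

FRAME_BYTES = 2_000_000  # a single ~2 MB camera frame (illustrative)

def edge_filter(detections: list[dict], min_conf: float = 0.8) -> bytes:
    """Keep only high-confidence detections and serialise them as a small
    JSON message; the heavy vision data never leaves the car."""
    events = [d for d in detections if d["conf"] >= min_conf]
    return json.dumps({"v": 1, "events": events}).encode()

msg = edge_filter([
    {"type": "pothole", "lat": 45.4642, "lon": 9.1900, "conf": 0.93},
    {"type": "glare",   "lat": 45.4643, "lon": 9.1901, "conf": 0.41},  # dropped
])
print(len(msg), "bytes uploaded instead of", FRAME_BYTES)  # ~90 bytes vs 2 MB
```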

Towards a shared vision

Vision-based V2V adds another important element to the original V2V vision. The original technology envisioned cars sending messages to other cars in their proximity, which means that driving in the V2V world can be a cloud of ‘me, me, me’ messages that are difficult to make sense of. In some cases, such as sensing a free parking spot, these messages may be almost meaningless. The solution is a shared vision of one area built from several cars, like a high-definition map of your immediate vicinity, combining the individual data from each car into a bigger picture. The free parking spot system will work better this way, and road hazard information will be more valuable and informative. In the long run, this shared vision can assist navigation maps that today rely on user input and some GPS and sensor data.

It is not easy to create a shared vision, especially when the data comes from different cars from different makers, but, luckily, new standards are coming. Such a system would enable all cars to communicate with one shared, transient memory of the road, with connectivity and computing optimised to make this shared vision of the road economically feasible.
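
A minimal sketch of such a shared, transient road memory is given below. The grid-cell size, the five-minute expiry window, and the message fields are all assumptions used purely to illustrate the idea.

```python
# A minimal sketch of a "shared vision" layer: detections from different
# cars are merged into one transient map keyed by coarse location cells.
# Cell size, expiry window, and the record schema are assumptions.
import time
from collections import defaultdict

CELL = 0.0005  # grid cells of roughly 50 m (assumed)

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    return (round(lat / CELL), round(lon / CELL))

shared_map: dict = defaultdict(list)  # cell -> list of (timestamp, car, event)

def report(car_id: str, lat: float, lon: float, event: str) -> None:
    """Each car contributes what it 'sees'; the map is the shared memory."""
    shared_map[cell_of(lat, lon)].append((time.time(), car_id, event))

def query(lat: float, lon: float, max_age_s: float = 300) -> list[str]:
    """Any car can ask what the fleet has recently seen near a location;
    stale entries age out, keeping the memory transient."""
    now = time.time()
    return [e for t, _, e in shared_map[cell_of(lat, lon)] if now - t <= max_age_s]

report("car-A", 45.4642, 9.1900, "free_parking_spot")
report("car-B", 45.4642, 9.1901, "pothole")
print(query(45.4642, 9.1900))  # ['free_parking_spot', 'pothole']
```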

A new breed of applications

V2V and V2X were built around what was feasible and within reach when they were conceived: low-bandwidth communication, ‘simple’ sensor data, and no computing. The proliferation of car cameras, 5G, and on-board computing enables a re-imagining of V2V-type applications, making them more valuable than before. These systems will communicate a more holistic view of the road and its surroundings, making driving much safer.


About the author: Eran Shir is the founder and Chief Executive of Nexar, maker of the Nexar dash cam and provider of crowd-sourced street-level visual data
