Ethical questions arise when safety and UX systems become centred on understanding human behaviour, writes Rana el Kaliouby
As in-vehicle safety features and user experience (UX) systems evolve, the incorporation of artificial intelligence (AI) raises some important ethical considerations.
For instance, the industry has seen the rise of driver monitoring systems (DMS) to augment safety features and address emerging regulatory requirements, such as the need to detect signs of driver impairment like distraction and drowsiness. Human insight AI is required to understand the driver’s nuanced emotions and complex cognitive states and to catch signs of potentially dangerous behaviour. Furthermore, as infotainment systems advance and adopt AI, automakers will increasingly look for interior sensing solutions that understand not only the driver’s state but also that of the occupants and cabin—the key to unlocking new opportunities for entertainment and wellness.
It is at this point, when safety and UX systems become centred on understanding human behaviour, that new ethical questions arise. These systems rely on potentially personal data in how they are developed and deployed in cars. As a result, automotive tech companies are forced to take a hard look at the implications of the technology they provide.
Understanding the risks and rewards
As mentioned, in-vehicle technology is advancing rapidly, using human insight AI that analyses human behaviours in vehicles. This need first became apparent with DMS, which analyse drivers’ facial movements, eye gaze, blink rate, head and body position, and more to understand signs of drowsiness, distraction and other human states. But there is increasing OEM interest in interior sensing and cabin monitoring, which also use human insight AI. In these cases, AI-powered interior sensing systems can recognise activities and objects used by drivers and passengers (think: a cell phone) and interactions between occupants and in-vehicle systems. This provides further data for safety analyses, but it can also unlock insights into the in-cabin experience that allow for personalisation, such as content, music or atmospheric adjustments.
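To make the drowsiness example concrete: one widely used metric in this field is PERCLOS, the proportion of time the eyes are (nearly) closed over a rolling window. The sketch below is illustrative only; in a real DMS the per-frame eye-openness values would come from an on-device vision model, and the threshold and window length shown here are assumptions.

```python
def perclos(eye_openness, closed_threshold=0.2):
    """Fraction of frames in which eye openness falls below a
    threshold (0.0 = fully closed, 1.0 = fully open)."""
    if not eye_openness:
        return 0.0
    closed = sum(1 for v in eye_openness if v < closed_threshold)
    return closed / len(eye_openness)

# An alert driver blinks briefly; a drowsy driver's eyes stay closed longer.
alert  = [1.0, 0.9, 0.1, 1.0, 1.0, 0.95, 1.0, 1.0]   # one quick blink
drowsy = [0.4, 0.1, 0.05, 0.1, 0.6, 0.1, 0.05, 0.1]  # long eye closures

print(perclos(alert))   # 0.125 -> normal blinking
print(perclos(drowsy))  # 0.75  -> could trigger a drowsiness alert
```

A production system combines many such signals (gaze, head pose, blink dynamics) rather than relying on one metric alone.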
When paired with advanced AI systems, camera-based sensing is proving to be an effective and accurate way to detect these complex and nuanced human behaviours. However, these systems will inherently “know” a lot about the people that interact with them, and must be developed using potentially personal, real-world data.
To put it plainly: the reward here is the potential to save thousands of lives each year with improved safety systems, and to significantly enhance the in-cabin experience. But the risk lies in exposing consumers’ private data and eroding trust between the automotive industry and the public. Luckily, there are deliberate steps the industry and auto tech makers can take to mitigate these risks and ensure industry advancement, for the betterment of all.
Considerations for ethical development
Ensuring ethical automotive AI systems begins with the technology’s development – specifically in how the algorithms are trained and validated. Developing AI that can detect complex human behaviours, activities, and emotional and cognitive states requires massive amounts of diverse and relevant data, of all kinds of people in changing environmental conditions.
One key element is capturing a diverse dataset. This includes age, gender and skin tone, as well as different appearance features: is someone wearing glasses? A face mask, or a hat? Many configurations and variables need to be recognised by a machine learning algorithm to ensure accuracy and avoid bias. Mitigating bias is always important, but in this case, a biased automotive AI system used for safety could be the difference between life and death. Suppose a DMS could not recognise a woman or a person of colour: it could be catastrophic for the system to fail to trigger a critical safety alert.
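One common way to catch this kind of bias before deployment is to break a model's validation accuracy down by demographic subgroup rather than reporting a single aggregate number. The sketch below is a hypothetical illustration: the field names, the toy data and the stand-in predictor are all assumptions, not any vendor's actual pipeline.

```python
# Hypothetical sketch: per-subgroup accuracy on a labelled validation set.
# A large gap between groups flags potential bias that an aggregate
# accuracy figure would hide.
from collections import defaultdict

def accuracy_by_group(samples, predict, group_key):
    """samples: dicts with a 'label' plus metadata fields (assumed schema)."""
    correct, total = defaultdict(int), defaultdict(int)
    for s in samples:
        g = s[group_key]
        total[g] += 1
        if predict(s) == s["label"]:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data for a detector that (hypothetically) underperforms for one group.
samples = [
    {"label": 1, "skin_tone": "light", "pred": 1},
    {"label": 1, "skin_tone": "light", "pred": 1},
    {"label": 1, "skin_tone": "dark",  "pred": 0},
    {"label": 1, "skin_tone": "dark",  "pred": 1},
]
rates = accuracy_by_group(samples, lambda s: s["pred"], "skin_tone")
print(rates)  # {'light': 1.0, 'dark': 0.5} -> the gap signals bias
```

In practice the same breakdown would be run across age, gender, lighting conditions and appearance features such as glasses or face masks.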
Unfortunately, the difficulty and expense of real-world, diverse data collection in automotive settings can lead to cutting corners. But failing to ensure the right data is collected risks introducing bias and system failure in the real world.
The default for data collection is paid participants; however, this is expensive, time-consuming and logistically challenging—especially in a pandemic. Synthetic data is emerging as a promising way to augment real-world data where it lacks diversity or faces other constraints.
Car manufacturers evaluating interior sensing systems should continue to assess how a vendor’s data is collected, and how diverse and contextually relevant that data is. Only then can data bias be mitigated and these systems truly work on the road, in the real world.
Considerations for ethical deployment
Once these systems are developed, it’s crucial to carefully consider how the technology is deployed. As mentioned, these systems are camera-based, and thus record the human face for the AI to run its analyses. Of course, this raises a slew of ethics and privacy-related questions, to which consumers will inevitably—and rightfully—demand answers. For example, what data is being collected? Where is it being stored? Who can see it? What is it being used for? What’s in it for me as a driver or passenger?
Automakers and auto tech vendors must think long and hard about such questions and actively mitigate risk. For example, for camera-based systems, companies can ensure that personal data—in this case, videos—is never stored. In recent years, technology advancements have made it so that DMS and interior sensing systems can run in real time at the edge, on embedded systems. As a result, the system can collect only metrics, rather than personal, identifiable data. This also means data does not need to be sent to the cloud, adding another layer of protection.
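The metrics-only pattern can be sketched in a few lines. This is a conceptual illustration, not any vendor's implementation: `estimate_state` stands in for an embedded vision model, and the metric names are assumptions. The point is structural: the raw frame never leaves the processing function, is never written to disk, and only anonymous numbers are retained.

```python
def estimate_state(frame):
    # Placeholder for an on-device model (an assumption for illustration);
    # it returns anonymous numeric metrics, never images.
    return {"drowsiness": 0.1, "distraction": 0.0}

def process_frame(frame):
    metrics = estimate_state(frame)  # derive metrics on the embedded device
    del frame                        # drop the frame: nothing stored or uploaded
    return metrics                   # only non-identifiable numbers persist

metrics = process_frame(b"...raw camera frame bytes...")
print(metrics)  # {'drowsiness': 0.1, 'distraction': 0.0}
```

Because only these metrics ever exist outside the processing step, there is no stored video to breach, subpoena or leak—the privacy property falls out of the architecture rather than a policy.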
With so many consumers becoming more conscious of their personal data, it’s paramount that the automotive industry considers the data that new systems will collect and takes steps to openly and transparently mitigate privacy concerns in vehicles.
AI-powered DMS and interior sensing will capture data on human behaviour that has never been captured before. Naturally, this raises necessary privacy questions. But, when such critical safety systems and enhancements are the end-goal, what’s the trade-off? If this technology can save thousands of lives worldwide, are there some allowances we might make regarding personal privacy?
Perhaps the answer is yes, but this is by no means a guarantee. Doing this right starts with transparency and education across the entire automotive tech supply chain and through to the consumer. Only when we have informed conversations about the risks of new technology, mitigation tactics, and ultimate benefits, will we be able to successfully build trust, navigate the evolving regulatory landscape and build the future of the automotive industry.
About the author: Dr Rana el Kaliouby is Deputy Chief Executive of Smart Eye, and former Co-Founder and Chief Executive of Affectiva