Top considerations for developing AI-powered ADAS

Haynes Boone attorneys explore some of the factors determining whether a defect in an autonomous vehicle would be considered a manufacturing or a design defect

Modern vehicles generally include an array of autonomous driving features. These range from simpler ones, such as collision avoidance systems and cruise control, to more advanced capabilities, such as highway steering. The more advanced features rely on artificial intelligence (AI) models, and as AI technology develops, vehicles with these features will become more common. Vehicles with AI-powered autonomous features are expected to reduce, though not eliminate, accidents.

A legal framework is already in place for determining liability in the event of a crash. When an automobile is involved in an incident, the law determines whether the incident was the result of a negligent driver or a defective vehicle and assigns liability accordingly. Manufacturers have a duty to exercise reasonable care when designing their vehicles to make them safe when used as intended. But even if a manufacturer exercises reasonable care, it may still be strictly liable for manufacturing or design defects.

In the autonomous vehicle feature context, determining whether a defect falls under the manufacturing or the design defect category is important, as it can affect who will be held responsible.

Autonomous vehicle feature example

Consider an AI-powered autonomous vehicle feature such as adaptive cruise control that stops at traffic lights. To design and ‘manufacture’ such a feature, an AI model is created, and real-world data is used to train that model. This real-world data may represent what the vehicle observes (through cameras and other sensors) correlated with the actions performed by the vehicle as it is driven in real-world conditions. For example, data from the camera that represents a traffic light changing from green to red would be correlated with data that represents the driver pressing the brake pedal to bring the vehicle to a stop.
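For illustration only, a single training record of this kind might pair what the vehicle observed with what the driver did at that moment. The sketch below is hypothetical; the article does not describe any particular implementation, and every field name here is an assumption:

```python
from dataclasses import dataclass

@dataclass
class DrivingExample:
    """One hypothetical training record: what the vehicle observed,
    paired with what the driver did at that moment."""
    camera_frame: bytes    # raw image data from the forward camera
    light_state: str       # e.g. "green", "yellow", "red"
    speed_mps: float       # vehicle speed, metres per second
    brake_pressure: float  # 0.0 (no braking) to 1.0 (full braking)
    throttle: float        # 0.0 to 1.0

# A driver braking as the light turns red becomes one such record:
example = DrivingExample(
    camera_frame=b"",      # placeholder for actual image bytes
    light_state="red",
    speed_mps=13.4,
    brake_pressure=0.6,
    throttle=0.0,
)
```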

Before the real-world data is fed into the AI model, it is placed into a specific format for use by the AI model. The formatted data may then be filtered so that ‘acceptable’ data is provided to the AI model. As the AI model receives the formatted and filtered training data, it develops algorithms that correlate a certain type of input (what the vehicle observes) with a certain type of output (how to drive the vehicle). For example, the model will ideally recognise that when the input from the camera sensor feed indicates a traffic light change from green to red, the appropriate output is to activate the brake pedal and bring the vehicle to a stop.
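A minimal sketch of that formatting and filtering step, reusing the hypothetical DrivingExample record from the previous sketch (all helper names and field names are assumptions, not any vendor's actual pipeline):

```python
def format_example(raw: dict) -> DrivingExample:
    """Convert a raw log entry into the fixed schema the model expects."""
    return DrivingExample(
        camera_frame=raw["frame"],
        light_state=raw["light"],
        speed_mps=raw["speed_kph"] / 3.6,  # normalise km/h to m/s
        brake_pressure=raw["brake"],
        throttle=raw["throttle"],
    )

def is_acceptable(ex: DrivingExample) -> bool:
    """Basic sanity checks; only 'acceptable' data reaches the model."""
    return (ex.light_state in {"green", "yellow", "red"}
            and 0.0 <= ex.brake_pressure <= 1.0
            and 0.0 <= ex.throttle <= 1.0)

def build_training_set(raw_logs: list[dict]) -> list[DrivingExample]:
    """Format every raw log entry, then keep only the acceptable ones."""
    return [ex for ex in (format_example(r) for r in raw_logs)
            if is_acceptable(ex)]
```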

Consider a scenario in which the vast majority of data points fed into the AI model come from drivers who properly stopped at the red light. But what if, in this scenario, a small portion of drivers decided to run the red light? And what if the AI model inadvertently develops an algorithm that, under a specific set of circumstances, will run a red light? A vehicle using the traffic light control feature may then encounter that specific set of circumstances and run a red light, causing an accident.
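A toy illustration of how that can happen (hypothetical numbers, no real model): if the learned behaviour for a situation is effectively the majority action observed in that situation, conduct that is rare overall can still dominate within one narrow slice of the data:

```python
from collections import Counter

# Hypothetical (situation, action) observations. Red-light runs are rare
# overall, but in one narrow situation every recorded example happens to
# come from a driver who went through.
observations = (
    [("fresh_red", "stop")] * 950
    + [("fresh_red", "run")] * 20
    + [("stale_yellow_no_cross_traffic", "run")] * 30
)

def majority_action(situation: str) -> str:
    """Return the most common recorded action for a given situation."""
    counts = Counter(action for s, action in observations if s == situation)
    return counts.most_common(1)[0][0]

print(majority_action("fresh_red"))                      # "stop"
print(majority_action("stale_yellow_no_cross_traffic"))  # "run"
```

Only about 5% of the records above are red-light runs, yet within that one narrow situation they are all the model has to learn from.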

While the standard varies by state jurisdiction, products liability claims generally can be brought through several theories such as negligence, breach of warranty, and strict products liability. Under strict products liability, the manufacturer and/or seller of a product is liable for its defects regardless of whether they acted negligently. Strict products liability claims can allege design defects or manufacturing defects.

Is there a defect?

Given the complex nature of AI model development, it may be difficult to rely on the current products liability framework to determine whether there is a ‘defect’ in the example scenario described above. And to the extent there is a defect, it can be difficult to determine which liability theory to apply. In conventional products liability, manufacturing defects can be distinguished from design defects in that manufacturing defects tend to be unique to a particular product or batch of products, while design defects would be present in all the ‘accurately manufactured’ products. But in the case of an AI-powered feature, there is a single end product that is used by every vehicle. The following provides some thoughts on whether the above example may fall under a manufacturing or design defect theory.

A manufacturing defect occurs when a product departs from its intended design and is more dangerous than consumers expect the product to be. Typically, a plaintiff must show that the product was defective due to an error in the manufacturing process and was the cause of the plaintiff’s injury.

A plaintiff may argue that there is a manufacturing defect in the AI model here because the autonomous vehicle feature did not perform according to its intended design and instead ran a red light. But a defendant may argue that the AI model performed exactly as designed by correlating real-world data from cameras and vehicle controls; in other words, the ‘defect’ was in the data fed into the model.

A design defect occurs when a product is manufactured correctly, but the defect is inherent in the design of the product itself, which makes the product dangerous to consumers. Typically, a plaintiff is only able to establish that a design defect exists when they prove there is a hypothetical alternative design that would be safer than the original design. This hypothetical alternative design must also be as economically feasible and practical as the original design, and must retain the primary purpose behind the original design.

A plaintiff may argue that there is a design defect in the AI model here because its design caused a vehicle to run a red light. The plaintiff may also argue that an alternative, safer design would have been to filter out ‘bad’ data from red light runners. The defendant may argue that the AI model design is not inherently dangerous because vehicles that rely on the autonomous vehicle feature run far fewer red lights than vehicles that don’t—and thus the design reduces the overall number of accidents.
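The alternative design the plaintiff has in mind might amount to something as simple as the following pre-training filter (a hypothetical sketch with invented field names; real pipelines would be far more involved):

```python
def ran_red_light(trace: list[dict]) -> bool:
    """Hypothetical check: did this driving trace cross the stop line
    while the light was red?"""
    return any(step["light_state"] == "red" and step["crossed_stop_line"]
               for step in trace)

def filter_red_light_runners(traces: list[list[dict]]) -> list[list[dict]]:
    """Drop every trace containing a red-light run before training."""
    return [t for t in traces if not ran_red_light(t)]
```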

Key considerations

The example described above represents a small fraction of the challenges in applying the current legal framework to AI-powered systems. Moreover, public policy on this issue should be careful to avoid unintended consequences.

Image: 2019 Cadillac CT6 with Super Cruise engaged. Cadillac Super Cruise offers hands-free driving

For example, it may seem prudent to impose a duty on AI developers to filter out ‘bad’ data that represents red-light runs or other undesirable driving behaviour. But what if filtering data in this manner leads to unintended and more dangerous problems? It may be the case, for instance, that filtering out the ‘bad’ data from red-light runs produces a model that causes vehicles to abruptly slam on the brakes whenever a light change is detected.
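A toy sketch of how that could occur (hypothetical data and numbers): if, in the hardest version of the situation, the only drivers who stopped were those who braked very hard, filtering out the runners leaves nothing but hard stops for the model to imitate:

```python
# Hypothetical traces for the same situation: the light changes while the
# vehicle is close to the intersection at speed. Peak deceleration is in
# metres per second squared.
late_change_traces = [
    {"action": "continue_through", "peak_decel_mps2": 0.0},  # ran the light
    {"action": "continue_through", "peak_decel_mps2": 0.0},  # ran the light
    {"action": "stop", "peak_decel_mps2": 7.5},  # only a hard stop works here
    {"action": "stop", "peak_decel_mps2": 7.8},
]

# Filtering out the red-light runners leaves only hard stops, so a model
# imitating what remains learns to slam the brakes on a late light change.
kept = [t for t in late_change_traces if t["action"] == "stop"]
print(sum(t["peak_decel_mps2"] for t in kept) / len(kept))  # 7.65, emergency-braking territory
```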

Even if filtering out ‘bad’ data related to red-light runs is a relatively simple way to produce a safer traffic light control feature, more complex AI-powered features may present greater challenges. For example, an auto-steering feature must take into account surrounding traffic, road conditions, and other environmental factors when switching lanes to navigate a highway. With such a feature, it may be far less clear what driving behaviour counts as ‘bad’ when deciding what data to filter. And whatever metric is used to separate ‘good’ drivers from ‘bad’ ones, some bad drivers may still be able to satisfy that metric and have their data included in the AI training set anyway.
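As a hypothetical illustration of that last point, a screening metric that only counts red-light runs and harsh braking will happily admit a driver whose unsafe behaviour takes a form the metric never measures:

```python
def is_good_driver(record: dict) -> bool:
    """Hypothetical screening metric: admit a driver's data only if they
    have no red-light runs and no harsh-braking events on record."""
    return record["red_light_runs"] == 0 and record["harsh_brakes"] == 0

# This driver never runs a light and never brakes hard -- because they
# tailgate and weave between lanes instead. The metric admits their
# data anyway.
weaving_driver = {
    "red_light_runs": 0,
    "harsh_brakes": 0,
    "unsignalled_lane_changes": 37,  # unsafe, but not measured
}
print(is_good_driver(weaving_driver))  # True
```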

While there are challenges in applying the current legal framework to AI systems, developers are still best served by relying on standard practices to avoid liability.

Note: This article reflects only the present personal considerations, opinions, and/or views of the authors, which should not be attributed to any of the authors’ current or prior law firm(s) or former or present clients.


About the authors: David McCombs is Partner at Haynes Boone. Eugene Goryunov is Partner at Haynes Boone and the IPR Team Lead. Calmann James Clements is Counsel at Haynes Boone. Mallika Dargan is an Associate in the Intellectual Property Practice Group in Haynes Boone’s Dallas-North office.
