ADAS and autonomy: don’t re-invent the wheel

When designing scalable systems and applications that require low-latency and high power-efficiency, automakers can learn much from data centres. By Daniel Leih

The inclusion of advanced driver assistance systems (ADAS) is now a crucial aspect of automotive design to improve safety and ease of use. Manufacturers are looking to create vehicles with higher levels of autonomy, and eventually deliver completely autonomous driving (AD).

ADAS and AD, plus rising user expectations in terms of infotainment and personalisation, mean that cars are evolving into mobile data centres. Accordingly, communication between the key hardware elements—ICs, circuit boards or modules—needed for software-defined vehicles (SDVs) is absolutely critical to successful operation. Indeed, some existing vehicles already contain more than 100 million lines of code, while Straits Research puts the automotive software market at almost US$58bn by 2030 with a 14.8% CAGR.

The complexity of the software, and the challenge of processing in real time a vast amount of data from a variety of vision-system sensors such as cameras, radar, LiDAR and ultrasound, is daunting. Figure 1 illustrates how the traditional communication infrastructures and standards used in the automotive industry are reaching their limits. Ethernet and Controller Area Network (CAN) buses still have their place in future vehicle architectures but must be complemented to meet the needs of the High-Performance Computing Platform (HPC) required to embed Artificial Intelligence (AI) and Machine Learning (ML) within ADAS and AD.

Figure 1 – The vehicle is becoming a data centre on wheels, as ADAS has to process in real time a wealth of data from different sensor types

PCIe technology

Peripheral Component Interconnect Express (PCIe) technology was created in 2003 to serve the needs of the computing industry. Now, PCIe is deployed in aerospace and automotive, where it is being used within safety-critical applications implemented in firmware that must comply with DO-254.

PCIe is a point-to-point, bidirectional bus and something of a hybrid: a serial bus that can be implemented as a single lane, or as two, four, eight or 16 parallel lanes to realise greater bandwidth. PCIe performance also increases with every new generation. Figure 2 illustrates the evolution of PCIe.

Figure 2 – PCIe’s performance evolution

PCIe is already used in some automotive applications; it entered service at around generation 4.0. However, with the performance improvements available through generation 6.0, which offers a data transfer rate of 64 GT/s and a total bandwidth of 128 GB/s when 16 lanes are used, many more are now moving to embrace the technology. Notably, PCIe provides backwards compatibility.
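As a back-of-the-envelope check on those figures, the headline link bandwidth is simply the per-lane transfer rate multiplied by the lane count. The sketch below uses the published per-lane rates for each generation and ignores encoding and protocol overhead, so the results are upper bounds rather than measured throughput:

```python
# Approximate PCIe one-direction bandwidth per generation and lane count.
# Headline per-lane transfer rates in GT/s; encoding/protocol overhead
# is ignored, so these are upper bounds, not measured throughput.
RATES_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0}

def bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    if lanes not in (1, 2, 4, 8, 16):
        raise ValueError("PCIe links use x1, x2, x4, x8 or x16 widths")
    return RATES_GT_S[gen] * lanes / 8  # 8 bits per byte

if __name__ == "__main__":
    for gen in sorted(RATES_GT_S):
        # A Gen 6 x16 link works out at 64 * 16 / 8 = 128 GB/s,
        # matching the figure quoted for generation 6.0 above.
        print(f"Gen {gen}, x16: {bandwidth_gb_s(gen, 16):.0f} GB/s")
```

The same arithmetic explains why lane count is a design lever: halving the lanes of a newer generation can match the bandwidth of the previous generation at full width.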

High-performance, low-power

Given that vehicles are becoming data centres on wheels, it is worth considering why PCIe is so widely used in land-based data centres. A data centre consists of one or more servers plus peripherals, including storage devices, networking components and I/O, to support HPC in the cloud. PCIe is present in today's high-performance processors, making it the ideal bus for establishing low-latency, high-speed connections between the server and its peripherals.

For example, Non-Volatile Memory Express (NVMe) was designed specifically to work with flash memory over the PCIe interface. PCIe-based NVMe Solid State Drives (SSDs) provide much faster read/write times than SSDs with a SATA interface. Indeed, SATA-based storage systems, whether SSD or hard disk drive, simply do not deliver the kind of performance required for complex AI and ML applications.

The low latency afforded through PCIe between applications running on servers has a direct impact on cloud performance. As a result, PCIe is being embedded in components other than processors and NVMe SSDs; it is also present in many of the components that provide the gateway between the cloud and the systems accessing it. And while vehicles are becoming mobile data centres in their own right, they will also be nodes moving with and between 'smart cities.'


The use of NVMe in data centres is also popular from a power perspective. For instance, the US Department of Energy estimated that a large data centre (with tens of thousands of devices) requires more than 100MW of power, enough for 80,000 homes. NVMe SSDs consume less than one-third of the power of a SATA SSD of comparable size, for example.

In the automotive sector, power consumption matters too, not least in electric vehicles (EVs), where it has a direct impact on range. Indeed, automotive engineers in general, and EV designers in particular, are becoming increasingly focused on the issues of Size, Weight and Power (SWaP). This is no surprise when considering that future ADAS implementations could demand up to 1kW and require liquid cooling systems for thermal management.

But again, there's the opportunity to draw from what's been learned in other sectors. The aerospace industry has been designing to meet tight SWaP and Cost (SWaP-C) requirements for decades, and liquid-cooled line replaceable units (LRUs) such as power supplies have been used in some military platforms for over a decade.

Where to start?

The availability of PCIe hardware is something data centres have been taking advantage of for years, as they look to optimise their systems for different workloads. They are also adept at developing interconnect systems that employ different protocols; for example, PCIe working alongside less time-critical communications, such as Ethernet for geographically dispersed systems.

In the automotive environment, those 'less time-critical' communications include telemetry from sensors and lighting control; they don't warrant PCIe. Short-distance, high-data-volume communications between ICs that perform real-time processing and sit only a few centimetres apart, however, do. Accordingly, an optimised ADAS/AD system is likely to need Ethernet, CAN and SerDes, as well as PCIe.

Unlike Ethernet, there is no automotive-specific PCIe standard, but that has not curtailed its use in automotive applications in recent years. Similarly, the absence of an aerospace PCIe standard has not deterred large aerospace and defence companies, which constantly strive for SWaP-C benefits, from using the protocol in safety-critical applications.

Because solutions must be optimised for interoperability and scalability, PCIe is emerging as the preferred compute interconnect in the automotive industry too, providing ultra-low latency and power-efficient bandwidth scalability to CPUs and specialised accelerator devices. And while no automotive-specific PCIe standard exists, silicon vendors are catering for PCIe's further ingress into the harsh environment that is automotive.

Figure 3 – PCIe switches for low-latency, low-power, and high-performance connectivity

For example, in 2022, Microchip launched the industry’s first Gen 4 automotive-qualified PCIe switches. Called Switchtec PFX, PSX and PAX, the switches provide the high-speed interconnect required for distributed, real-time, safety-critical data processing in ADAS architectures. In addition to these switches, the company also supplies other PCIe-based hardware including NVMe controllers, NVRAM drives, retimers, redrivers and timing solutions, as well as Flash-based FPGAs and SoCs.

Lastly, the automotive industry must also consider the way data centres treat CapEx as an investment in a future annuity. To date, most automotive OEMs have seen CapEx as having a one-time return (at point of purchase), which works well enough where hardware is concerned. Granted, most OEMs occasionally charge for software updates, but with SDVs the business model needs a complete rethink: a focus purely on hardware bill-of-materials cost is no longer appropriate.

Key takeaways

For the level of automation in vehicles to increase, the car needs to become a high-performance computing 'data centre on wheels,' processing a vast amount of data from a variety of sensors. Fortunately, HPC is well established and is at the heart of High Frequency Trading (HFT) and cloud-based AI/ML applications. Proven hardware architectures and communications protocols such as PCIe already exist, which means automakers can learn a lot from the way HPC is implemented in data centres.

Because AWS, Google and other cloud service providers have spent years developing and optimising their HPC platforms, much of the hardware and software already exists. Automakers would do well to adapt these proven HPC architectures rather than re-invent the wheel by developing solutions from scratch.


About the author: Daniel Leih is Product Marketing Manager of Microchip Technology’s USB and networking business unit

 
