Self-driving Cars and Power Consumption — New Chip Designs

Nitin Vaish
4 min read · May 13, 2018


I wrote about the 100x power-efficiency gap between self-driving cars and humans not long ago. The industry is moving fast: in the few short months since, Waymo launched a self-driving car fleet in Phoenix, AZ; GM / Cruise plans to launch a service in 2019; and Lyft announced plans to launch a service in Las Vegas.

These self-driving vehicles are moving from test beds to public roads in a controlled manner. However, solutions for optimizing power consumption are still lagging. With just cameras and radars, today's commercially available vehicles generate 6 GB of data every 30 seconds. With full autonomy, which will need additional sensors (e.g. lidars) and compute for training, perception, and motion planning, the data per vehicle will increase several-fold. At the same time, it's estimated that power consumption per vehicle will be ~2.5 kW (by comparison, the average residential solar installation in the US is 4–5 kW), with a trunk full of compute hardware. This results in either lower fuel efficiency for internal combustion engine (ICE) vehicles or lower range for EVs, one metric, among many others, that runs counter to the low-opex model of fleet owners.

Researchers at the University of Michigan recently published a paper estimating that autonomous vehicles (AVs) will increase energy consumption and greenhouse gases by 3–20% due to increases in power consumption, weight, drag, and data transmission. As shown below, the authors estimate that 41% of the increase comes from compute, with the increased drag from multiple sensors on the vehicle contributing another 10%!

Source: University of Michigan
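
To put these numbers in perspective, here is a quick back-of-the-envelope sketch in Python. The sensor data rate and the ~2.5 kW compute load come from the estimates above; the battery size, average speed, and drivetrain consumption are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope sketch of the figures above. The sensor data
# rate and compute load come from the article; the battery size,
# average speed, and drivetrain consumption are illustrative assumptions.

SENSOR_DATA_GB_PER_30S = 6   # today's camera + radar suite (from article)
COMPUTE_LOAD_KW = 2.5        # estimated full-autonomy compute draw (from article)

BATTERY_KWH = 60             # assumption: mid-size EV battery pack
AVG_SPEED_KMH = 40           # assumption: mixed urban driving
DRIVE_KWH_PER_KM = 0.18      # assumption: typical EV drivetrain consumption

# Sensor data generated per hour of driving
data_gb_per_hour = SENSOR_DATA_GB_PER_30S * (3600 / 30)
print(f"Sensor data: {data_gb_per_hour:.0f} GB per hour")  # ~720 GB/hour

# Extra energy the compute stack draws per km at this average speed
compute_kwh_per_km = COMPUTE_LOAD_KW / AVG_SPEED_KMH

range_without = BATTERY_KWH / DRIVE_KWH_PER_KM
range_with = BATTERY_KWH / (DRIVE_KWH_PER_KM + compute_kwh_per_km)
print(f"Range without compute load: {range_without:.0f} km")
print(f"Range with 2.5 kW compute:  {range_with:.0f} km "
      f"({100 * (1 - range_with / range_without):.0f}% reduction)")
```

Under these assumptions, the compute load alone cuts roughly a quarter off the range at urban speeds.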

Nvidia has been, by and large, the default hardware supplier for self-driving solutions, with more than 370 automotive partners. Its Xavier platform, with an eight-core CPU and a 512-core graphics processing unit (GPU), can deliver 30 trillion operations per second (TOPS) while consuming 30 watts. The Pegasus platform, which is better suited for self-driving solutions, pairs two Xavier chips with two more GPUs and can do 320 TOPS while consuming 500 watts, or 0.64 TOPS per watt. In Q1'18, Nvidia reported $145 million in automotive revenue, a record for the company. GPUs have their roots in rendering high-quality graphics; they typically perform large numbers of low-level mathematical operations with graphics as the end goal and are optimized accordingly.
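
The per-watt arithmetic is worth spelling out, since raw TOPS and TOPS per watt tell different stories. A minimal check of the vendor figures quoted above:

```python
# TOPS-per-watt check of the vendor figures quoted above.
platforms = {
    "Nvidia Xavier":  {"tops": 30,  "watts": 30},
    "Nvidia Pegasus": {"tops": 320, "watts": 500},
}

for name, p in platforms.items():
    print(f"{name}: {p['tops'] / p['watts']:.2f} TOPS/W")
# Nvidia Xavier:  1.00 TOPS/W
# Nvidia Pegasus: 0.64 TOPS/W
```

Note that Pegasus delivers more than 10x the raw throughput of a single Xavier yet is less efficient per watt, a reminder that scaling up GPU-based platforms does not come for free.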

This brings me to my third point. While GPUs are currently the industry standard, it is still an open question whether they are the optimal solution for AVs and deep-learning applications. According to ARK Invest, there are at least 19 companies, both public (e.g. Google, AMD) and start-ups, working on chip designs optimized for deep learning.

Source: ARK Invest

This space is moving rapidly, and the list above doesn't include SambaNova Systems, which raised $56 million in March 2018.

I wrote about Graphcore and Realtime Robotics in my previous post; here we will take a look at a couple more companies that are taking different approaches.

Mythic focuses on optimizing inference instead of training and claims to be 100x more efficient than Nvidia's Titan Xp GPU. Its new chip architecture uses analog computing inside flash memory cells to accelerate machine-learning tasks such as facial recognition. By running inference algorithms inside blocks of flash memory, it eliminates the power that traditional chips consume moving information in and out of external memory, which the company claims makes machine-learning algorithms faster and 50x more power-efficient than other chips. In the end, this will likely enable edge computing on a range of embedded IoT devices, including AVs and the security cameras the company is targeting initially.
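
The intuition behind the power savings can be sketched with a toy model: moving an operand to and from off-chip memory costs far more energy than the arithmetic itself. The per-operation energies below are rough, order-of-magnitude ballparks from the computer-architecture literature, not Mythic's actual numbers, and the model size is an assumption.

```python
# Toy model of why compute-in-memory saves power. The energy figures
# are rough literature ballparks, NOT Mythic's actual numbers.

MAC_PJ = 1.0             # assumption: ~1 pJ per multiply-accumulate
DRAM_ACCESS_PJ = 1000.0  # assumption: ~1 nJ per off-chip memory access

def inference_energy_pj(num_macs: float, dram_accesses_per_mac: float) -> float:
    """Total energy (pJ) for one inference pass under this toy model."""
    return num_macs * (MAC_PJ + dram_accesses_per_mac * DRAM_ACCESS_PJ)

NUM_MACS = 1e9  # assumption: a ~1 GMAC vision model

conventional = inference_energy_pj(NUM_MACS, dram_accesses_per_mac=0.1)
in_memory = inference_energy_pj(NUM_MACS, dram_accesses_per_mac=0.0)
print(f"Toy-model energy ratio: {conventional / in_memory:.0f}x")  # ~100x
```

Even with only one off-chip access per ten operations, data movement dominates the energy budget in this toy model, and that is precisely the cost in-memory architectures are designed to eliminate.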

Similarly, stealthy Cerebras Systems is reported to be developing tailor-made chips to optimize the training step in deep-learning models. In addition to a seasoned team from the semiconductor industry, Cerebras has raised $112 million from leading venture funds.

Groq, founded by ex-Google TPU team members, is developing a dedicated machine-learning chip slated to be available in 2018. The chip is claimed to run at 400 TOPS with a power efficiency of 8 TOPS per watt, roughly a 12x improvement over Nvidia's Pegasus platform.
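
Taking the quoted numbers at face value (these are vendor claims, not independent measurements), the implied power draw and the efficiency gain over Pegasus work out as follows:

```python
# Implied figures from Groq's claims: 400 TOPS at 8 TOPS/W.
groq_tops, groq_tops_per_watt = 400, 8.0
pegasus_tops_per_watt = 320 / 500  # 0.64 TOPS/W, from above

print(f"Implied Groq power draw: {groq_tops / groq_tops_per_watt:.0f} W")       # 50 W
print(f"Gain over Pegasus: {groq_tops_per_watt / pegasus_tops_per_watt:.1f}x")  # 12.5x
```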

Suffice it to say, there is a significant amount of innovation happening as the industry moves from CPUs to GPUs to other alternatives that provide not just processing horsepower but are also better optimized for other key metrics, e.g. power consumption. This innovation, and the development of a hardware + software ecosystem, will be important for the scalability of AVs and related solutions based on deep learning.

