
Visualizing the Race for EV Dominance



Electric Car Companies: Eating Tesla’s Dust

This was originally posted on Elements.

Tesla has reigned supreme among electric car companies ever since it first released the Roadster back in 2008.

The California-based company headed by Elon Musk ended 2020 with 23% of the EV market and recently became the first automaker to hit a $1 trillion market capitalization. However, competitors like Volkswagen hope to accelerate their own EV efforts to unseat Musk’s company as the dominant manufacturer.

This graphic based on data from EV Volumes compares Tesla and other top carmakers’ positions today—from an all-electric perspective—and gives market share projections for 2025.

Auto Majors Playing Catch-up

According to Wood Mackenzie, Volkswagen will become the largest manufacturer of EVs before 2030. In order to achieve this, the world’s second-biggest carmaker is in talks with suppliers to secure direct access to the raw materials for batteries.

It also plans to build six battery factories in Europe by 2030 and to invest globally in charging stations. Still, EV Volumes projects that by 2025 the German company will hold only 12% of the market, versus Tesla's 21%.

Company | Sales 2020 | Sales 2025 (projected) | Market Cap (Oct 2021, USD)
Tesla | 499,000 | 2,800,000 | $1,023B
Volkswagen Group | 230,000 | 1,500,000 | $170B
BYD | 136,000 | 377,000 | $113B
SGMW (GM, Wuling Motors, SAIC) | 211,000 | 1,100,000 | $89B
BMW | 48,000 | 455,000 | $67B
Daimler (Mercedes-Benz) | 55,000 | 483,000 | $103B
Renault-Nissan-Mitsubishi | 191,000 | 606,000 | $39B
Geely | 40,000 | 382,000 | $34B
Hyundai-Kia | 145,000 | 750,000 | $112B
Stellantis | 82,000 | 931,000 | $63B
Toyota | 11,000 | 382,000 | $240B
Ford | 1,400 | 282,000 | $63B

Other auto giants are following the same track towards EV adoption.

GM, the largest U.S. automaker, wants to stop selling fuel-burning cars by 2035. The company is making a big push into pure electric vehicles, with more than 30 new models expected by 2025.

Meanwhile, Ford expects 40% of its vehicles sold to be electric by the year 2030. The American carmaker has laid out plans to invest tens of billions of dollars in electric and autonomous vehicle efforts in the coming years.

Tesla’s Brand: A Secret Weapon

When it comes to electric car company brand awareness in the marketplace, Tesla still surpasses all others. In fact, more than one-fourth of shoppers who are considering an EV said Tesla is their top choice.

“They’ve done a wonderful job at presenting themselves as the innovative leader of electric vehicles and therefore, this is translating high awareness among consumers…”

—Rachelle Petusky, Research at Cox Automotive Mobility Group

In 2020, Tesla surpassed Audi to become the fourth-largest luxury car brand in the United States, behind only BMW, Lexus, and Mercedes-Benz.

The Dominance of Electric Car Companies by 2040

BloombergNEF expects annual passenger EV sales to reach 13 million in 2025, 28 million in 2030, and 48 million by 2040, outselling gasoline and diesel models (42 million).

As the EV market continues to grow globally, competitors hope to take a run at Tesla’s lead—or at least stay in the race.


Charted: The Exponential Growth in AI Computation

In eight decades, artificial intelligence has moved from the purview of science fiction to reality. Here's a quick history of AI computation.


Electronic computers had barely been around for a decade, having first appeared in the 1940s, when experiments with AI began. Now we have AI models that can write poetry and generate images from textual prompts. But what has led to such exponential growth in such a short time?

This chart from Our World in Data tracks the history of AI through the amount of computation power used to train an AI model, using data from Epoch AI.

The Three Eras of AI Computation

In the 1950s, American mathematician Claude Shannon trained a robotic mouse called Theseus to navigate a maze and remember its course—the first apparent artificial learning of any kind.

Theseus was built on just 40 floating point operations (FLOPs), a unit of measurement that counts the number of basic arithmetic operations (addition, subtraction, multiplication, or division) a computer performs; here, FLOPs measure the total computation used to train a system.

ℹ️ The related rate, FLOPS (operations per second), is often used to measure the computational performance of computer hardware: the higher the rate, the more powerful the system. In this article, FLOPs count the total operations used to train each model.

Computation power, the availability of training data, and algorithms are the three main ingredients of AI progress. And for the first few decades of AI advances, compute, the computational power needed to train an AI model, grew in line with Moore's Law.

Period | Era | Compute Doubling Time
1950–2010 | Pre-Deep Learning | 18–24 months
2010–2016 | Deep Learning | 5–7 months
2016–2022 | Large-Scale Models | 11 months

Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
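To get a sense of what these doubling times imply, here is a minimal Python sketch (the ten-year horizon is illustrative and not from the paper) that turns a doubling time into a growth multiple:

```python
# Rough growth implied by a fixed compute-doubling time.
# Doubling times come from the table above; the horizon is illustrative.

def growth_factor(years: float, doubling_months: float) -> float:
    """Return how many times compute multiplies over `years`
    if it doubles every `doubling_months` months."""
    doublings = (years * 12) / doubling_months
    return 2 ** doublings

# Pre-Deep Learning era: roughly Moore's Law pace (~21 months per doubling)
print(f"10 years at a 21-month doubling: ~{growth_factor(10, 21):,.0f}x")

# Deep Learning era: ~6 months per doubling
print(f"10 years at a 6-month doubling: ~{growth_factor(10, 6):,.0f}x")
```

At a Moore's Law pace, a decade multiplies compute by roughly 50 times; at a six-month doubling, by about a million times.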

However, at the start of the Deep Learning Era, heralded by AlexNet (an image recognition AI) in 2012, that doubling timeframe shortened considerably to six months, as researchers invested more in computation and processors.

With the emergence of AlphaGo in 2015—a computer program that beat a human professional Go player—researchers have identified a third era: that of the large-scale AI models whose computation needs dwarf all previous AI systems.

Predicting AI Computation Progress

Looking back at just the last decade, compute has grown so tremendously that it's difficult to comprehend.

For example, the compute used to train Minerva, an AI which can solve complex math problems, is nearly 6 million times that used to train AlexNet a decade earlier.

Here’s a list of important AI models through history and the amount of compute used to train them.

AI | Year | FLOPs
Theseus | 1950 | 40
Perceptron Mark I | 1957–58 | 695,000
Neocognitron | 1980 | 228 million
NetTalk | 1987 | 81 billion
TD-Gammon | 1992 | 18 trillion
NPLM | 2003 | 1.1 petaFLOPs
AlexNet | 2012 | 470 petaFLOPs
AlphaGo | 2016 | 1.9 million petaFLOPs
GPT-3 | 2020 | 314 million petaFLOPs
Minerva | 2022 | 2.7 billion petaFLOPs

Note: One petaFLOP = one quadrillion FLOPs. Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
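As a quick check on the Minerva-versus-AlexNet comparison above, this small Python sketch converts the table's rounded values to raw FLOPs and takes the ratio; it is illustrative arithmetic only, not part of the source's methodology.

```python
# Sanity-check the "nearly 6 million times" comparison using the
# rounded figures from the table above.
PETA = 1e15  # one petaFLOP = one quadrillion FLOPs

training_flops = {
    "Theseus (1950)": 40,
    "AlexNet (2012)": 470 * PETA,
    "GPT-3 (2020)": 314e6 * PETA,
    "Minerva (2022)": 2.7e9 * PETA,
}

ratio = training_flops["Minerva (2022)"] / training_flops["AlexNet (2012)"]
print(f"Minerva used ~{ratio:,.0f}x the training compute of AlexNet")
# Roughly 5.7 million, i.e. "nearly 6 million times"
```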

The result of this growth in computation, along with the availability of massive data sets and better algorithms, has yielded a lot of AI progress in seemingly very little time. Now AI doesn’t just match, but also beats human performance in many areas.

It's difficult to say whether the same pace of computation growth will be maintained. Large-scale models require increasingly more compute power to train, and if computation doesn't continue to ramp up, progress could slow. Exhausting all the data currently available for training AI models could also impede the development and deployment of new models.

However, with all the funding poured into AI recently, perhaps more breakthroughs are around the corner—like matching the computation power of the human brain.

Where Does This Data Come From?

Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.

Note: The estimated time for computation to double varies across different research efforts, including Amodei and Hernandez (2018) and Lyzhov (2021). This article is based on our source's findings; please see their full paper for further details. Furthermore, the authors are aware of the framing concerns with deeming an AI model “regular-sized” or “large-sized” and say further research is needed in the area.

Methodology: The authors of the paper used two methods to determine the amount of compute used to train AI models: counting the number of operations and tracking GPU time. Both approaches have drawbacks, namely a lack of transparency around training processes and growing complexity as ML models scale up.
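To illustrate the GPU-time style of estimate, here is a hedged back-of-the-envelope sketch in Python: multiply the number of accelerators by training time, peak throughput, and an assumed utilization rate. All figures below are hypothetical placeholders, and the simplification omits details the paper's authors account for.

```python
# Simplified sketch of a GPU-time compute estimate (not the paper's exact
# procedure). All inputs below are hypothetical placeholders.

def estimate_training_flops(num_gpus: int,
                            training_days: float,
                            peak_flops_per_gpu: float,
                            utilization: float) -> float:
    """Total FLOPs ≈ GPUs × training seconds × peak FLOPS per GPU × utilization."""
    seconds = training_days * 24 * 3600
    return num_gpus * seconds * peak_flops_per_gpu * utilization

# Hypothetical run: 1,000 accelerators, 30 days, 100 teraFLOPS peak, 30% utilization
flops = estimate_training_flops(1_000, 30, 100e12, 0.3)
print(f"~{flops / 1e15:,.0f} petaFLOPs")  # about 78,000 petaFLOPs in this toy case
```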
