Charted: The Exponential Growth in AI Computation

A time series chart showing the creation of machine learning systems on the x-axis and the amount of AI computation they used on the y-axis measured in FLOPs.


Electronic computers, first built in the 1940s, had barely been around for a decade before experiments with AI began. Now we have AI models that can write poetry and generate images from text prompts. But what’s led to such exponential growth in such a short time?

This chart from Our World in Data tracks the history of AI through the amount of computation power used to train an AI model, using data from Epoch AI.

The Three Eras of AI Computation

In the 1950s, American mathematician Claude Shannon trained a robotic mouse called Theseus to navigate a maze and remember its course—the first apparent artificial learning of any kind.

Theseus was built on just 40 floating point operations (FLOPs). A FLOP is a single basic arithmetic operation (an addition, subtraction, multiplication, or division), so the total number of FLOPs used in training measures how much computation went into building an AI model.

ℹ️ The related rate measure, FLOPS (floating point operations per second), is often used to gauge the computational performance of computer hardware, while total FLOPs measure how much computation a training run required. Either way, the higher the count, the more computation involved.
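For readers who want to see how those two measures relate, here is a minimal sketch in Python; the chip throughput, utilization, and cluster size are illustrative assumptions, not figures from the source.

```python
# Minimal sketch: total training compute (FLOPs) vs. hardware speed (FLOPS).
# All hardware figures below are illustrative assumptions, not from the source.

PETA = 1e15

total_training_flops = 2.7e9 * PETA   # e.g. Minerva's 2.7 billion petaFLOPs (see table below)
peak_flops_per_chip = 300e12          # assumed ~300 teraFLOPS per accelerator
utilization = 0.3                     # assumed fraction of peak throughput actually achieved
num_chips = 1_000                     # assumed cluster size

seconds = total_training_flops / (peak_flops_per_chip * utilization * num_chips)
print(f"Rough training time: {seconds / 86_400:.0f} days")  # ~347 days under these assumptions
```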

Computational power, the availability of training data, and algorithms are the three main ingredients of AI progress. And for the first few decades of AI advances, compute (the computational power needed to train an AI model) grew in line with Moore’s Law.

Period    | Era                | Compute Doubling
1950–2010 | Pre-Deep Learning  | 18–24 months
2010–2016 | Deep Learning      | 5–7 months
2016–2022 | Large-scale models | 11 months

Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al. (2022).
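To get a feel for what those doubling periods imply, the short sketch below compounds each era’s rate over an illustrative ten-year horizon, taking rough midpoints (21, 6, and 11 months) of the doubling ranges in the table above.

```python
# How much training compute multiplies over a fixed horizon, given a doubling period:
# growth = 2 ** (months_elapsed / doubling_period_months)

def growth_factor(months_elapsed: float, doubling_months: float) -> float:
    return 2 ** (months_elapsed / doubling_months)

horizon = 10 * 12  # ten years, purely illustrative

# Doubling periods taken as rough midpoints of the ranges in the table above.
for era, doubling in [("Pre-Deep Learning", 21), ("Deep Learning", 6), ("Large-scale models", 11)]:
    print(f"{era:<20} ~{growth_factor(horizon, doubling):,.0f}x over ten years")
```

Under these assumptions, a decade at the Deep Learning era’s pace multiplies training compute by roughly a million times, compared with about fifty times at the pre-deep-learning pace.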

However, at the start of the Deep Learning Era, heralded by AlexNet (an image recognition AI) in 2012, that doubling timeframe shortened considerably to six months, as researchers invested more in computation and processors.

With the emergence of AlphaGo in 2015—a computer program that beat a human professional Go player—researchers have identified a third era: that of the large-scale AI models whose computation needs dwarf all previous AI systems.

Predicting AI Computation Progress

Looking back at just the last decade, compute has grown so tremendously that it’s difficult to comprehend.

For example, the compute used to train Minerva, an AI that can solve complex math problems, is nearly 6 million times what was used to train AlexNet 10 years earlier.

Here’s a list of important AI models through history and the amount of compute used to train them.

AI                | Year    | FLOPs
Theseus           | 1950    | 40
Perceptron Mark I | 1957–58 | 695,000
Neocognitron      | 1980    | 228 million
NetTalk           | 1987    | 81 billion
TD-Gammon         | 1992    | 18 trillion
NPLM              | 2003    | 1.1 petaFLOPs
AlexNet           | 2012    | 470 petaFLOPs
AlphaGo           | 2016    | 1.9 million petaFLOPs
GPT-3             | 2020    | 314 million petaFLOPs
Minerva           | 2022    | 2.7 billion petaFLOPs

Note: One petaFLOP = one quadrillion (10^15) FLOPs. Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al. (2022).
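The “nearly 6 million times” comparison mentioned earlier follows directly from these figures; here is the quick arithmetic check.

```python
# Quick check of the Minerva vs. AlexNet comparison using the table's figures.
PETA = 1e15

alexnet_flops = 470 * PETA      # AlexNet, 2012
minerva_flops = 2.7e9 * PETA    # Minerva, 2022

print(f"Minerva used ~{minerva_flops / alexnet_flops:,.0f}x AlexNet's training compute")
# ~5,744,681x, i.e. nearly 6 million times
```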

The result of this growth in computation, along with the availability of massive datasets and better algorithms, has been a great deal of AI progress in seemingly very little time. AI now doesn’t just match human performance in many areas, it beats it.

It’s difficult to say whether the same pace of computation growth will be maintained. Large-scale models require increasingly more compute power to train, and if computation doesn’t continue to ramp up, progress could slow down. Exhausting all the data currently available for training AI models could also impede the development and deployment of new models.

However, with all the funding poured into AI recently, perhaps more breakthroughs are around the corner, such as matching the computational power of the human brain.

Where Does This Data Come From?

Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al. (2022).

Note: The estimated time for computation to double can vary depending on the research approach, including Amodei and Hernandez (2018) and Lyzhov (2021). This article is based on our source’s findings; please see the full paper for further details. Furthermore, the authors are cognizant of the framing concerns around deeming an AI model “regular-sized” or “large-sized” and note that further research is needed in this area.

Methodology: The authors of the paper used two methods to determine the amount of compute used to train AI models: counting the number of operations and tracking GPU time. Both approaches have drawbacks, namely a lack of transparency around training processes and growing complexity as ML models scale up.
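For context, the sketch below illustrates in rough form (not the paper’s exact methodology) what those two approaches look like: an operation count using the common rule of thumb of about 6 FLOPs per parameter per training token for dense transformer models, and a hardware-side estimate from GPU time. The parameter and token counts loosely match GPT-3’s published scale; the cluster size, duration, peak throughput, and utilization are illustrative assumptions.

```python
# Sketch of two ways to estimate training compute (not the paper's exact methodology).

def flops_from_operation_count(parameters: float, training_tokens: float) -> float:
    # Rule of thumb for dense transformers: ~6 FLOPs per parameter per training token
    # (covering the forward and backward passes).
    return 6 * parameters * training_tokens

def flops_from_gpu_time(num_gpus: int, days: float, peak_flops: float, utilization: float) -> float:
    # Hardware-side estimate: GPUs x seconds of training x peak throughput x achieved utilization.
    return num_gpus * days * 86_400 * peak_flops * utilization

# (a) GPT-3-scale operation count: ~175B parameters, ~300B training tokens.
print(f"{flops_from_operation_count(175e9, 300e9):.2e} FLOPs")      # ~3.15e+23

# (b) Hypothetical cluster: 1,000 GPUs for 97 days at 125 teraFLOPS peak, 30% utilization.
print(f"{flops_from_gpu_time(1_000, 97, 125e12, 0.30):.2e} FLOPs")  # ~3.14e+23
```

Both estimates land around 3 × 10^23 FLOPs, roughly consistent with the 314 million petaFLOPs listed for GPT-3 above, which is why cross-checking the two methods is useful, although limited transparency about training runs makes both harder to apply in practice.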


This article was published as a part of Visual Capitalist's Creator Program, which features data-driven visuals from some of our favorite Creators around the world.


AI Week Wrap Report: See All the Visuals in One Place

AI Week, sponsored by Terzo, is Visual Capitalist’s in-depth exploration of the latest insights in the world of artificial intelligence.


AI Week was our first-of-its-kind editorial event. In it, we examined the latest AI insights, including the amount each nation is spending on AI, how people utilize AI today, the skills required for a career in AI, and much more.

All these insights were drawn from the 2025 Stanford AI Index and other cutting-edge sources. All of them are available at Visual Capitalist’s AI content hub, brought to you by our partners at Terzo.

Tap Into AI Week

This week, we examined several key areas of the AI landscape in 2025.

First, we examined the state of global AI investment, discovering that the U.S. has raised nearly half a trillion dollars in private AI investment since 2013—the most of any nation. 

Following the U.S. were China and the U.K., which invested $120 billion and $30 billion, respectively, during the same period.

Patent filings serve as a means to measure innovation and leadership in the technology sector while also providing legal protection for novel ideas and inventions.

In our second post, we examined AI leadership by analyzing the number of AI patents filed by major nations. We found that China has accumulated 70% of all global AI patents, the most in recent years.

However, evidence does suggest that many of these patents were applied for and protected within China alone.

AI can be found in nearly every digital product today. So, in the third post, we explored how people utilize AI today.

Here, we found that the primary reason people use AI today is for professional and personal support, showing that AI can assist humans in managing both their emotions and their lives.

However, AI continues to find many uses in content creation, learning, and creativity.

A significant aspect of the conversation surrounding AI models is the substantial amount of money tech giants are investing in their training.

We examined corporate investment in various AI models, finding that in recent years, Google has spent the most. Although data is limited, it’s believed that the company spent $192 million on training Gemini 1.0 Ultra, the highest amount across all leading models.

Bar chart showing the estimated cost of training AI models in 2023 and 2024.

The conversation around AI has also raised the question of whether humans or machines are faster at technical tasks.

While AI systems have historically fallen short compared to humans, the gap has narrowed considerably over the past year. Now, AI surpasses humans in specific technical skills, including advanced mathematics and visual reasoning.

A line chart showing AI vs human performance in various technical tasks

The advent of AI has also created the need for AI-based jobs. Stanford University’s 2025 AI Index examined AI job postings throughout the U.S. and found that the most sought-after skill is the programming language Python.

Computer science, data analysis, and an understanding of the Agile working methodology were also identified as valuable AI skills.

Bar chart showing the most wanted skills in AI jobs in 2024.

For our final AI Week graphic, we pitted American AIs against their Chinese counterparts in a test of performance.

The graphic charts the performance of the top U.S. and Chinese AI models on LMSYS’s Chatbot Arena. It shows that while U.S. models have consistently outperformed Chinese models, the performance gap has closed dramatically in recent years.

A line chart showing U.S. vs. China AI performance

Helping the World to Discover Your Data

At Visual Capitalist, we craft campaigns like AI Week that tackle our clients’ key challenges.

Whether by making your data more discoverable, leveraging our brand and audience of 12 million people per month, or consulting and educating around data discovery, our goal is to help you isolate the signal from the noise.

If you want to learn how companies like Terzo, BlackRock, MSCI, and Morningstar grew their brands by partnering with Visual Capitalist, contact us today.
