Technology
The Tech Giants Growing Behind China’s Great Firewall
The Chart of the Week is a weekly Visual Capitalist feature on Fridays.
Every day, your feeds are likely dominated by the latest news about Silicon Valley’s biggest tech giants.
Whether it’s Facebook’s newest algorithm changes, Amazon’s announcement to enter the healthcare market, a new acquisition by Alphabet, or the buzz about the latest iPhone – the big four tech giants in the U.S. are covered extensively by the media, and we’re all very familiar with what they do.
However, what is less commonly talked about is the alternate universe that exists on the other side of China’s Great Firewall. It’s there that four Chinese tech giants are taking advantage of a lack of foreign competition to post explosive growth numbers – some of which compare favorably even to their American peers.
Bizarro World
Like the “Bizarro Jerry” episode of Seinfeld, the Chinese-based tech giants look recognizably familiar – but markedly different – to the ones we know so well.
Alibaba
Likely the best known of China’s tech giants, Alibaba is the dominant online retailer in the country. The company had revenues of $25.1 billion in 2017 and is seeing that revenue grow at impressive speeds. In its most recent quarterly results (Q3, 2017), the company noted a 56% jump in revenue.
Amazon’s tough sell: Amazon does exist in the Chinese market, but it just has trouble competing with Jack Ma’s creation. Amazon has less than a 1% share of the e-commerce space in China, after a decade of trying to get a foothold. Further, Alibaba also runs AliCloud, which provides direct competition to Amazon’s AWS.
Baidu
Baidu is the largest search engine in China and also a leading player in AI. It’s the most visited website in China, and ranks #4 globally. The company will announce 2017 annual results in the coming weeks, after reporting a 29% jump in revenue in Q3 2017.
Google’s searching for a way in: Google was blocked in China in 2010 after refusing to filter search requests. However, since then, the giant has been able to take very small steps in entering the Chinese market – even though its signature search engine is still blocked, Google now has at least three offices in the country.
Tencent
Tencent has recently been in the news for its rapidly surging stock. The company, which owns the dominant social platform in China (WeChat), is now valued at over $500 billion. For those keeping tabs, Facebook is currently worth $550 billion.
It’s complicated: Facebook remains blocked in China, meaning that Zuckerberg and company can’t tap a market of more than a billion people with growing buying power. Even if it found its way in, there are multiple social platforms in China and competition would be stiff.
Xiaomi
Dubbed “China’s Apple”, Xiaomi is one of the world’s most valuable private companies. Things have been hot and cold for the ambitious smartphone manufacturer, but reports have recently surfaced that Xiaomi will IPO in the second half of 2018 at a valuation upwards of $50 billion.
Apple’s shine has dulled: Apple’s entrance into the Chinese market was once described as a success, but recently competition from domestic manufacturers has derailed that claim.
Crossing Over
After a new round of financing, Bytedance – a Chinese company specializing in short-form video – surpassed Uber as the most valuable startup in the world, valued at $75 billion.
Unlike some of its peers, Bytedance has found success outside of China. TikTok, for example, has well over 300 million users worldwide, and the company is making investments in other popular broadcasting platforms, such as Live.me.
AI
Charted: The Exponential Growth in AI Computation
In eight decades, artificial intelligence has moved from the purview of science fiction to reality. Here’s a quick history of AI computation.

Electronic computers first appeared in the 1940s, and they had barely been around for a decade before experiments with AI began. Now we have AI models that can write poetry and generate images from textual prompts. But what’s led to such exponential growth in such a short time?
This chart from Our World in Data tracks the history of AI through the amount of computation power used to train an AI model, using data from Epoch AI.
The Three Eras of AI Computation
In the 1950s, American mathematician Claude Shannon trained a robotic mouse called Theseus to navigate a maze and remember its course—the first apparent artificial learning of any kind.
Theseus was built on just 40 floating point operations (FLOPs), a unit of measurement that counts basic arithmetic operations (addition, subtraction, multiplication, or division). Here it measures the total computation used to train a model, rather than the number of operations a processor can perform per second.
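As a rough illustration of what counting FLOPs looks like in practice, here is a minimal Python sketch (not from the source; the layer sizes are hypothetical) that tallies the operations in a single dense matrix multiplication:

```python
# Rough illustration (not from the source article): tallying the FLOPs in a
# dense matrix multiplication of an (m x k) matrix with a (k x n) matrix.
# Each of the m * n output entries needs k multiplications and k - 1 additions.

def matmul_flops(m: int, k: int, n: int) -> int:
    """Total FLOPs for computing an (m x k) @ (k x n) matrix product."""
    return m * n * (2 * k - 1)

# Hypothetical example: one input vector of length 784 through a 784 x 128 layer
print(matmul_flops(1, 784, 128))  # 200,576 FLOPs for that single layer
```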
Computation power, the availability of training data, and algorithms are the three main ingredients of AI progress. For the first few decades of AI advances, compute (the computational power needed to train an AI model) grew in line with Moore’s Law.
| Period | Era | Compute Doubling |
|---|---|---|
| 1950–2010 | Pre-Deep Learning | 18–24 months |
| 2010–2016 | Deep Learning | 5–7 months |
| 2016–2022 | Large-scale models | 11 months |

Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
However, at the start of the Deep Learning Era, heralded by AlexNet (an image recognition AI) in 2012, that doubling timeframe shortened considerably to six months, as researchers invested more in computation and processors.
With the emergence of AlphaGo in 2015—a computer program that beat a human professional Go player—researchers have identified a third era: that of the large-scale AI models whose computation needs dwarf all previous AI systems.
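To put these doubling times in perspective, here is a small Python sketch (using rough midpoints of the ranges in the table above, which are assumptions for illustration) of how differently compute compounds over a decade in each era:

```python
# Sketch: how much training compute multiplies over 10 years in each era,
# assuming steady exponential growth at approximate (midpoint) doubling times
# taken from the table above.

def growth_factor(years: float, doubling_months: float) -> float:
    """Multiplicative growth after `years`, given a doubling time in months."""
    return 2 ** (years * 12 / doubling_months)

for era, months in [("Pre-Deep Learning", 21), ("Deep Learning", 6), ("Large-scale models", 11)]:
    print(f"{era:>18}: ~{growth_factor(10, months):,.0f}x over 10 years")

# Roughly: ~52x, ~1,048,576x, and ~1,923x respectively
```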
Predicting AI Computation Progress
Looking back at just the last decade, compute has grown so tremendously that it’s difficult to comprehend.
For example, the compute used to train Minerva, an AI which can solve complex math problems, is nearly 6 million times that which was used to train AlexNet 10 years ago.
Here’s a list of important AI models through history and the amount of compute used to train them.
| AI | Year | FLOPs |
|---|---|---|
| Theseus | 1950 | 40 |
| Perceptron Mark I | 1957–58 | 695,000 |
| Neocognitron | 1980 | 228 million |
| NetTalk | 1987 | 81 billion |
| TD-Gammon | 1992 | 18 trillion |
| NPLM | 2003 | 1.1 petaFLOPs |
| AlexNet | 2012 | 470 petaFLOPs |
| AlphaGo | 2016 | 1.9 million petaFLOPs |
| GPT-3 | 2020 | 314 million petaFLOPs |
| Minerva | 2022 | 2.7 billion petaFLOPs |

Note: One petaFLOP = one quadrillion FLOPs. Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
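As a quick sanity check on the “nearly 6 million times” comparison above, the ratio follows directly from the FLOP values in the table (a minimal sketch):

```python
# Sanity check of the Minerva vs. AlexNet comparison, using values from the table above.
PETAFLOP = 1e15  # one petaFLOP = one quadrillion FLOPs

alexnet_flops = 470 * PETAFLOP     # AlexNet (2012)
minerva_flops = 2.7e9 * PETAFLOP   # Minerva (2022): 2.7 billion petaFLOPs

print(f"Minerva used ~{minerva_flops / alexnet_flops:,.0f}x the compute of AlexNet")
# Minerva used ~5,744,681x the compute of AlexNet
```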
The result of this growth in computation, along with the availability of massive data sets and better algorithms, has been rapid AI progress in a seemingly short amount of time. AI now doesn’t just match human performance in many areas, it beats it.
It’s difficult to say if the same pace of computation growth will be maintained. Large-scale models require increasingly more compute power to train, and if computation doesn’t continue to ramp up it could slow down progress. Exhausting all the data currently available for training AI models could also impede the development and implementation of new models.
However, with all the funding poured into AI recently, perhaps more breakthroughs are around the corner, such as matching the computation power of the human brain.
Where Does This Data Come From?
Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
Note: The estimated time for computation to double can vary depending on the research approach, including Amodei and Hernandez (2018) and Lyzhov (2021). This article is based on our source’s findings. Please see their full paper for further details. Furthermore, the authors are cognizant of the framing concerns with deeming an AI model “regular-sized” or “large-sized” and note that further research is needed in the area.
Methodology: The authors of the paper used two methods to determine the amount of compute used to train AI Models: counting the number of operations and tracking GPU time. Both approaches have drawbacks, namely: a lack of transparency with training processes and severe complexity as ML models grow.
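For a sense of how the operation-counting approach can work in practice, a commonly used approximation for dense transformer models estimates training compute at roughly 6 FLOPs per parameter per training token. The heuristic and the example figures below are assumptions for illustration, not necessarily the paper’s exact procedure:

```python
# Minimal sketch of the operation-counting idea, using the common approximation
# (an assumption here, not necessarily the paper's exact method) that training a
# dense transformer costs roughly 6 FLOPs per parameter per training token.

def approx_training_flops(parameters: float, tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * tokens

# Illustrative GPT-3-scale numbers: ~175 billion parameters, ~300 billion tokens
flops = approx_training_flops(175e9, 300e9)
print(f"~{flops / 1e15:,.0f} petaFLOPs")  # ~315,000,000 petaFLOPs, in line with the table's figure
```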