The Game of Life: Visualizing China’s Social Credit System
In an attempt to instill trust, China has announced a plan to implement a national ranking system for its citizens and companies. Currently in pilot mode, the new system will be rolled out in 2020 and will go through numerous iterations before becoming official.
While the system may be a useful tool for China to manage its growing population of 1.4 billion, it has triggered global concerns around the ethics of big data and whether the system breaches fundamental human rights.
Today’s infographic looks at how China’s proposed social credit system could work, and what the implications might be.
The Government is Always Watching
Currently, the pilot system varies from place to place, whereas the new system is envisioned as a unified system. Although the pilot program may be more of an experiment than a precursor, it gives a good indication of what to expect.
In the pilot system, each citizen is assigned 1,000 points and is continuously monitored and rated on how they behave. Points are earned through good deeds and lost for bad behavior. Users increase points by donating blood or money, praising the government on social media, and helping the poor. Rewards for such behavior can range from getting a promotion at work fast-tracked, to receiving priority status for children’s school admissions.
In contrast, not visiting one’s aging parents regularly, spreading rumors on the internet, and cheating in online games are considered antisocial behaviors. Punishments include public shaming, exclusion from booking flights or train tickets, and restricted access to public services.
Big Data Goes Right to the Source
The perpetual surveillance that comes with the new system is expected to draw on huge amounts of data from a variety of traditional and digital sources.
Police officers have used AI-powered smart glasses and drones to effectively monitor citizens. Footage from these devices showing antisocial behavior can be broadcast to the public to shame the offenders, and deter others from behaving similarly.
For more serious offenders, some cities in China compel people to repay debts by replacing the ringback tone heard by anyone who calls them, without the debtor’s permission. The tone begins with the sound of a police siren, followed by a message such as:
“The person you are calling has been listed as a discredited person by the local court. Please urge this person to fulfill his or her legal obligations.”
Two of the largest companies in China, Tencent and Alibaba, were enlisted by the People’s Bank of China to play an important role in the credit system, raising the issue of third-party data security. WeChat—China’s largest social media platform, owned by Tencent—tracked behavior and ranked users accordingly, while displaying their location in real-time.
Following data concerns, these tech companies—and six others—were not awarded any licenses by the government. However, social media giants are still involved in orchestrating the public shaming of citizens who misbehave.
The Digital Dang’an
The social credit system may not be an entirely new initiative in China. The dang’an (English: record) is a paper file containing an individual’s school reports, information on physical characteristics, employment records, and photographs.
These dossiers, which were first used in the Maoist years, helped the government maintain control of its citizens. The gathering of citizens’ data for China’s social credit system may in fact be seen as a revival of the principle of dang’an in the digital era, with the system providing a powerful tool to monitor citizens whose data is more difficult to capture.
Is the System Working?
In 2018, people with a low score were prohibited from buying plane tickets almost 18 million times, while high-speed train ticket transactions were blocked 5.5 million times. A further 128 people were prohibited from leaving China due to unpaid taxes.
The system could have major implications for foreign business practices—as preference could be given to companies already ranked in the system. Companies with higher scores will be rewarded with incentives which include lower tax rates and better credit conditions, with their behavior being judged in areas such as:
- Paid taxes
- Customs regulation
- Environmental protection
Despite the complexities of gathering vast amounts of data, the system is certainly making an impact. While there are benefits to a standardized scoring system that encourages positive behavior, will it be worth the social cost of gamifying human life?
Charted: The Exponential Growth in AI Computation
In eight decades, artificial intelligence has moved from the purview of science fiction to reality. Here’s a quick history of AI computation.

Electronic computers, first built in the 1940s, had barely been around for a decade before experiments with AI began. Now we have AI models that can write poetry and generate images from textual prompts. But what’s led to such exponential growth in such a short time?
This chart from Our World in Data tracks the history of AI through the amount of computation power used to train an AI model, using data from Epoch AI.
The Three Eras of AI Computation
In the 1950s, American mathematician Claude Shannon trained a robotic mouse called Theseus to navigate a maze and remember its course—the first apparent artificial learning of any kind.
Theseus was built on just 40 floating point operations (FLOPs), a unit of measurement that counts the total number of basic arithmetic operations (addition, subtraction, multiplication, or division) performed, in this case during training. This differs from FLOPS, which measures how many such operations a processor can perform in one second.
Computation power, availability of training data, and algorithms are the three main ingredients of AI progress. For the first few decades of AI advances, compute (the computational power needed to train an AI model) grew according to Moore’s Law.
| Period | Era | Compute Doubling Time |
| --- | --- | --- |
| 1950–2010 | Pre-Deep Learning | 18–24 months |
| 2010–2016 | Deep Learning | 5–7 months |
| 2016–2022 | Large-scale models | 11 months |
Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
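To make these doubling times concrete, here is a minimal sketch in Python (not from the source paper; the growth_factor helper is hypothetical) showing how each era’s doubling period compounds over a decade:

```python
# Minimal sketch: how a compute doubling time compounds over a decade.
# Doubling periods are taken from the table above; the helper function
# is illustrative, not from the source paper.

def growth_factor(years: float, doubling_months: float) -> float:
    """Total factor by which compute grows after `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_months)

# Pre-Deep Learning Era (Moore's Law pace, ~18-24 month doubling)
print(f"24-month doubling: {growth_factor(10, 24):,.0f}x per decade")  # ~32x
print(f"18-month doubling: {growth_factor(10, 18):,.0f}x per decade")  # ~102x

# Deep Learning Era (~6-month doubling)
print(f" 6-month doubling: {growth_factor(10, 6):,.0f}x per decade")   # ~1,048,576x
```

At a Moore’s Law pace, compute grows roughly 30- to 100-fold per decade, while a six-month doubling multiplies it about a million-fold, which is why the eras look so different on a logarithmic chart.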
However, at the start of the Deep Learning Era, heralded by AlexNet (an image recognition AI) in 2012, that doubling timeframe shortened considerably to six months, as researchers invested more in computation and processors.
With the emergence of AlphaGo in 2015—a computer program that beat a human professional Go player—researchers have identified a third era: that of the large-scale AI models whose computation needs dwarf all previous AI systems.
Predicting AI Computation Progress
Looking back at just the last decade, compute has grown so tremendously that it’s difficult to comprehend.
For example, the compute used to train Minerva, an AI that can solve complex math problems, is nearly 6 million times the amount used to train AlexNet a decade earlier.
Here’s a list of important AI models through history and the amount of compute used to train them.
| AI | Year | FLOPs |
| --- | --- | --- |
| Theseus | 1950 | 40 |
| Perceptron Mark I | 1957–58 | 695,000 |
| Neocognitron | 1980 | 228 million |
| NetTalk | 1987 | 81 billion |
| TD-Gammon | 1992 | 18 trillion |
| NPLM | 2003 | 1.1 petaFLOPs |
| AlexNet | 2012 | 470 petaFLOPs |
| AlphaGo | 2016 | 1.9 million petaFLOPs |
| GPT-3 | 2020 | 314 million petaFLOPs |
| Minerva | 2022 | 2.7 billion petaFLOPs |
Note: One petaFLOP = one quadrillion FLOPs. Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
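As a quick sanity check on the “nearly 6 million times” comparison above, here is a small Python sketch (illustrative only, using the table’s figures rather than anything from the source paper):

```python
import math

# Minimal sketch: checking the AlexNet-to-Minerva growth claim using the
# training-compute figures from the table above.

PETA = 1e15  # one petaFLOP = one quadrillion (10^15) FLOPs

alexnet = 470 * PETA    # AlexNet, 2012
minerva = 2.7e9 * PETA  # Minerva, 2022

ratio = minerva / alexnet
print(f"Minerva / AlexNet: ~{ratio:,.0f}x")  # ~5,744,681x, i.e. "nearly 6 million times"

# Average doubling time implied over those 10 years:
months = 10 * 12 / math.log2(ratio)
print(f"Implied average doubling time: ~{months:.1f} months")  # ~5.3 months
```

Note that this window blends two eras; the large-scale trend itself doubles more slowly, but it started from a far higher compute level than the deep learning trend.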
The result of this growth in computation, along with the availability of massive data sets and better algorithms, has been rapid AI progress in seemingly very little time. Now AI doesn’t just match human performance in many areas; it surpasses it.
It’s difficult to say if the same pace of computation growth will be maintained. Large-scale models require increasingly more compute power to train, and if computation doesn’t continue to ramp up it could slow down progress. Exhausting all the data currently available for training AI models could also impede the development and implementation of new models.
However, with all the funding poured into AI recently, perhaps more breakthroughs are around the corner, such as matching the computation power of the human brain.
Where Does This Data Come From?
Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
Note: The estimated time for computation to double can vary depending on different research attempts, including Amodei and Hernandez (2018) and Lyzhov (2021). This article is based on our source’s findings. Please see their full paper for further details. Furthermore, the authors are cognizant of the framing concerns with deeming an AI model “regular-sized” or “large-sized” and said further research is needed in the area.
Methodology: The authors of the paper used two methods to determine the amount of compute used to train AI models: counting the number of operations and tracking GPU time. Both approaches have drawbacks, namely a lack of transparency with training processes and severe complexity as ML models grow.