Which Jobs Will Be Most Impacted by ChatGPT?
On November 30, 2022, OpenAI heralded a new era of artificial intelligence (AI) by introducing ChatGPT to the world.
The AI chatbot stunned users with its human-like, thorough responses. ChatGPT could comprehend and answer a wide variety of questions, make suggestions, research and write essays and briefs, and even tell jokes, among other tasks.
Many of these skills are ones workers around the world use in their jobs, which raises the question: which jobs will be transformed, or even replaced, by generative AI in the years ahead?
This infographic from Harrison Schell visualizes the March 2023 findings of OpenAI on the potential labor market impact of large language models (LLMs) and various applications of generative AI, including ChatGPT.
Methodology
The OpenAI working paper specifically examined the U.S. industries and jobs most “exposed” to large language models like GPT, the model family that ChatGPT is built on.
Key to the paper is the definition of what “exposed” actually means:
“A proxy for potential economic impact without distinguishing between labor-augmenting or labor-displacing effects.” – OpenAI
Thus, the results include both jobs where humans could possibly use AI to optimize their work, along with jobs that could potentially be automated altogether.
OpenAI found that 80% of the American workforce belonged to an occupation where at least 10% of its tasks could be done (or aided) by AI, while roughly one-fifth of the workforce belonged to an occupation where at least 50% of work tasks could be impacted by artificial intelligence.
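To make those thresholds concrete, here is a minimal sketch of how task-level exposure labels could be rolled up into the occupation- and workforce-level percentages reported above. The function names and toy data are hypothetical and purely illustrative; the actual study applies a more detailed rubric to a full occupational task database.

```python
# Illustrative sketch only: hypothetical names and toy data, not the paper's actual rubric.

def occupation_exposure(task_exposed: list[bool]) -> float:
    """Share of an occupation's tasks where AI could cut completion time by >= 50%."""
    return sum(task_exposed) / len(task_exposed)

def workforce_share_above(occupations: dict[str, tuple[list[bool], int]],
                          threshold: float) -> float:
    """Employment-weighted share of workers in occupations whose exposure
    meets or exceeds the given threshold."""
    total_workers = sum(workers for _, workers in occupations.values())
    exposed_workers = sum(
        workers
        for tasks, workers in occupations.values()
        if occupation_exposure(tasks) >= threshold
    )
    return exposed_workers / total_workers

# Toy data: {occupation: (per-task exposure flags, number of workers)}
toy = {
    "tax preparer": ([True, True, True, True], 80_000),
    "short-order cook": ([False, False, False, True], 120_000),
}

print(workforce_share_above(toy, 0.10))  # share of workers with >= 10% of tasks exposed
print(workforce_share_above(toy, 0.50))  # share of workers with >= 50% of tasks exposed
```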
The Jobs Most and Least at Risk of AI Disruption
Here is a list of jobs highlighted in the paper as likely to see (or already seeing) AI disruption, meaning occupations where AI can reduce the time needed to complete the associated tasks by at least 50%.
Exposure was assessed both by human annotators and by GPT-4, with results from both shown below:
Jobs | Categorized By | AI Exposure |
---|---|---|
Accountants | AI | 100% |
Admin and legal assistants | AI | 100% |
Climate change policy analysts | AI | 100% |
Reporters & journalists | AI | 100% |
Mathematicians | Human & AI | 100% |
Tax preparers | Human | 100% |
Financial analysts | Human | 100% |
Writers & authors | Human | 100% |
Web designers | Human | 100% |
Blockchain engineers | AI | 97.1% |
Court reporters | AI | 96.4% |
Proofreaders | AI | 95.5% |
Correspondence clerks | AI | 95.2% |
Survey researchers | Human | 84.0% |
Interpreters/translators | Human | 82.4% |
PR specialists | Human | 80.6% |
Animal scientists | Human | 77.8% |
Editor’s note: The paper highlights only a subset of impacted jobs. The AI assessment found 84 additional “fully exposed” jobs, not all of which are listed here, and the human assessment found 15 additional “fully exposed” jobs that are not listed.
Generally, jobs that require repetitive tasks, some level of data analysis, and routine decision-making were found to face the highest risk of exposure.
Perhaps unsurprisingly, “information processing industries” that involve writing, calculating, and high-level analysis have a higher exposure to LLM-based artificial intelligence. However, within those industries, jobs that rely heavily on science and critical-thinking skills show a negative correlation with AI exposure.
On the flip side, not every job is likely to be affected. Here is a list of the jobs least exposed to disruption from large language models.
Jobs Least Exposed to AI:
- Athletes
- Short-order cooks
- Large equipment operators
- Barbers/hair stylists
- Glass installers & repairers
- Dredge operators
- Automotive mechanics
- Power-line installers/repairers
- Masons, carpenters, roofers
- Oil field maintenance workers
- Plumbers, painters, pipefitters
- Servers, dishwashers, bartenders
Naturally, hands-on industries like manufacturing, mining, and agriculture were more protected, though they still include information-processing roles that are at risk.
Likewise, the in-person service industry is expected to see minimal impact from these kinds of AI models. Still, patterns are beginning to emerge for job seekers and industries that may have to contend with artificial intelligence soon.
Artificial Intelligence’s Impact Across Different Levels of Jobs
OpenAI also analyzed the correlation between a job’s AI exposure and its required education level, wages, and on-the-job training.
The paper found that jobs with higher wages have a higher exposure to LLM-based AI (though there were numerous low-wage jobs with high exposure as well).
Job Parameter | AI Exposure Correlation |
---|---|
Wages | Direct |
Education | Direct |
Training | Inverse |
Professionals with higher education degrees also appeared to be more exposed to AI impact than those without.
However, occupations requiring more on-the-job training had the smallest share of work tasks exposed, compared with jobs requiring little to no training.
Will AI’s Impact on the Job Market Be Good or Bad?
The potential impact of ChatGPT and similar AI-driven models on individual job titles depends on several factors, including the nature of the job, the level of automation that is possible, and the exact tasks required.
However, while certain repetitive and predictable tasks can be automated, others that require intangibles like creative input, cultural nuance, reading social cues, or sound judgment cannot yet be fully automated.
And keep in mind that AI exposure isn’t limited to job replacement. Job transformation, with workers using AI to speed up or improve their output, is extremely likely in many of these scenarios. Already, there are job ads for “AI whisperers” who can effectively optimize the responses of general-purpose AI models.
As the AI arms race moves forward at a pace rarely seen in the history of technology, it likely won’t take long to see the full impact of ChatGPT and other LLMs on both jobs and the economy.

This article was published as a part of Visual Capitalist's Creator Program, which features data-driven visuals from some of our favorite Creators around the world.
Visualizing AI vs. Human Performance in Technical Tasks
AI systems have seen rapid advancements, surpassing human performance in technical tasks such as advanced math and visual reasoning.

The gap between human and machine reasoning is narrowing—and fast.
Over the past year, AI systems have continued to see rapid advancements, surpassing human performance in technical tasks where they previously fell short, such as advanced math and visual reasoning.
This graphic visualizes AI systems’ performance relative to human baselines for eight AI benchmarks measuring tasks including:
- Image classification
- Visual reasoning
- Medium-level reading comprehension
- English language understanding
- Multitask language understanding
- Competition-level mathematics
- PhD-level science questions
- Multimodal understanding and reasoning
This visualization is part of Visual Capitalist’s AI Week, sponsored by Terzo. Data comes from the Stanford University 2025 AI Index Report.
An AI benchmark is a standardized test used to evaluate the performance and capabilities of AI systems on specific tasks.
AI Models Are Surpassing Humans in Technical Tasks
Below, we show how AI models have performed relative to the human baseline in various technical tasks in recent years.
Year | Performance relative to the human baseline (100%) | Task |
---|---|---|
2012 | 89.15% | Image classification |
2013 | 91.42% | Image classification |
2014 | 96.94% | Image classification |
2015 | 99.47% | Image classification |
2016 | 100.74% | Image classification |
2016 | 80.09% | Visual reasoning |
2017 | 101.37% | Image classification |
2017 | 82.35% | Medium-level reading comprehension |
2017 | 86.49% | Visual reasoning |
2018 | 102.85% | Image classification |
2018 | 96.23% | Medium-level reading comprehension |
2018 | 86.70% | Visual reasoning |
2019 | 103.75% | Image classification |
2019 | 36.08% | Multitask language understanding |
2019 | 103.27% | Medium-level reading comprehension |
2019 | 94.21% | English language understanding |
2019 | 90.67% | Visual reasoning |
2020 | 104.11% | Image classification |
2020 | 60.02% | Multitask language understanding |
2020 | 103.92% | Medium-level reading comprehension |
2020 | 99.44% | English language understanding |
2020 | 91.38% | Visual reasoning |
2021 | 104.34% | Image classification |
2021 | 7.67% | Competition-level mathematics |
2021 | 66.82% | Multitask language understanding |
2021 | 104.15% | Medium-level reading comprehension |
2021 | 101.56% | English language understanding |
2021 | 102.48% | Visual reasoning |
2022 | 103.98% | Image classification |
2022 | 57.56% | Competition-level mathematics |
2022 | 83.74% | Multitask language understanding |
2022 | 101.67% | English language understanding |
2022 | 104.36% | Visual reasoning |
2023 | 47.78% | PhD-level science questions |
2023 | 93.67% | Competition-level mathematics |
2023 | 96.21% | Multitask language understanding |
2023 | 71.91% | Multimodal understanding and reasoning |
2024 | 108.00% | PhD-level science questions |
2024 | 108.78% | Competition-level mathematics |
2024 | 102.78% | Multitask language understanding |
2024 | 94.67% | Multimodal understanding and reasoning |
2024 | 101.78% | English language understanding |
From ChatGPT to Gemini, many of the world’s leading AI models are surpassing the human baseline in a range of technical tasks.
The only task where AI systems still haven’t caught up to humans is multimodal understanding and reasoning, which involves processing and reasoning across multiple formats and disciplines, such as images, charts, and diagrams.
However, the gap is closing quickly.
In 2024, OpenAI’s o1 model scored 78.2% on MMMU, a benchmark that evaluates models on multi-discipline tasks demanding college-level subject knowledge.
This was just 4.4 percentage points below the human benchmark of 82.6%. The o1 model also has one of the lowest hallucination rates out of all AI models.
This was a major jump from the end of 2023, when Google Gemini scored just 59.4%, highlighting the rapid improvement of AI performance in these technical tasks.
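The chart’s “performance relative to the human baseline” appears to be the AI score expressed as a percentage of the human score. Under that assumption, the MMMU figures above reproduce the multimodal values in the table; the short sketch below (with a hypothetical helper name) shows the arithmetic.

```python
# Minimal sketch, assuming "performance relative to the human baseline" is
# simply the AI score divided by the human baseline score.

def relative_performance(ai_score: float, human_baseline: float) -> float:
    """Express an AI benchmark score as a percentage of the human baseline (100%)."""
    return ai_score / human_baseline * 100

# o1 on MMMU in 2024: 78.2% vs. a human baseline of 82.6%
print(round(relative_performance(78.2, 82.6), 2))  # 94.67 -> matches the 2024 multimodal row
# Gemini on MMMU at the end of 2023: 59.4%
print(round(relative_performance(59.4, 82.6), 2))  # 71.91 -> matches the 2023 multimodal row
```

Read this way, values above 100% in the table indicate that the best AI system has surpassed the human baseline on that benchmark.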