
Here’s 13 Ideas to Fight Fake News – and a Big Problem With All of Them


Will humans or computer algorithms be the future arbiters of “truth”?

Today’s infographic from Futurism sums up the ideas that academics, technologists, and other experts have proposed to stop the spread of fake news.

Below the infographic, we raise our concerns about these methods.

Here's 13 Ideas to Fight Fake News - and a Big Problem With Them All

While fake news is certainly problematic, the solutions proposed to penalize articles deemed to be “untrue” are just as scary.

Centralizing fact checking creates a system that is inherently fragile, biased, and prone to abuse. Furthermore, axing websites deemed to be “untrue” limits independent thought and discourse, while allowing legacy media to remain entrenched.

Centralizing “Truth”

It could be argued that the best thing about the internet is that it decentralizes content, allowing any individual, blog, or independent media company to stimulate discussion and new ideas with low barriers to entry. Millions of new entrants have changed the media landscape, leaving traditional media flailing to adjust their revenue models while keeping their influence intact.

If we say that “truth” can only be verified by a centralized mechanism – a group of people, or an algorithm written by a group of people – we are welcoming the idea that arbitrary sources will be preferred, while others will not (unless they conform to certain standards).

Based on this mechanism, it is almost certain that well-established journalistic sources like The New York Times or The Washington Post will be the most trusted. By the same token, newer sources (like independent media, or blogs written by emerging thought leaders) will not be able to get traction unless they are referencing or receiving backing from these “trusted” gatekeepers.

The Impact?

This centralization is problematic – and here’s a step-by-step reasoning of why that is the case:

First, either method (human or computer) must rely on preconceived notions of what is “authoritative” and “true” to make decisions. Both will be biased in some way. Humans will lean towards a particular consensus or viewpoint, while computers must rank authority based on different factors (PageRank, backlinks, source recognition, or headline/content analysis); a simplified illustration of this kind of ranking follows the reasoning below.

Next, there is a snowball effect involved: if only posts referencing these authoritative sources of “truth” can get traction on social media, then these sources become even more authoritative over time. This creates entrenchment that will be difficult to overcome, and new bloggers or media outlets will only be able to move up the ladder by associating their posts with an existing consensus. Grassroots movements and new ideas will suffer – especially those that conflict with mainstream beliefs, government, or corporate power.

Finally, this raises concerns about who fact checks the fact checkers. Forbes has a great post on this, showing that Snopes.com (a fact checker) could not even verify basic truths about its own operations.
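To make the first point more concrete, here is a minimal sketch of one such authority signal: a simplified PageRank calculation over a toy link graph. The site names and link structure are invented for illustration, and real ranking systems blend many more signals than this.

```python
# A minimal sketch of a PageRank-style authority score (hypothetical sites and
# links, invented for illustration; not any platform's actual ranking system).
# It shows how link-based authority flows to sources that are already heavily
# linked to, regardless of whether their content is accurate.

def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank scores for a dict of {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_ranks = {page: (1.0 - damping) / n for page in pages}
        for page, outbound in links.items():
            if not outbound:
                # Dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_ranks[p] += damping * ranks[page] / n
            else:
                share = damping * ranks[page] / len(outbound)
                for target in outbound:
                    new_ranks[target] += share
        ranks = new_ranks
    return ranks

# Hypothetical link graph: two legacy outlets cite each other, a new blog
# cites both of them, but nothing links back to the blog.
links = {
    "legacy_paper_a": ["legacy_paper_b"],
    "legacy_paper_b": ["legacy_paper_a"],
    "new_blog": ["legacy_paper_a", "legacy_paper_b"],
}

for page, score in sorted(pagerank(links).items(), key=lambda item: -item[1]):
    print(f"{page}: {score:.3f}")
```

In this toy graph, the new blog stays at the baseline score no matter how accurate its content is, simply because nothing links back to it, which is exactly the entrenchment problem described above.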

Removing articles deemed to be “untrue” is a form of censorship. While it may help to remove many ridiculous articles from people’s social feeds, it will also impact the qualities of the internet that make it so great in the first place: its decentralized nature, and the ability for any one person to make a profound impact on the world.



Visualizing AI Patents by Country

See which countries have been granted the most AI patents each year, from 2010 to 2022.




This infographic shows the number of AI-related patents granted each year from 2010 to 2022 (latest data available). These figures come from the Center for Security and Emerging Technology (CSET), accessed via Stanford University’s 2024 AI Index Report.

From this data, we can see that China first overtook the U.S. in 2013. Since then, the country has seen enormous growth in the number of AI patents granted each year.

Year    China     EU and UK    U.S.      RoW       Global Total
2010    307       137          984       571       1,999
2011    516       129          980       581       2,206
2012    926       112          950       660       2,648
2013    1,035     91           970       627       2,723
2014    1,278     97           1,078     667       3,120
2015    1,721     110          1,135     539       3,505
2016    1,621     128          1,298     714       3,761
2017    2,428     144          1,489     1,075     5,136
2018    4,741     155          1,674     1,574     8,144
2019    9,530     322          3,211     2,720     15,783
2020    13,071    406          5,441     4,455     23,373
2021    21,907    623          8,219     7,519     38,268
2022    35,315    1,173        12,077    13,699    62,264

In 2022, China was granted more patents than every other country combined.
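Using the table above: 1,173 (EU and UK) + 12,077 (U.S.) + 13,699 (RoW) adds up to 26,949 granted patents, well short of China's 35,315.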

While this suggests that the country is very active in AI research, it doesn’t necessarily mean that China is the furthest ahead in terms of capability.

Key Facts About AI Patents

According to CSET, AI patents relate to mathematical relationships and algorithms, which are considered abstract ideas under patent law. They can also have different meanings, depending on where they are filed.

In the U.S., AI patenting is concentrated amongst large companies including IBM, Microsoft, and Google. On the other hand, AI patenting in China is more distributed across government organizations, universities, and tech firms (e.g. Tencent).

In terms of focus area, China’s patents are typically related to computer vision, a field of AI that enables computers and systems to interpret visual data and inputs. Meanwhile, America’s efforts are more evenly distributed across research fields.

Learn More About AI From Visual Capitalist

If you want to see more data visualizations on artificial intelligence, check out this graphic that shows which job departments will be impacted by AI the most.
