Data science and machine learning can help fight disinformation—but we need savvy news consumers, too

Olena Vlasova

Notes on a wall at MisinfoCon DC. The notes are about trust in media, tech adoption, the digital divide, the utility of print, and other factors that relate to disinformation.
Photo from MisinfoCon

Media and technology professionals are experimenting with ways to fight disinformation through machine learning and data science. That’s an important step. But as disinformation campaigns become increasingly sophisticated, we can’t forget about preparing the public to identify disinformation, too.

Approximately 64% of Americans feel that “fabricated news stories cause a great deal of confusion about the basic facts of current issues and events,” according to a study by the Pew Research Center in late 2016. Meanwhile, Ukrainian citizens have been dealing with aggressive disinformation campaigns since 2014.

How machine learning and data science can help fight disinformation

Facebook is experimenting with machine learning to fight disinformation. The company is also using algorithms to reduce the amount of false news in users’ News Feeds. After receiving criticism for allowing disinformation to spread on its platform, Facebook partnered with PolitiFact and Snopes, two of the most influential fact-checking websites, to filter out posts that had been debunked by fact checkers.

Google is working with the International Fact-Checking Network to offer free fact-checking tools and hold training sessions. Additionally, Google’s tech incubator Jigsaw launched the Share the Facts widget, which allows publishers to highlight their fact checks and verified information.

There are other tools to fight disinformation online. Spike and Hoaxy help to identify false news sites. Snopes, CrowdTangle, PHEME, Google Trends, and Meedan all assist in verifying breaking news. Le Monde’s Décodex database categorizes websites with tags such as real or fake.

Using artificial intelligence to support fact checking

One of the latest AI products, from an Israeli start-up, helps advertisers avoid placing ads alongside false news and problematic content. The developers claim the system can spot false stories with almost 90% accuracy.

Or Levi, the company’s founder, claims that people in advanced economies will consume more false news than true news within the next three or four years. “Because a lot of this content is recycled and repeated in different ways,” he says, “we believe we can use AI to pinpoint trends which detect it as being fake.” However, the developers confirm that the tool works best when assisted by human fact-checkers.
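The pattern-spotting Levi describes can be illustrated, in spirit, with a simple word-frequency classifier. The sketch below is a toy example of the general technique, not the start-up's actual system; the headlines, labels, and "reliable"/"dubious" categories are invented for illustration. Real systems train on far larger corpora and richer signals, and, as the developers note, still work best alongside human fact-checkers.

```python
# Toy Naive Bayes classifier over bags of words (illustrative only).
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and doc totals."""
    counts, totals = {}, Counter()
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        # log prior + log likelihoods with add-one smoothing
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(c.values())
        for w in tokenize(text):
            score += math.log((c[w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training headlines, purely for demonstration.
examples = [
    ("scientists publish peer reviewed climate study", "reliable"),
    ("officials confirm budget figures in report", "reliable"),
    ("shocking secret cure doctors hate revealed", "dubious"),
    ("you won't believe this miracle trick", "dubious"),
]
counts, totals = train(examples)
print(classify("shocking miracle cure revealed", counts, totals))  # prints "dubious"
```

Because such a model only learns surface word patterns, it echoes Levi's point: it can flag recycled, repetitive content, but it cannot judge whether the underlying facts are true.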

What AI cannot do, at least not yet, is test the accuracy of facts within articles and grasp the meaning behind certain statements. Another problem is the difficulty of categorizing stories that contain opinions or broad statements. In addition, people usually avoid taking an extra step to verify information, even if it is just a click away. 

“It’s an information arms race, and AI will definitely provide us with some tools to help,” Levi says. “But at the end of the day, the onus will probably always be on humans to use their own intuition to decide whether something is true or not.”

IREX’s approach to preparing the public to counter disinformation

We need to invest not only in improving platforms and algorithms, but also in educating people to be savvy news consumers.

This education needs to start in kindergarten, but most adults could also benefit from learning more about how to identify disinformation. A recent study found that “even smart people are shockingly bad at analyzing sources online.”

Learning to detect propaganda and disinformation is at the core of IREX’s Learn to Discern approach. The approach uses a customizable skill-building and awareness-raising curriculum based on the principle that “It is not about what you read, but about how you read it!”

Learn to Discern can be used with people of all ages. It uses a range of tools and techniques, including interactive training sessions, videos, and games. Results from a recent impact study found that participants scored substantially higher than the general population on identifying disinformation, even 1.5 years after completing the program.

We first used Learn to Discern to counter sophisticated disinformation campaigns in Ukraine, and we are now piloting the curriculum in the United States with communities and reporters. We are also piloting the approach in Ukrainian schools. The pilot currently covers 50 schools in four Ukrainian cities. By 2021, it will be implemented throughout Ukraine, reaching 40,000 youth.

If you are interested in exploring media literacy work in the United States with us, please contact Tara Susman-Peña. To explore possibilities for media literacy work in Ukraine or elsewhere, please contact Mehri Druckman.