Google and Facebook say they can control Taiwan's fake news problem. It's not that easy.

Google, Facebook and LINE claim to have toolkits to stop the spread of disinformation on their platforms. In reality, content farms and propaganda peddlers are way ahead of the game. 

By Jason Liu (劉致昕), Ko Hao-hsiang (柯皓翔) and Hsu Chia-yu (許家瑜)
Photography by Su Wei-ming (蘇威銘)
Design by Brittany Myburgh
Translation by Harrison Chen

This piece first appeared in The Reporter (報導者) and is published under a CC BY 3.0 license.

In part one of this series, we met the uncles and aunties who fall prey to Chinese disinformation on messaging app LINE. In parts two and three, we learned about the scammers and agitprop hucksters who produce the misleading content. In part four, we talk to the social media companies that host the fake news and hear about their solutions to quash the problem. 

BUSINESS OPPORTUNITIES IN THE INFORMATION WARFARE WHACK-A-MOLE

From a single Google Analytics ID and a private Telegram group of 481 people, a mysterious figure named “Boss Evan” was able to rule a content farm empire that spanned Taiwan, Malaysia, Singapore, Hong Kong and China. He abetted the careers of fellow content farm operators, such as the owner of the Ghost Island Mad News site, who set up networks of fan pages and websites of their own.
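How can a single Google Analytics ID expose an entire network? Sites built from the same template often embed the same tracking ID in their HTML, so seemingly unrelated domains can be clustered by the IDs they share. Below is a minimal Python sketch of that general technique; it is an illustration, not the reporters' actual tooling, and the domains in it are invented placeholders.

```python
# Hypothetical sketch: cluster websites by shared Google Analytics IDs.
# The URLs below are placeholders, not the network's actual domains.
import re
from collections import defaultdict
from urllib.request import urlopen

# Legacy Universal Analytics IDs look like "UA-12345678-1";
# newer GA4 measurement IDs look like "G-ABC123XYZ0".
GA_ID_PATTERN = re.compile(r"\b(UA-\d{4,10}-\d{1,4}|G-[A-Z0-9]{6,12})\b")

def extract_ga_ids(url: str) -> set[str]:
    """Fetch a page and return any Analytics IDs embedded in its HTML."""
    try:
        html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    except OSError:
        return set()  # unreachable page: nothing to report
    return set(GA_ID_PATTERN.findall(html))

def cluster_by_ga_id(urls: list[str]) -> dict[str, set[str]]:
    """Group pages by tracking ID: one shared ID suggests one operator."""
    clusters: dict[str, set[str]] = defaultdict(set)
    for url in urls:
        for ga_id in extract_ga_ids(url):
            clusters[ga_id].add(url)
    return clusters

if __name__ == "__main__":
    suspects = ["https://example-farm-a.test/", "https://example-farm-b.test/"]
    for ga_id, sites in cluster_by_ga_id(suspects).items():
        if len(sites) > 1:
            print(ga_id, "is shared by:", ", ".join(sorted(sites)))
```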

There are also different kinds of gold prospectors working the disinformation streams, from self-employed operators to teams of 500 or more. And Boss Evan's massive network of websites, which we learned about in part three, is only the tip of the iceberg: at least six other major content farm networks are out there. Add in the political parties that contract operators to produce content during election campaigns, and it's nearly guaranteed that content farms will keep finding their way to our attention.

HOW POLITICS SLIPS THROUGH THE THIN LINE BETWEEN INFORMATION AND NEWS

Amid this chaos, many are amazed there's even a market for this business, and wonder what kind of impact content farms really have in Taiwan.

Cheng Yu-chung (鄭宇君), a professor at National Chengchi University, uses trends from the past ten years to explain the rise of content farms. In the past, it was easy to distinguish news reports from advertisements on a newspaper page. But as media outlets embraced advertorials and sponsored content, the line gradually thinned, and readers' ability to tell the two apart diminished.

Second, when people get their news through social media, whether they decide to read something usually depends on who shared it; they pay less attention to which media company published it or which author wrote it. This is a boon for content farms and their anonymous authors. Cheng stresses that an accountability mechanism must be established for media outlets and key opinion leaders; without one, no one can be held responsible for what they spread. In an environment where news cannot be distinguished from mere information, “it becomes a loophole that Chinese party-state media can exploit.”

The final reason is the way platform algorithms work. “Google's algorithms put a high value on links and click rates. Content farms create articles with SEO (search engine optimization) in mind, but news is created with the intention of reporting facts... When people use these methods to attack Google, its algorithms have no way to tell what is real and what is fake.”
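Cheng's point about link-based ranking can be made concrete with a toy model. The sketch below runs a simplified PageRank-style power iteration (not Google's actual, far more sophisticated ranking) over an invented five-page graph in which three "farm" pages all link to one target page; on link structure alone, the target ends up outranking the lone genuine news page.

```python
# Toy model (not Google's real algorithm) of how a small link farm
# can inflate one page's score in a link-based ranking. The graph
# below is invented for the example.
DAMPING = 0.85

# Each page lists the pages it links to; "farm1"–"farm3" all point
# at "target", mimicking a miniature link farm.
links = {
    "news":   ["target"],
    "farm1":  ["target", "farm2"],
    "farm2":  ["target", "farm3"],
    "farm3":  ["target", "farm1"],
    "target": ["farm1"],
}

rank = {page: 1 / len(links) for page in links}
for _ in range(50):  # power iteration until the scores stabilize
    new_rank = {}
    for page in links:
        incoming = sum(
            rank[src] / len(outs)
            for src, outs in links.items() if page in outs
        )
        new_rank[page] = (1 - DAMPING) / len(links) + DAMPING * incoming
    rank = new_rank

# "target" scores highest; the genuine "news" page, with no inbound
# links, scores lowest.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```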

On this point, the web companies insist they are constantly looking for solutions, including tweaking their algorithms and suspending violators. Taiwan's top three platforms for the spread of disinformation (Google, Facebook and LINE) have all introduced policies for greater transparency and user education. At the same time, in order to protect freedom of speech, the task of separating real information from false has been left mostly to volunteers from a handful of third parties and nonprofits.

FACT-CHECKERS CAN’T KEEP UP

These third-party fact-checkers have few resources and little manpower. In the battle against disinformation, this frontline is the most thinly stretched and fatigued.

A worker at one of these third-party organizations told us that in the worst-case scenario, it can take up to two months to verify a suspected piece of false information, and that it is even harder if the information contains images or video. The list of content that Facebook sends to them is ranked by priority, but they are not told how this priority is determined nor where the content comes from. As for LINE, the fact-checkers now suspect that producers of disinformation have found a way to stuff their fact-checking lists with actual news, making work even more difficult for them.

One nonprofit told us that cutting off Google AdSense money flows is the key to controlling disinformation, but its attempts to use Google's reporting mechanism have so far received no response. The nonprofit warns that even though YouTube has become more proactive in fighting disinformation in Taiwan, it cannot keep up with the volume and methods of the disinformation business.

Anita Chen, a senior associate on Google Taiwan's government affairs and public policy team, says YouTube uses an automated algorithm to delete harmful content as defined by its community standards. Photo by Su Wei-ming

Anita Chen (陳幼臻), a senior associate with Google Taiwan's government affairs and public policy team, said that YouTube uses an automated algorithm to delete harmful content that violates the platform's community standards. At the same time, Google is developing mechanisms for fighting disinformation, including disclosing when media outlets are government-funded and displaying warnings next to search results for suspected disinformation.

But a third-party fact-checker working with Google said, “their verification mechanism only affects searches made from the main page, but many Taiwanese view videos directly from LINE.” It generally takes tech companies a very long time to establish global policies for detecting malicious users, while the disinformation business reacts very quickly. For example, once a video is taken down, a slightly altered copy appears under a new username. And if a website is taken down, then as long as the content still exists on a server, the operators can register a new domain at a new address and be back in business.

As the main pipeline for false information in Taiwan, LINE presents the achievements of third-party fact-checkers as the results of its own official policies, and relies on education as its primary countermeasure against disinformation.

HOW CAN WE STEM THE ENDLESS FLOW OF DISINFORMATION?

Behind this endless war is a gap between how information is now consumed and how the media and internet platform industries are fundamentally changing. In this new marketplace, so long as these trends are not reversed, profit-seekers will keep this kind of business alive. Will their surplus become a democratic deficit?

On the frontlines, Fakenews Cleaner, a non-profit that cultivates media literacy among elderly Taiwanese, points out that disinformation on LINE has penetrated all levels of society, from local communities to temples, and from political parties to civic groups. The result is a polarized society and the loss of rational dialogue and public discussion. 

For Roger Do (杜元甫), founder of AutoPolitic, a consultancy that provides online electoral strategy across Asia, these methods are an effective way for political camps to consolidate their existing supporters. “As long as you can attract user attention, Google will pay you, and it is natural to use content farms.” He stresses that the dissemination methods content farms use could also be adopted by totalitarian states: by leveraging national and financial resources to create or influence content at scale, such states could shift the numbers behind public opinion on a variety of issues.

More importantly, as artificial intelligence (AI) develops, it will become possible for content farms to automate content generation. He cites a Spanish company that uses AI to find articles that perform well with the public and automatically generate corresponding videos. Though only recently founded, the company already makes an annual profit of over US$600,000, a new winner in the business of information warfare.

The information warfare business moves and reacts quickly, constantly developing new weapons to grab our attention. At the same time, the tech giants that dominate their respective markets, and whose profits are built on advertising, need to hold their users' attention too.

As the tech giants continue to chase revenue with the same business model, the information warfare profiteers continue to compete for our attention, profiting from communities of all stripes. If this gap isn't closed, the two parasites will continue to coexist, feeding together on an ecosystem that is never repaired.

As for the elderly Taiwanese we met in part one, many of them continue to receive messages on LINE about the supposed health benefits of yam milk tea. In India, similar disinformation has driven sales of large quantities of fake drugs, prompting doctors to volunteer to fight it. In the United States, false news about vaccines has already harmed children's health. Perhaps only when disinformation becomes a pressing issue in the public eye will these information warfare profiteers admit that their business opportunity is also a public crisis.

"We helped build the city, but the city chased us out." How the Sanying Tribe found home again in Taipei

"We helped build the city, but the city chased us out." How the Sanying Tribe found home again in Taipei

Meet Boss Evan -the man behind Taiwan's zombie content farms

Meet Boss Evan -the man behind Taiwan's zombie content farms