China Calls for More Big Tech Censorship Against ‘Misinformation’

China’s state-run Xinhua news service on Friday took great interest in an American researcher’s call for more aggressive censorship by Big Tech companies to reduce “misinformation.”

China’s censorship regime was originally justified as just such a crusade against false information, and the regime’s apologists claim to this day that fighting falsehoods is the primary mission of its million-censor army.

The article that caught Xinhua’s interest referred primarily to incorrect information about the Chinese coronavirus, which adds an extra layer of bitter irony since the Chinese Communist Party (CCP) is the world’s undisputed heavyweight champion at spreading false coronavirus information:

“The pandemic lays bare how tech companies’ reluctance to act recursively worsens our world. In times of uncertainty, the vicious cycle is more potent than ever,” said Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School in Cambridge, Massachusetts. […] The senior researcher demanded that “tech companies become more transparent, accountable and socially beneficial,” while urging holding them to this commitment long after the pandemic. “Social media companies must flatten the curve of misinformation,” she added.

The Chinese government would never leave it to any private companies to bulldoze that “curve of misinformation” according to their own notions of accuracy or fairness. In China, all of the misinformation bulldozers are driven by Communist Party officials, and a private company CEO who decided to “fact check” a statement by dictator Xi Jinping would never be seen or heard from again.

Dissident Xu Zhiyong, for example, who attempted to do just that in a scathing letter demanding Xi resign, is facing 15 years in prison for his “fact check.”

Donovan’s article probably caught the eye of the Chinese Communist Party because she railed at length against President Donald Trump and “right-wingers” (and special guest star Elon Musk) for being too enthusiastic about using hydroxychloroquine to combat the effects of the coronavirus. Perhaps inadvertently, the article made the case that only a massive government censorship apparatus on the scale of the one constructed by Beijing could provide the level of inoculation against “misinformation” that she advocates:

After blanket coverage of the distortion of the 2016 US election, the role of algorithms in fanning the rise of the far right in the United States and United Kingdom, and of the antivax movement, tech companies have announced policies against misinformation. But they have slacked off on building the infrastructure to do commercial-content moderation and, despite the hype, artificial intelligence is not sophisticated enough to moderate social-media posts without human supervision. Tech companies acknowledge that groups, such as The Internet Research Agency and Cambridge Analytica, used their platforms for large-scale operations to influence elections within and across borders. At the same time, these companies have balked at removing misinformation, which they say is too difficult to identify reliably.
Moderating content after something goes wrong is too late. Preventing misinformation requires curating knowledge and prioritizing science, especially during a public crisis. In my experience, tech companies prefer to downplay the influence of their platforms, rather than to make sure that influence is understood. Proper curation requires these corporations to engage independent researchers, both to identify potential manipulation and to provide context for ‘authoritative content’.

U.S. tech companies are not shy about admitting they cannot police everyone for misinformation, hate speech, and so forth. An ostensibly fair system can be made grossly unfair by applying it unevenly, which is the heart of the criticism from those who noticed that Twitter will controversially label a post from President Trump for “glorifying violence” but seems untroubled by the Supreme Leader of Iran’s enthusiasm for genocide.

The Chinese Communist Party does use “sophisticated artificial intelligence” as part of its massive censorship apparatus, but also has a huge number of human operators working to suppress speech the government dislikes. China imposes its censorship mandates on foreign speech as much as possible and blocks foreign sources of information it cannot control.

The cost of such an operation is far beyond anything a profit-seeking company could pay and, as the current debate about Twitter censorship demonstrates, it is exceedingly difficult to find a standard of absolute truth or perfect fairness that would insulate private censors from complaints, retaliatory loss of business, and possibly legal action. 

Hydroxychloroquine is, like everything else related to the coronavirus, a matter of much debate among the medical community. That community is still torn by vigorous arguments over whether surgical masks provide any protection from the pandemic. The World Health Organization, of which Donovan wrote approvingly in her article, continues to insist they do not, but masks have become tremendously important to the American Left and the Democrat Party, and its members regard challenges to the effectiveness of masks as “misinformation.”

This, again, is not a problem for China. A high-ranking member of the ruling party in good standing would never be accused of spreading misinformation because the truth is whatever the Party says it is. If the truth changes tomorrow, the old truth becomes misinformation. If a high official falls from Party grace, he suddenly becomes a source of misinformation. 

The CCP believes its approach is the only way to police the turbulent Internet for “truth,” so it is quite happy to hear people who haven’t thought all the way to the end of the censorship game take a few fumbling steps in its direction.
