Twitter AI Is Having Trouble Labeling ‘Misleading’ Coronavirus Posts



Twitter is reportedly having trouble using AI tools to accurately label “misleading” tweets about the Wuhan coronavirus, raising concerns about the use of artificial intelligence to accurately label content on the site. As one expert explained: “Arguably, labeling incorrectly does more harm than not labeling because then people come to rely on that and they come to trust it. Once you get it wrong, a couple hours go by and it’s over.”


CNET reports that social media website Twitter is having problems automatically labeling tweets that contain alleged misinformation about the Wuhan coronavirus. The firm announced on May 11 that it would be labeling tweets that spread conspiracy theories such as the virus being caused by 5G cell towers.


Twitter also stated that it will remove tweets encouraging harmful behavior, such as damaging 5G cell towers, while other tweets containing false or disputed claims will receive a label directing users to trusted information. The label reads: “Get the facts about COVID-19” and directs users to a page of curated tweets debunking the 5G conspiracy theories.


However, Twitter’s automatic labeling has made a number of mistakes, applying labels to tweets that refute conspiracy theories and provide accurate information. Tweets linking to reports from Reuters, BBC News, Wired, and Voice of America about the 5G conspiracy theories have received warning labels.


Experts believe that the mislabeled tweets could confuse Twitter users, especially those who don’t click through on the label. Hany Farid, a computer science professor at the University of California, Berkeley, commented: “Arguably, labeling incorrectly does more harm than not labeling because then people come to rely on that and they come to trust it. Once you get it wrong, a couple hours go by and it’s over.”

Twitter has not stated how many 5G-coronavirus tweets have been labeled, nor has it estimated how many labels have been incorrectly applied. The firm did state that its automated systems are new and will improve over time, with a spokesperson saying:

We are building and testing new tools so we can scale our application of these labels appropriately. There will be mistakes along the way. We appreciate your patience as we work to get this right, but this is why we are taking an iterative approach, so that we can learn and make adjustments along the way.

Twitter faces a huge task in moderating its platform, with 166 million daily active users browsing and posting. The company has employed automated tools to help workers review reports more efficiently, but this has led to a new set of issues. Some researchers have found that about half of all accounts tweeting about the virus are bots rather than actual users.


Farid stated that he isn’t surprised Twitter’s automated systems are making errors: “The difference between a headline with a conspiracy theory and one debunking it is very subtle. It’s literally the word ‘not’ and you need full blown language understanding, which we don’t have today.”


Source: Paper.li