Twitter announces its plans for helping cut back on misinformation on its platform ahead of midterm elections

By MixDex

Ahead of the 2022 midterm elections, Twitter has announced a series of initiatives meant to cut back on the flow of misinformation.

The efforts were announced Aug. 11, 2022, less than three months before national elections take place Nov. 8, 2022.

Many states have also already conducted their primaries, which have taken on added prominence this year as the country becomes more polarized and multiple pro-Trump candidates and conspiracy theorists seek public office.

Twitter will continue flagging tweets or links to content that it has identified as containing misinformation. These will feature a disclaimer or a link to correct information from trusted sources, and the labels will receive cosmetic updates that the company tested in late 2021.

According to Twitter, these labels decreased engagement with tagged tweets by 13%, with retweets dropping 10% and likes decreasing 15%. The updated label designs also saw a 17% increase in users clicking through to read the clarifying information.

Twitter will also take measures to prevent tweets with misleading information from being recommended or amplified. Some tweets containing misinformation may be restricted from being liked or shared, though the company did not explain how that will be determined.

Twitter is also bringing back a feature it calls “prebunks” — messaging that appears at the top of feeds and is designed to proactively educate users with factual information. The company will also offer content hubs that gather election coverage and other key information from national, state and regional news outlets.

The moves come as social networks come under fire for their role in spreading misinformation of all types — but in particular falsehoods about the 2020 election and COVID-19.

Social media and private messaging apps, some of them owned by big tech companies, are also suspected of playing a large role in stoking outrage among the far right that may have contributed to the events of the Jan. 6 insurrection. Some of these tools were also allegedly used to communicate during the riots, according to evidence presented publicly at congressional hearings.

That said, social media platforms face the daunting task of handling millions of pieces of content flowing in every hour — a volume that would be seemingly impossible to moderate or patrol manually.

Tech companies say they use automated technologies to detect misinformation and other troubling content, though they admit those tools won’t catch every instance.

Users have also gotten savvier at avoiding automated flagging — including attempting to trick algorithms with alternative characters, images, embedded text and other methods.

Most tech companies employ people whose job is to review questionable content and remove it if necessary, but critics say these workers are overworked and held to specific quotas, causing them to rush through reviews. Some platforms also have rules that are open to interpretation.