Stopping Misinformation Machines

Social Media Tech

New Orleans      Facebook and its Instagram child, Twitter and its tweets, and the rest of the gang are going from addictive communication tools that we all love and hate to some degree to dangerous weapons in many hands and misinformation machines that are out of control. Facebook, in particular, seems to have jumped the shark; each revelation seems worse than the last. Perhaps someone is minding the store, but no one seems able to teach the algorithms good manners and safety first.

I hardly need to recount that a whistleblower who had worked for Facebook came forward with thousands of documents and appeared before a Congressional hearing, revealing that Facebook and its executives knew full well the harm they were monetizing behind the façade of being simply an information and communication tool. Their own research had told them clearly that Instagram in particular was harming teenagers, especially young girls, but they did nothing. Facebook’s go-to shtick has been to say “Oh, my” to every revelation, pretend that it will fix everything, and then whistle all the way to the bank. It is unclear whether we have finally reached the point where lawmakers will take action.

Teenage girls may not be the trigger, but misinformation about the 2020 election might do it for politicians and regulators. New research indicates a dangerous symbiosis among all of the major platforms: monkey see, monkey do, regardless of the consequences, in a waterfall progression from YouTube down to the algorithms of Facebook and Twitter. From a New York Times report:

Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election.

The proportion fell further after Jan. 7, when YouTube announced that any channels that violated its election misinformation policy would receive a “strike,” and that channels that received three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.

To my way of thinking, none of this means that we should all send roses to Alphabet/Google’s YouTube. They were late to this game in general, but they had a better election integrity policy, and the other big, bad boys fell in line. Recently, they have stepped up their game on anti-vaxxers as well, and their three-strikes policy has curbed some of the mischief, so some attaboys have been earned, just not enough.

The real lesson is that these folks can’t regulate themselves, but drawing hard lines on what is fair and what is foul could make a huge difference in reducing misinformation and letting us get back to arguing about the facts, rather than about whether the earth is round and the sun rises in the east.
