Facebook might have run out the clock on Donald Trump’s posts—I predict a permanent ban at some point—but the episode is only one data point in a wider crisis of toxic expression on social platforms. A lot of attention has been paid to Section 230 of the 1996 Communications Decency Act, which allows platforms to moderate content without taking on legal responsibility for what users post. Many people in DC want to change or end that law. But the bigger question for Facebook and Twitter is, what kind of services do they want to be? Ones where comity rules, or ones where divisive wedges poison society? Saying they want to be all hearts and flowers doesn’t mean anything. The question is what they want to do to get there.
A November 2020 New York Times article reported some instances where Facebook tinkered with ways to reduce misinformation and generally awful content. One, in an effort to tamp down conspiracy lunacy right after the election, assigned what it called N.E.Q. (news ecosystem quality) scores to articles, with reliable journalism ranked higher than lies and fantasy. It made for a “nicer News Feed.” But after a few weeks the company stopped the ranking scheme. In another experiment, Facebook trained a machine-learning algorithm to identify the kind of posts that were “bad for the world” and then demoted those in people’s feeds. Indeed, there were fewer toxic posts. But people logged in to Facebook a bit less—and less time spent on Facebook is Mark Zuckerberg’s nightmare. The Times viewed an internal document where Facebook concluded, “The results were good except that it led to a decrease in sessions, which motivated us to try a different approach.”
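Facebook has never published how N.E.Q. scoring or the “bad for the world” classifier actually works, but the mechanics the Times describes amount to blending a quality signal into feed ranking. Here is a minimal illustrative sketch of that idea; every name and number in it (Post, neq_score, engagement_score, QUALITY_WEIGHT) is hypothetical, not Facebook’s implementation.

```python
# Illustrative sketch only: quality-weighted feed ranking.
# Facebook's real N.E.Q. system is unpublished; all names and
# weights here are hypothetical stand-ins for the general idea.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    engagement_score: float  # predicted clicks/comments/shares, 0-1
    neq_score: float         # hypothetical news-ecosystem-quality score, 0-1


# Hypothetical knob: how much source quality counts versus raw engagement.
QUALITY_WEIGHT = 0.5


def feed_rank(post: Post) -> float:
    """Blend engagement with quality so reliable journalism can
    outrank engagement-bait from low-quality sources."""
    return (1 - QUALITY_WEIGHT) * post.engagement_score + QUALITY_WEIGHT * post.neq_score


posts = [
    Post("Conspiracy bait", engagement_score=0.9, neq_score=0.1),
    Post("Reliable reporting", engagement_score=0.6, neq_score=0.9),
]

# Higher combined score appears first in the feed.
for p in sorted(posts, key=feed_rank, reverse=True):
    print(f"{feed_rank(p):.2f}  {p.text}")
```

The trade-off Facebook reported maps directly onto that single weight: turn it up and toxic-but-engaging posts sink in the feed, and, as the internal document conceded, some session time sinks with them.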
I find that decision short-sighted. Maybe in the short term people would not log into Facebook quite so much. But that shortfall might challenge the company to concoct more wholesome features that would bring people back—and leave them less angry when they did use the service. Everyone would feel better, and fewer employees would threaten to quit because they feel that they are working for Satan.
When Facebook and Twitter began, neither founder suspected that their creations would be used to change public opinion, and certainly not to poison the body politic in the way Donald Trump did. The vision was to enrich people’s lives by letting them know what their friends were up to. But as their platforms grew, so did their ambitions. Zuckerberg set out to build Facebook as the ultimate personalized newspaper. Twitter positioned itself as “the Pulse of the Planet.”
In the past few years, however, it has been hard to look away from the consequences. The choice that the platforms face has little to do with what is legal, and everything to do with what is right. Time and time again, when explaining why someone terrible remains on the platform, Zuckerberg invokes the company’s policies. But Facebook has things backwards when it invokes its own rules, as if it were referring to a tablet that some wonky Moses handed down. The company should more methodically examine the results of its policies, which in many cases scream out as wrong. Typically, Facebook defends a given outcome until enough people get disgusted at what is allowed to happen on its platform. Then it makes a change. That happened with anti-vaxxers, Holocaust denial, and now Donald Trump’s attempts to destroy democracy.
For now, of course, Zuckerberg is right when he says, “The priority for the whole country must now be to ensure that the remaining 13 days and the days after inauguration pass peacefully and in accordance with established democratic norms.” But after that, Mark Zuckerberg and Jack Dorsey have—in a term both utter a lot—“a lot of work to do.”