Facebook in da Nile.

One should see this as the general problem with Facebook (and Twitter) as an information/news medium: their content is driven by subjective opinions rather than verifiable facts from verifiable sources. Verifiable identity (non-anonymity) has the virtue of incentivizing reputational capital and building trust across information exchanges. But Facebook then has to make a Sophie's Choice: lose the traffic that raw emotionalism feeds, or lose the trust of its user base. Facebook's day of reckoning is coming.

From The Wall Street Journal:

Facebook Is Still In Denial About Its Biggest Problem
In a world where social media is the pre-eminent news conduit, ‘If it’s outrageous, it’s contagious’ is the new ‘If it bleeds, it leads’
It’s a good time to re-examine our relationship with Facebook Inc.
In the past month, it has been revealed that Facebook hosted a Russian influence operation that may have reached between 3 million and 20 million people on the social network, and that Facebook could be used to micro-target users with hate speech. It took the company more than two weeks to agree to share what it knows with Congress.
Increased scrutiny of Facebook is healthy. What went mainstream as a friendly place for loved ones to swap baby pictures and cat videos has morphed into an opaque and poorly understood metropolis rife with influence peddlers determined to manipulate what we know and how we think. We have barely begun to understand how the massive social network shapes our world.
Unfortunately, Facebook itself seems just as mystified, providing a response to all of this that has left many unsatisfied.
What the company’s leaders seem unable to reckon with is that its troubles are inherent in the design of its flagship social network, which prioritizes thrilling posts and ads over dull ones, and rewards cunning provocateurs over hapless users. No tweak to algorithms or processes can hope to fix a problem that seems enmeshed in the very fabric of Facebook.
Outrageous ads
On a network where article and video posts can be sponsored and distributed like ads, and ads themselves can go as viral as a wedding-fail video, there is hardly a difference between the two. And we now know that if an ad from one of Facebook’s more than five million advertisers goes viral—by making us feel something, not just joy but also fear or outrage—it will cost less per impression to spread across Facebook.
In one example, described in a recent Wall Street Journal article, a “controversial” ad went viral, leading to a 30% drop in the cost to reach each user. Joe Yakuel, founder and chief executive of Agency Within, which manages $100 million in digital ad purchases, told our reporter, “Even inadvertent controversy can cause a lot of engagement.”
Keeping people sharing and clicking is essential to Facebook’s all-important metric, engagement, which is closely linked to how many ads the network can show us and how many of them we will interact with. Left unchecked, algorithms like Facebook’s News Feed tend toward content that is intended to arouse our passions, regardless of source—or even veracity.
An old newspaper catchphrase was, “If it bleeds, it leads”—that is, if someone got hurt or killed, that’s the top story. In the age when Facebook supplies us with a disproportionate amount of our daily news, a more-appropriate catchphrase would be, “If it’s outrageous, it’s contagious.”
Will Facebook solve this problem on its own? The company has no immediate economic incentive to do so, says Yochai Benkler, a professor at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society.
“Facebook has become so central to how people communicate, and it has so much market power, that it’s essentially immune to market signals,” Dr. Benkler says. The only thing that will force the company to change, he adds, is the brewing threat to its reputation.
Facebook’s next steps
Facebook Chief Executive Mark Zuckerberg recently said his company will do more to combat illegal and abusive use of the Facebook platform. The primary mechanism for vetting political and other ads will be “an even higher standard of transparency,” he said, achieved by, among other things, making all ads on the site viewable by everyone, where in the past they could be seen only by their target audience.
“Beyond pushing back against threats, we will also create more services to protect our community while engaging in political discourse,” Mr. Zuckerberg wrote.
This move is a good start, but it excuses Facebook from its responsibility to be the primary reviewer of all advertising it is paid to run. Why are we, the users, responsible for vetting ads on Facebook?
By default, most media firms vet the ads they run and refuse ones that might be offensive or illegal, says Scott Galloway, entrepreneur, professor of marketing at NYU Stern School of Business and author of “The Four,” a book criticizing the outsize growth and influence of Amazon, Apple, Facebook and Google.
Mr. Zuckerberg acknowledged in a recent Facebook post that the majority of advertising on Facebook will continue to be bought “without the advertiser ever speaking to anyone at Facebook.” His argument for this policy: “We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.”
This is false equivalence. Society may not want Facebook to read over everything typed by our friends and family before they share it. But many people would feel it’s reasonable for Facebook to review all of the content it gets paid (tens of billions of dollars) to publish and promote.
“Facebook has embraced the healthy gross margins and influence of a media firm but is allergic to the responsibilities of a media firm,” Mr. Galloway says.
More is needed
Mr. Zuckerberg has said the company will hire 250 more people to review ads and content posted to Facebook. For Facebook, a company with more than $14 billion in free cash flow in the past year, to say it is adding 250 people to its safety and security efforts is “pissing in the ocean,” Mr. Galloway says. “They could add 25,000 people, spend $1 billion on AI technologies to help those 25,000 employees sort, filter and ID questionable content and advertisers, and their cash flow would decline 10% to 20%.”
Of course, mobilizing a massive team of ad monitors could subject Facebook to exponentially more accusations of bias from all sides. For every blatant instance of abuse, there are hundreds of cases that fall into gray areas.
The whole situation has Facebook between a rock and a hard place. But it needs to do more, or else risk further damaging its brand and reputation, two things of paramount importance to a service that depends on the trust of its users.
