Following revelations that Facebook allowed the private company Cambridge Analytica, linked to Republican candidates, to access its users' personal information:
Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, persuading a Jewish civil rights group to cast some criticism of the company as anti-Semitic.

Given the data that millions of Americans freely give to Facebook, it's alarming to think of the extent to which Facebook could also leverage that data to discredit its critics, should it want to do so.
In addition (emphasis added):
...Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees. Months later, as Mr. Trump battled Hillary Rodham Clinton in the general election, the team also found Facebook accounts linked to Russian hackers who were messaging journalists to share information from the stolen emails.

Facebook is a multibillion-dollar social media company founded in 2004. It's phenomenally shameful that no one there leaned into creating and implementing a "policy on disinformation," particularly since Facebook has more resources than most to put toward doing so.
Mr. Stamos, 39, told Colin Stretch, Facebook's general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it.
To take a step back, I remember in the 1990s, when the Internet was still new, at least for most ordinary folks, and many people were extremely skeptical of information that "came from the Internet." That culture has changed quite a bit. Now, it seems that many people will concede that misinformation exists on the Internet; it just exists on platforms other than the ones they use.
To be clear, I think misinformation and disinformation are more prominent on the political right in the USA. But Facebook as a platform is interesting in that it is used widely by people across the political spectrum, and 81% of the public is reported to lack confidence in the company after the Cambridge Analytica scandal.
What I've also been wondering for a while now is how long people will find reasons to continue using Facebook, given the other options that exist for connecting, and given Facebook's increasingly visible bad-faith practices. It seems that much of the continued use boils down to the lukewarm sentiment that "everyone else is on it and it's an easy way to get in touch with people," and, as more people begin to abandon or close their accounts, that habit will become easier to break.
My personal experience of the site is that, in addition to the national security and personal privacy concerns (and this might be too misanthropic), being connected to people necessarily involves being connected to... people. Over-sharing, gullible, endlessly arguing, irritating people. Frankly put, many people use Facebook for reasons that I don't care to. I don't enjoy or trust getting political information from Facebook users. Political debate on the site can become a completely hellish thing to have to monitor. And for photo-sharing and messaging, other options exist.
If dwindling personal use doesn't critically threaten Facebook's current model, something else needs to happen. I'm not arguing that the government should shut down Facebook. Regulate it, yes.
But I wonder if that time will come, or if users will drift away from Facebook when they realize that the cons of using a platform that engages in such unethical business practices aren't worth whatever benefits they get from it.