Marissa Bialecki

Algorithms, Fake News, Privacy Concerns: A Comprehensive Breakdown of What’s Happening with Facebook

Like many of my fellow digital marketers and social media strategists, I found myself in a conversation with friends this weekend trying to explain just how bad things really are with Facebook at the moment. So much news has dropped in the past few weeks about the behemoth platform—news that impacts my work on a daily, sometimes even hourly, basis—that it’s hard to keep track of who has what data, what wrongdoing was committed when, and ultimately how it all affects the 2 billion people who signed up for a Facebook account once upon a time because they liked the idea of easily keeping up with friends and family.


That conversation was a reminder that though I live and breathe this stuff, the average person doesn't. I’m fortunate to work on a well-outfitted and talented social media team, but occasionally that makes it easy to forget that not everyone else is sending Digiday articles on Slack ‘round the clock or has created Outlook calendar invites for when Mark Zuckerberg is testifying before Congress.


So in an effort to help myself synthesize everything that’s going on with Facebook, as well as for other marketers and non-digital folks alike, I decided to take a deep dive, break it all down, provide my analysis where possible and write this post. I’ve provided helpful links throughout from stories I’ve read that I think are worth your time (H/T to my coworkers, Matt and Kyle, for sharing many of those). Hopefully it will help you better understand the big picture and key issues or at least sound more informed next time the topic comes up at the water cooler with colleagues or at the dinner table with your Facebook friends IRL (in real life). Let’s begin.

All the scrutiny Facebook is under right now is because of Russian bots, fake news and Cambridge Analytica, right?

Yes and no, but that’s a good place to start because those three issues are all sort of intertwined because they each touch on the subject of political advertising.


Let’s start with Cambridge Analytica. Long story short, Cambridge Analytica got access to information from 50 million+ user profiles that was originally collected under the guise of being for academic purposes (about 300,000 users consented to giving access to their data for that purpose). The firm was later hired by the Trump campaign in 2016, used that data to better understand and target American voters with ads and has previously talked to Russian businesses, hence the controversy. The big problem: those 50 million+ Facebook users never consented to sharing their data, let alone to having it sold to another company. The bigger problem: that data was used to build targeting models for political advertising. The biggest problem: Facebook found out this data was inappropriately shared in 2015, but failed to notify affected users, was quiet about it until now and simply took Cambridge Analytica’s word that this user data had been deleted back then (which may or may not be the case). Update from Wired: Cambridge Analytica may also have been able to access private messages of those affected.


At the same time, social media--and not just Facebook--has been dealing with an onslaught of spam accounts, many of which are being used for nefarious purposes. See Facebook’s recent announcement about shutting down several accounts, pages and Instagram profiles tied to the Russia-based Internet Research Agency. The fact that these accounts exist in the first place and have been able to use Facebook’s advertising capabilities to try to interfere with and influence elections and society as a whole is, in a word, horrifying. It’s harder and harder to tell what’s real and what’s fake on the internet. [Sidebar: if you want to fall down an even more horrifying rabbit hole of what the future of fake content could look like, listen to this Radiolab episode and visit futureoffakenews.com.] Facebook’s latest attempt to solve for this is to verify and authorize political advertisers (that is, anyone running “issue ads”), denote when a piece of content is a Political Ad and show users who paid for it. #Transparency, and not a bad idea if you think you’re going to end up being regulated by the government.


Which brings me to the last point for this section: Facebook’s entire system rewarded the very type of news that it shouldn’t. And if they’re not careful, they’ll end up creating the same situation with Facebook Groups. Sensationalism tends to go viral more often than an investigative long-read. For a long time, as this fake news got more clicks, likes, comments and shares, Facebook’s algorithm rewarded this type of content by serving it more often and higher up in users’ newsfeeds. In many cases, the creators of this fake news profited financially off this system (and the users most susceptible to clicking their content) and they forged a sort of parasitic relationship with the algorithm they learned how to game. More on algorithms later.
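For the more technically curious, the feedback loop described above can be sketched in a few lines of toy code. This is purely an illustration, not Facebook's actual ranking system: the post data, weights and scoring function below are all made up to show how ranking purely by engagement keeps promoting whatever gets clicked.

```python
# Toy illustration (NOT Facebook's real algorithm): posts ranked purely
# by engagement signals. Sensational content racks up clicks and shares,
# so it scores higher, surfaces first, and earns even more engagement
# on the next pass -- the self-reinforcing loop described above.
posts = [
    {"title": "In-depth investigative report", "clicks": 120, "shares": 15},
    {"title": "Sensational fake headline", "clicks": 900, "shares": 400},
]

def engagement_score(post):
    # Hypothetical weights; real ranking systems use far more signals.
    return post["clicks"] + 3 * post["shares"]

# Sort the feed so higher-scoring posts appear first.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])
# → ['Sensational fake headline', 'In-depth investigative report']
```

The long-read "wins" on quality but loses on the only metric the toy ranker sees, which is the core of the problem.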


If you’re extra curious about how this tangled web got woven, read: Wired’s “Inside the Two Years that Shook Facebook—And the World” and TechCrunch’s “Facebook and the Endless String of Worst-Case Scenarios”.

Why can’t Facebook just get rid of the fake accounts and spam?

Easier said than done. It’s a never-ending game of whack-a-mole and it requires real human resources. In a recent interview with Vox, Zuckerberg stated that they have about 14,000 people “working on security and community operations and review.” It’s unclear if those are all employees, contractors or a mix of both, but it’s still a lot of manpower, nonetheless. In total, Facebook had 25,105 employees at the end of 2017. You do the math.

Tell me more about that ubiquitous a-word: algorithms.

Ah algorithms, those little old things that are designed to solve problems and supposedly make our lives better, faster and easier. Turns out, they can’t simply be trained by humans and then left to their own devices, and even as they get better, they still cause problems. After all, who defines “better” and how?


Facebook made significant changes to its algorithm earlier this year in an effort to provide users with more content that would lead to “meaningful social interactions” and “time well spent” on the platform. This means that users see less content from brands, publishers and advertisers and more from friends and family. A skeptic could argue this change was made to keep people on Facebook longer and prevent them from leaving the platform by clicking a link post or ad. Following that logic, Facebook could charge advertisers a premium merely to appear in people’s feeds and drive up costs per click (CPCs) for ads that take people off of Facebook. Perhaps. But Facebook has to play the long game, and they can’t totally snub the advertisers they depend on for revenue. If cheap CPCs were the only thing advertisers cared about, they could find them on other platforms and places on the internet.


The algorithm is here to stay, for better or worse (RIP chronological feeds, though those can be gamed too). And it will only continue to change, mostly in ways we can’t even foresee at the moment.

So if I’m on Facebook, who has access to my data? Could other third party companies, like Cambridge Analytica, collect data on me?

Short answer: yes, third party companies could have accessed data on you in the past. For advertisers, it was a gold mine--suddenly they could zero in on exactly the people they wanted to target with a level of granularity that didn’t exist before (i.e., buyers of children’s cereal) and sync up all sorts of data that existed across multiple places (i.e., purchase history and habits + email addresses + age, gender, ethnicity, political affiliation, marital status + psychographics + more).


Unfortunately, it’s not always clear what data you’re giving up and how you’re being tracked across platforms, sites and retailers. And this third party data, along with Facebook’s own targeting parameters, led to some discriminatory advertising (like this and this). Facebook is now trying to be more transparent about what data they have access to, who else has access to it and how you can control that access.


What can I do to protect my privacy? Are there settings I can change?

To start, read and familiarize yourself with Facebook’s privacy settings that they’ve now made a bit easier to access: https://newsroom.fb.com/news/2018/03/privacy-shortcuts/.

Check your privacy settings. Go into Settings > Privacy and customize from there. You can control who can search for you by your email address or phone number, as well as restrict who sees content you post and your Friends list.


Check your ad preferences and remove interests you don’t want advertisers to use to target you. Go to: https://www.facebook.com/ads/preferences to find these settings and review them periodically.


Remove third party apps that you’ve given access to your profile--which can be anything from music apps to dating apps to online games to whatever else you granted access to simply because it was easier to sign in with your Facebook credentials. Go into Settings > Apps and Websites and there you can review what has access to your profile and remove various apps.

Otherwise, and I can’t believe I’m still saying this in 2018, don’t accept friend requests from randos you don’t actually know. And don’t post anything on the internet--even if you think it’s “private” or “temporary” (lookin’ at you, Snapchat)--that you wouldn’t want to see published on the front page of The New York Times or on a billboard in Times Square. Everything lasts forever on the internet.


If you’re curious about Facebook and user privacy, I recommend checking out the following: Gizmodo’s “How Facebook Figures Out Everyone You’ve Ever Met”; Reply All’s “Is Facebook Spying on You?” Episode; and Gimlet’s “How to Avoid Being Tracked by Facebook”.

Okay, I get it now. But big corporations do questionable or downright bad things all the time. And I did fork over my data to Facebook in exchange for using their platform free of charge. Why is this a big deal?

You mean aside from all the issues mentioned above? For the most hardened cynics or most sympathetic parties to Facebook, there are still a few other reasons why this is a big deal.


First and foremost, Facebook has become ingrained in our society. With 2 billion monthly active users and $40 billion in revenue last year, it’s not going to go away. Sidenote for anyone you know who’s boasting about leaving Facebook to spend more time on Instagram: Facebook owns Instagram (as well as WhatsApp). And unlike when #DeleteUber was trending and you could switch to Lyft or taxis or other ridesharing services, there really isn’t a market alternative to Facebook. No other social media platform, not Twitter, not LinkedIn, not Snapchat, not YouTube, not Pinterest, not Instagram, is quite like Facebook. They’ve got the market cornered. So where are people going to spend all their time on the internet and where are advertisers going to reach them?

If it seems like I’m making Facebook out to be a monopolizing, time-sucking, profit-hungry monster, I’ll remind you that not everything Facebook has done is bad. Think of the family members and friends they’ve reconnected, the dollars they’ve raised for charities through their direct response ads or the causes they’ve helped grow and champion, such as the ALS ice bucket challenge. Facebook’s power to be used for good is largely what gives them their staying power. And if I were on their PR/marketing team, those would be the stories I’d focus on telling and creating more of after the dust settles.


Speaking of which, from a crisis communications standpoint, this was an epic fail: it took Zuckerberg and Sheryl Sandberg days to respond to the Cambridge Analytica scandal. I repeat, it took the executives of the very platform that helped revolutionize and accelerate the pace of breaking news (often to the detriment of the truth) days to respond to a crisis of their own making. Not a good look, especially on the heels of all the other issues the platform has been dealing with. It calls into question their judgment and whether their apologies and explanations are genuine. Which leads to the next point...their apology tour isn’t over yet. Zuckerberg will testify before a joint hearing of the Senate Judiciary and Commerce committees and the House Energy and Commerce Committee this week (read his testimony here). While it’s not the purpose of the hearing, it may open the door to government regulation of Facebook and of advertising on the platform, which is somewhat uncharted territory and has all sorts of implications for Facebook, advertisers, users and shareholders, and other tech companies like Google and Twitter.


Lastly, even if Facebook magically solves all of the aforementioned problems, there’s still one enormous question that remains: who owns your data? Is it yours or does it belong to a company? None of these problems could have happened if Facebook had not become a rich repository of user data and activity, or if they hadn’t granted third parties access to their users and allowed them to seamlessly layer on even more user data for targeted advertising. Where do we draw the line? Is it okay to let them use, and potentially sell access to, demographic data you self-identify on the platform, like marital status, sexual orientation, ethnicity, age or gender? Is it okay to let them scan private messages and potentially use data gleaned from those messages to advertise to users (they don’t currently do so)? Is it okay to let advertisers scan your images and use that data to target you for ads? Does data collected from facial recognition software on the platform help users know what content they’re being tagged in and better control their privacy, or does it expose them to more risk and harm? It feels cliché to say, but there are no easy answers, and it seems we’re finally realizing it’s impossible to put a price on a product we’ve been using for years for “free.”

