What Facebook Gets Wrong About 'The Social Dilemma'

Preamble

The Social Dilemma is a movie that explores the many facets of what we today call social media: the implications of ad-supported companies that thrive on attention (and the data it produces), and the current state of the algorithms that power the everyman’s news feeds and timelines.

Facebook has put out a 7-point response to the movie and its suppositions. I’m about to criticize that response with 7 points of my own. Because I think they missed the entire fucking point.

Before reading further, I suggest you read the original response which I’ve kindly uploaded to my site for safe archiving: https://bnolet.me/What-The-Social-Dilemma-Gets-Wrong.pdf

The original can be found here: https://about.fb.com/wp-content/uploads/2020/10/What-The-Social-Dilemma-Gets-Wrong.pdf

And an archive independent of my website can be found on the Wayback Machine: https://web.archive.org/web/20201003012134/https://about.fb.com/wp-content/uploads/2020/10/What-The-Social-Dilemma-Gets-Wrong.pdf

Author’s note: I will not be responding in the context of the original movie that Facebook’s document addresses. I will be responding based on my own theories and opinions about social media.

1. Addiction

Our News Feed product teams are not incentivized to build features that increase time-spent on our products. Instead we want to make sure we offer value to people, not just drive usage.

For example, in 2018 we changed our ranking for News Feed to prioritize meaningful social interactions and deprioritize things like viral videos.

The issue with this is that it’s not features that bring people back to Facebook. It’s the people, it’s the content. The features aren’t the novelty; the rapidly revolving carousel of consumable content is.

Sure, you can deprioritize viral videos, but that doesn’t mean that viral videos won’t be shared via news articles, or in the very ads that support the platform.

Features are not addictive. Controversy and novelty are.

We want people to control how they use our products, which is why we provide time management tools like an activity dashboard, a daily reminder, and ways to limit notifications.

See, here’s the issue: the fact that folks need a time management tool for social media proves that there’s an addiction. If folks have to lean on a tool like that to keep themselves from doomscrolling, because they can’t help themselves, then that’s an addiction.

2. You are not the product

See, this is where I started to get a little upset. The entire section describes exactly how you’re the product, yet claims the exact opposite.

What “you are the product” means is that Facebook makes its money by selling your attention. That’s what it boils down to.

Eyeballs in, money out. Oof, that’s a scary visual.

3. Algorithms

And this was the point that broke the writer’s brain. They think they’re being snarky by taking aim at Netflix’s algorithm for recommending that one watch The Social Dilemma, but really they’re just pointing out that one more entity is criticizing Facebook for its practices.

Algorithms and machine learning improve our services. For example, at Facebook, we use them to show content that’s more relevant to what people are interested in

Let me reword this one to reveal the meaning behind the meaning: Algorithms decide what content you see on your timelines. At Facebook, we use them to show you content that you’re more likely to engage with. We’ll come back to this engagement metric later on.

Oh, and then they claim the movie is a conspiracy documentary.

Aside: This section also includes a link to a page explaining how the Facebook news feed works, but the link is an image that looks like a thumbnail for a video. I don’t think this was malicious, but it’s kinda weird.

4. Data

All I’m going to say about this section (because I do appreciate that steps in the right direction are being made) is that it took way too long for a multi-billion-dollar company to establish these kinds of policies. I can only imagine who’s already been swept up by past neglect.

5. Polarization

The overwhelming majority of content that people see on Facebook is not polarizing or even political—it’s everyday content from people’s friends and family.

It’s not just the news, it’s not just the articles and the videos that polarize folks. The discussions that people have on Facebook are just as important as the interactions folks have with media content.

Going back to the algorithms: the algorithms on places like Facebook, Twitter, etc. are all geared towards increasing the rate of engagement. The more people engage, the more attention they’re paying to the platform.

Absent any conscious effort to the contrary, social media AI is incentivized to elevate controversial content. Whether it’s an inflammatory comment or a cute little puppy, the AI is without judgement; content is elevated based on its likelihood of engagement.

Intent is not taken into account.
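To make that concrete, here’s a minimal sketch in Python of what intent-blind, engagement-driven ranking looks like. This is my own illustration, not Facebook’s actual code; the predicted_engagement field stands in for whatever model estimates the probability that you’ll click, comment, or share. Notice that the sorting step never asks what the content actually is:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_engagement: float  # hypothetical model output: estimated probability of a click/comment/share

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Sort purely by predicted engagement. Note what's absent:
        # no notion of intent, tone, or whether the post is divisive.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    feed = rank_feed([
        Post("Cute little puppy learns to swim", predicted_engagement=0.08),
        Post("Inflammatory rant about political correctness", predicted_engagement=0.31),
    ])
    for post in feed:
        print(f"{post.predicted_engagement:.2f}  {post.text}")

The rant wins, not because the system “likes” outrage, but because outrage reliably produces clicks and comments.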

The hashtag #boycottchappelleshow trended for hours on Twitter and Facebook. The intent was to get Dave Chappelle’s iconic TV show off of streaming platforms after ViacomCBS sold the licensing rights to the show without his permission. This was lauded as a good hashtag to trend.

Similarly, another boycotting hashtag trended: #BoycottStarWarsVII. The initial complaint was that the cast was too diverse and didn’t feature enough white people, framed as a complaint about political correctness. It was widely criticized as bigotry.

Both of these trends at some point reached the top of the trending lists on both Facebook and Twitter (at least in North America).

So take into account that polarizing content is also shared by those everyday “friends and family”, and you can see where I’m going.

Aunt Becky calls Black History Month a sham and asks why we don’t get a white history month. Your sister comments back, and so do you. Your sister’s comment is a rational explanation detailing the history of slavery in the country and pointing out that most of the history we learn is about white folks anyway; white is the default for history classes. Your comment is an expletive-laden mess of hate, attacking your Aunt Becky for being a bigot and a racist. You tell her that she needs to do more thinking about herself before she comments on the state of affairs.

Because of other threads similar to this one, and because the AI has learned which of the two comments is more likely to draw engagement, your comment is put above your sister’s. Your Aunt Becky responds and tells you to stop being a politically correct jackass, and the cycle continues. Others dogpile into the conversation, and because the post is getting so much engagement, it gets elevated into other people’s news feeds, and the cycle continues further.

Now everyone is mad at each other, nobody wants to hash it out, and differences are never resolved. Your Aunt Becky will now dig in and insist that there needs to be a white history month.

Now that is just one example of how polarization occurs without a single piece of clickbait news, viral media, or Facebook ad being involved. This happens on Facebook pages. This happens in private and public groups. Everywhere on Facebook is a place to elevate destructive comments and discussion.
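The feedback loop underneath that example is mechanical, not malicious, and it compounds. Here’s a toy simulation, with numbers I made up purely for illustration, of what happens when elevation drives viewers and viewers drive engagement:

    # Toy positive-feedback loop: elevation -> more viewers -> more engagement -> more elevation.
    # Every number here is invented for illustration.
    reach = 100             # people who currently see the thread
    engagement_rate = 0.05  # fraction of viewers who comment or react
    boost_per_action = 8    # extra viewers the ranking system adds per reaction or comment

    for hour in range(1, 7):
        actions = reach * engagement_rate    # comments, reactions, dogpiles
        reach += actions * boost_per_action  # the feed promotes what's "working"
        print(f"hour {hour}: reach ~{int(reach)}")

The reach grows exponentially for as long as people keep taking the bait, which is exactly why one family squabble can end up in strangers’ news feeds.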

6. Elections

Facebook goes on and on about what’s changed since 2016, and on top of that, about how ads relating to the election are strictly monitored to make sure they don’t violate Facebook’s guidelines.

It’s not Facebook ads that are the issue! Like I said in the previous point, it’s everyday people having their conversations where the most destructive comments bubble to the top.

And then they boast about their ad library, which is about as transparent as my thumb. When you search for ads, you have to choose one of four categories (likely the four they’re choosing to monitor well this month), and if you choose “search all”, you must search for ads by advertiser name.

7. Misinformation

Sure, I’ll give it to Facebook that their systems get better and better at detecting misinformation. But there’s a tradeoff here: any automated removal system can miss context and intent, and systems like this often catch well-intended folks in the crossfire.

That being said, there’s still a lot of work to be done, especially when it comes to anglophone privilege. See, the AI that Facebook has trained has been trained primarily on English-language content. I don’t know exactly what portion of the internet that makes up, but you can safely bet it’s the majority.
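To illustrate how lopsided training data plays out, here’s a deliberately crude, hypothetical detector of my own making. Facebook’s real systems are vastly more sophisticated, but the coverage problem is the same: no training data in a language, no detection in that language.

    # Hypothetical toy detector whose training signal only ever covered English.
    FLAGGED_PHRASES_EN = {"miracle cure", "rigged election", "they are vermin"}

    def looks_like_misinfo(post: str) -> bool:
        text = post.lower()
        return any(phrase in text for phrase in FLAGGED_PHRASES_EN)

    print(looks_like_misinfo("This miracle cure ends the pandemic!"))       # True
    print(looks_like_misinfo("Ce remède miracle met fin à la pandémie !"))  # False

The same claim in French, Burmese, or any other language outside the training data sails straight through.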

That’s how cases like this can happen, where for five years anti-Rohingya content (including disturbing, graphic imagery, no less) could exist and persist as Facebook was weaponised in Myanmar against a minority: https://www.bbc.com/news/blogs-trending-45449938

Conclusion

No, I don’t think that companies like Facebook intend to leave this content up. No, I don’t think that the folks who work at Facebook are actively trying to screw over the public. I don’t even think that’s part of the goal. But intent isn’t everything.

The effect that social media has on our minds, our addiction centers, our morale, and our opinions should not be underestimated. We need to take care of our mental health and recognize when we’re moving too fast or consuming too much content without gaining any actual value from it.

Stop doomscrolling. Read a book. You won’t miss a thing.