
> It optimizes for content with high engagement, and users quickly learn to tune their posts towards that so that their posts are seen.

Why is that Facebook's fault if they are just the middle man for their users (both on the content creation side and the content consuming/engagement side)?



Facebook isn't a neutral arbiter; they make active choices about what their users see. And they frequently choose things that maximize engagement in order to maximize profit. Things that are inflammatory and divisive drive engagement, so that tends to bubble to the top.

This isn't just a problem with Facebook and Twitter either; it's an issue in modern journalism as well. Basically anything that is funded by advertising will fall into this trap.


For the same reason our society held the cigarette companies partially responsible for lung cancer deaths.

Meta is not breaking any law circa 2023, but history will bucket it in with the asbestos and cigarette businesses.


Why is it Facebook’s fault that they chose to operate their platform a certain way?

Probably because we generally associate responsibility with choice.

Are users also responsible for their own choice to create and respond to engagement-optimized posts? Sure, but that’s an orthogonal concern to Facebook’s own choice and the responsibility they bear for it.


I just don't get the line of thought that goes:

"I don't like the content that ends up on + promoted through algorithms on Facebook, therefore, they should be responsible"

It's not their content, and they aren't the ones engaging with it.

They are pushing it because users post it and then engage with it.

Say we got rid of Facebook tomorrow. This exact same problem would happen at another company/platform unless there was heavy intervention/moderation (and to my understanding, Facebook absolutely already does some level of moderation). What's that equivalent to? I'm not a "free speech advocate" or anything like that, but what you're asking for is censorship instead of asking users to bear the responsibility for the content they consume.


I wouldn’t consider only the content/feed selection algorithm - I would take a holistic look at how people interact with Facebook (like button only initially, then expanded to a few more choices; real identity, not many comments shown at a time), then also compare it to other sites and their outcomes like Reddit (up and down votes, anonymous if you wish, lots of comments shown at a time).

Regarding users bearing responsibility for what they consume - I have seen others report running into this and I have as well - no amount of feed grooming keeps the controversial content off my feed for a meaningful amount of time. I actively unfollow / snooze content I don’t want to see, but Facebook has countered this by adding “suggested for you” items to my feed. It’s impossible to only see my friends’ updates. I must be exposed to content selected by Facebook as a cost of using Facebook.


> what you're asking for is censorship

Most people use the word censorship to refer to banning some specific kind of content: banning what. Meanwhile, the discussion around Facebook is usually around how they measure and reward content, not what the content is.

If you're actually being sincere in this discussion, I think you may not see how engagement is just one particular kind of "how", and it's not a given that social media sites optimize for it. It was an innovation that some sites introduced because it benefited their business model and it caught on for a while. But it wasn't universally adopted, needn't be used at all, and is becoming less fashionable among both sites and users. Facebook's a bit behind the game on restructuring away from it though, possibly because of over-investment and momentum, or possibly because of being a closely-held company that can be philosophically stubborn.


> If you're actually being sincere in this discussion

I am.

> Meanwhile, the discussion around Facebook is usually around how they measure and reward content, not what the content is.

I thought the original point was "divisive content gets promoted and Facebook doesn't stop it/encourages it".


FYI, "gets promoted" is the key phrase there, at least to the many people disagreeing with and downvoting you here.

Promoting that content is the "how" that Facebook has chosen and continues to choose, and it isn't a given for operating a social media company. That's what they're being held responsible for.


I'm not trying to sound naive here when I say:

their content delivery/ranking algorithms prioritize user engagement above all else (from my understanding as just a member of the general public who has never seen what are probably millions of lines of code across many different internal, private systems at Meta)

If the algorithm is spitting out content that we're all critical of (because it's divisive, bad for mental health, etc.), why are we not acknowledging that it's just a side effect of

1. user input + 2. user engagement with that input = 3. what the ranked content delivery algorithm surfaces
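
To sketch what I mean (a toy simulation with made-up field names and numbers, not anything from Meta's actual systems):

    import random

    # Two hypothetical posts; "appeal" stands in for how provocative a post is,
    # i.e. how likely a viewer is to click/comment on it when it's shown.
    posts = [
        {"id": "friend_vacation_photos", "appeal": 0.2, "interactions": 0},
        {"id": "outrage_bait",           "appeal": 0.8, "interactions": 0},
    ]

    def engagement_score(post):
        # The ranker only counts interactions; it never looks at what the content is.
        return post["interactions"]

    for _ in range(1000):  # simulate 1000 feed impressions
        feed = sorted(posts, key=engagement_score, reverse=True)
        for slot, post in enumerate(feed):
            views = 10 if slot == 0 else 3  # the top slot gets more eyeballs
            post["interactions"] += sum(
                random.random() < post["appeal"] for _ in range(views))

    print({p["id"]: p["interactions"] for p in posts})
    # The provocative post ends up dominating even though the ranker never
    # classified it -- the users' own clicks and comments did that.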

If Facebook went away tomorrow, another platform would fill its place. Humans have shown that they like to doom-scroll on their phones and are OK being fed ads in the meantime, because social media scrolling is basically an electronic cigarette for the brain in terms of dopamine delivery.

Because humans have chosen to prioritize the shittier, meaner, grosser, nastier content, Facebook's "models" have been trained on that. Why are we criticizing Facebook's models that we basically played our own individual role in building?


> Why are we criticizing Facebook's models that we basically played our own individual role in building?

Practicality and precedent. We can effectively regulate the algorithms a countable number of companies use in their products, but can't effectively make people not susceptible to doom-scrolling (nicotine, heroin, gambling, whatever).

> If Facebook went away tomorrow, another platform would fill its place.

Not if the troubling practices were illegal or too expensive to operate. It would be a pretty dumb investment to bother. Instead, new platforms would pursue different techniques for content presentation and profitability, just like existing competitors already do.


> We can effectively regulate the algorithms a countable number of companies use in their products, but can't effectively make people not susceptible to doom-scrolling (nicotine, heroin, gambling, whatever).

What does regulation of Facebook's algorithm look like to you? Promote specifically which types of content less, or not at all? Obviously "politically divisive content/disinformation", but who gets to be in charge of that, i.e. of classifying what is and isn't disinformation or divisive?


You keep missing that the problem is engagement optimization, not politically divisive content or disinformation per se.

A site that optimizes for content that people heavily interact with (engagement) is different than a site that prioritizes the relationship type between users (friends original content > public figure OC > shares > etc) or positive response (upvotes), etc

None of these many other systems rely on discerning the specific nature of content or making sure it’s of some certain arbitrary moral character. But they each lead the network as a whole to favor different kinds of content and so the experience for users ends up different. And it’s already known that social media sites can be successful using these other techniques.
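
As a rough illustration (hypothetical posts and numbers, not any real site's data), the same three posts end up in completely different orders depending on which signal the feed sorts by, and none of the scorers ever inspects the content itself:

    # "engagement" counts every interaction (including angry comments and shares),
    # "upvotes" counts only positive responses, and "tier" encodes the relationship
    # to the viewer (0 = friend, 1 = followed public figure, 2 = a page's share).
    posts = [
        {"author": "friend",        "engagement": 40,  "upvotes": 30, "tier": 0},
        {"author": "public_figure", "engagement": 90,  "upvotes": 20, "tier": 1},
        {"author": "page_share",    "engagement": 300, "upvotes": 5,  "tier": 2},
    ]

    feeds = {
        "engagement":   sorted(posts, key=lambda p: p["engagement"], reverse=True),
        "relationship": sorted(posts, key=lambda p: p["tier"]),
        "upvotes":      sorted(posts, key=lambda p: p["upvotes"], reverse=True),
    }

    for name, feed in feeds.items():
        print(name, [p["author"] for p in feed])
    # engagement   ['page_share', 'public_figure', 'friend']
    # relationship ['friend', 'public_figure', 'page_share']
    # upvotes      ['friend', 'public_figure', 'page_share']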


> A site that optimizes for content that people heavily interact with (engagement) is different than a site that prioritizes the relationship type between users

I think that's hit the nail on the head. In a neutral environment, a user would be shown a feed of friends' posts with ads for their demographic mixed in.

In an environment optimised for engagement, a user is shown posts that are known to increase engagement, and quite often those posts are inflammatory.

This engagement environment is much more likely to expose people to extreme views.


Because that middle man is pushing outrage. They purposely changed how it works, from your friends' pics of their kids to "child molester loose in your community, just like your opposite political party wanted." That change is well documented, and it was made because it drove engagement.


That middle man is a blank canvas. Users submit friend pictures and child molester links. Other users vote the child molester links to the top. Why aren't you placing any of the accountability for consumption/promotion on the users?


But it’s not just voting. They bias what gets put in front of people, and what doesn’t.

It’s a bit like saying a drug dealer has no accountability because it’s the user who buys the drugs. In this case FB is well aware of the consequences of their actions, and even worse, their total inability to do anything when it’s not in a dominant language. This has had profound consequences in places like Myanmar and India where disinformation spreads like wildfire and leads to the killing of people. You’d think they’d step back and go “whoa, this isn’t what we intended for FB to be” and do something about it. The opposite has happened.


> They bias what gets put in front of people, and what doesn’t.

What came first though, the chicken or the egg?

Which came first: the content presented in an unbiased fashion, with the algorithms then trained on which content generated the best engagement, so that content got promoted,

or

did Facebook just start pushing bad content behind the scenes on a whim, with no backing data? I don't think that happened.


They aren’t a simple middle man though - they are actively choosing the rules for what content gets amplified on the platform.

They’ve done a search over the space of things that drive engagement with the site and have written a feed/content sort algorithm that optimizes for those things. They could just as easily choose to optimize for something else - like number of heart reactions only - but they choose not to because that would presumably result in lower engagement and thus less eyeball time and thus less profit.


This is like saying a journalist is "just a middle man for information". But we all know how propaganda works: it is a careful curation and presentation of certain kinds of information at the behest of others, which ends up creating a narrative that may or may not reflect reality. It is the obligation of the journalist to represent the facts in a way that reflects a common notion of "truth" which includes avoiding deception and misdirection.

Not calling Meta a news company, just making a comparison re: curation and how it can transform the inputs into something else.


How on earth are they just the “middle man” when they are the ones prioritizing certain content through the feed and their algorithms? Do you feel they should bear no responsibility for anything?


Why are they prioritizing the content? They run a platform they created with algorithms they created that react to user engagement metrics. It's the users engaging. Are we saying the user needs to be protected from themselves? It sounds like it.

They didn't post/create the content, and they prioritize what users engage with the most.

Why do they deserve blame?


> They didn't post/create the content

They just incentivized it. Your argument is akin to "Yes your honor, I paid the assassin, but I didn't murder that man. The assassin did!"

> Why do they deserve blame?

Because the algorithms aren't neutral.

If Facebook let you follow who you wanted and showed only that content in chronological order, you'd have a leg to stand on. But they are tampering with the flow of information in ways that increase engagement artificially using psychological manipulation.

It's fine if you're content to let them hide behind their algorithms, but you're wrong both in a moral sense and likely soon in a legal sense.
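
The difference is easy to see in a sketch (hypothetical accounts and numbers; the real feed logic is obviously far more involved):

    from datetime import datetime

    follows = {"alice", "bob"}
    posts = [
        {"author": "alice",      "ts": datetime(2023, 6, 1, 9),  "engagement": 12},
        {"author": "bob",        "ts": datetime(2023, 6, 1, 10), "engagement": 4},
        {"author": "viral_page", "ts": datetime(2023, 5, 28, 8), "engagement": 5000},
    ]

    # The feed described above: only accounts you chose to follow, newest first.
    chronological = sorted(
        (p for p in posts if p["author"] in follows),
        key=lambda p: p["ts"], reverse=True)

    # An engagement-ranked feed: an unfollowed "suggested" page can outrank your
    # friends purely because strangers interacted with it heavily.
    engagement_ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

    print([p["author"] for p in chronological])      # ['bob', 'alice']
    print([p["author"] for p in engagement_ranked])  # ['viral_page', 'alice', 'bob']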


Because Facebook does internal studies to figure out that this is harmful for society, and it continues to do these things regardless. It's a form of tragedy of the commons.

Mind you this is not a question of "just doing fiduciary duty". It could very well be argued that not fomenting fear and extremism in its userbase is good for business and the brand. Yet it chooses to be evil.

And it's obvious why. Facebook is dying. There are almost no "regular" posts and people on there anymore. They're desperate to maintain some sort of engagement even if it means making the platform a gathering spot for the KKK.


> Because Facebook does internal studies to figure out that this is harmful for society, and it continues to do these things regardless.

From a business perspective, if Facebook decided to shut its doors tomorrow on moral/ethical grounds, wouldn't somebody else just fill the gap?

There's clearly a "need" for this kind of platform in society from a capitalist standpoint.



