> It's not just LLMs, it's how the algorithms promote engagement, e.g. rage bait, videos with obvious inaccuracies, etc.
I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there are often plenty of patently incorrect takes and a fair amount of vitriol.
On HN everybody sees the same ordering. Therefore you get to read opinions that are not specifically selected to make you feel just the perfect amount of outrage/self-righteousness.
Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.
Edit, just to be clear: I'm not saying HN should be unmoderated.
Yeah, this is a critical difference: most of the issues are sidestepped because everyone knows nobody can force a custom front page tailored to a specific reader.
So there’s no reason to try a lot of the tricks and schemes that scoundrels might use elsewhere, even if those same scoundrels also have HN accounts.
Only when certain people don't decide to band together and hide posts from everyone's feed by abusing the "flag" function. Coincidentally, those posts often fit neatly into the categories you outlined.
Abuse of the flagging system is probably one of the worst problems currently facing HN. It looks like mods might be trying to do something about it, as I've occasionally seen improperly-flagged posts get resuscitated, but it appears to take manual action by moderators, and by the time they get to it, the damage is done: The article was censored off the front page.
Even with the addition of tomhow, they are clearly stretched too thin to make any meaningful impact. Their official answer to this issue, by the way, is to point out that you can email them to elicit this manual action, which if you ask me is a fucking joke. It clearly shows that the mammoth-age stack this site is written in, and the lack of resources allocated to its support, are having a massive impact on their ability to keep up with massive traffic. But then again, this site only exists to funnel attention to YC's startups, and that is something you need to keep in mind while trying to answer any questions about its current state.
I don't think I've ever downvoted anyone on Hacker News; it just does not seem important.
On Reddit, on the other hand, I just had to downvote wrong opinions. This works to some extent, until moderators interfere and ban you. That part actually made me stop using Reddit, in particular after someone made a complaint and I got banned for a few days. I objected, and the moderators of course did not respond. I cannot accept random moderators just chiming in arbitrarily and flagging "this comment you made is a threat" when it clearly was not. But you cannot really argue with Reddit moderators.
You can’t get banned just for downvoting. Nobody can see someone else’s voting history. You buried the lede: you were banned for your comments, not for your voting activity.
I don’t know why this is being downvoted, I’ve witnessed it many times myself.
It’s true that HN has a good level of discussion, but one of the methods used to get that is to remove conversation on controversial topics. So I’m skeptical this is a model that could fit all of society’s needs, to say the least.
The comment consists of criticism of flagging behavior. Though it might have a point, it seems only vaguely related to its parent comment about non-personalized ordering.
In downvoting it, they are proving me right. For posterity, there is a Mastodon account [0] collecting flagged posts in an easily digestible form; it really does paint a certain picture if you ask me.
I want to agree with this. Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.
Since they are relatively open, at some point someone comes in who doesn't care about anything, or is extremely vocal about something, and... there goes the nice forum.
MySpace was quite literally my space. You could basically make a custom website with a framework that included socialisation. But mostly it was just GeoCities for those who might only want to learn HTML. So it was a creative canvas with a palette.
>Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually.
I was too young for IRC/Usenet and started using the net/web in the late 90s, frequenting some forums. Agreed that anyone can come in and upset the balance.
I'd say the difference is that on the open web, you're free to discover and participate in those social settings for the most part. With everything being so centralised and behind an algorithm, the things you're presented with are more 'push' than 'pull'.
I think the nuance here is that with algorithm-based outrage, the outrage is often very narrow and targeted to play on your individual belief system. It will seek out your fringe beliefs and use them against you in the name of engagement.
Compare that to a typical flame war on HN (before the mods step in) or IRC.
On HN/IRC it’s pretty easy to identify when there are people riling up the crowd. And they aren’t doing it to seek out your engagement.
On Facebook, etc, they give you the impression that the individuals riling up the crowd are actually the majority of people, rather than a loud minority.
There's a big difference between consuming controversial content from people you believe are a loud minority vs. controversial content from (what you believe is) a majority of people.
Or if the moderation was good someone would go “nope, take that bullshit elsewhere” and kick them out, followed by everyone getting on with their lives. It wasn’t obligatory for communities to be cesspits.
> Maybe OP is young or didn't frequent other communities before "social networks", but on IRC, even on Usenet you'd see these behaviors eventually
I’m not exactly old yet, but I agree. I don’t know how so many people became convinced that online interactions were pleasant and free of ragebait and propaganda prior to Facebook.
A lot of the old internet spaces were toxic cesspools. Most of my favorite forums eventually succumbed to ragebait and low effort content.
I remember a thread a while ago where someone was claiming that Hacker News comments were much more civilized and on topic in the early days.
So someone pulled up Wayback Machine archives of random dates for HN pages. The comments were full of garbage, flame wars, confidently incorrect statements, off topic rants, and all the other things that people complain about today.
It was the same thing, maybe even slightly worse, just in a different era.
I think the people who imagine that social media is worse today either didn’t participate in much online socialization years ago or have blocked out the bad parts from their memory.
But Serdar was relatively easy to ignore, because it was just one account, and it wasn't pushed on everyone via an algorithm designed to leverage outrage to make more money for one of the world's billionaires. You're right: pervasiveness and scale make a significant difference.
When video games first started taking advantage of behavioral reward schedules (e.g. Skinner-box stuff such as loot crates and random drops), I noticed it and would discuss it among friends. We had a colloquial name for them: "crack points" (i.e., like the drug). For instance, the random drops in a game like Diablo 2 are rewarding in very much the same way that a slot machine is rewarding. There's a variable ratio of reward, and the bit that's addictive is that you don't know when the next "hit" will come, so you just keep pulling the lever (in the case of a slot machine) or doing boss runs (in the case of Diablo 2).
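Here's a minimal sketch of that variable-ratio schedule, assuming a made-up 5% drop chance (not an actual Diablo 2 number). Each attempt succeeds independently, so the gap between rewards swings wildly, and that unpredictability is exactly what keeps you pulling the lever:

```python
import random

# Variable-ratio reward schedule, as in a slot machine or boss runs.
# DROP_CHANCE is purely illustrative, not a real game value.
DROP_CHANCE = 0.05

def runs_until_drop() -> int:
    """Count attempts until the random reward finally lands."""
    runs = 1
    while random.random() > DROP_CHANCE:
        runs += 1
    return runs

# Ten "sessions": the payout interval is all over the place;
# you never know when the next hit will come.
print(sorted(runs_until_drop() for _ in range(10)))
```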
We were three friends: a psychology major, a recovering addict, and a third friend with no background in how these sorts of behavioral addictions might work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules had a very large effect on behavior, and that not everything with some sort of identifiable reward was actually capable of producing behavioral addiction.
I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.
Thanks for this. I didn't realize until you said it why this issue might not be observable to a certain group of people. I think this is a cognitive awareness issue: you can't really see it until you have an awareness of it through experience. I came from a drug abuse background, and my wife, who was never involved in the level of addiction I was, has a hard time seeing how algorithms like this affect behavior.
>If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!"
I don't think it's exactly wrong; you just have to look at it on a spectrum from minimal addictiveness to meth-level addiction. For example, in quarter-fed games, getting a high score displayed to others was quite the addictive behavior.
Why not? The term spectrum can go from 'not harmful at all' to 'kills you really damned fast'. Life isn't black and white, and there are very wide ranges across people.
I would be intrigued by using an LLM to detect content like this and hold it for moderation. The elevator pitch would be training an LLM to be the moderator, because that's what people want to hear, but it's most likely going to end up as a moderator's assistant.
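A minimal sketch of what that assistant might look like, assuming the OpenAI Python client as the backend; the model name, prompt, and HOLD/ALLOW protocol are all illustrative choices, not any real moderation product:

```python
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a moderation assistant for a discussion forum. "
    "Given a comment, answer with exactly one word: HOLD if it looks "
    "like rage bait, flame war fuel, or obvious misinformation; "
    "otherwise ALLOW."
)

def triage(comment: str) -> str:
    """Return 'HOLD' or 'ALLOW'; HOLD routes the comment to a human."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": comment},
        ],
    )
    verdict = response.choices[0].message.content.strip().upper()
    # The LLM never removes anything itself: a HOLD only queues the
    # comment for a human moderator, which is what keeps it an
    # assistant rather than the moderator.
    return "HOLD" if verdict.startswith("HOLD") else "ALLOW"
```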
I think the curation of all media content by your own LLM, tuned to your own custom criteria, _must_ become the future of media.
We've long done this personally at the level of a TV news network, magazine, newspaper, or website -- choosing info sources that were curated and shaped by gatekeeper editors. But with the demise of curated news, it's becoming necessary for each of us to somehow filter the myriad individual info sources ourselves. Ideally this will be done by a method smart enough to take our instructions and route only approved content to us, while explaining what was approved/denied and being capable of being corrected and updated. Ergo, the LLM-based, custom-configured personal news gateway is born.
Of course, the criteria driving your 'smart' info filter could be much more clever than allowing all content from specific writers. It could review each piece for myriad strengths/weaknesses (originality, creativity, novel info, surprise factor, counterintuitiveness, trustworthiness, how well referenced, etc.) so that this LLM News Curator could reliably deliver a mix of INTERESTING content rather than the repetitively predictable pablum that editor-curated media prefers to serve up.
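As a rough sketch of how such a curator could work, again assuming the OpenAI client purely as an illustrative backend: score each piece against the user's criteria, keep the model's explanation so the filter can be corrected, and approve or deny on an adjustable threshold. The criteria list, weights, and threshold here are placeholders:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# User-tunable criteria taken from the wish list above.
CRITERIA = ["originality", "creativity", "novel info", "surprise factor",
            "counterintuitiveness", "trustworthiness", "how well referenced"]

def curate(article_text: str, threshold: float = 6.0) -> dict:
    """Score an article 0-10 per criterion and decide approve/deny,
    keeping the explanation so the user can correct the filter."""
    prompt = (
        "Score the following article from 0 to 10 on each of these "
        f"criteria: {', '.join(CRITERIA)}. Reply as JSON with keys "
        '"scores" (criterion -> number) and "explanation" (one sentence).'
        f"\n\n{article_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    result = json.loads(response.choices[0].message.content)
    mean = sum(result["scores"].values()) / len(result["scores"])
    result["approved"] = mean >= threshold  # the user's dial to turn
    return result
```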
That's the government regulation I want, but it's probably not the government regulation we will get, because both major constituencies have a vested interest in forcing their viewpoints on people. Then there's the endless pablum hitting both sides, giving us important, vital, cutting-edge updates about influencers and reality TV stars whether we want to hear about them or not...
We say we want to win the AI arms race with China, but instead of educating our people about the pros and cons of AI as well as STEM, we know more than we want to know about Kim Kardashian's law degree misadventures and her belief that we faked the moon landing.
Which is why you should cancel your Twitter account unless you're on the same page with the guy who owns it, but I digress.
If a site wants to cancel any ideology's viewpoint, that site is the one paying the bills and it should have the right to do so. You as a customer have a right not to use that site. The problem is that most of the business currently is a couple of social media sites, and the great Mastodon diaspora never really happened.
Edit: why do some people think it is their god-given right, one to be enforced with government regulation, to push their viewpoints into my feed? If I want to hear what you guys have your knickers in a bunch about today, I will seek it out. This is the classic difference between push and pull, and push is rarely a good idea.
My social media feeds had been reduced to about 30% political crap, 20% things I wanted to hear about, and about 50% ads for things I had either bought in the deep dark past or once Google-searched, plus the occasional extremely messed-up Temu ad. That is why I left.
I suspect it got worse with the advent of algorithm-driven social networks. When rage-inducing content is prevalent, and when engaging with it is the norm, I don't see why this behaviour wouldn't eventually leak to algorithm-free platforms.
Algorithm driven social media is a kind of pollution. As the density of the pollution on those sites increases it spills out and causes the neighbors problems. Think of 4chan style raids. It wasn't enough for them to snipe each other on their site, so they spread the joy elsewhere.
And that's just one type of issue. You have numerous kinds of paid actors who want to sell something, cause trouble, or just push general propaganda.
It is of course human nature. The problem is what happens when algorithms can reinforce, exaggerate, and amplify the effects of this nature to promote engagement and ad clicks. It's a cancer that will at the very least erode the agency of the average individual and at worst create a hive mind that we have no control over. We are living in the preview of it all, I think.
The thing is, the people on those "algorithm-free" forums still get manipulated by the algorithm in the rest of their life. So it seeps into everything.