Hacker News

Perhaps the primary barrier between a person’s desire to do harm to the world and their ability to do it is their need to bring other people in on the scheme. These other people have their own moral systems, incentives, preferences, and priorities. Through most of history, to do harm at massive scale you had to win a large number of people over to your evil purpose. Technology in general reduces the number of people an individual needs to “get onboard” to affect the world, both for good and evil. Want to dig a big ass hole? The excavator lets one person with enough capital dig as many big ass holes as s/he wants.

AI is the ultimate tool for exercising your will without having to convince other people of it. This is both the promise and the risk. So the relevant question is not whether one person of average intelligence is dangerous; it’s whether one person (or a few people) who can enlist the work of millions of average intelligences — without having to convince them of anything — is dangerous.



This narrative imagines a future where everything is controlled by a single uber-AI. Is that necessarily the case? Today we have lots of separate systems and even the internet-connected devices are pretty autonomous. Maybe the future will remain similarly distributed?

It feels like everyone's worried about the runaway AI taking over everything when right now a runaway AI wouldn't even be able to turn off my porch light without asking someone to do it. It'll be decades before that changes. Why the huge concern about how smart it is?


> This narrative imagines a future where everything is controlled by a single uber-AI

No it doesn’t

> It feels like everyone's worried about the runaway AI

Not the threat vector I just mentioned


Please elaborate. How does the runaway AI 'enlist the work of millions of average intelligences without having to convince them of anything' to do something dangerous?


A human or group of humans deliberately operating millions of artificial intelligences (of average caliber).


So something like an army of robots? This seems like a nation-state kind of concern. I guess we could imagine small private armies, but I would imagine that governments will constrain those the same way we constrain more serious weapons.

At any rate, we are a long way from having practical robots - let alone having to worry about an army of them.


No, not robots. Just programs running on your computer.


No, not robots.


Do you want to have a meaningful dialogue, or do you just want to keep everyone guessing?


Well it seems like you’re willfully misreading what I’m saying. I didn’t say anything about robots nor rogue AIs. You’re obviously pattern-matching to whatever strawmen you want to battle instead of reading the words I’m writing.


What you wrote is so abstract that it invites misinterpretation, and judging by the upvotes, other people are apparently struggling too.

How about you describe an actual danger scenario example? You clearly have something in mind.


Doesn't sound so different from the bot armies that have been messing with upvote counters on social media sites for two decades.

I'm sure we'll come up with solutions like we did before. Worst case, "present birth certificate and touch heartbeat detector to log in"


Is that why people are dumping billions of dollars into AI development? Because they seem like they’ll be good at clicking upvote buttons?

Yeah, “just make everything on the internet attributable to a specific individual” is actually a pretty bad worst case IMO.


Being able to generate believable (but ultimately false) information at scale is a very powerful tool. Even when it's just humans with automation assistance, it is already a massive problem; if it can be done completely hands-off, you can essentially drown out the signal and create an alternate online reality that will only look different to eyewitnesses. And we're not that far from that. Being able to do something and being able to do that same thing at a different level of scale can be qualitatively different.


This is something I’ve seen a lot of AI proponents miss or just ignore.


Or maybe it isn't in their interest to identify it as a problem.


They're dumping billions of dollars into it because they see dollar signs at the end of the tunnel. Cost cutting, regardless of quality and ensuing enshittification. Stealing & laundering copyrighted works. Mass misinformation on a scale never seen before.


"Present birth certificate and touch heartbeat detector to log me in or else there's going to be a very large and very believable botnet dedicated to destroying your life."


Notably, AI also doesn't need to sleep and it doesn't get bored, either.


AI is at rest almost all of the time. When we're "awake," we're constantly learning, observing our environment, making decisions based on those observations, and storing relevant information. The "AI" of today only learns at specified times; otherwise the energy cost required would be prohibitive.


Good. If it got bored, we would be in trouble.



