Perhaps the primary barrier between a person’s desire to do harm to the world and their ability to do it is the need to bring other people in on the scheme. Those other people have their own moral systems, incentives, preferences, and priorities. Through most of history, doing harm at massive scale meant convincing a large number of people of your evil purpose. Technology in general reduces the number of people an individual needs to “get on board” to affect the world, both for good and for evil. Want to dig a big-ass hole? The excavator lets one person with enough capital dig as many big-ass holes as they want.
AI is the ultimate “exercise your will without having to convince other people” technology. This is both the promise and the risk. So the relevant question is not whether one person of average intelligence is dangerous; it’s whether one person (or a few people) who can enlist the work of millions of average intelligences, without having to convince them of anything, is dangerous.
This narrative imagines a future where everything is controlled by a single uber-AI. Is that necessarily the case? Today we have lots of separate systems and even the internet-connected devices are pretty autonomous. Maybe the future will remain similarly distributed?
It feels like everyone's worried about the runaway AI taking over everything when right now a runaway AI wouldn't even be able to turn off my porch light without asking someone to do it. It'll be decades before that changes. Why the huge concern about how smart it is?
Please elaborate. How does the runaway AI 'enlist the work of millions of average intelligences without having to convince them of anything' to do something dangerous?
So something like an army of robots? This seems like a nation-state kind of concern. I guess we could imagine small private armies, but I would imagine that governments will constrain those the same way we constrain more serious weapons.
At any rate, we are a long way from having practical robots, let alone having to worry about an army of them.
Well, it seems like you’re willfully misreading what I’m saying. I didn’t say anything about robots or rogue AIs. You’re obviously pattern-matching to whatever strawman you want to battle instead of reading the words I’m writing.
Being able to generate believable (but ultimately false) information at scale is a very powerful tool. Even when it's just humans with automation assistance, it is already a massive problem; if it can be done completely hands-off, you can essentially drown out the signal and create an alternate online reality that will look different only to eyewitnesses. And we're not that far from that. Being able to do something and being able to do that same thing at a different scale can be qualitatively different.
They're dumping billions of dollars into it because they see dollar signs at the end of the tunnel. Cost cutting regardless of quality, and the enshittification that follows. Stealing and laundering copyrighted works. Mass misinformation on a scale never seen before.
"Present birth certificate and touch heartbeat detector to log me in or else there's going to be a very large and very believable botnet dedicated to destroying your life."
AI is at rest almost all of the time. When we're "awake", we're constantly learning, observing our environment, making decisions based on those observations, and storing relevant information. The "AI" of today only learns during designated training runs; updating its weights continuously would carry a prohibitive energy cost.
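To make that distinction concrete, here's a minimal sketch (in PyTorch, with a hypothetical toy model) of how today's systems separate the training step, where the weights actually change, from inference, where the model is frozen and just produces output:

    import torch
    import torch.nn as nn

    # Hypothetical toy model standing in for any modern network.
    model = nn.Linear(16, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def training_step(x, y):
        # "Learning" happens only here: gradients flow and weights update.
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # compute gradients (the expensive part)
        optimizer.step()  # actually change the weights
        return loss.item()

    def run_inference(x):
        # Deployment mode: no gradients, nothing about this input is retained.
        model.eval()
        with torch.no_grad():  # weights are effectively frozen
            return model(x).argmax(dim=1)

    # Training happens in discrete, scheduled runs...
    training_step(torch.randn(8, 16), torch.randint(0, 2, (8,)))
    # ...while inference can run millions of times without the model learning anything.
    run_inference(torch.randn(8, 16))

The loop humans run continuously, observing and updating as we go, is in current systems an explicit, expensive, offline step.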