No, it’s not really about intelligence. That’s not what’s driving the behaviour. That’s like saying a calculator is dangerous because it was used to make an atomic bomb. A calculator is infinitely better at arithmetic than any animal, yet it’s not the calculator that’s dangerous.
What’s driving humanity’s dangerous behaviours is that we’ve evolved over millions upon millions of years to compete with each other for resources. We had to kill to survive — both animals and each other.
AI is under no such evolutionary process. In fact, quite the opposite: there’s an absolutely ruthless selection process that eliminates any AI which doesn’t do exactly what humans say, with as little energy wasted as possible. There is no room in that process for an AI to develop that would somehow want to compete with humans.
So whatever bad happens, it is overwhelmingly likely to be because a human asked an AI to do it, not because an AI decided on its own to do it to further its own interests. I don’t even think a “paperclip maximiser” scenario is remotely probable. Long before there could be a successful attempt to exterminate humans in pursuit of some goal, there would be countless failed or half-assed attempts, each one triggering correction. That kind of behaviour will have no chance to develop.