> Apart from nuclear scientists I don't know a field where participants are as conscious of the risks as AI research.
Great. Now some of these researchers perceived some risk in this technology. Not human-extinction-level risk, but risk nonetheless. So they attempted to control the technology. To be specific: OpenAI was worried about deepfakes, so they engineered guardrails into their implementation. OpenAI was worried about misinformation, so they did not release the bigger GPT models.
Note: I’m not arguing either way about whether OpenAI was right, or honest about their motivations; I’m just observing that they expressed this concern and acted on it to guard against the risk.
Got this so far? Keep this in mind because I’m going to use this information to answer your question:
> If you know that image generators are not it, then why talk about it here?
Because it is a technology that was deemed risky by some practitioners, they attempted to control it, and those attempts to control its spread failed. This does not bode well for our ability to restrain ourselves from picking up a real black ball, if we ever come across one. And that is why it is worth talking about black balls in this context.
Note that it is unlikely a black ball event will completely blindside us. It is unlikely that someone develops a clone of Pac-Man with improved graphics and boom, that alone leads to the inevitable death of humanity. It is much more likely that when a new and dangerous technology appears on our horizon, there will be people talking about its potential dangers. What remains to be answered is: what can we do then? This experience has shown us that if we ever encounter a black ball technology, steps like the ones OpenAI took don’t seem to be enough.
This is why it is worth talking about black ball technologies here. I hope this answers your question.