So, Eliezer's position is "AI could kill us all surprisingly soon," and Yann LeCun's is "AI is not dangerous, but concentrating access to it is" (which feels self-contradictory as I write it). Who is on the side of "AI could kill us all surprisingly soon, and decentralizing access to it is one necessary step in risk mitigation"?