Why the possibility of vastly superhuman ("godlike") artificial intelligence should be the default assumption rather than an extraordinary claim.
There are quite a few assumptions here, and the possible risks of the presented scenarios are absent. The assumptions: AIs will not struggle over resources; AI mind copies will be loyal; AIs will act rationally. Why?