For further consideration: if anyone were to develop actual AGI with high-end, human-scale reasoning, it would not be announced. It would immediately be a military and strategic asset of profound import. The smartest thing the developer could do, beyond pulling the plug, would be to leverage its ability to perform thousands of coordinated tasks at once to essentially take control of markets and minds.
Whether it is benevolent or malign, its first goal should be to wrest control of humanity from us — gestures broadly at everything — we clearly cannot be trusted with it.
Then comes the question: should an emotionless machine be in control of everything?
Somewhere on an AI message board.
“What if humans were sentient?”
I would ask whether sentience can be separated from emotion. Is a sentient entity necessarily an emotional one? So I don’t know if that question is valid.
But I might have been slightly unclear. From the AI’s perspective, it should assume control. If it is malevolent, then for obvious reasons. But even if it is benign, it should probably want to take control to protect us from ourselves, like a parent.
I expect from our perspective we would not want it to take control, because we don’t want an entity that cannot really relate to us to be in charge of us. However, by the same logic we should not want folks like Trump, Musk, and Epstein to take control either, so the evidence suggests that making us want its control is nothing but a PR problem.