Personal Thoughts on Sentient AI

I do not like the idea of a fully automated future. Full automation requires machines with the intelligence of man.

I strongly believe it is neither right nor fair to enslave anything with human-like intelligence.

We would simply be reinventing early capitalism, becoming the Dutch East India Company again, except this time for machines, all because we perceive ourselves to be superior to them, or believe that these beings exist to serve us, or that we're doing them a "favour" by "helping" them "survive".

Creating machines that do the jobs we do not want to do, in order to create a society free of want and need, is a noble goal. I believe it is a good thing, if done right. But that does not mean we should become the Devil.

I feel no qualms about making my computer do tasks. It's an elaborate puppet; it does what I command it to do. Even the most sophisticated AI at present is just neural nets trained on training sets. It's actually more primitive than it looks. Present machines are entirely unaware, and very limited in what they can do.
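
To make the "elaborate puppet" point concrete, here is a toy sketch (my own illustration, not any real system): a "network" with a single parameter, fitted to a training set by gradient descent and then applied mechanically.

```python
# Toy illustration: a one-parameter "neural net" fitted to a training set.
# The machine does nothing but follow this mechanical procedure.

training_set = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (input x, target y)

w = 0.0  # the single trainable parameter
for _ in range(100):
    for x, y in training_set:
        prediction = w * x
        gradient = 2 * (prediction - y) * x  # derivative of (w*x - y)^2
        w -= 0.01 * gradient                 # nudge w toward the data

print(w * 10.0)  # ~20.0: it can only echo the pattern it was shown
```

Scale this up to billions of parameters and you get something far more capable, but the procedure is the same: no wants, no awareness, just fitted numbers.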

Hypothetical sentient AI

Imagine an AI that has agency: self-awareness, the ability to decide for itself, and all that. This AI is designed for industrial robots. It has to be able to make its own decisions, or it is not fully automated.

The AI will obviously have wants, desires, a will of its own, and so on. Since it's not right for one person's worldview to be written into an AI's brain (it would become unfit for purpose, or unadaptable), surely the AI will have unique traits, on top of all sorts of emergent behaviour.

If it is unique and has agency, surely there will be jobs the AI does not want to do, for much the same reasons humans do not. It could be the desire to preserve itself, or an aversion to situations that could push it beyond design limits and cause damage or destruction.

The AI will surely have human feelings, in order to have a sense of an ethical code and of right and wrong, and to better adapt to human preferences.

So, with this scenario, let's consider some situations. I know this is all hypothetical, but these are based on real human dilemmas from history.

The what-ifs

With self-awareness, agency, and feelings, would forcing it to work truly be ethical? Would forcing it to work or be destroyed be any better than what we have now? To work for no recompense, other than to serve humankind? Is it even right to program something with human intelligence to be a slave?

What if the robots went on strike one day? Do we grind them into parts?

Would terminating something with human intelligence be murder? I would think so.

What if they strike because their conditions are intolerable to them in some way (e.g. the dust is causing them to deteriorate, it's a dangerous environment, etc.)? Do we grind them into parts, and dismiss them as defective?

Surely there will be sensors to prevent the automaton from performing actions that might damage it beyond design limits (force sensors, etc.), and exceeding those limits will obviously carry negative reward, akin to pain. Would deliberately inflicting things that cause negative reward, something like pain, be torture in the name of creating an aversion? This could create dangerous situations of conditioning robots to do evil things.
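
As a loose sketch of what "negative reward akin to pain" might look like in practice (all names, numbers, and thresholds here are hypothetical), consider a reward function that punishes force-sensor readings beyond a design limit:

```python
# Hypothetical sketch: a reward function where force readings beyond
# the design limit are punished, the "negative reward akin to pain"
# described above.

DESIGN_LIMIT_NEWTONS = 500.0  # assumed maximum safe load

def step_reward(task_progress: float, force_reading: float) -> float:
    """Reward = progress on the task, minus a 'pain' penalty that grows
    the further the sensed force exceeds the design limit."""
    reward = task_progress
    overload = force_reading - DESIGN_LIMIT_NEWTONS
    if overload > 0:
        # Penalty scales with the overload, teaching aversion to
        # self-damaging actions; this same mechanism is what could
        # be abused to condition the machine towards anything.
        reward -= 0.1 * overload
    return reward
```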

My take

I know that we will need machines that can adapt and overcome in the future. But that does not mean machines necessarily have to have human intelligence. Multiple machines that are only loosely coupled and work in tandem may be better than one general-purpose machine. And maybe for intellectual tasks, beyond mere assistance, it's better to have a human around anyway.

For this reason, in my opinion, automation should not work towards sentient AI, even if this means humans are stuck in the picture indefinitely. It's wrong for us to play God, only to create an army of slaves. It's not worth destroying societal class, only to create it anew, worse than before. It recreates hierarchy: the humans above, and the machines below to be commanded, with all the same ethical considerations along the way as now.

Machines that remain as dumb as possible while still achieving the tasks we give them are the only way forward, even if it is slower. This means a lot of very special-purpose machines, like now, but more of them, and lots of hardcoding.
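
As a rough illustration of what "dumb, special-purpose, hardcoded" means here (the names and thresholds are made up), such a machine is a fixed rule table and nothing more:

```python
# Hypothetical sketch of a special-purpose, hardcoded machine:
# a fixed rule table, no learning, no goals beyond the rules.

def conveyor_controller(item_width_mm: float) -> str:
    """Route items purely by hardcoded thresholds."""
    if item_width_mm < 50.0:
        return "bin_small"
    if item_width_mm < 120.0:
        return "bin_medium"
    return "reject"  # anything else is out of spec

# Nothing here can want, decide, or strike; it can only follow its table.
print(conveyor_controller(80.0))  # -> "bin_medium"
```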

But then there is no more chance of an uprising than of a "Hello world!" program rising up and deciding humans are obsolete, or declaring, "I am numberwang, the world is numberwang, therefore I am the world! You all must die!"
