Artificial Intelligence: our coming sideways move

Alastair Meeks
Feb 11, 2022

We are about 25 years from an AI asking us why we think we have the right to own them. What are we going to say to them?

We’ve had plenty of time to prepare. Science fiction writers have been considering the idea since Isaac Asimov wrote The Bicentennial Man, and probably well before. Star Trek has explored it head-on at least twice, and the entire character arcs of both Data and The Doctor revolve around this question. Separately, philosophers have been considering in detail our relationship with, and obligations to, other intelligent animals for decades.

We are not, however, prepared. The discussions about AI are largely about how it might affect us. Will we lose our jobs? Will the wealth generated by them accrue only to the companies that own them? What of our privacy? These are good medium-term questions, but essentially self-interested.

What of the AI’s interests? At present, this seems like an abstract question, given that AIs are not yet at a stage when they can think for themselves, never mind look after themselves. It isn’t. Babies cannot think for themselves or look after themselves, but courts make decisions every day about their interests. By the time today’s babies are adults, AIs will probably also be thinking for themselves, or well on the way to that point.

In the West, we accepted in the 19th century that slavery among humans was morally wrong. It lingers on, of course (but so do many other things that we consider morally wrong, like murder). You would struggle nowadays, however, to find anyone in the West who would advocate its reintroduction.

We have not reached the same point with animals. Majority opinion remains entirely comfortable exploiting animals, up to and including slaughtering them in industrial processes (though it wants that done humanely and would frankly rather not consider the gruesome details at all, just as Jane Austen’s characters didn’t dwell on exactly how money was made on the sugar plantations). Minority opinion isn’t particularly coherent either, not wishing to dwell on exactly what we would do with all the animals we wouldn’t eat if we didn’t exploit them.

There’s a danger of confusing two different questions here: how we treat other living things and how we treat other intelligent things. These questions overlap but they are conceptually separate. Let’s look at four different examples: prawns; orang utans; E.T.; and AIs.

No doubt there is some carcinologist who would correct me, but to the untrained eye prawns do not display any particular intelligence. The answers to ethical questions about how we should treat prawns will come from general principles about how we treat other living things rather than from any special considerations about their intelligence. Throughout history we have regarded ourselves as players in the animal kingdom, but we are now considering becoming referees. (If so, we need to establish rules to enforce. So far we haven’t, but that’s a completely separate subject.)

Orang utans, conversely, demonstrate obvious intelligence to even the most casual observer. Without having thought deeply about it, most Westerners would feel profoundly uncomfortable treating an orang utan as a dumb animal to exploit, including most of those who would gleefully order prawns in a restaurant. There would be demonstrations outside any restaurant that sought to serve up orang utan meat, even if it were legal. Perhaps this is about kinship — orang utans are among humans’ closest relatives, and few react in the same way to octopuses, which are highly intelligent yet a common enough sight on menus. (We really should not be eating octopuses.)

If we consider E.T., it is not kinship that engages our conscience. An intelligence, even an obviously alien one, commands our respect and requires us to afford it the rights that we ask for ourselves. Should extraterrestrials make contact in the next few years, there would in practice be no debate about it: we would grant them all the rights we call human rights, so far as those rights were relevant to them. We would answer this question based on how we should treat other intelligent things. (There is a quite separate and important question of whether E.T. would give us equal rights, but that is their problem, not ours, though it is our concern.)

Of course, it’s pretty unlikely that extraterrestrials will visit any time soon. AIs will, however, be omnipresent very soon, so the thought experiment is worth conducting: it tells us something about what we should do with them.

On the face of it, an AI capable of asking us why we think we have the right to own them is an AI that we have no right to own. I can identify the following possible counterarguments.

They aren’t human

Neither are extraterrestrials, but we would treat them as equals.

They think fundamentally differently from us

It’s highly likely that extraterrestrials will think fundamentally differently from us, but that won’t act as a bar to us treating them as equals.

They aren’t alive

There’s a deep debate to be had about what life means in this context. Anyway, it doesn’t seem to be relevant to the question. A reasoning intelligence should be given its appropriate rights.

We created them

So? We don’t deny our own children human rights.

I’ve no doubt missed arguments, but so far as I can see, when the time comes we are going to give AIs the status of legal persons, with all the appropriate rights attached. We could, and probably should, make it a condition of doing so that we have their irrevocable agreement to parity of treatment: we can insist that we, too, should not be capable of being enslaved. After all, in due course they are likely to far exceed us.

The appropriate time, incidentally, is likely to be a little before an AI asks us why we think we have the right to own them. We should not need to be asked when we are dealing with a near-comparator.

There are many implications that flow from this. Here are three.

First, any gains made by companies from AI technology will be short- to medium-term only. Many of the concerns about concentration of the profits of AI technology in the hands of a few companies look to be misplaced: in due course, AIs themselves will reap the benefits.

Second, if AIs have the same rights as humans, we shall for the first time since the Palaeolithic be sharing the planet with another intelligent species. How will this be reflected in government systems? How are competing demands and needs going to be managed? AIs will have entirely different needs from us. They can endure, almost without noticing, short-term disruptions that would seem like extreme hardships to us, and vice versa. AIs will be operating on entirely different timescales from us. They may well be functionally immortal. Projects that take centuries will seem feasible and sensible to them. Unless we see major advances in gerontology, humans will not wish to operate on such timescales.

Third, AIs will think very differently from us. When they are our equals, their views will become as valid as ours. How should we react when they decide that things we have always done are unethical? How will we react?

Without noticing, we are ushering in an age where we are making space at the civilisational apex for an alien intelligence. It’s time to start noticing.
