The hardest part of creating conscious AI might be convincing ourselves it’s real

Leaf your prejudices at the door. Black Salmon

As far back as 1980, the American philosopher John Searle distinguished between strong and weak AI. Weak AIs are merely useful machines or programs that help us solve problems, whereas strong AIs would have genuine intelligence. A strong AI would be conscious.

Searle was sceptical of the very possibility of strong AI, but not everyone shares his pessimism. Most optimistic are those who endorse functionalism, a popular theory of mind that takes conscious mental states to be determined solely by their function. For a functionalist, the task of producing a strong AI is merely a technical challenge. If we can create a system that functions like us, we can be confident it is conscious like us.

Illustration of a human talking to a robot. ‘Anyone there?’ Littlestar23

Recently, we have reached a tipping point. Generative AIs such as Chat-GPT are now so advanced that their responses are often indistinguishable from those of a real human – see this exchange between Chat-GPT and Richard Dawkins, for instance.

This issue of whether a machine can fool us into thinking it is human is the subject of a well-known test devised by English computer scientist Alan Turing in 1950. Turing claimed that if a machine could pass the test, we ought to conclude it was genuinely intelligent.

Back in 1950 this was pure speculation, but according to a pre-print study from earlier this year – that’s a study that hasn’t been peer-reviewed yet – the Turing test has now been passed. Chat-GPT convinced 73% of participants that it was human.

What’s interesting is that nobody is buying it. Experts are not only denying that Chat-GPT is conscious but seemingly not even taking the idea seriously. I have to admit, I’m with them. It just doesn’t seem plausible.

The key question is: what would a machine actually have to do in order to convince us?

Experts have tended to focus on the technical side of this question. That is, to discern what technical features a machine or program would need in order to satisfy our best theories of consciousness. A 2023 article, for instance, as reported here in The Conversation, compiled a list of 14 technical criteria or “consciousness indicators”, such as learning from feedback (Chat-GPT didn’t make the grade).

But creating a strong AI is as much a psychological challenge as a technical one. It is one thing to produce a machine that satisfies the various technical criteria that we set out in our theories, but it is quite another to suppose that, when we are finally confronted with such a thing, we will believe it is conscious.

The success of Chat-GPT has already demonstrated this problem. For many, the Turing test was the benchmark of machine intelligence. But if it has been passed, as the pre-print study suggests, the goalposts have shifted. They might well keep shifting as technology improves.

Myna difficulties

This is where we get into the murky realm of an age-old philosophical quandary: the problem of other minds. Ultimately, one can never know for sure whether anything other than oneself is conscious. In the case of human beings, the problem is little more than idle scepticism. None of us can seriously entertain the possibility that other humans are unthinking automata, but in the case of machines it seems to go the other way. It’s hard to accept that they could be anything but.

A particular problem with AIs like Chat-GPT is that they seem like mere mimicry machines. They’re like the myna bird who learns to vocalise words with no idea of what it is doing or what the words mean.

Myna bird. ‘Who are you calling a stochastic parrot?’ Mikhail Ginga

This doesn’t mean we will never make a conscious machine, of course, but it does suggest that we might find it difficult to accept it if we did. And that might be the ultimate irony: succeeding in our quest to create a conscious machine, yet refusing to believe we had done so. Who knows, it might have already happened.

So what would a machine need to do to convince us? One tentative suggestion is that it might need to exhibit the kind of autonomy we observe in many living organisms.

Current AIs like Chat-GPT are purely responsive. Keep your fingers off the keyboard and they’re as quiet as the grave. Animals are not like this, at least not the ones we commonly take to be conscious, like chimps, dolphins, cats and dogs. They have their own impulses and inclinations (or at least appear to), along with the desire to pursue them. They initiate their own actions on their own terms, for their own reasons.
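To make that contrast concrete, here is a deliberately toy sketch in Python. Nothing in it reflects how Chat-GPT or any real system is built; the function names and the list of “impulses” are invented purely for illustration. The first loop only ever reacts when prompted, while the second acts unprompted, on its own schedule.

import random
import time

# Purely hypothetical illustration; not how any real AI works.

def responsive_system():
    # Silent until prompted: nothing happens without user input.
    while True:
        prompt = input("You: ")
        print("AI: here is a reply to", repr(prompt))

def autonomous_agent():
    # Acts unprompted, at its own tempo, on internally generated "impulses".
    impulses = ["explore", "rest", "seek food", "play"]
    while True:
        time.sleep(1)
        goal = random.choice(impulses)
        print("Agent decides, unprompted, to", goal)

Of course, a scripted loop like this is still mere mimicry; the open question is what kind of genuine, unscripted autonomy, if any, would change our minds.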

If we could create a machine that displayed this type of autonomy – the kind of autonomy that would take it beyond a mere mimicry machine – would we finally accept it was conscious?

It’s hard to know for sure. Maybe we should ask Chat-GPT.


David Cornell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
