We should take AI risks seriously, but doing so requires conceptual clarity, says Prof Virginia Dignum. Plus letters from John Robinson and Eric Skidmore
The concern expressed by Yoshua Bengio that advanced AI systems might one day resist being shut down deserves careful consideration (AI showing signs of self-preservation and humans should be ready to pull plug, says pioneer, 30 December). But treating such behaviour as evidence of consciousness is dangerous: it encourages anthropomorphism and distracts from the human design and governance choices that actually determine AI behaviour.
Many systems can protect their continued operation. A laptop’s low-battery warning is a form of self-preservation in this sense, yet no one takes it as evidence that the laptop wants to live: the behaviour is purely instrumental, without experience or awareness. Linking self-preservation to consciousness reflects a human tendency to ascribe intentions and feelings to artefacts, not any intrinsic consciousness in the machine.


