Google DeepMind Paper Claims LLMs Will Never Achieve Consciousness

Original: Google DeepMind Paper Argues LLMs Will Never Be Conscious | Philosophers said the paper’s argument is sound, but that “all these arguments have been presented years and years ago.”

Why This Matters

Challenges industry hype about conscious AI from within Google DeepMind itself

Google DeepMind scientist Alexander Lerchner published a paper arguing that no AI or computational system will ever become conscious, because such systems depend on human agents to organize data and lack physical embodiment.

Alexander Lerchner, a senior staff scientist at Google DeepMind, published "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness," arguing that AI systems are fundamentally "mapmaker-dependent": they require humans to organize continuous physics into meaningful states. The paper contends that the belief that AI can achieve consciousness through data manipulation is a fallacy, as AI lacks physical embodiment and intrinsic motivations. Philosophers welcomed the argument coming from within a major AI company but noted that these points have been made for decades. Johannes Jäger, an evolutionary systems biologist, emphasized that unlike humans, who must eat and breathe, "an LLM doesn't do that. It's just a bunch of patterns on a hard drive." The paper stands in contrast to narratives from AI executives such as DeepMind's Demis Hassabis, who promotes artificial general intelligence as revolutionary.

Source

404media.co — Read original →