»Nature« vs. Turing on »Can Machines Think?«
A recent article in Nature provocatively proclaimed in its title that the »evidence« for the question — »Does AI already have human-level intelligence?« — is »clear«, claiming that »by any reasonable criteria, the vision of human-level machine intelligence laid out by Alan Turing […] is now a reality.« 1
In October 1950, Alan Turing »proposed to consider the question, ‘Can machines think?'« in a contribution to »Mind«, a quarterly review of psychology and philosophy. 2 While pointing out that any attempt to formulate a coherent answer »should begin with definitions of the meaning of the terms ‘machine’ and ‘think’,« Turing recognised, correctly, that the meaning of these words is vague, and that examining »how they are commonly used« by means of »a statistical survey such as a Gallup poll« would be »absurd.« Instead, he tentatively suggested to »replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.« He dubbed the reformulation of the original question »The Imitation Game«, essentially asking whether there »are […] imaginable digital computers which would do well in the imitation game,« where »the object of the game for the interrogator [(i.e., the player)] is to determine which of the other two is the [machine] and which is the [human]« by asking questions via a text-based user interface. Turing then went on to »conjecture« that »in about fifty years’ time it will be possible to programme computers […] to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. [sic] chance of making the right identification after 5 minutes of questioning.« Turing’s »imitation game«, a reformulation of the question »Can machines think?«, has since been relabeled the so-called »Turing Test« and treated as a litmus test (i.e., a decisively indicative test) »of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human« in its »standard« interpretation. 3 Referring to Turing’s »imitation game« as the »Turing Test«, thereby tacitly implying there is a fixed test, is highly misleading, though. »There is no fixed Turing test; rather, a battery of devices constructed on this model.« 4
The Nature article points out that »in March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI, […] was judged by humans in a Turing Test 5 setup, to be human 73% of the time — more often than actual humans were,« concluding that »insofar as individual humans have general intelligence, current LLMs do, too.« To put this accomplishment into perspective, it is instructive to note that a previous version, GPT-3.5, not only failed to pass the »Turing Test« but was outperformed at tricking participants into thinking it was human by ELIZA, one of the first chatbots, developed in the mid-1960s to simulate a psychotherapist of the Rogerian school (in which the therapist often reflects the patient’s words back to the patient). 6 The fact that the training of GPT-4.5, codename Orion, was, according to the Wall Street Journal, »crazy expensive«, with »a six-month training run [estimated to] cost around half a billion dollars in computing costs alone,« 7 situates the accomplishment in a slightly less sensational context.
Although Turing’s paper was titled »Computing Machinery and Intelligence«, Turing never bothered to articulate what he meant by the term »intelligence.« A rather astonishing omission, considering the word »intelligence« is used only thrice, once in the title and twice in section »7. Learning Machines«, 2 presupposing that readers share whatever notion of »intelligence« Turing had in mind. In line with Turing, the authors of the Nature article »assume, […] that humans have general intelligence.« 1 (emphasis mine) Whilst recognising that »some think general intelligence does not exist at all« 8 9 and conceding that »this view is coherent and philosophically interesting,« they simply disregard it as being »too disconnected from most AI discourse«, in what can only be described as an act of wilful ignorance.
»Rather than stipulating a definition« of »human-level general intelligence,« the authors »draw on both actual and hypothetical cases of general intelligence — from Einstein to aliens to oracles — to triangulate the contours of the concept and refine it more systematically,« essentially engaging in “hand waving”. Whilst this analogical reasoning 10 uses metaphors and heuristic similarities to form an experimental guess about the nature of “intelligence” based on previous experience of known and apparently similar objects, arguments by analogy are considered weak or fallacious in formal logic, as they rely on induction: assuming that because two objects are similar in some aspects, they are similar in others. This type of reasoning has frequently been challenged, particularly when comparing the human mind’s language faculty to other cognitive processes 11 or when using behavioral analogies to explain language acquisition. 12 That said, it seems premature to celebrate a victory for LLMs achieving »general intelligence.« There are alternative explanations of the current data, and in fact, as the authors admit, yet discard as »too disconnected«, there are ample grounds to believe that »artificial general intelligence« doesn’t even exist.
The most astonishing omission, though, is that the authors fail to even allude to Turing’s own beliefs in the matter, namely that »the original question, “Can machines think?” [is] too meaningless to deserve discussion.« Now if the original question, according to Turing, is »too meaningless to deserve discussion,« the same must hold for its reformulation in the form of the »imitation game,« later relabeled as a “test” to »see whether a jury can be fooled into thinking that a human is carrying out the observed performance« in order to claim that it has been “empirically established” that a computer can “think.” »There is a great deal of often heated debate about these matters in the literature of the cognitive sciences, [computer science,] artificial intelligence, and philosophy of mind, but it is hard to see that any serious question has been posed,« 4 as Chomsky rightly points out, echoing Turing.
»The question of whether a computer is playing chess, or doing long division, or translating Chinese, or “thinking” is like the question of whether robots can murder or airplanes can fly — or people; after all, the “flight” of the Olympic long jump champion is only an order of magnitude short of that of the chicken champion. These are questions of decision, not fact; decision as to whether to adopt a certain metaphoric extension of common usage. There is no answer to the question whether airplanes really fly (though perhaps not space shuttles[, or ski jumpers although in German ski jumpers do fly]). Fooling people into mistaking a submarine for a whale doesn’t show that submarines really swim; nor does it fail to establish the fact. There is no fact, no meaningful question to be answered, as all agree, in this case. The same is true of computer programs, as Turing took pains to make clear […] pointing out that the question whether machines think “may be too meaningless to deserve discussion,” being a question of decision, not fact, though he speculated that in 50 years, usage may have “altered so much that one will be able to speak of machines thinking without expecting to be contradicted” — as in the case of airplanes flying (in English, at least), but not submarines swimming. Such alteration of usage amounts to the replacement of one lexical item by another one with somewhat different properties. There is no empirical question as to whether this is the right or wrong decision.« 4
Finally, what the Nature article does prove is that, despite being off by two and a half decades, Turing was right to believe that:
»At the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.«
Notes
• While unrelated, Turing’s conjecture regarding the discourse »about machines thinking« reminds me of Moore’s law, a frequently invoked prediction strategically deployed to justify extraordinary capital investments into silicon manufacturing and data centre construction. Analogous to Moore’s law, the discussion around Turing’s conjecture, relabeled as the »Turing Test« in conversations about »machine intelligence,« is deployed as part of a »corporate propaganda« effort to distract from the astronomical investments and subsidies into transnational high-tech corporations such as Microsoft, Amazon, Meta, Alphabet…
• Jeffrey Watumull, A Turing Program for Linguistic Theory, lingbuzz/001550, Jun. 2012.
1. Eddy Chen, Mikhail Belkin, Leon Bergen, David Danks, “Does AI already have human-level intelligence? The evidence is clear”, Nature, Vol. 650, Feb. 5, 2026. ↩︎
2. Alan Turing, “Computing Machinery and Intelligence”, Mind, New Series, Vol. 59, No. 236, pp. 433–460, Oct. 1950. ↩︎
3. “Turing Test”, Wikipedia, retrieved Feb. 26, 2026. ↩︎
4. Noam Chomsky, “Powers and Prospects: Reflections on Human Nature and the Social Order”, Haymarket Books, 2015. ↩︎
5. Cameron Jones, Benjamin Bergen, “Large Language Models Pass the Turing Test”, arXiv preprint arXiv:2503.23674, Mar. 31, 2025. ↩︎
6. Benj Edwards, “1960s chatbot ELIZA beat OpenAI’s GPT-3.5 in a recent Turing test study”, Ars Technica, Dec. 1, 2023. ↩︎
7. Deepa Seetharaman, “The Next Great Leap in AI Is Behind Schedule and Crazy Expensive”, The Wall Street Journal, Dec. 20, 2024. ↩︎
8. Jobst Landgrebe, Barry Smith, “There is no Artificial General Intelligence”, arXiv preprint arXiv:1906.05833, Nov. 27, 2019. ↩︎
9. Richie Etwaru, “There Could Never Be An Artificial General Intelligence”, Forbes, Jul. 1, 2024. ↩︎
10. Svend Erik Larsen, “Translation and analogical reasoning”, Orbis Litterarum, 79, pp. 211–224, 2024. ↩︎
11. Jesse Prinz, “Resisting the linguistic analogy: A commentary on Hauser, Young, and Cushman”, Moral Psychology, Volume 2, 2008. ↩︎
12. David King, “Large Language Models and the Rationalist-Empiricist Debate”, arXiv preprint arXiv:2410.12895, 2024. ↩︎