
It's been a while since I read the original article,

https://courses.cs.umbc.edu/471/papers/turing.pdf

> The Imitation Game

> "Can machines think?"

We may need to redefine what these words mean.

Whether machines can think, and whether we believe so, are distinct.

If we're just a kind of machine ourselves, then machines trivially can think, and all we can say is that LLMs haven't surpassed us yet.

While LLMs still give away obvious signatures (their flaws are LLM-y rather than humanlike), they play the imitation game better than any machines before them. I've lived long enough, and played with enough simpler chatbots, to believe that the simple text-based imitation game Turing proposed will be won by computers before machines achieve reasoning ability that surpasses the smartest humans.

> [...] the Turing Test is AI-complete and is a test for AGI.

Since Turing never used the term AGI, this is an interpretation.

Since we cannot determine whether a machine can think by comparing its anatomy to ours and concluding that whatever we do, it must do the same, we are bound to compare its ability to imitate us. Which means we can't ultimately know; we can only find ourselves convinced.

So there's no objective point at which to place the goalpost.

So constantly moving it is an expression of acclimatisation.

I'm impressed by LLMs, in spite of the hype and the homework cheating.

If anything, the ability of LLMs to imitate human labor shows how far from thinking a lot of human behavior is.



> If we're just a kind of machine

That is not true. Even our cells, with features that look a lot like machinery, such as the proton pump, are on the whole several orders of magnitude more complex than any machine. Even a single human cell is more like an ecosystem than a machine, let alone entire humans, or even just the human brain. Consider that both cells and humans are capable of reproduction.

> If anything, the ability of LLMs to imitate human labor

Within extremely narrow confines, and quite often veering into just conjuring up nonsense.


> That is not true.

Why? I'd say it's been self-evidently true for at least three decades now.

> Even our cells, with features that look a lot like machinery such as a proton pump, is on the whole several orders of magnitude more complex than any machine.

How many is several? I think that, at the level of a proton pump, the complexity of individual components comes close to the order of complexity humanity deals with in man-made machines.

Also, at this level, things really look like machines, act like machines, quack like machines - there's no reason not to call them machines, given that they obviously are. It's naturally occurring molecular nanotech.

> Even a single human cell is more like an ecosystem than a machine.

Certainly. But then, an ecosystem is defined as a system made of a bunch of varied stuff interacting with each other, finding balance through a set of feedback loops. An ecosystem of machines is still an ecosystem, and is arguably a machine in itself, too - after all, the term "machine" also applies to self-balancing, feedback-driven systems, ever since we invented control theory and the formalism to describe feedback loops.
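
To make that concrete, here's a minimal sketch (Python, all names my own invention) of the kind of self-balancing feedback loop control theory would happily call a machine:

    # A proportional controller nudging a system toward a setpoint.
    # Purely illustrative; no real thermostat works exactly like this.

    def thermostat_step(temp, setpoint, gain=0.1):
        error = setpoint - temp      # the feedback signal
        return temp + gain * error   # the corrective response

    temp = 15.0
    for _ in range(50):
        temp = thermostat_step(temp, setpoint=21.0)
    print(round(temp, 2))  # ~21.0: balance found through feedback alone

Nothing in that loop "knows" the balance point; it emerges from the feedback, which is exactly the sense in which an ecosystem of machines is itself a machine.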

> Let alone entire humans or even just the human brain.

Complex machines don't stop being machines when you keep adding moving parts to increase complexity. Or: at which point between a proton pump and a human being do you believe the assemblage of molecular nanotech stops being a machine?

> Consider that both cells and humans are capable of reproduction.

What's that supposed to tell us? Human reproduction involves cell reproduction.


> a single human cell is more like an ecosystem than a machine

Good point. Not to mention the massive reliance on organisms like mitochondria and bacteria that don’t even share the host’s DNA.

> [imitate human labor] within extremely narrow confines and quite often going over into just conjuring up nonsense

Much like human labor.

So the intelligent, reflective, and thoroughly iterated work is hardly replicated at all, while the poorly imitated, easily repeatable coursework and boring paper sludgework is replicated excellently.

So we don’t just get to criticise LLMs for not actually being intelligent; we similarly get to criticise humans for not being intelligent when we think we are.


> Good point. Not to mention the massive reliance on organisms like mitochondria and bacteria that don’t even share the host’s DNA.

So? Does your car stop being a machine just because it's a complex system of moving parts, many of them dynamically balanced through feedback loops, involving components sourced from different vendors and substances that aren't in the original manufacturing data sheet?

Exactly what insight does this give us? I feel this is trying to contrast a single machine "unit" against a complex system, while also sneakily committing a naturalistic fallacy by using "machine" vs. "ecosystem" to imply "machine" vs. "life" in the magical sense (i.e. as if life were something beyond a physical process).


Yes, the Turing Test tries to give an operational definition of 'think': can a computer give the same responses as a thinking human?

Further, we can even disallow decisions based on the human/machine signature: the computer doesn't need to imitate human frailty, and examiners decide based on the quality of the output, not on whether it feels human or machine. I think this is also why one of the stipulations is that neither the human nor the machine is physically visible.
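
Stated as a procedure, the blindness is explicit. A rough sketch (ask_human, ask_machine, and judge are hypothetical stand-ins, not any real API):

    import random

    def imitation_game(ask_human, ask_machine, judge, questions):
        # The judge sees only two anonymous text transcripts and must
        # name the machine. Returns True if the machine fools them.
        labels = ["A", "B"]
        random.shuffle(labels)  # hide which respondent got which label
        respondents = dict(zip(labels, [ask_human, ask_machine]))
        transcripts = {label: [(q, answer(q)) for q in questions]
                       for label, answer in respondents.items()}
        return judge(transcripts) != labels[1]  # labels[1] is the machine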

But even then, this goal is far away. We are not asking just about simple, short chat imitation games, but whether a computer can operationally do everything humans are capable of in the textual medium. Probing that often requires expert examiners.

The chat game version can be solved. But, the foundational goal is far away.


> Yes, the Turing Test tries to give an operational definition of 'think'- can a computer give the same response as a thinking human.

So it's quite neat that we've managed to beat the test by brute-forcing the right NN architecture on a goal function that literally is just "give the same response as a thinking human", in the full generality of that phrase.
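
And the goal function really is that bald. A minimal sketch of the standard next-token objective (PyTorch; shapes and names here are illustrative):

    import torch
    import torch.nn.functional as F

    def next_token_loss(logits, tokens):
        # Position t predicts token t+1 of the human-written text:
        # cross-entropy between the model's guess and what a human wrote.
        return F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),  # predictions
            tokens[:, 1:].reshape(-1),                    # human text, shifted
        )

    # Toy shapes: batch of 2 sequences, length 8, vocab of 100.
    tokens = torch.randint(0, 100, (2, 8))
    logits = torch.randn(2, 8, 100)
    print(next_token_loss(logits, tokens))

Minimising that loss over enough human text is, almost word for word, "give the same response as a thinking human".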

> But even then, this goal is far away. We are not asking just about simple short chat imitation games but whether a computer can operationally do everything humans are capable of in the textual medium. This often requires experts.

That is moving the goalposts. Or just another game entirely - which is fine on its own, as a further milestone.



