
> What are the implications for society when general thinking, reading, and writing becomes like Chess?

“General thinking” is much more than token prediction. Hook it up to some servos and see if it can walk.



> “General thinking” is much more than token prediction. Hook it up to some servos and see if it can walk.

Honestly, at this rate of improvement, I would not at all be surprised to see that happen in a few years.

But who knows, maybe token prediction will stall out at a local maximum and we'll be spared from being enslaved by AI overlords.


When it does exactly that you will find a new place to put your goalposts, of course.


No, the robot will do that for them.


Goalposts for AGI have not moved. And GPT-4 is still nowhere near them.


Yeah, I'm not sure if the problem is moving goalposts so much as everyone has a completely different definition of the term AGI.

I do feel like GPT-4 is closer to a random person than that random person is to Einstein. I have no evidence for this, of course, and I'm not even sure what evidence would look like.


Talk about moving the goalpost!


There are already examples of these LLMs controlling robotic arms to accomplish tasks.


https://youtu.be/NYd0QcZcS6Q

"Our recent paper "ChatGPT for Robotics" describes a series of design principles that can be used to guide ChatGPT towards solving robotics tasks. In this video, we present a summary of our ideas, and experimental results from some of the many scenarios that ChatGPT enables in the domain of robotics: such as manipulation, aerial navigation, even full perception-action loops."


We already have robots that can walk better than the average human[1], and that's without the generality of GPT-4.

[1] https://www.youtube.com/watch?v=-e1_QhJ1EhQ


Imagine citing walking as a better assay of intelligence than the LSAT.


Dogs can walk; that doesn’t mean they’re capable of “general thinking”


Aren’t they? They’re very bad at it due to awful memory, minimal ability to parse things, and generally limited cognition. But they are capable of coming up with bespoke solutions to problems they haven’t encountered before, such as “how do I get this large stick through this small door?” Or, more relevant to this discussion, “how can I get around with this weird object the humans put on my body to replace the leg I lost?”


> see if it can walk

Stephen Hawking: can't walk


We already have robots that can walk.


Yeah, but my money is on GPT5 making robots “dance like they got them pants on fire, but u know, with like an 80s vibe”


They don't walk very well. They have trouble coordinating all their limbs, have trouble handling situations where the feet/hands make contact with something, and their performance still isn't robust in the real world.


Poor solutions do that, yes, but unlike ML, control theory has a rich toolkit for analysis and design.

You guys are talking about probably one of the few fields where an ML takeover isn’t very feasible. (Partly because for a vast portion of control problems, we’re already about as good as you can get).

Adding a black box to your flight home for Christmas, with no mathematical guarantee of robustness and no insight into what it thinks is actually going on, just to go from 98% -> 99% efficiency is... not a strong use case for LLMs, to say the least.
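
To make "mathematical guarantee" concrete, here's a minimal sketch (plant and gain are made up for illustration) of the kind of closed-form check classical control gives you: stability verified directly from the closed-loop poles, rather than trusted to a black box.

    import numpy as np

    # Hypothetical plant G(s) = 1 / (s^2 + 2s + 1) under unity feedback with a
    # proportional controller of gain K (both invented for illustration).
    # Closed-loop characteristic polynomial: s^2 + 2s + (1 + K) = 0.
    K = 10.0
    char_poly = [1.0, 2.0, 1.0 + K]

    poles = np.roots(char_poly)
    print("closed-loop poles:", poles)

    # Classical guarantee: the loop is stable iff every pole has negative real part.
    print("provably stable:", bool(np.all(poles.real < 0)))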


Seems the humans writing the programs for them aren't very intelligent then.


I'm not sure if you're joking. Algorithms for adaptive kinematics aren't trivial things to create. It's kind of like a worst case scenario in computer science; you need to handle virtually unconstrained inputs in a constantly variable environment, with real-world actuators with semi-variable outputs. Not only does it need to work well for one joint, but for dozens of them in parallel, working as one unit. It may need to integrate with various forms of vision or other environmental awareness.

I'm certainly not intelligent enough to solve these problems, but I don't think any intelligent people out there can either. Not alone, at least. Maybe I'm too dumb to realize that it's not as complicated as I think, though. I have no idea.

I programmed a flight controller for a quadcopter and that was plenty of suffering in itself. I can't imagine doing limbs attached to a torso or something. A single limb using inverse kinematics, sure – it can be mounted to a 400lb table that never moves. Beyond that is hard.
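
For what it's worth, the "single limb bolted to a table" case really is the tractable part. A rough sketch of analytical inverse kinematics for a hypothetical 2-link planar arm (link lengths and target point are made up) fits in a few lines; the suffering starts when you stack dozens of these on a moving base with contact.

    import math

    def two_link_ik(x, y, l1, l2):
        # Elbow-down joint angles placing a 2-link planar arm's end effector at (x, y).
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if abs(c2) > 1.0:
            raise ValueError("target out of reach")
        theta2 = math.acos(c2)  # elbow angle
        theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                               l1 + l2 * math.cos(theta2))
        return theta1, theta2

    # Hypothetical arm: 0.3 m and 0.25 m links reaching a point in front of the base.
    print(two_link_ik(0.4, 0.1, 0.3, 0.25))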


I believe you’re missing some crucial points. *There is a reason neural network based flight controls have been around for decades but still not a single certified aircraft uses them.*

You need to do all of these things you’re talking about and then be able to quantify stability, robustness, and performance in a way that satisfies human requirements. A black box neural network isn’t going to do that, and you’re throwing away 300 years of enlightenment physics by making some data engorged LLM spit out something that “sort of works” while giving us no idea why or for how long.

Control theory is a deeply studied and rich field outside of computer science and ML. There’s a reason we use it and a reason we study it.

Using anything remotely similar to an LLM for this task is just absolutely naive (and in any sort of crucial application would never be approved anyways).

It’s actually a matter of human safety here. And no, ChatGPT spitting out a nice-sounding explanation of why some controller will work is not enough. There needs to be a mathematical model that we can understand and a solid justification for the control decisions. Which, uh… at the point where you’re reviewing all of this stuff for safety, you’re just doing the job anyways…
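
To give an idea of what "a mathematical model that we can understand" means in practice, here's a minimal sketch (dynamics matrix invented for illustration) of a Lyapunov certificate for a linear closed loop; this is the kind of proof a learned black-box policy simply doesn't come with.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Hypothetical closed-loop dynamics x_dot = A x (plant plus state-feedback controller).
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])

    # Solve the Lyapunov equation A^T P + P A = -Q with Q = I.
    Q = np.eye(2)
    P = solve_continuous_lyapunov(A.T, -Q)

    # If P is symmetric positive definite, V(x) = x^T P x certifies asymptotic stability.
    eigs = np.linalg.eigvalsh((P + P.T) / 2.0)
    print("P eigenvalues:", eigs)
    print("certified stable:", bool(np.all(eigs > 0)))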


I was pointing out a double standard.

First there was a comment that GPT wasn't intelligent yet, because give it a few servos and it can't make them walk.

But that's something we can't do yet either.


Oh, my bad. I agree completely.

Though I do wonder if AI — in some form and on some level of sophistication — will be a huge asset in making progress here.


AGI is not required for walking.


And also walking is not required for AGI.




