
The comments on this thread are proof of the AI effect: people will continually push the goalposts back as progress occurs.

“Meh, it’s just a fancy word predictor. It’s not actually useful.”

“Boring, it’s just memorizing answers. And it scored in the lowest percentile anyways.”

“Sure, it’s in the top percentile now but honestly are those tests that hard? Besides, it can’t do anything with images.”

“Ok, it takes image input now but honestly, it’s not useful in any way.”



Exactly. This is an early version of a technology that in a short time span might wipe out the need for a vast number of knowledge workers, most of whom are still unaware of this or in denial about it.

There are two mistakes people make with this:

1) assuming this is the definitive and final answer as to what AI can do. Anything you think you know about the limitations of this technology is probably already a bit out of date. OpenAI have been sitting on this one for some time. They are probably already working on v5 and v6, and those are not going to take that long to arrive. This is exponential, not linear progress.

2) assuming that their own qualities are impossible for an AI to match and that this won't affect whatever it is they do. I don't think there's a lot that is fundamentally out of scope here, just a lot that needs to be refined further. Our jobs are increasingly going to be working with, delegating to, and deferring to AIs.


I’m one of these skeptics, but it’s not moving the goalposts. These goalposts were already there, in some sort of serial order in which we expect them to be reached. It is good that when tech like this satisfies one of the easier/earlier goalposts, skeptics refine their criticism based on the evidence.

You will see skepticism until the tech is ubiquitous; Tesla's tech, for example - it’s iterative, and there are still skeptics about its current implementation.


It’s one thing to be skeptical of the state of the art and only believe something when you actually see it working (a useful antidote against vaporware).

It’s another to keep making wrong assertions and predictions about the pace of advancement because of a quasi-religious belief that humans with meat-brains are somehow fundamentally superior.


Expecting what we collectively call “artificial intelligence” to mimic our own intelligence, our understanding of which is continuously being refined, does not seem like a quasi-religious belief.

Intelligence and consciousness are at the fringe of our understanding, so this skeptical approach seems like a reasonable, scientific way to categorize computer programs that are intended to be called “artificial intelligence”. We refine our hypothesis of “this is artificial intelligence” once we gain more information.

You’re free to disagree of course, or to call these early programs “artificial intelligence”, but to a lot of folks they don’t satisfy the crude hypothesis above. This doesn’t mean they aren’t in some ways intelligent (pattern recognition could be a kind or degree of intelligence; it certainly seems required).


The part I push back on is the confidence with which people claim these LLMs “are definitely not intelligent / thinking”.

We can’t even define clearly what human thinking is, yet so many folks claim “nope, LLMs are just pattern matching. Wake me up when it actually has a thought.”

And there are two points to make on that: the first is, again, that we can’t even explain our own thoughts or rational thinking. And the second is that I’ve yet to see how it even matters.

The output of GPT-4, for example, is pretty much on par with your average person on certain topics. Whether or not it’s “truly thinking” under the hood is irrelevant, imo, if it gives a really good illusion of it.


> We refine our hypothesis of “this is artificial intelligence” once we gain more information.

You're basically saying skepticism is the correct approach and it doesn't matter if we make confident yet wrong predictions about the (lack of) future potential of AI.

I mean, sure, that works too. But I think that's basically admitting the goalposts are moving.


You can call it that if you want, but it’s not the same as goalpost shifting for well-definable things like “universal healthcare” or “trans rights”. We don’t collectively agree on what artificial intelligence is, so it makes sense that it is constantly refined, and efforts that fall short are called out as such.


There isn't and never was any movement of goalposts. They have been exactly the same for 70 years. We want creative systems (in the Deutschian sense) that can create new explanatory theories, which lead to actual new knowledge. When an AI is capable of creating new explanatory theories that are GOOD (not word salad), we will have human-like AGI. GPT is no closer to this goal than ELIZA (though it is much more useful).


Bro what???!!?? GPT-4 is already being used as a personalized tutor on Khan Academy. It’s personally helped me understand difficult algorithms and CV applications in my undergrad classes. GPT-4 is about to revolutionize the world.


It’s about to revolutionize the world, yes. What you described is what this sort of approach is good at: acting as a repository and reformatter for already existing human knowledge. But that doesn’t mean it’s an AGI, because, as the person you’re responding to said, to be sure we have one of those requires making something that can create something beyond current human knowledge. (Or, at least, beyond just the logic that was contained in its training set.)


What it kind of boils down to is: is it a tool, or an entity? One could argue that IDEs and compilers each revolutionized the world.


Your average person has no idea what an IDE or compiler is. Many more people already know what ChatGPT is right now than will probably ever know what either of those two terms means.


That's because people haven't been imaginative enough to use them that way (they're too busy jailbreaking it to say racist things or proselytizing on social media). Even in the past 24 hours, some people have found a use for it in drug discovery, using its ability to synthesize and relate different types of knowledge. One of the main ways new knowledge arises is through connecting knowledge in disparate areas and finding relationships among them, and LLMs (especially GPT-4) have been demonstrated to be quite good in this area.


Seems like you're responding to a comment completely unrelated to mine...not sure what happened here. I never said otherwise.


You’re confusing AGI with useful AI. AI doesn’t have to become an AGI to change the world. I also haven’t seen anybody claiming the recent breakthroughs are AGI.


> I also haven’t seen anybody claiming the recent breakthroughs are AGI.

If you time-traveled back 50 years and told people that in the future a computer could ace almost any exam given to a high school student, most of them would consider that a form of AGI.

Now, the goalpost has shifted to “It’s only AGI if it’s more intelligent than the totality of humans”.

If you haven’t heard anyone claim that we’ve made advances in AGI, you heard me here first: I think GPT3+ is a significant advancement in humanity’s attempts to create AGI.


> If you time-traveled back 50 years and told people that in the future a computer could ace almost any exam given to a high school student, most of them would consider that a form of AGI.

The problem is that these sorts of things were thought to require some understanding of general intelligence, when in practice you can solve them pretty well with algorithms that clearly aren't intelligent and weren't made with an understanding of intelligence. Like, if you traveled back 100 years and told people that in the future a computer could beat any grandmaster at chess, they might consider that a form of AGI too. But we know with hindsight that it isn't true, that playing chess doesn't require intelligence, just chess prowess. That's not to say that GPT-4 or whatever isn't a step towards intelligence, but it's ludicrous to call it a significant advancement towards that goal.


That's another way to state the same thing actually.

One can adopt a static definition of "general intelligence" from a point in history and use it consistently. In this case, GPT3+ is a leap in humanity's quest for AGI.

One can also adopt a dynamic definition of "general intelligence" as you described. In this case the equivalent statement is that, in hindsight, GPT3+ shows that language ability does not require "AGI", but "merely" transformer models fed with lots of data. (And then humanity's goal would be to discover that nothing is "AGI" at all, since we'd have figured it all out!)

The fact that we see things differently in hindsight is already strong evidence that things have progressed significantly. It proves that we learned something we didn't know or expect before. I know this "feels" like every other day you've experienced, but let's just look at the big picture more rationally here.


I will continually push back on the concept of 'Artificial Intelligence'. It's a science fiction conceit, a fantasy, and I don't think it is ever possible to achieve (the creation of an actual artificial intelligence). And people who do think that are, imo, fantasists.

That being said, in the field of machine learning there are significant things being achieved. I was wowed by DeepMind's AlphaZero and its achievements in 'teaching itself' and playing Go at a level never seen before. I'm impressed by what Tesla is doing with self-driving. I'm less impressed by OpenAI's GPT-x because I don't think it's very useful technology (despite all the, imo, foolish talk of it doing away with all sorts of knowledge jobs and being able to 'tutor' people), but I do recognise that it also marks a step up in machine learning in the area of LLMs. None of this is 'Artificial Intelligence', however, and it is both silly and dangerous to conceptualise it as such.


> It's a science fiction conceit

What is the human brain then? I'm afraid you are bound to push so far that humans no longer qualify as intelligent.


You can kind of prove it is possible, can't you? I mean, we have ourselves, which we're sort of claiming is the ground-truth comparison for "intelligence". You can then see that the average human actually has limited intelligence when you look at, say, savants or hyper-intelligent people. Then it must be that some physical structure of people's bodies enables this higher degree of intelligence and removes the "limit", so to speak. The average brain has 86 billion neurons, which we know are mostly responsible for piecing together consciousness.

We also have extensive studies on all the ways we are actually really bad at processing input (a by-product of our primate ancestral heritage). There are entire textbooks on all of the different biases we have built-in. And there are clear and obvious limits to our perception, as well (I'm thinking of the five senses here).

Imagine you're constrained on neither the input side nor the processing side of this equation. It becomes kind of a mathematical inevitability that we will be able to create artificial intelligence. When anything can be tokenized and act as an "input", and we can run that through something that can process it the same way our brains can, only scaled up 10-fold (or more)...

If there is one thing we're good at, it is thinking that we are the center of the universe. I think that is blinding people to the possibility of AI. We can't fathom it, for lots of good and bad monkey reasons.


> I'm less impressed by OpenAI's GPT-x because I don't think it's very useful technology

Living in that sort of bubble must be very uncomfortable. Companies from virtually every category are pouring money into OpenAI, starting with Microsoft. Just go and take a look at their partners and which fields they belong to.


This area - so-called 'AI' - has a long history of malinvestment.

And it's remarkable that you cite Microsoft's involvement as some sort of standard of significance. A company that has a long history of non-innovation, alongside its disgraceful history of suffocating and extinguishing actual innovation. Founded by one of the most remarkably unimaginative and predatory individuals in the software industry. I'd suggest that seeing Microsoft investing in anything is only a good sign of a potential future rort (Gates' whole history of making money).


It's remarkable that you skipped past "companies from every category" and jumped straight into your ramblings about Microsoft. I don't care about Microsoft; I used them as an example to disprove your bollocks claim about GPT-x not being useful. As I said, take a look at the list of their current partners. One option would certainly be that you're the only one with insight and all these companies are led by morons with no vision or foresight; the alternative is that you're just wrong and too egotistic to admit that a piece of technology is better than you in many respects.


Suggest you lay off the personal attacks and review the site's Guidelines [1]. I'd already passed over and chosen to ignore the opening attack in your first comment, but you've ramped it up with this reply.

Finally, rather than just saying that 'all sorts of entities are investing in this hoopla', maybe try to make an actual case yourself as to what the value of this technology is. We should try to think for ourselves here, not just follow the (investing) herd.

[1] https://news.ycombinator.com/newsguidelines.html


Could you please elaborate on the distinction that you see between "artificial" intelligence and whatever it is that we as humans possess? Furthermore, what specific aspects of this intelligence are unachievable by an AI? Is it a "human intelligence is non-computational" line of thinking?


Machines are not alive; they are constructed, and for them to develop intelligence the capacity would either need to be constructed too (how?) or it would need to appear as an 'emergent quality'. I think the latter is the line that believers in the concept of 'AI' mostly take, but I see it as magical thinking: we have had no indications of such emergent behaviour in our experience with the machines we have constructed, nor are there, as far as I can see, any good reasons why we might hope or expect it to appear. I see it only as a part of the long history of humans and human cultures projecting their own intelligence and agency onto inanimate objects. Again, 'magical thinking'.

I acknowledge and am mostly fine with the idea that machines can 'learn'. But they learn (the game of Go, navigating a car in the real world, etc.) under our direction and training (even if they potentially go on to surpass our abilities in these tasks). They don't have any agency; they don't have any curiosity; they don't have any 'spirit of consciousness'; they are not intelligent. They have simply been trained and have learnt to perform a task. It's a great mistake to confuse this with intelligence. And the field itself is acknowledging this mistake as it matures, with the ongoing change of nomenclature from 'artificial intelligence' to 'machine learning'.


This raises several questions -- one of which is: "what is intelligence, then?"


Here is what it thinks of the shifting goalposts: https://raw.githubusercontent.com/soheil/fileshare/main/The%...


This is a good example of the “this is great, so I’m gonna settle here” type of people. They just stick to what’s popular today, without understanding that it will become the past anyway.

GPT is limited by its own design. The network is crude at the architectural level - which is easy to copy - and is simply scaled to an unusual level - which is the factor behind the recent development. The current situation is almost like running BFS on a cluster during a chess match: certainly the AI will be able to beat humans, but that can hardly change anything in real life, because it’s just BFS.
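
To make that analogy concrete, here's a toy sketch of the kind of brute-force game-tree search it points at (the code and names here are hypothetical illustrations, and one-pile Nim stands in for chess to keep it short). Exhaustive search plays optimally once you give it enough compute, but all the "skill" lives in enumerating the rules, nothing resembling understanding:

    # Plain exhaustive minimax over one-pile Nim: players alternately take
    # 1-3 stones, and whoever takes the last stone wins.
    def minimax(pile, maximizing):
        if pile == 0:
            # The previous player took the last stone and won.
            return -1 if maximizing else 1
        values = [minimax(pile - take, not maximizing)
                  for take in (1, 2, 3) if take <= pile]
        return max(values) if maximizing else min(values)

    # Search every continuation and pick the best take for the maximizer.
    def best_move(pile):
        return max((t for t in (1, 2, 3) if t <= pile),
                   key=lambda t: minimax(pile - t, maximizing=False))

    print(best_move(10))  # prints 2: optimal play, zero understanding

Scale the pile up (or swap in chess) and you only change how much compute the enumeration needs, not the nature of what the program is doing.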

I find the real problem with AI is that there are people who freak out and extrapolate from a select few examples. Meh, let GPT do that - because it can’t, by design. We still have a lot of things to do before AIs become generally applicable.


Yeah, but can GPT-4 be a hypocrite?



