The AI Bubble of 2023 (thereformedbroker.com)
60 points by yarapavan on Feb 13, 2023 | 140 comments


The other day, I was trying to build a web scraper, but I kept getting an error that the data wasn't utf-8 encoded. I gave chatgpt the error along with the first 10 bytes (e.g. b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03') and it informed me that this was a gzip-encoded binary file, and then it wrote me working code to decompress the file.
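(For anyone hitting the same thing: the core of the fix is only a few lines of Python. This is a minimal sketch of the idea, not the exact code it wrote me:)

    import gzip

    def to_text(raw: bytes) -> str:
        # 0x1f 0x8b is the gzip magic number at the start of the stream
        if raw[:2] == b'\x1f\x8b':
            raw = gzip.decompress(raw)
        return raw.decode('utf-8')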

This is extremely valuable and saved me hours/was the difference between me stopping or not.

There will surely be people that overhype this, but there is real value as well.


The first result of searching for your string on Google is a StackOverflow link [1] that mentions that this is a gzip header. There is another link [2] below it showing how to decompress it.

[1] https://stackoverflow.com/questions/58552645/what-exactly-is...

[2] https://stackoverflow.com/questions/57328953/trying-to-decod...


This strikes me as comically similar to patterns I see in tech: people not knowing what came before, and fixating on overly complex new solutions to problems that were trivially solved with prior tools.

I fully expect a node.js server in a docker container to query GPT for error messages found in logs. They'll give it a cutesy name like ChattyDev, and before you know it, every coding school will be pumping out coders who think it's an essential development tool, and HN threads filled with people who will defend the practice to the death, referring to examples such as left-pad for prior art.

"yeah, well...people made fun of left-pad, but we see how it has completely automated the padding of strings, removing one more chance for error!"


GPTalloc: "Why worry about your own memory?"


I love that.


I hate to say it, but this is exhibit A in the "AI will just make some developers even more lazy than SO did"


To be fair, you could google "1f 8b" or even that entire hex string and Google will tell you the same thing.


Wow. Pretty amazing that chatGPT knows how to use Google.


This really seems to be where we're at now: not better AI, but better search.


Better search because what exactly?


Context. I can never get Google to know the context of what I'm looking for, and the more context I add the more random 'important' stuff it decides to show me matching less significant terms. It was right in the article:

> Wall Street loves to play “Who wins, who loses” when a new technology explodes onto the scene. The fingers were all pointing at Google as a potential loser.


It's not just a play. It's business. There are only so many people to invest in either Google or Microsoft; if more people want to invest in Microsoft, some will do it at the expense of Google shares.


the difference is that with google you can trace the source of the statement, while with chatgpt the only option is to believe.


Also that google is able to give me live results whereas an LLM is trained on data from some point in the past. And just maybe I want to actually go to a site I'm searching for and not just wanting a summary or conversation about the topic. I just don't see how an LLM is going to replace search unless it can do all the other useful things search does right now.

Augment search, yes. Replace it with the current chatGPT interface? No way.


And you have comments underneath the StackOverflow answer by people who've tested it.


Which can be extremely useful, and people aren't always going to know to prompt an LLM to give them the info in those comments.


Bubbles can be based in real value.

The dotcom bubble was a bubble because there were a lot of junk, overvalued companies. Doesn't mean websites aren't useful.


I know what you mean, but the word "bubble" implies no value. A bubble is mostly empty space of no value; the surface of the bubble hides the emptiness.

When the bubble pops, as it inevitably does, the emptiness is exposed.

The meaning you're going for here is that although there's a "bubble" it's likely a smaller bubble, not a large bubble like the dotcom bubble or the housing bubble.


In finance a bubble means an economic cycle where asset prices rise faster than their actual value. It is a term of art/analogy but it does not map 1:1 with physical bubbles such as soap bubbles. It does not mean that the underlying asset has no actual value.

The housing example is a great one. The 2008 housing bubble was due (in part) to overvalued houses. Those houses still have value though (they weren't scam houses that only existed on paper).


It is a 1:1 mapping to physical bubbles. Like soap bubbles.

I never said the bubble implies the underlying asset has no value. The bubble encases the difference between the actual value and the current price. The bubble does not encase the asset itself.

When a bubble pops that difference disappears. But the asset value remains because the intrinsic value was not what the bubble was referring to.


My main point was that just because there is an AI bubble does not mean that AI is not useful/valuable. Seems like we agree on that.


It's nice and all but simply putting "b'\x1f\x8b\x08\x00\x00\x00\x00\" in google search gives you the same answer (first link even)


I started out reading the article thinking the author was going to say that it's all vaporware, but he doesn't at all.


Tom Scott's video today on where we are on AI's impact-to-society curve was interesting: https://www.youtube.com/watch?v=jPhJbKBuNnA&ab_channel=TomSc...

I think he's probably right that we're very early on the adoption curve.

(even though I always get downvoted and called a dumbass when I mention it here, I still think progress in quantum computing (predictive) and AI go hand in hand towards AGI.)


It is what Nvidia's boss Jensen Huang calls [1] the "iPhone moment of AI."

Basically every AI (so to speak) before ChatGPT is like Smartphone 1.0: Blackberry, Sony Ericsson, Nokia Symbian, etc. ChatGPT is Smartphone 2.0, aka the iPhone era.

It took the iPhone many years to reach mass adoption. MKBHD, as tech savvy as he is, only got his first smartphone in the iPhone 4S era (if I remember correctly). And then Phablets (any screen larger than 5") came. I think we are at the end of that curve, 15 years after the iPhone introduction.

I think ChatGPT will follow a similar path. And if it was the iPhone that pushed TSMC to sustain our current semiconductor improvements for the past ~10 years, then it will be ChatGPT that pushes us towards the 1A (1nm) node for another decade.

[1] https://wccftech.com/nvidia-ceo-calls-chatgpt-as-one-of-the-...


Unrelated, I've never seen this dude without a leather jacket on.


>(even though I always get downvoted and called a dumbass when I mention it here, I still think progress in quantum computing (predictive) and AI go hand in hand towards AGI.)

You get downvoted because it's not a coherent position. It's like saying I'm really bullish on renewable energy powered by web3 blockchains. It's just a mishmash of buzzwords. The reality is Quantum Computing has limited use cases in general computing and AI isn't really one of them. I don't think there's even an argument that e.g. a quantum chip offers any real advantage over a TPU or even GPU for AI tasks.


I've heard some people cite quantum physics to explain free will and consciousness, although these arguments always seemed completely unsupported and a form of "god of the gaps" reasoning to me.

But being charitable I suppose if you subscribe to the idea that creativity and consciousness arise from quantum behaviours then perhaps it makes sense. It would suggest we likely have the architecture of an AGI completely wrong currently though, so I'm not sure how this really relates to AIs like ChatGPT.


> I've heard some people cite quantum physics to explain free will and consciousness, although these arguments always seemed completely unsupported and a form of "god of the gaps" reasoning to me.

Certainly a kind of god of the gaps reasoning, but actually quite relevant to our discussions of AI. If the world is fully deterministic, there's little room for the independent thought and free will we think makes our human existence special. We are then maybe just relatively sophisticated prediction engines, and even an LLM like ChatGPT isn't fundamentally dissimilar to us.


It's not relevant at all and quantum woowoo for consciousness is just that. There don't seem to be any structures small enough in the brain to allow for quantum effects and the temperature in the brain is so high that decoherence ensures that there are likely no quantum effects present at all.


Wigner's Friend is still a paradox that might be solved by a quantum computing AGI.


OK, most of these "paradoxes" are due to the interpretation of QM. We don't have to discuss that, as several interpretations resolve the so-called paradox.

But "quantum computing AGI" is a complete non sequitur. It's just not a thing that needs to exist; as I already said, quantum computing doesn't bring anything new to the table for current AI technology (deep learning).



The downvotes on QC are likely somewhat understandable. Unlike chatgpt there are no results that give hope that QC will look like "computing" any time soon unless there is a breakthrough on the horizon.


AGI doesn't need quantum computing, but I'm sure it will be useful when we finally figure it out.

Maybe we need AI to figure out quantum computing.


Here's my killer app theory for stuff like GPT, just one example...

MS Teams gets the GPT treatment. It watches all your chats, email, calendars, code, wiki, meetings, spreadsheets, documents. Where is Slack now?

If you want to know absolutely anything from engineering domain knowledge to product strategy through to aiding you in sales, Slack has no answer. Ask Teams to splat you a database table, refine an SQL statement, brief you on a meeting, remind you of who a particular customer is and what sort of sales pitch would appeal to them.

I mean, not seeing the potential in GPT is really being intentionally blind to world changing technology. Just the fact that it can scan your whole codebase, find potential security holes, suggest performance blind-spots and indeed write code, or at least suggest code, this alone is such a big change it's hard to get your head around the opportunity. All of that in a chat window or IDE. It's revolutionary.


I was saying the same thing to a colleague a couple of weeks back. The power of ChatGPT will become obvious when it's sucking in all of an org's data.

At that point instead of your boss asking you to send an email to someone or asking the data team to pull some stat, they'll just ask the chatbot to do it for them.

My guess is 80%+ of the work most people in corporate jobs do could fairly easily be automated with the next generation of GPT being fully integrated with an organisation's data and tools.

The power of this technology is so obvious. Those saying ChatGPT is still making a few programming errors with their crappy prompts are missing the point. Wait until a slightly more advanced version of GPT has access to your dev documentation + all your repos + your jira ticket board + your dev environment.

You won't even need to ask it to do anything. Your boss is going to quickly wonder why they need a team of 20 devs when a team of 2 devs reviewing ChatGPT pull requests is 10x more efficient.


If you thought tech debt was bad with human programmers...


When the funding horizon for your company or project is shorter than the tech debt horizon, the biz picks tech debt every time.

Color me jaded, but I don’t care if the crud feature du jour is full of tech debt when we’re gonna throw it out in 3 months anyway.


Just ask ChatGPT to refactor the code and have it pay down the tech debt.


Managing outsourced programmers is hard enough...


> Wait until a slightly more advanced version

Yes, and therein lies the issue - that last crucial 1%, 0.1% may be impossible to achieve (sort of like attaining lightspeed travel)


The last 1% isn't crucial when doing something as non-critical as sending an email or changing an input field on a website. I write bad emails and deploy awful code all the time. For 99% of problems it just needs to be good enough.

The fact some people argue ChatGPT is already good enough for a lot of use cases should indicate we're not that far from something just as reliable as a human programmer.


... Or self driving cars. At some point they were only a couple years away


The Internet...

I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse.

- Robert Metcalfe, in InfoWorld, 1995

Most things that succeed don’t require retraining 250 million people.

- Brian Carpenter, in the Associated Press, 1995

Tim Berners-Lee forgot to make an expiry date compulsory . . . any information can just be left and forgotten. It could stay on the network until it is five years out of date.

- Brian Carpenter, in the Associated Press, 1995


to be fair, looks like we can circumvent the issue by leapfrogging into faster-than-light speeds. I've seen somewhere that real (albeit tiny) warp bubbles were achieved in a lab recently, so it's not too far off into SciFi land... probably only a few decades (tm) until it's here :D


> Wait until a slightly more advanced version of GPT has access to your dev documentation + all your repos + your jira ticket board + your dev environment.

why are you so confident chatgpt will ever be able to work as an independent dev and won't hit the limit of its abilities?


Why would it hit the limit of its abilities?

Almost all tech has room for progress, ChatGPT is no different, it's just a question of how fast that progress will be.

You can roughly predict the rate of future progress by looking at how quickly advancements have been made in the recent past. In my opinion AI looks like computers in the 70s/80s or cars in the 40s. The tech isn't completely new, but significant improvements are being made every year.

I think the burden of proof would be on you to explain why advancements in AI would stop right here. Conveniently right around the time when AI is able to write code fairly well but with a few bugs.


> Almost all tech has room for progress, ChatGPT is no different, it's just a question of how fast that progress will be.

all tech also has its limits. We could use internal combustion engines in cars, and even in some airplanes, but we have problems with this tech when building fighter jets and then spaceships.

The same goes for ChatGPT: it is good at spilling out text it has seen during training, or doing slight transformations on it, but there have been multiple attempts to teach NNs to do algorithmic work, and they always failed miserably afaik.


And just like ICEs and jets, each tech increasingly became otherworldly and revolutionary at doing what it does well.


> each tech

lol, of course not each tech. You picked cars as an example, but that was one of the most transformative advancements; for each such advancement there are hundreds of thousands with moderate impact, and tens of millions of failures.


> became otherworldly and revolutionary at doing what it does well

also, it is yet to be proven that ChatGPT will do well in any job requiring significant reasoning skills.


Good thing that not all of the use cases that are finding footholds today require skills it does not yet have. Why are "significant reasoning skills" needed to help you write a song, or help rewrite your resume, or find an answer to a niche piece of knowledge, or fuck... this is exhausting.

I'd have thought that people on HN would have far more vision than this. For every single "well it can't do that yet" observation there are dozens of use cases people are finding TODAY. It's already widely useful and this is year one.


> Why are "significant reasoning skills" needed to help you write a song, or help rewrite your resume, or find an answer to a niche piece of knowledge, or fuck... this is exhausting.

the discussion started with a person claiming that soon chatgpt will be trained on docs, code and jira tickets and completely replace engineers, which imo will require significant reasoning skills.

I agree that for those tasks you described, chatgpt may find its niche.


@riku_iki

That was actually me. And it already can write Jiras, scaffold code, and help write docs. Doesn't need any significant reasoning, just a well trained model and contextual data.

However, I didn't say it was going to 100% replace anyone. The horizon looks like GPT will become a significant assistant tool for numerous tasks. I think it will need humans to review and approve output and direct it for the foreseeable future.


> And it already can write Jiras, scaffold code, and help write docs.

yes, and my hypothesis is that's where it will stop, because core meaningful engineering work on a complex product/system requires way more reasoning ability.


> and my hypothesis is that's where it will stop

That's like saying junior developers will never take your job because they just don't know enough context or have enough experience.

Why do you think there is any reason for progress to stop? When has progress ever stopped? Even just looking at the simplistic Github Copilot from ~3 years ago I could see the writing on the wall for jnr devs. When these models have your entire codebase consumed it's quite apparent that the gestalt has changed in a way I think you are underestimating.

Complex product/systems is exactly where you'll need an AI code writer. It knows all of everything in your massive codebase. It will be able to suggest efficiencies you couldn't possibly be aware of unless you'd read every line of code yourself.

Remember when AlphaGo made that one move that showed it was far, far superior to a human at improvising and seeing leagues of moves ahead? It shocked the whole community; even the DeepMind engineers were shocked. That's going to be you one day. There is absolutely no reason I can see for this not to happen, bar some sort of as yet undiscovered logical limit.


> Why do you think there is any reason for progress to stop?

I described the reasons: a junior developer has a proven ability to learn reasoning and context, and chatgpt does not have a proven ability to learn to reason.


New things arouse suspicion, even if they've been seen before. Mobile phones were plenty smart before Smartphones. The new thing occupies inhabited space so something has to be sacrificed.

All well and good when it's usually your enemy's space on the chopping block but mainstream media haven't yet told people if this kind of progress is generally desirable. The flock is flying blind so they default back to conservatism... for now.


Because it's not AGI and all technology has limitations. Also because it's easy to imagine a hyped tech doing anything. Much harder to make that work.


Or all twenty devs stay and eighty more get hired because DevGPT can do 100x more work in the same amount of time.


> The power of ChatGPT will become obvious when it's sucking in all of an orgs data.

I am having a hard time seeing this.

Intranet/corporate search has forever been awful in comparison to internet search.


And that's why traditional search is going to give way to GPT search. It solves this problem more or less already. Feed it web pages and you can talk to it, guide it with a conversation, etc. Here is a realistic example of how searching an intranet could look in 5 years' time:

-----------

> Employee: Hey CorpGod; where is that document that had some information about our new project management processes? You know, the one I had open like 2 weeks ago

> CorpGod: Oh, you mean the one your boss asked you to read by Friday? It's here [link]

> Employee: No, that's not it. It's the one that we copied and made edits to after that, I can't find it.

> CorpGod: That got deleted after Employee2 made a copy and published it. It's here on the Wiki [link]

-----------

GPT already does stuff like that with data from the Internet.


What makes you think GPT can be trained that fast, so that it keeps up with newer documents, discussions, etc.? It takes a long time for GPT models to come out; chatGPT is built on 2021 internet data, and a lot of people were involved in making it possible. Do you envision the LLM constantly being fed updates by crawlers and indexers, with no need for human review or intervention?


Yes. If I was on OpenAI's board or product team, this is exactly where I'd be trying to go. It's a purely technical challenge with nothing standing in its way beyond regular engineering problem solving.

1) Scaling 2) Efficiency 3) Quality 4) Security

Some of these may be super-duper hard problems, but hard problems worth solving = massive opportunity.

If IBM could put space age technology into corporate offices in the 50s, we can put dystopian era technology into offices in the 2020s. At first it will be stupidly expensive, and only the big players will be able to take advantage of the cost to benefit ratio, but in time it will more than likely be on your phone.


Yes. This is what MS will do with Teams.


> I mean, not seeing the potential in GPT is really being intentionally blind to world changing technology. Just the fact that it can scan your whole codebase, find potential security holes, suggest performance blind-spots. All of that in a chat window or IDE. It's revolutionary.

Except that we are just left with outputs that are untrustworthy. All of these GPT products (ChatGPT, Copilot, Bard, Bing AI) are still frequently hallucinating answers and often producing very incorrect solutions. We have already seen this with Copilot writing vulnerable code.

What this current AI hype cycle fails to realize is that, given you still cannot trust the generated output, it cannot be used safely in serious and highly regulated industries such as finance, law and the medical professions, all of which require trust and have been subject to attempted AI disruption for years, with that problem of trust still unsolved. It is not even enough to disrupt search engines.

There is nothing new or revolutionary about an AI SaaS business with an API and a chatbot generating nonsense. I expect the hype around AI LLM chatbots to subside just like the hype around social spaces apps like Clubhouse did.


An anonymous Clubhouse is back on the cards with realtime AI voice synth.

Trust in information is for people who outsource their every opinion; all they want to know is whether it will keep them in high esteem for re-stating it and blindly following it. Well, for law, health, finance, that depends what year you got your opinion, since the best information changes. Which is what we want if we want better.

AI output is just words on a screen; it only promises coherence. How well a technology assists you is up to you, or else we'd call it a torture device.


> outputs that are untrustworthy

Seems like this is an interesting engineering or product problem worth solving. History is littered with big problems that were solved. Go look at flight: within 50 years we went from "it's not possible", to "it's too fragile", to jets, to international airports and mass transit never before possible.

If you work in tech I think you've slept through your life.


> Seems like this is an interesting engineering or product problem worth solving.

Trust is a social problem, not an engineering problem. Without a fundamental breakthrough in neural network transparency, the use case for LLMs as search engines will remain eternally untrustworthy.

Not even the Bing AI that Microsoft released is a trustworthy search engine [0]. In fact it is less trustworthy than Google.

[0] https://news.ycombinator.com/item?id=34775853


The real application is increasing bureaucracy massively. Nothing that doesn't get an opinion filed or some "memo" attached to it. No request for comment ever unanswered, nothing that doesn't deserve some written specs or a risk assessment. No process not documented in lengthy prose, etc.


Pretty cynical opinion.


Not even meant to be very cynical. Looking at big organizations, the desire to increase control and oversight a lot will be pretty strong. A lot of technology is already devoted to that end anyway.

There will be other, interesting use cases for sure, but control, compliance, documentation, etc. will be big.


Not everything is a big org that spends its time and money chasing compliance for no reason.

It's cynical because you are viewing everything through one personal perspective that is loosely grounded in reality, and choosing to not see the rest of the world, which only serves to validate your cynical mindset.


Yes, it is based on my observations. I also see the cool, fast-moving smaller companies (and even some big ones) that use AI and technology in fascinating ways - but I would caution against overlooking what a lot of the new technologies might wind up being used for.

For example, modern controlling functions couldn't exist without a certain level of technology. How much technology spend these days is dedicated to some form of administrative tasks? Those $$$ will have an effect.


That won’t work until somebody designs a reliable security model for LLMs.

In the real world, you can’t have a model that slurps up every bit of information in a company and then just lets anybody ask open ended questions about it.

But the security solutions for these technologies are far from maturity. They’re almost certainly addressable, but its going to be a whole industry in itself, will take years to take shape, and will probably involve underlying architectures that are designed very differently from what we see in this generation of models.
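To make that concrete, here is a hypothetical sketch (all names invented for illustration) of the most basic layer such a system would need: filtering documents by the asker's permissions before anything reaches the model's context. Note it says nothing about prompt injection or about what a model memorizes at training time, which is where the harder architectural problems live:

    # Hypothetical sketch: enforce ACLs before retrieval-augmented
    # prompting, so the model never sees documents the asker can't read.
    # A necessary first layer, not a sufficient one.
    DOCS = [
        {"text": "Q3 project management process ...", "readers": {"alice", "bob"}},
        {"text": "Payroll adjustments ...", "readers": {"hr"}},
    ]

    def build_context(query: str, user: str) -> str:
        hits = [d for d in DOCS if query.lower() in d["text"].lower()]
        readable = [d for d in hits if user in d["readers"]]
        return "\n\n".join(d["text"] for d in readable)

    # only the filtered text would be pasted into the LLM prompt
    print(build_context("project management", "alice"))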


So you agree it's possible, desirable, and game changing? I think those are hugely positive indicators this will happen and people sitting comfortably in the status quo are going to lose out.

These corporate AI instances will take huge configuration, and like when IBM started adding infrastructure to large corps it was a huge effort. But the reward and advantage it provided was worth the millions of dollars and years of work. Eventually it all permeated down to consumers.

How long this will take is up for debate, but I don't think we can easily dismiss that it's inevitable.


Imagine the prompt hacking possibilities, this time not to generate outputs but to reveal private discourse. Imagine that when it hallucinates, you get wrong impressions of your colleagues or the state of projects.


Incidentally, it's amazing how terrible MS Teams is at code blocks, both the creation of them and the copy/pasting/using the code from one somebody else created.


Basically everyone I know is on either extreme. Either they say "It just makes stuff up and it's totally useless!", somehow not even admitting it's very cool, or they act like it's going to lead to all of us being homeless and the 1% of the population that owns the AI having even more wealth.


The reality, as always, is somewhere in the middle


sounds like a possible way to detect who will be on which end of the spectrum /s


You should see https://www.microsoft.com/en-us/microsoft-teams/premium .

It may not be what you describe but they are definitely flirting.


yes, i've been trying to drive this home to folks around me also.

AI is only as good as the data it's learned from. There is little to no value in a really super awesome prompt. "Prompt engineering" is not a real thing. There is, however, enormous value in an AI that has learned on some specific set of data that nobody else has access to.

IOW, data is still the currency.


> Just as we saw companies adding the suffix “dot com” to their names in the 90’s and announcing “blockchain” initiatives in 2017, so too will we now see an endless parade of AI announcements in 2023.

Correct. Everyone and their cat is now an AI company again, hyping and parading a hallucinating chatbot that is supposedly going to change the world, take over search engines and kill Google. It won't. It needs to do more than that to even challenge Google.

We are already seeing its limitations: Bing AI, ChatGPT and Google Bard all fall short on reliability, and it is all fundamentally rooted in the black-box nature of neural networks.

The hype and mania will last just as long as the Clubhouse hype of 2020 did, like what happened to GPT-3 after that AI hype cycle died.


At least with the crypto bubble, VCs could easily drop their bags on the general public. Things are going to be interesting when all these GPT wrappers being billed as “AI companies” start to centralize (how many AI copy writers do we really need?). It’s going to be a race to the bottom and the real winners here are going to be GPU sellers.


To that end, this weekend I wrote a GPT-Neo-1.3b based Discord bot, and ran it on an Oracle "Always Free" instance. It costs me nothing, and I'm able to play with a surprisingly large/competent AI model.

As the technology progresses and models are optimized/shrunk, I'm not sure these "AI companies" will ever stand a chance. Even cheap Android smartphones can run the smallest GPT-Neo model; eventually the need for SaaS wrappers around the technology will be cannibalized.
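(If anyone wants to replicate it: the model half of the bot is only a few lines with Hugging Face's transformers library. A minimal sketch below; the Discord plumbing, which is most of the actual work, is omitted:)

    # Requires `pip install transformers torch`; the first run downloads
    # several GB of model weights, so give the instance some disk.
    from transformers import pipeline

    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

    def reply(prompt: str) -> str:
        # sample a continuation of the user's message
        out = generator(prompt, max_new_tokens=80, do_sample=True,
                        temperature=0.9)
        return out[0]["generated_text"]

    print(reply("The best thing about free-tier cloud instances is"))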


You are assuming that copywriting, or even text generation, is all that GPT-like AI is good for. These tasks just scratch the surface of the technology's >existing< abilities. There are transformers already capable of operating pretty much any desktop GUI.


No I just didn’t mention it. Mostly because I see companies like Google and Microsoft rolling anything remotely useful for businesses into their offerings.


Bubble of... 2023? This has been going on for a while. For at least 5 years founders have known that the way to get funding is to claim your product has "machine learning" or "artificial intelligence".


chatGPT is more than a bubble. It's a societal inflection point. The reaction to it is appropriate.

However, as usual, people will get too excited and the hype will outpace the actuality of the technology. There will be a slight "bubble" but this isn't anything like the "housing bubble".

Prior to chatGPT though, AI could be characterized as something along the lines of a housing bubble. I would say almost all lines of research in AI save LLMs are overhyped bubbles.

Not saying these lines of research are useless or inconsequential. Far from it. AI outside of LLMs is amazing. But these AIs are definitely inside huge bubbles.


That's good because the OP didn't compare it to the housing bubble at all. He just said "bubble."


Am I not allowed to make an independent comparison unless the OP made the same comparison?


The AI Bubble of 2023 is the Generative AI Bubble of 2023


But the pure AI companies like OpenAI and Tenstorrent haven't gone public yet.


I believe that the crucial test of real AI will be if a group of such AIs invent their own language to allow them to cooperate - much like intelligent animal species such as ourselves. The richness of the language indicates the depth of intelligence.

Any such language would necessarily be limited to their 'domain of existence' - you can't invent words for colours if your world has no light. Thus we'd need to give the AIs a full domain of existence for a full AI. They would need eyes (and ears?) and locomotion so they have a real world to reason about and talk about to each other (and us?)

The point being that emergent language is the only general way to gauge intelligence (above and beyond the somewhat anachronistic Turing test). I also conjecture that human language (in fact any human language) is complete in the sense that any and everything can be described within it (e.g. you could explain General Relativity to any human from any time, using their own language, as the basic concepts are already present, you just build on them - whereas this might not be possible with bird calls).

Furthermore, if our language is indeed complete, then we could suggest that any intelligent alien species we might encounter will also have at best a similarly complete language - and thus cannot be meaningfully more intelligent than us, as there is no 'higher' language, no concepts inaccessible to us.


Where was this guy warning of a crypto bubble in 2020-2021? Instead he was hyping crypto. If this is any indication, AI will probably continue to do well. I don't think GPT will be as big as Google, but it's not going to pop like he's expecting either.


But HN will love it a lot more than Blockchain. Won’t downvote every mention of AI :)


Yes, AI is getting a lot of hype right now. No, this is nothing like blockchain.

For something to be overhyped, it must have a hype level disproportionate to its value. Deep learning is generating a huge amount of real value and is solving real problems. When we were getting Superbowl ads for crypto, the value proposition wasn't there.


I didn't watch the superb owl... did we get AI ads this time?


I didn't watch the whole thing, but I remember one NFT ad and zero AI ads


None, but it's a little too early for them. Next year's Super Bowl could be peak AI ad.


Because it can actually generate some interesting and/or useful content, unlike 99% of Blockchain proposals that amount to "a database, but worse" in the best case and "fraud, but obscured by technobabble" in the worst case.


When the barrier was lowered, 99% of stuff in the Web 1.0 space was bullshit (remember <blink> and <marquee> tags?), and Web 2.0 was bullshit also (PHP developers making lots of crappy insecure stuff)... the 94th social network for cats and toasters, where they vacuum all your data and sell ads, or charge you hundreds of dollars for a feature that consists of changing one variable somewhere.

Sometimes it was fraud; other times it was just money-losing economics ("traction") where VCs would prop it up and then hope to dump the stock on an eager public in an IPO before making a dollar in profit (SoftBank took this to the extreme). Other times it just led to monopolies that sucked up everyone's data, engaged in surveillance capitalism, distracted them at dinner and gave everyone the attention span of a goldfish. Oh yeah, and made them depressed and insecure, especially teenagers.

I'm not sure that's much better than Web3.


Bubbles are hype awaiting delivery but what we're seeing is delivery before expectation; before that hype.

Don't get me wrong, there definitely is excitement and hype, but it's all off the back of what's available today. There is no bubble to pop. If ChatGPT and diffusion models are the upper limit of the tech, it can still be applied today to disrupt whole industries.

That's not to say there won't be waves of grifters —some of them AI— fluffing products that never materialise or under-deliver, but I think our imagination is going to lag behind the tech, not the other way around.


Is this wild speculation the material for creating bubbles?

FTA:

"Walmsley adds that he’s heard venture-capital investors speculate that the market for generative AI applications could be as large as $1 trillion. He notes that the world has over one billion knowledge workers; OpenAI charges $42 a month for the professional version of ChatGPT. If you assume every one of those people gets two accounts—one general, and one specialized—you get close $1 trillion."


What about assuming every one of those people gets 1 trillion accounts


That kind of "well, if we as a single company manage to take over this entire sector, that's a $XX billion/trillion market" thinking is how tech world valuations have become completely broken. It's part of what drove Tesla stock to the insane valuation it still hasn't come down from: "well, if self-driving ever delivers they could take over the car, bus/public transit AND taxi businesses at the same time!", without even demonstrating a real hands-off autopilot.


I especially like the entirely random factor of two thrown in there -- everybody will get two accounts because why?


Lol I know, like let's just 2X the TAM here for zero reason


It's OK speculation, but the economics are bad. If everyone is willing to pay for ChatGPT, the price is clearly far too low. The optimal price for a product is almost never the price where the audience is 100% saturated (e.g. at $60 a month you'd make more money than at $42 even if a fifth of the audience dropped out).


ChatGPT just absolutely can't ever own the market. It only works by siphoning off the intellectual output of humans. There will be a tipping point where it's just recycling its own output, until its content becomes like a "blurry JPEG of the internet", as the New Yorker put it. It can still be extremely valuable in some niches, and even open up new market opportunities that maybe didn't exist before, but it really just can't overtake any markets without eating itself.


That's even worse, because that's saying the trillion-dollar industry where every knowledge worker pays $84 a month for ChatGPT accounts is the bottom end of the range.


I think the $168 per knowledge worker will be paid by the companies instead, with some free exemptions like university/school programs to get the kids hooked on the tech


how about one knowledge worker charging a $trillion and then going home with it?


Well, if I owe my bank 1 million dollars, I have a problem. If I owe the bank 1 billion dollars, the bank has a problem.


After investing and building through the crypto bull market and now experiencing this bear market, the sentiment from more revered tech figures on AI's incoming prowess and impact is definitely higher-signal than that of those who preached the crypto gospel for the last few years. I think that's worth something in forming a view on this. I still think there will be a bubble, but it will be different.


So, what is the "best" ETF to invest in the new tulip mania craze?

All the "AI" ETFs that I see are filled with generic IT companies...

Anyone found a good one yet?


These days it's difficult and expensive for a company to go public. So when it's early enough that you could make a good return just going long on this kind of bubble blowing up, there aren't yet companies in the public markets to invest into.

So what happens is that awful companies that happen to be already public will try to reinvent themselves as part of the trend. Some past examples:

- Fish-oil company Zapata happened to own the domain zap.com, so in 1999 they became an Internet portal: https://www.forbes.com/1999/04/13/mu11.html?sh=5d90ad3b65d8

- In 2017 a beverage company called Long Island Iced Tea changed its name to Long Blockchain Corp: https://www.cnbc.com/2017/12/21/long-island-iced-tea-micro-c...

With mostly companies of this quality to pick from, anybody who built an Internet portfolio in 1999 or a blockchain portfolio in 2017 probably lost 99% of their money.


Probably look up the hardware that is being used at Azure.


so far my best bet is Nvidia (as a proxy for the AI vertical)


Talking to AI chatbots is the equivalent of talking to a parrot. I can't wait for the next market crash to wipe these AI and Bitcoin companies off the face of the earth. Such a gigantic waste of human effort (and carbon budget, in the case of Bitcoin)


No, no it isn't. I've already used ChatGPT to help me with dozens of management tasks. I got it to help me write a presentation. I have it help me curate 1:1 conversations that are not boring, tailored to my team members' contexts and levels. It can write a song with melody, chords, lyrics, all novel and creative. And remember this is just ~6 months into ChatGPT.

Bitcoin had no real use cases beyond betting, pump and dumps, and niche trading that the general public could not get in on. ChatGPT is being used creatively today by millions. It does your homework.

How is this classed as a bubble or hype? How can you possibly see it as a waste of effort or equivalent to talking to a parrot??


When I see comments like this I have to wonder if I live in a different reality than whoever wrote it. I saw the potential in GPT models to help me with creative tasks, writing, etc. and I have consistently found it to be really quite unimpressive at it. I didn't even find it to be a good conversation partner.

I can't use it to generate anything without thorough review for soundness either, so it doesn't really have potential to automate much of anything as far as I'm concerned.

About the only really good use of it I have seen is "Nothing, Forever", an entertainment/art piece that thrives on the jank of what it produces.

I have no doubt that technology will eventually get there, but I certainly don't see it as there today.


I gave GPT your comment and asked it to write a reply; at first it was too formal, so I asked it to make the response more casual and less verbose. I think you and I do live in separate worlds; to me this is such a large leap in what was previously possible that I can't help but think of it as revolutionary. Not perfect, but I'd have a very, very hard time distinguishing this from a real human reply.

"Yeah, I hear what you're saying about the limitations of language models like GPT-3, especially when it comes to more creative tasks like writing. But it's still pretty impressive what these models can do! I mean, think about all the time they can save us by handling repetitive tasks, like customer service or even generating news articles. And who knows, maybe someday they'll be even better at the creative stuff too.

I know it's frustrating when you have to review everything the model outputs, but that's not just a problem with language models. People make mistakes too, y'know? So let's use these models as tools to help us, instead of trying to replace us completely.

All in all, while they may not be perfect yet, language models have a ton of potential to make our lives easier and more efficient. And that's pretty cool, if you ask me!"


This response is exactly my point. It's a bunch of word salad either saying nothing, repeating what I said, and/or using poor reasoning to make vapid arguments.

This is probably helpful to someone, presumably someone with a great need for an automated bullshit generator, but it isn't helpful to me.

> I mean, think about all the time they can save us by handling repetitive tasks, like customer service or even generating news articles.

Something robots already do and people already hate them for it[0]. I doubt anyone will empower the robot to do anything meaningful, so while it can shape a word salad with a high degree of sounding like a human being, it only sounds like a useless human being (for the thing I care about). I'd be better off looking at whatever data it has that it is supposed to be using to help me.

> People make mistakes too, y'know? So let's use these models as tools to help us, instead of trying to replace us completely.

I'm perfectly capable of making mistakes on my own, thanks. And as anyone who's ever had to train anyone can tell you, looking over someone else's work constantly doesn't exactly help you get anything done. Give me a GPT I can train myself to eventual competency and maybe it'll pay dividends, but that doesn't realistically exist today.

[0] And of course another application is to use another GPT instance to summarize the bullshit news articles that were themselves generated from a summary, or interact with the useless customer service chatbot, thereby closing the cycle of waste that has become the cornerstone of modern civilization.


> It's a bunch of word salad either saying nothing

This is demonstrably not true. It's saying quite a bit; yes, it's rather PR-speak-like, but that's how it's been trained.

> And as anyone who's ever had to train anyone can tell you, looking over someone else's work constantly doesn't exactly help you get anything done

WTF? I think the way you are thinking about this is fundamentally bogged down by cynicism. Proofreading an article written for you is far, far speedier than writing it yourself. That's why newspaper editors a) edit, b) get someone to write for them to edit. I can review a PR far more quickly than I can write the code.

> I'm perfectly capable of making mistakes on my own, thanks

But now I don't need to pay you to make mistakes, GPT will make them instead and whoever was checking on your mistakes is still useful.

> Give me a GPT I can train myself to eventual competency and maybe it'll pay dividends, but that doesn't realistically exist today.

And GPT didn't exist in the wild last year. You have just admitted this is a WIP, and I see no reason not to expect progress to continue. Maybe one day we'll all have personally trained AI models that live through our whole lives with us. They'll be able to give personal life advice, be around to bounce ideas off of, help you remember everything. I mean, it's trivial to find revolutionary use cases for this tech. The only question is "is it possible or is there a hard wall in front of us?" I am yet to be convinced there is anything standing in the way, and billions of dollars are backing that up right now.

Honestly GPT here has more insight than you do, I think that's miraculous.


> This is demonstrably not true. It's saying quite a bit, yes it's rather PR-speak like, but that's how it's been trained.

It has used many words, it has said very little, none of it useful or insightful. If you see something insightful in there, I must chalk it up to very different perception of reality.

> WTF? I think the way you are thinking about this seems fundamentally bogged down by cynicism. Proof reading an article written for you is far, far speedier than writing it yourself.

Not if you keep screwing it up. Not if I can't ever trust that you even tried to source correct information. Not if your creative output is trite at best.

> You have just admitted this is WIP, and I see no reason not to expect progress will continue.

I clearly said several times that I expect the technology will get better.


This whole moment of exuberance just reminds me of Bitcoin in 2020, or NFTs in 2021 or so, just a bunch of hype and a few grifters jumping immediately to the biggest thing, but when you peel back the layers and look at the actual thing, it underwhelms. Again, AI will get to being interesting and useful but I too don't see it there today myself.


Except Bitcoin and NFTs are still looking for real world adoption beyond the grifter pump-and-dumpers. GPT has many, many existing real world use cases. It can answer legal questions, write your homework, write code; I really don't understand the similarities. It's even better than Google at getting specific answers that you are having a hard time finding on Google. You can iterate conversationally with it. If you don't like the answer you can guide it; that's just ridiculously powerful.


That's fine, I agree with you, somewhat. I still think chatGPT et al are at the brink of something potentially revolutionary although they aren't quite there yet, but there is an obvious use case for LLMs whereas the crypto stuff never really found an actual real world use beyond speculation.


In ~10 minutes of asking for an outline first and then asking it to fill in functions it left unimplemented in the first replies, I got ChatGPT to provide the basics of a DNS server from scratch, of which I only had to correct about two lines to be able to run it and execute queries against it.

This isn't a hard problem (it's a few dozen lines of code to do the bare minimum, maybe a couple of hundred to add a reasonably full-featured parser), but it involved a conversation of refining it that wasn't just "fill in this missing detail", e.g. asking it to implement parsing of the query packet in a specific language (Ruby), using a specific API, and using a specific format. Ruby ships with a DNS implementation; the ChatGPT solution looks nothing like either the standard Ruby implementation or the alternative Ruby DNS implementations I've checked.

You must know parrots with better reasoning skills than I do.

[EDIT: Note that I specifically picked this exactly because I was sceptical of how well it'd do, it's something I've done before myself, with a protocol I know well enough to be able to evaluate the code without much effort]
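For a sense of the scale involved: parsing just the fixed 12-byte DNS header is about this much code (a sketch in Python here, rather than the Ruby the bot actually produced):

    import struct

    def parse_header(packet: bytes) -> dict:
        # DNS header: six big-endian 16-bit fields (RFC 1035, section 4.1.1)
        ident, flags, qd, an, ns, ar = struct.unpack("!6H", packet[:12])
        return {"id": ident, "flags": flags, "questions": qd,
                "answers": an, "authority": ns, "additional": ar}

    # a minimal query for example.com, type A, class IN
    query = (struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0)
             + b"\x07example\x03com\x00" + struct.pack("!2H", 1, 1))
    print(parse_header(query))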


A parrot can’t play 20 questions, emulate theory of mind, or implement merge sort with pirate themed variable names.


> A parrot can’t play 20 questions,

Not, but a tree can.

> emulate theory of mind,

Neither can Chat GPT. "Failing a Theory of Mind at the same level a 7 year old does" is no more Theory of Mind than being really good at chess is.

> or implement merge sort with pirate themed variable names.

True.


A tree can only play 20 questions if it’s asking the questions. ChatGPT can go the other way.


Just a month ago you would get voted down for saying chatGPT understands the queries you are giving it and is more than just an AI that parrots everything you tell it.

Now we are hitting an inflection point. People are realizing chatGPT is more than that.

The people getting voted down are the people saying chatGPT is just some trivial word generator. The masses are realizing that chatGPT is different.

There's just been years and years of AI buzz so people are used to downplaying it. So much so that when something genuinely intelligent is built they just fall into the same generic pattern of saying that these things are just statistical word generators because "oh it got my specific question wrong." It's biased because they are dismissing everything it gets right.


Unlike cryptocurrencies, AI is actually solving real problems.


Yes and no. A pet parrot repeats what you yourself trained it with, or at least the people in the residence. ChatGPT was apparently trained on the web en masse; the parrot can speak back with the sum of human knowledge, to some approximation. That's quite useful. (And also problematic sometimes, as discussed elsewhere.)


The value of text completion goes far beyond chat interfaces. LLMs are computers.


To add to the last bit: it's trivial to get ChatGPT to evaluate actual code it can't possibly have seen before and correctly produce the expected output. Only limits on the size of context and output stop it from being Turing complete.



