Talking to AI chatbots is the equivalent of talking to a parrot. I can't wait for the next market crash to wipe these AI and Bitcoin companies off the face of the earth. Such a gigantic waste of human effort (and carbon budget, in the case of Bitcoin).
No, no it isn't. I've already used ChatGPT to help me with dozens of management tasks. I got it to help me write a presentation. I have it help me curate 1:1 conversations that aren't boring, tailored to my team members' context and levels. It can write a song with melody, chords, and lyrics, all novel and creative. And remember, this is just ~6 months into ChatGPT.
Bitcoin had no real use cases beyond betting, pump-and-dump schemes, and niche trading that the general public could not get in on. ChatGPT is being used creatively today by millions. It does your homework.
How is this classed as a bubble or hype? How can you possibly see it as a waste of effort, or equivalent to talking to a parrot?
When I see comments like this I have to wonder if I live in a different reality from whoever wrote it. I saw the potential in GPT models to help me with creative tasks, writing, etc., and I have consistently found the results really quite unimpressive. I didn't even find it to be a good conversation partner.
I can't use it to generate anything without thorough review for soundness either, so it doesn't really have potential to automate much of anything as far as I'm concerned.
About the only really good use of it I have seen is "Nothing, Forever", an entertainment/art piece that thrives on the jank of what it produces.
I have no doubt that technology will eventually get there, but I certainly don't see it as there today.
I gave GPT your comment and asked it to write a reply; at first it was too formal, so I asked it to make the response more casual and less verbose. I think you and I do live in separate worlds; to me this is such a large leap over what was previously possible that I can't help but think of it as revolutionary. Not perfect, but I'd have a very, very hard time distinguishing this from a real human reply.
"Yeah, I hear what you're saying about the limitations of language models like GPT-3, especially when it comes to more creative tasks like writing. But it's still pretty impressive what these models can do! I mean, think about all the time they can save us by handling repetitive tasks, like customer service or even generating news articles. And who knows, maybe someday they'll be even better at the creative stuff too.
I know it's frustrating when you have to review everything the model outputs, but that's not just a problem with language models. People make mistakes too, y'know? So let's use these models as tools to help us, instead of trying to replace us completely.
All in all, while they may not be perfect yet, language models have a ton of potential to make our lives easier and more efficient. And that's pretty cool, if you ask me!"
This response is exactly my point. It's a bunch of word salad either saying nothing, repeating what I said, or using poor reasoning to make vapid arguments.
This is probably helpful to someone, presumably someone with a great need for an automated bullshit generator, but it isn't helpful to me.
> I mean, think about all the time they can save us by handling repetitive tasks, like customer service or even generating news articles.
Something robots already do, and people already hate them for it[0]. I doubt anyone will empower the robot to do anything meaningful, so while it can shape word salad that sounds very much like a human being, it only sounds like a useless human being (for the thing I care about). I'd be better off looking directly at whatever data it is supposed to be using to help me.
> People make mistakes too, y'know? So let's use these models as tools to help us, instead of trying to replace us completely.
I'm perfectly capable of making mistakes on my own, thanks. And as anyone who's ever had to train anyone can tell you, looking over someone else's work constantly doesn't exactly help you get anything done. Give me a GPT I can train myself to eventual competency and maybe it'll pay dividends, but that doesn't realistically exist today.
[0] And of course another application is to use another GPT instance to summarize the bullshit news articles that were themselves generated from a summary, or to interact with the useless customer service chatbot, thereby closing the cycle of waste that has become the cornerstone of modern civilization.
> It's a bunch of word salad either saying nothing
This is demonstrably not true. It's saying quite a bit; yes, it's rather PR-speak-like, but that's how it's been trained.
> And as anyone who's ever had to train anyone can tell you, looking over someone else's work constantly doesn't exactly help you get anything done
WTF? The way you are thinking about this seems fundamentally bogged down by cynicism. Proofreading an article written for you is far, far speedier than writing it yourself. That's why newspaper editors a) edit, and b) get someone else to write for them to edit. I can review a PR far more quickly than I can write the code.
> I'm perfectly capable of making mistakes on my own, thanks
But now I don't need to pay you to make mistakes; GPT will make them instead, and whoever was checking your mistakes is still useful.
> Give me a GPT I can train myself to eventual competency and maybe it'll pay dividends, but that doesn't realistically exist today.
And GPT didn't exist in the wild last year. You have just admitted this is WIP, and I see no reason not to expect progress will continue. Maybe one day we'll all have personally trained AI models that live through our whole lives with us. They'll be able to give personal life advice, be around to bounce ideas off of, and help you remember everything. I mean, it's trivial to find revolutionary use cases for this tech. The only question is: is it possible, or is there a hard wall in front of us? I have yet to be convinced there is anything standing in the way, and billions of dollars are backing that up right now.
Honestly, GPT here has more insight than you do; I think that's miraculous.
> This is demonstrably not true. It's saying quite a bit; yes, it's rather PR-speak-like, but that's how it's been trained.
It has used many words and said very little, none of it useful or insightful. If you see something insightful in there, I must chalk it up to a very different perception of reality.
> WTF? The way you are thinking about this seems fundamentally bogged down by cynicism. Proofreading an article written for you is far, far speedier than writing it yourself.
Not if you keep screwing it up. Not if I can't ever trust that you even tried to source correct information. Not if your creative output is trite at best.
> You have just admitted this is WIP, and I see no reason not to expect progress will continue.
I clearly said several times that I expect the technology will get better.
This whole moment of exuberance just reminds me of Bitcoin in 2020, or NFTs in 2021 or so: a bunch of hype and a few grifters jumping immediately to the next big thing, but when you peel back the layers and look at the actual thing, it underwhelms. Again, AI will get to being interesting and useful, but I don't see it there today.
Except Bitcoin and NFTs are still looking for real-world adoption beyond the grifter pump-and-dumpers. GPT has many, many existing real-world use cases. It can answer legal questions, write your homework, write code; I really don't see the similarity. It's even better than Google at getting specific answers that you are having a hard time finding on Google. You can iterate conversationally with it. If you don't like the answer you can guide it; that's just ridiculously powerful.
That's fine, I agree with you, somewhat. I still think ChatGPT et al. are on the brink of something potentially revolutionary, although they aren't quite there yet. But there is an obvious use case for LLMs, whereas the crypto stuff never really found an actual real-world use beyond speculation.
In ~10 minutes of asking for an outline first and then asking it to fill in the functions it had left unimplemented in its first replies, I got ChatGPT to produce the basics of a DNS server from scratch. I only had to correct about two lines before I could run it and execute queries against it.
This isn't a hard problem (it's a few dozen lines of code to do the bare minimum, maybe a couple hundred to add a reasonably full-featured parser), but it involved a conversation of refinement that wasn't just "fill in this missing detail": e.g. asking it to implement parsing of the query packet in a specific language (Ruby), using a specific API, and in a specific format. Ruby ships with a DNS implementation; the ChatGPT solution looks nothing like either the standard Ruby implementation or the alternative Ruby DNS implementations I've checked.
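For a sense of scale, the query-packet parsing piece could be sketched in Ruby roughly like this. This is my own minimal illustration of the task described, not the ChatGPT output; the method name and the returned hash shape are made up, and the wire layout follows RFC 1035 section 4.1:

```ruby
# Parse a DNS query packet (header + question section only).
# Illustrative sketch; names and output shape are hypothetical.
def parse_dns_query(packet)
  # Header: six 16-bit big-endian fields (ID, flags, QD/AN/NS/ARCOUNT).
  id, flags, qdcount, _an, _ns, _ar = packet[0, 12].unpack("n6")
  offset = 12
  questions = []
  qdcount.times do
    labels = []
    # A name is a run of length-prefixed labels terminated by a zero byte.
    while (len = packet.getbyte(offset)) > 0
      labels << packet[offset + 1, len]
      offset += 1 + len
    end
    offset += 1 # skip the terminating zero byte
    qtype, qclass = packet[offset, 4].unpack("n2")
    offset += 4
    questions << { name: labels.join("."), type: qtype, class: qclass }
  end
  { id: id, flags: flags, questions: questions }
end
```

Even the bare minimum involves binary unpacking, length-prefixed labels, and offset bookkeeping, which is why "fill in this missing detail" prompting alone wouldn't have gotten there.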
You must know parrots with better reasoning skills than I do.
[EDIT: Note that I picked this exactly because I was sceptical of how well it'd do; it's something I've done before myself, with a protocol I know well enough to evaluate the code without much effort.]
Just a month ago you would get voted down for saying ChatGPT understands the queries you are giving it and is more than just an AI that parrots everything you tell it.
Now we are hitting an inflection point. People are realizing ChatGPT is more than that.
The people getting voted down now are the ones saying ChatGPT is just some trivial word generator. The masses are realizing that ChatGPT is different.
There have just been years and years of AI buzz, so people are used to downplaying it. So much so that when something genuinely intelligent is built, they fall into the same generic pattern of calling these things just statistical word generators because "oh, it got my specific question wrong." That's biased: they're dismissing everything it gets right.
Yes and no. A pet parrot repeats what you yourself trained it with, or at least what the people in the residence did. ChatGPT was apparently trained on the web en masse; this parrot can speak back with the sum of human knowledge, to some approximation. That's quite useful. (And also problematic sometimes, as discussed elsewhere.)
To add to the last bit: it's trivial to get ChatGPT to evaluate actual code it can't possibly have seen before and correctly produce the expected output. Only limits on context and output size stop it from being Turing complete.
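For instance, a made-up snippet like this (my own invention, not from any actual session) is the kind of thing you can paste in and ask it to predict the output of, then check against a real run:

```ruby
# A novel snippet of the sort you might ask ChatGPT to trace by hand;
# nothing identical needs to exist in its training data.
squares = (1..6).map { |n| n * n }    # [1, 4, 9, 16, 25, 36]
odd_sum = squares.select(&:odd?).sum  # 1 + 9 + 25
puts odd_sum                          # prints 35
```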