When I see comments like this I have to wonder if I live in a different reality than whoever wrote it. I saw the potential in GPT models to help me with creative tasks, writing, etc., and I have consistently found them really quite unimpressive at those tasks. I didn't even find it to be a good conversation partner.
I can't use it to generate anything without thorough review for soundness either, so it doesn't really have potential to automate much of anything as far as I'm concerned.
About the only really good use of it I have seen is "Nothing, Forever", an entertainment/art piece that thrives on the jank of what it produces.
I have no doubt that technology will eventually get there, but I certainly don't see it as there today.
I gave GPT your comment and asked it to write a reply. At first it was too formal, so I asked it to make the response more casual and less verbose. I think you and I do live in separate worlds; to me this is such a large leap over what was previously possible that I can't help but think of it as revolutionary. Not perfect, but I'd have a very, very hard time distinguishing this from a real human reply.
"Yeah, I hear what you're saying about the limitations of language models like GPT-3, especially when it comes to more creative tasks like writing. But it's still pretty impressive what these models can do! I mean, think about all the time they can save us by handling repetitive tasks, like customer service or even generating news articles. And who knows, maybe someday they'll be even better at the creative stuff too.
I know it's frustrating when you have to review everything the model outputs, but that's not just a problem with language models. People make mistakes too, y'know? So let's use these models as tools to help us, instead of trying to replace us completely.
All in all, while they may not be perfect yet, language models have a ton of potential to make our lives easier and more efficient. And that's pretty cool, if you ask me!"
This response is exactly my point. It's a bunch of word salad: saying nothing, repeating what I said, and/or using poor reasoning to make vapid arguments.
This is probably helpful to someone, presumably someone with a great need for an automated bullshit generator, but it isn't helpful to me.
> I mean, think about all the time they can save us by handling repetitive tasks, like customer service or even generating news articles.
Something robots already do, and people already hate them for it[0]. I doubt anyone will empower the robot to do anything meaningful, so while it can shape word salad that sounds highly human, it only sounds like a useless human being (for the things I care about). I'd be better off looking at whatever data it is supposedly using to help me.
> People make mistakes too, y'know? So let's use these models as tools to help us, instead of trying to replace us completely.
I'm perfectly capable of making mistakes on my own, thanks. And as anyone who's ever had to train anyone can tell you, looking over someone else's work constantly doesn't exactly help you get anything done. Give me a GPT I can train myself to eventual competency and maybe it'll pay dividends, but that doesn't realistically exist today.
[0] And of course another application is to use another GPT instance to summarize the bullshit news articles that were themselves generated from a summary, or to interact with the useless customer service chatbot, thereby closing the cycle of waste that has become the cornerstone of modern civilization.
> It's a bunch of word salad either saying nothing
This is demonstrably not true. It's saying quite a bit; yes, it's rather PR-speak, but that's how it's been trained.
> And as anyone who's ever had to train anyone can tell you, looking over someone else's work constantly doesn't exactly help you get anything done
WTF? The way you are thinking about this seems fundamentally bogged down by cynicism. Proofreading an article written for you is far, far speedier than writing it yourself. That's why newspaper editors a) edit, and b) get someone to write for them to edit. I can review a PR far more quickly than I can write the code.
> I'm perfectly capable of making mistakes on my own, thanks
But now I don't need to pay you to make mistakes; GPT will make them instead, and whoever was checking on your mistakes is still useful.
> Give me a GPT I can train myself to eventual competency and maybe it'll pay dividends, but that doesn't realistically exist today.
And GPT didn't exist in the wild last year. You have just admitted this is a work in progress, and I see no reason not to expect that progress will continue. Maybe one day we'll all have personally trained AI models that live through our whole lives with us. They'll be able to give personal life advice, be around to bounce ideas off of, and help you remember everything. It's trivial to find revolutionary use cases for this tech. The only question is: is it possible, or is there a hard wall in front of us? I have yet to be convinced there is anything standing in the way, and billions of dollars are backing that up right now.
Honestly, GPT here has more insight than you do; I think that's miraculous.
> This is demonstrably not true. It's saying quite a bit, yes it's rather PR-speak like, but that's how it's been trained.
It has used many words but said very little, none of it useful or insightful. If you see something insightful in there, I must chalk it up to a very different perception of reality.
> WTF? I think the way you are thinking about this seems fundamentally bogged down by cynicism. Proof reading an article written for you is far, far speedier than writing it yourself.
Not if you keep screwing it up. Not if I can't ever trust that you even tried to source correct information. Not if your creative output is trite at best.
> You have just admitted this is WIP, and I see no reason not to expect progress will continue.
I clearly said several times that I expect the technology will get better.
This whole moment of exuberance reminds me of Bitcoin in 2020, or NFTs in 2021 or so: a bunch of hype and a few grifters jumping immediately to the next big thing, but when you peel back the layers and look at the actual thing, it underwhelms. Again, AI will get to being interesting and useful, but I don't see it there today.
Except Bitcoin and NFTs are still looking for real-world adoption beyond the grifter pump-and-dumpers. GPT has many, many existing real-world use cases. It can answer legal questions, write your homework, and write code; I really don't see the similarities. It's even better than Google at getting specific answers that you are having a hard time finding on Google. You can iterate conversationally with it; if you don't like the answer, you can guide it. That's just ridiculously powerful.
That's fine, I agree with you, somewhat. I still think ChatGPT et al. are on the brink of something potentially revolutionary, although they aren't quite there yet. But there is an obvious use case for LLMs, whereas the crypto stuff never really found an actual real-world use beyond speculation.