I keep trying to use it for code and it keeps leading me up the garden path with suggestions that look really reasonable but don't work.
Off the top of my head: a Python app for drawing over the macOS screen, but it used an API that didn't support transparent windows. I could draw over a black screen, which was so close in code (it even set the background alpha) but miles from the desired application. And a Java Android app for viewing an external camera, which it seems used an API that doesn't support external cameras.
Of course, because it's not sentient, when a couple of days later I figure out from searching elsewhere why its effort would never work and tell it why, it just apologises and tells me it already knew that. As I go along telling it what errors I'm getting, it keeps offering alternative solutions that again look like exactly what I want but are completely broken.
I haven't had it produce a single thing that was any use to me yet, but so often it looks like it's done something almost magical. One day I'm sure it'll get there; in the meantime I'm learning to loathe it.
Separately, I asked it to create a job advert for a role in my wife's business, and it did a decent job of that; but there it's far easier to walk a path from what it provides to an acceptable solution. Programming is hard.
It never gives me perfect code, but it gets me 90% there.
For example, I just read the 2017 Google attention paper a few days ago, and with ChatGPT's help I was able to build a complete implementation using only numpy.
It took a full day to generate and organize the code and unit tests. Then two days of debugging and cross referencing.
But, this was impossible before. I barely knew anything about transformers or neural network implementations.
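I can't speak to exactly how their full implementation looked, but the core of that paper is scaled dot-product attention, which really does fit in a few lines of numpy. A minimal sketch (shapes and variable names are my own choices):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # "Attention Is All You Need", eq. 1: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ V, weights

# toy example: 3 queries attending over 4 key/value pairs, dim 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

The full transformer wraps this in multi-head projections, feed-forward blocks, and layer norm, but this single function is the piece the paper is named after.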
I can’t even imagine what truly motivated people are doing with it.
Completely agree. It gives me the headstart I need that would otherwise take hours of careful searching + crawling through docs, source code, examples, issue trackers, etc.
Do you not feel like you're losing something here, though? You haven't learned anything new or improved your understanding as you would have if you did the research.
This is my concern as well, since I learn so much by incidentally reading docs and example code. That said, many people copy-paste Stack Overflow code without reading or understanding it. So, this is not a new problem.
Not necessarily, no. I still need to refer to the docs, but now I get some additional context. As a simple example, if there’s some library that needs initialization I can ask “how do I initialize a new <library thing>?”. Then it’ll spit out a maybe correct example of initialization, but most importantly it’ll have some new keywords/functions/buzzwords that provide extra context for my manual search through documentation & source code.
If I were a student, yeah it could probably do my homework. As a professional, I find that it greatly helps me navigate through related ideas.
I definitely learned a lot. Since I had to cross reference to text books, Wikipedia, and github, I gained a good grasp of the material. It helped that I had a math and stats background, though. But, I imagine learning linear algebra this way would work well too.
For code, I love using it like that as well: no need to try to understand the API docs (if they're even usable), just be pointed in the right direction.
But code at least has a pretty strict check on correctness afterwards. Thinking about it, it's pretty scary how ChatGPT results can be used without proper validation. And they will be: the step from "looks good to me" to "it is good" is a small one when you just want an answer.
I've been using it to learn Python, and I produced Conway's Game of Life with pygame, having never touched Python before!
Now I'm working on another Python project to try to split the audio of songs into individual lines to learn the lyrics. One approach has been downloading lyric videos off YouTube and munging them with ffmpeg to detect frame changes, which should give me the timestamps to split the audio on. It gave me all sorts of wrong-enough-to-be-slightly-annoying advice about ffmpeg, but in the end what it did give me was _ideas_ and a starting point! I've ended up hashing the frames and comparing the hashes, but I've since learned that that produces false positives, and it was able to give me advice on using ensemble methods to improve the accuracy, which has panned out and helped me solve the problem.
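For anyone curious what "hashing the frames" looks like in practice: I'm assuming something like an average hash (aHash), one of the standard perceptual-hash techniques. A sketch operating on grayscale frames as 2D numpy arrays (the function names here are mine, not from any particular library):

```python
import numpy as np

def average_hash(gray, hash_size=8):
    # downscale by block-averaging to hash_size x hash_size, then
    # threshold each cell against the mean -> 64-bit fingerprint
    h, w = gray.shape
    gray = gray[: h - h % hash_size, : w - w % hash_size]
    blocks = gray.reshape(hash_size, h // hash_size,
                          hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a, b):
    # number of differing bits; small distance ~ "same slide"
    return int(np.count_nonzero(a != b))

frame = np.random.default_rng(0).random((360, 640))
dist = hamming(average_hash(frame), average_hash(frame))  # 0 for identical frames
```

The false-positive problem the parent mentions is exactly why a single hash isn't enough: near-identical frames with different text can land within a small Hamming distance, which is where combining several signals (an "ensemble") helps.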
I think for me, this is stuff I could have accomplished using Google if I had been sufficiently motivated, but it's lowered the level of frustration quite significantly being able to ask it follow up questions in context.
In the end, I don't think I mind the random bullshit it makes up from time to time. If I try it and it doesn't work, then fine. It's the stuff I try and it does work that matters!
Still very much in flight, but progressing pretty quickly. Just tonight I decided I wanted to try using OCR to extract the text of the lyrics from the frames of the video so that I can eventually match up the clips with the lines, and so far I've been through a series of iterations of different techniques with increasing accuracy. All of this has been driven by asking ChatGPT, working through its suggestions, asking probing questions, and even using it to debug. It's just mind-blowing. This is a project I wanted to do years and years ago and just didn't know where to start, or the effort simply would have been more than I was willing to put in, but I actually think I'm going to succeed at it, and I don't think it'll take me too long. It's making programming fun again.
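I don't know which OCR stack the parent settled on (Tesseract via pytesseract is a common choice), but a big part of those accuracy iterations is usually preprocessing: OCR engines want clean black-on-white text. A sketch of Otsu's thresholding, a standard binarization step, in plain numpy:

```python
import numpy as np

def binarize_for_ocr(gray):
    # Otsu's method: pick the threshold that maximizes the
    # between-class variance of the two resulting pixel groups
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum = np.cumsum(hist)                        # pixels at or below each level
    cum_mean = np.cumsum(hist * np.arange(256))  # intensity mass at or below
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total                  # weight of the dark class
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t - 1] / cum[t - 1]
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return (gray >= best_t).astype(np.uint8) * 255
```

Feeding the binarized frame into the OCR engine, instead of the raw video frame with gradients and backgrounds, is often the single biggest accuracy win.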
ChatGPT totally saved one of my side projects. I was getting close to that abandonment phase due to a blocker. I had the solution in paper form, but I couldn't make myself type it into a computer and experiment. Playing with things in a prompt requires a lot less mental effort. "Oh that looks promising..."
I taught myself how to architect a software DSP engine in about 30 minutes last night. "Now rewrite that method using SIMD methods where possible" was a common recurrence. It's incredible how much you can tweak things and stay on the rails if you are careful. Never before would I have attempted to screw with this sort of code of my own volition. Seeing the trivial nature by which I can request critical, essential information makes me reconsider everything. The moment I get frustrated or confused, I can pull cards like "please explain FIR filters to me like I am a child".
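I obviously can't reproduce that DSP engine, but for anyone who took up the "explain FIR filters" offer: an FIR filter is just a weighted sum of the most recent input samples. A minimal sketch in numpy, whose vectorized ops are the Python-level analogue of the SIMD rewrites the parent describes:

```python
import numpy as np

def fir_filter(x, taps):
    # direct-form FIR: y[n] = sum_k taps[k] * x[n - k]
    # np.convolve does the multiply-accumulate in one vectorized call
    return np.convolve(x, taps)[: len(x)]

# 4-tap moving average: a simple low-pass FIR that smooths a step input
taps = np.ones(4) / 4
x = np.array([0.0, 0.0, 4.0, 4.0, 4.0, 4.0])
y = fir_filter(x, taps)  # step ramps up gradually: [0, 0, 1, 2, 3, 4]
```

Because the output depends only on a finite window of inputs (no feedback), FIR filters are always stable and vectorize well, which is why they're such a natural target for SIMD.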
same here. often it's not even _wrong_ per se, it just doesn't do what i was _actually_ asking. it's like asking an enthusiastic intern to do a thing for you, except interns are smarter.
I have also tested it retroactively on some tricky debugging sessions that I had previously spent a lot of time on. It really goes down the wrong path. Without asking leading questions and, well, proper prompting, you may end up wasting a lot of time. But that's the thing - when you're investigating something, you don't know the root cause ahead of time, you _can't_ ask questions that'll nudge it in the right direction. It ends up being a case of blind leading the blind.
As an SRE, it's handling the tricky debugging (the kind with awful Google results) that would save me the most time. The trivial stuff takes less time than the slog of prompting the AI in the right direction.
I keep seeing people saying they're using it for technical work. It must be design-oriented work rather than troubleshooting, because most of my experience with the latter has been abysmal.
For me it's gotten a few right, but a few terribly wrong. The other day it completely hallucinated a module that doesn't at all exist (but should!), and wrote a ton of code that uses that module. It took me a little while to figure out that the module I was searching for (so I could install it into the project) wasn't real!
I've been using them in combination and that has been a real boost to me! Co-pilot is great for ideas. Sometimes I want an explanation of what something is doing or to ask about methods for accomplishing a certain task etc. I'm just using ChatGPT in the web browser at the moment, but in the IDE itself would be fantastic, though at the moment I don't feel confident about giving an API key to a 3rd party extension that I don't fully trust.
for code completion, copilot is significantly better;
for high level api exploration of reasonably popular frameworks openai can offer something different, at times very valuable despite the very frequent hallucinations.
on the other hand sometimes I re-appreciate good old documentation; openai here has a psychological effect of removing 'api-fear'