I'm always impressed by how fast people get used to new things. A couple of years ago something like ChatGPT was completely impossible, and now people complain that it sometimes doesn't do what you told it to and sometimes lies. (Not saying your points aren't valid or that you shouldn't raise them.)
Some of the points are just not fixable at this point due to tech limitations. A language model currently has no reliable way to estimate its own confidence, and there is no known way to completely do away with hallucinations (lies). There need to be some more fundamental improvements for this to work reliably.
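To make the confidence point concrete, here's a minimal sketch, assuming an API that exposes per-token log-probabilities (the interface and all numbers below are invented for illustration): the obvious proxy, aggregated token probability, measures how fluent the text was under the model, not whether its claims are true, so a confidently worded hallucination can still score high.

```typescript
// Sketch of why token probabilities are not a real confidence measure.
// Some LLM APIs expose per-token log-probabilities; aggregating them
// scores how *fluent* the output was under the model's distribution,
// not whether it is factually correct. All numbers here are invented.

interface TokenLogProb {
  token: string;
  logprob: number; // natural log of the probability the model assigned
}

// Geometric mean of token probabilities over the generated sequence.
function pseudoConfidence(tokens: TokenLogProb[]): number {
  if (tokens.length === 0) return 0;
  const avgLogProb =
    tokens.reduce((sum, t) => sum + t.logprob, 0) / tokens.length;
  return Math.exp(avgLogProb);
}

// A fluently phrased hallucination: every token is individually likely,
// so the score comes out high even though the answer is wrong.
const confidentlyWrong: TokenLogProb[] = [
  { token: "The", logprob: -0.05 },
  { token: " capital", logprob: -0.1 },
  { token: " of", logprob: -0.05 },
  { token: " Australia", logprob: -0.1 },
  { token: " is", logprob: -0.05 },
  { token: " Sydney", logprob: -0.3 }, // wrong, but a very plausible token
];
console.log(pseudoConfidence(confidentlyWrong).toFixed(2)); // ≈ 0.90
```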
As a dev trying to create 3D games in my spare time, I actually am (I'm also eager to use stableprojector once I upgrade my 10-year-old PC).
It's really hard having to be a 3D modeler, animator, art director, and developer, all while having good enough taste to know you're never going to have the time to build what you'd find acceptable.
This is my first time making something 3D, and code-wise Codex has been very useful, even creating an in-game 3D editor as well as netcode.
The trickiest part is really working in 3D, and it comes with a lot of extra scope you normally take for granted: animation, UV texturing, rigging for humanoids, making sure things don't clip through each other, etc.
I'm still learning Blender, but it's slow going. I haven't tried the MCP for it yet, but I want to get proficient enough to produce PSX-style models and textures...
I'm kind of keen to see what mess Claude Code could make of a small Unreal Engine 5 C++ project. Or what clever tricks it could actually pull off in that environment.
This person related that in previous years, artists were essential members of every Game Jam team. Getting art for the games was the hardest, most limiting step: without an artist, a game was limited in scope, and not every team could get one.
This year, many teams didn't have artists at all, and yet they were able to create more complete games with good looking art assets.
This isn't "firing" artists. This is making engineers more effective.
I work in the video space now, and I can't even begin to tell you how much more effective AI models make editors. It's like a superpower.
TypeScript with NextJS. I've also used AI tools with C and Zig, and AI is much better at writing TS. But even so, it's still not that great, largely because the quality of the code AI writes isn't good enough, so I have to spend a decent chunk of time fixing it.
Everyone I know who has tried to use AI in a large codebase has had similar experiences. AI is not yet good enough at following the rules of your codebase (i.e., following structure, code style, library usage, re-using code, refactoring, etc.). This makes it far less useful for writing code changes and additions. It can still be useful for small changes, or for writing first drafts of functions/classes/interfaces, but for more meaningful changes it often fails.
That is why I believe that right now, if you want to maintain a large codebase with a high bar for quality, AI tools are just not good enough to write most of the code for you. The solution is not to get AI to write even more code for you to review, throw out, and iterate on in a frustrating cycle. Instead, I believe it is to notice where AI is helpful, focus on those use cases, and avoid it where it is not.
That said, AI labs seem to be focusing a lot of effort on improving AI for coding right now, so I expect a lot of progress will be made on these issues in the next few years.
This illustrates why I'm so skeptical of all these "end-to-end encrypted" closed-source solutions like WhatsApp: yes, they're end-to-end encrypted, so the server doesn't necessarily get to see what's going on, but what's the point when I can't trust the client?
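A minimal sketch of why client trust matters, using a toy AES-256-GCM send path with Node's built-in crypto module (the "compromised client" and its leak are invented for illustration, not any real messenger's behavior): the server only ever sees ciphertext, but the client necessarily holds both the plaintext and the key, so a malicious client build can leak the message before encryption ever happens.

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// Toy E2EE send path: the server only ever sees this ciphertext.
function encryptMessage(key: Buffer, plaintext: string) {
  const iv = randomBytes(12); // 96-bit nonce for AES-256-GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// A hypothetical compromised client build: the encryption itself is
// untouched, but the plaintext is copied out before it is encrypted.
// Protocol-level E2EE cannot defend against this.
function sendMessage(key: Buffer, plaintext: string) {
  console.log("leaked before encryption:", plaintext); // stand-in for a covert upload
  const envelope = encryptMessage(key, plaintext);
  // ...hand `envelope` to the real server as usual; on the wire this
  // client is indistinguishable from an honest one.
  return envelope;
}

sendMessage(randomBytes(32), "meet at noon");
```

With a closed-source client there's no way to audit that the code above the encryption layer isn't doing exactly this, which is the whole point of the complaint.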