
I tried Cursor again recently, starting with an empty folder and asking it to use very popular technologies that it surely must know a lot about (TypeScript, Vite, Vue, and Tailwind). It should have been a home run.

It went south immediately. It was confused about the differences between Tailwind 3 and 4, leading to a broken setup. It wasn't able to diagnose the problem and just got more confused, even with patient guidance from me. Worse, it was unable to apply basic file diffs or deletes reliably. In trying to determine whether this was a known issue with Cursor, it decided to search for bug reports - a great idea, except it tried to search the codebase for them, which, I remind you, contained only code it had written itself over the past half hour or so.

What am I doing wrong? You read about people hyping up this technology - are they even using it?

EDIT: I want to add that I did not go into this antagonistically. On the contrary, I was excited to have a use case that I thought must be a really good fit.



My recent experience has been similar.

I'm seeing that the people hyping this up aren't programmers. They believe the reason they can't create software is that they don't know the syntax. They whip up a clearly malfunctioning, incomplete app with these new tools and are amazed at what they've created. The deficiencies will sort themselves out soon, they believe. And then programmers won't be needed at all.


Most people do not have the talent and/or discipline to become good programmers and resent those who do. This alone explains a lot of the current argument.


Just trying to help explain the issues you've been hitting, not to negate your experience.

First, you might've been using a model like Sonnet 3.7, whose knowledge cutoff doesn't include Tailwind 4.0. The model should know a lot about the tech stack you mentioned, but it may not know the latest major revisions if they're very recent. If that's the case, you should have better luck with a newer model like Sonnet 4 / Opus 4, or by providing the relevant updated docs in the chat.
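If you'd rather sidestep the cutoff entirely, the lowest-effort fix (my suggestion, not something the original poster tried) is to pin Tailwind to the v3 major that the models know well, e.g. in package.json (version number illustrative):

```json
{
  "devDependencies": {
    "tailwindcss": "^3.4.0"
  }
}
```

Then upgrade to v4 later, once you can paste the v4 docs into the chat or the models have caught up.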

Second, Cursor is arguably not the top-tier hotness anymore. Since it's flat-rate subscription based, its default mode has to be pretty thrifty with the tokens it uses. I've heard (I don't use Cursor) that Cursor's Max Mode[0] improves on that (you pay based on tokens used), but I'd recommend just using something like Claude Code[1], ideally with its VS Code or IntelliJ integration.

But in general, new major versions of SDKs or libraries will give you a worse experience. Stable software fares much better.

Overall, I find AI extremely useful, but it's hard to know which tools - and even which ways of using those tools - are the current state of the art without being immersed in the ecosystem. And those change pretty frequently. There's also a ton of over-the-top marketing, of course.

[0]: https://docs.cursor.com/context/max-mode

[1]: https://www.anthropic.com/claude-code


I had some success with two front-end projects: one in 2023 using a Mixtral 7b local model, and one just this month with Codex. I'm an experienced programmer (35 years coding, 28 professionally). I hate web design and never cared to learn JavaScript.

The first project was a simple touch-based control panel that communicates via REST/WebSocket and runs a background visual effect to prevent screen burn-in. It took a couple of days to complete. There were often simple coding errors, but they were trivial to fix.

The second is a 3D wireframe editor for distributed industrial equipment site installations. I started by just chatting with o3 and got the proverbial 80% within a day: orbital controls, manipulation and highlighting of selected elements, property dialogs. Quite soon it became too unwieldy for the laggy OpenAI chat UI, so I switched to Codex to complete most of the remaining features.

My way with it is mostly:

- ask for no fancy frameworks: my projects are plain JavaScript, which I don't really know; it makes no sense to pile React and TypeScript on top when I'm even less familiar with those

- explain what I want by defining data structures I believe are the best fit for internal representation

- change and test one thing at a time, implement a test for it

- split modules/refactor when a subsystem gets over a few hundred LOC, so that the reasoning can remain largely localized and hierarchical

- make o3 write an LLM-friendly general design document and a description of each module; Codex uses these to check its assumptions
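To illustrate the "define the data structures first" point, here's a made-up sketch (names and shapes are hypothetical, not from my actual project) of what I'd hand the model for a wireframe editor before asking for any features:

```javascript
// Hypothetical internal representation for a 3D wireframe editor,
// written out up front so the model builds around it.
function makeNode(id, position) {
  return { id, position, selected: false }; // position: [x, y, z]
}

function makeEdge(fromId, toId) {
  return { from: fromId, to: toId }; // undirected wireframe edge
}

function makeScene() {
  return { nodes: new Map(), edges: [] }; // id -> node lookup, edge list
}

// An invariant worth stating explicitly in the prompt:
// edges may only reference node ids that exist in the scene.
function addEdge(scene, edge) {
  if (!scene.nodes.has(edge.from) || !scene.nodes.has(edge.to)) {
    throw new Error('edge references unknown node');
  }
  scene.edges.push(edge);
}
```

Spelling out the invariants alongside the shapes gives the model something concrete to test against, instead of letting it invent its own representation.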

As mentioned elsewhere, the code is mediocre at best, and it feels a bit like comparing C compiler output to my hand-written assembly back in the day. It works, though, and it doesn't look terribly inefficient.


> It was confused about the differences between Tailwind 3 and 4

I have the same issue with Svelte 4 vs 5. Adding some version notes to the prompt used for that project helps, sort of.
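For the curious, the notes look roughly like this (an illustrative sketch; adapt to your own project):

```
Notes for the model:
- This project uses Svelte 5 with runes: $state, $derived, $props.
- Do NOT use Svelte 4 patterns: no `export let` props, no `$:` reactive
  statements, no `on:click` (use `onclick={...}` instead).
```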


It didn't seem like it ever referred to the documentation. Obviously, if it's only going to draw on its "instinctual" knowledge of Tailwind, it's more likely to fall back on a version that's been around longer, leading to incompatibilities with the version that's actually installed. A human doing the same task would probably pull up the setup guide on the website the moment they realized they were confused.


It would be nice if you could download a minified-for-LLM doc file that gave the LLM the public interface of the lib(s) you're using.


https://context7.com/jj-vcs/jj?tokens=55180 is supposed to be exactly this; I have yet to try it, though.


Tailwind 4 has been causing Claude a lot of problems for me, especially when upgrading projects.

I managed to get it to do one just now, but it struggled pretty hard, and still introduced some mistakes I had to fix.


> What am I doing wrong?

Wrong tool.



