14 Comments
Alex

Awesome article!

Lucas

Great article. I wonder how much "Code comments and doc-strings" actually matter.

Overall, I love using Cursor/Claude 3.7, but it’s easy to become overly reliant on these tools. While LLMs can occasionally generate bull, they’re still fantastic for quickly grasping new concepts.

Shrivu Shankar

Thanks! In my experience the "bull" % is inversely proportional to the effort spent writing good docs, rules, and simplifying the hot paths. When I'm working on projects now, that's what I start with, and AI does pretty much all the incremental work (w/o hallucinations or quality drop-off).

Reactorcore

Awesome article, the tips were really useful to me. I'm still trying to figure out how to make a data-driven program/game where the AI sets up the backend and effectively leaves me the job of a content developer: implementing game content through INI files and folders, as if it were a modding system.

While it's mostly easy to make a one-off web app, figuring out how to push the AI to build a game with multiple intertwined systems is still eluding me. I'm not sure how to start, and not sure how to get from A to Z in such a complex project.
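The data-driven setup described above, where engine code reads content from INI files so adding content never touches code, might be sketched like this in Python (the folder layout and item fields here are hypothetical, just to illustrate the pattern):

```python
import configparser
from pathlib import Path

def load_items(content_dir: str) -> dict:
    """Load game items from INI files dropped into a folder, mod-style.

    Each .ini file can define multiple items; each section is one item.
    Adding content means adding a file, not changing engine code.
    """
    items = {}
    for path in sorted(Path(content_dir).glob("*.ini")):
        parser = configparser.ConfigParser()
        parser.read(path)
        for name in parser.sections():
            sec = parser[name]
            items[name] = {
                "damage": sec.getint("damage", fallback=0),
                "cost": sec.getint("cost", fallback=1),
                "description": sec.get("description", ""),
            }
    return items
```

A `weapons.ini` containing `[sword]`, `damage = 7`, etc. would then show up as `items["sword"]` without any code changes, which is the "AI builds the systems, you write the content" split.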

Shrivu Shankar

Thanks! Had GPT4.5 write up some thoughts that might be useful: https://chat.sshh.io/share/2dQt6D7GDZu4Qo9mO5BXZ

Samuel

It's quite expensive compared to GitHub Copilot or Gemini Code Assist. Is it really worth it?

Shrivu Shankar

Most of the cost imo is just the cost of running an LLM (i.e. if you built your own IDE on the APIs directly, it would cost a similar amount). So to me it's definitely worth it, and I even pay for both a personal account and a work account.

That being said, Gemini has really stepped up in the last few weeks, so I could see a world where, if that becomes the primary flagship coding model at its current costs, Cursor should reduce prices.

Lucas

This is a great article. I come from a background of 20 years in software engineering, and I've recently been using AI tools such as Cursor to spin up scaffolding code which I can then develop further.

I've found tools like Prompt Genie helpful for writing specific, detailed prompts, which I then paste into Cursor.

Rules are effective if they're at a granular level, such as the module or project, rather than global. Write your rules like an encyclopedia article: focus on the "what" it needs to do rather than the "how".
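As an illustration of that "what, not how" encyclopedia style, a module-level rule might read like this (module and file names are made up for the example):

```markdown
# payments module

- All API calls go through the shared client in src/api/client.ts.
- Money amounts are stored as integer cents, never floats.
- Every new endpoint gets a matching entry in docs/endpoints.md.
```

Each line states a fact about how the module works, rather than step-by-step instructions for the model to follow.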

I'm a developer, so I always review the generated code as you would any other code review, for example before a branch merge.

Also, spend some time exploring the various search tools provided by Cursor, such as codebase_search, read_file, and grep_search. The deeper understanding has helped make my workflow more efficient.

If you're serious about your projects, it's worth considering Cursor's Pro plan. While the basic plan is sufficient for some, the added features of Pro have become important for my work.

Antonio

First of all, thanks for this post!

What are tools like "codebase_search", "read_file", "grep_search", "file_search", and "web_search"? I mean, do those tools come with Cursor? Where can I review all the tools that exist?

Thanks.

Shrivu Shankar

Thanks! Yes, Cursor includes these tools as text in the prompt, and when the LLM uses them, Cursor implements the actual logic to produce the results.
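Mechanically this is the standard tool-use loop: the tool names and their schemas go into the prompt, the model emits a structured tool call, and client-side code runs the real implementation and feeds the result back. A minimal sketch, with a hypothetical dispatch table rather than Cursor's actual code:

```python
import json
import subprocess

def read_file(path: str) -> str:
    """Return the full text of a file on disk."""
    with open(path) as f:
        return f.read()

def grep_search(pattern: str, path: str = ".") -> str:
    """Shell out to grep; return matching lines (empty string if none)."""
    result = subprocess.run(
        ["grep", "-rn", pattern, path],
        capture_output=True, text=True,
    )
    return result.stdout

# The model only ever sees these names and argument schemas as text in
# its prompt; the functions above are what actually run.
TOOLS = {"read_file": read_file, "grep_search": grep_search}

def handle_tool_call(message: str) -> str:
    """Parse a model-emitted call like
    {"tool": "read_file", "args": {"path": "main.py"}}
    and run the matching local implementation."""
    call = json.loads(message)
    impl = TOOLS[call["tool"]]
    return impl(**call.get("args", {}))
```

The result string would then be appended to the conversation so the model can use it in its next turn.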

All the tools are in the linked gist https://gist.github.com/sshh12/25ad2e40529b269a88b80e7cf1c38084

Dan Lucraft

I haven't gotten into using the Cursor directory of rules yet, but I have been using .cursorrules to get started.

And I've noticed that while it might pay attention to some of what's in there, it regularly ignores loads of the rules. Like, there's a rule about using icons from a particular set rather than inlining SVG: it ignores it. Or the rule about using Tanstack Query for all API calls: ignores that.

And it's not like there's loads in there; the whole file is currently only 101 lines. That's the entire extent of our ruleset so far.

My question is - what's the point of all this if most of the time half the rules are just ignored?

Shrivu Shankar

> My question is - what's the point of all this if most of the time half the rules are just ignored?

The point is that when you finally get the rules working, AI can write nearly all of your code (: The "hard" part shifts from writing code to figuring out the best way to one-time explain how your codebase should work.

100 lines could be a lot depending on how it's formatted, but def recommend the project rules, since you can be much more strategic about the context the agent has for different types of changes. There's also the bit in the article about simplifying things (where possible) that the agent consistently gets wrong.
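For example, project rules live as separate files under `.cursor/rules/` and can be scoped with globs so a rule is only attached when relevant files are in play. A hypothetical rule for the icon and API-call conventions mentioned above (the paths and frontmatter values are made up):

```markdown
---
description: Conventions for React components
globs: src/components/**/*.tsx
---

- Icons come from the shared icon set; never inline SVG.
- All API calls go through Tanstack Query hooks.
```

Scoping like this keeps each rule short and targeted, instead of one global file competing for the model's attention on every change.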

Antonio

> My question is - what's the point of all this if most of the time half the rules are just ignored?

> The point is that when you finally get the rules working...

How are the rules going to work if he's saying that half of the time they're just ignored?

Shrivu Shankar

They don't ignore my rules (:

The main takeaway from the post should be that if Cursor is acting weird, it's often because it doesn't work like just another engineer or any traditional IDE. Writing rules that actually work is a new skill (really just communicating with LLMs generally) that requires practice and knowing some of the tips in the article.
