This is one of the most cogent takes on the current state of AI coding tools. I instantly subscribed. Good work, man!
It seems like we're getting closer to another layer of abstraction emerging, where ultimately there will be, basically, an AI coding language designed to optimize token usage and minimize power consumption.
Until such time, we're all just trying to figure out how best to use these tools, and honestly it's kind of an amazing time to be around to see this new tech emerging.
Looking forward to following you and learning more. I am curious if you have used the bmad method tools for a more orchestration-driven approach to using basically any model. Projects like that are definitely going to be instrumental in the next wave of innovation.
Nice one, I got a lot out of this post. Took notes in Obsidian while picking through it. I didn't know you had used Gemini to help you write it till I saw the footnote, so I can honestly say from my perspective ... no AI smell
Can't wait to try this approach. It seems promising.
Nice post, Shrivu! I learned a few new things, but I am not sure I agree with all of them. Happy to be the first committer. I’ll just respond in the order you laid things out
1. “AI Cannot Read Your Docs” — catchy title, but not fully true. AI can read docs if you wire it up right: Context7 MCP, llmstxt, or even just letting an LLM search or fetch content. The bigger point, which I agree with, is that we should think about redesigning the software itself
2. Showing CLI output with object details and next steps is super useful. Not just for AI but for beginners trying to understand what’s happening or track the execution history (see the sketch after this list)
3. Error messages should always explain what went wrong and how to fix it, not just say “something went wrong”
4. With AI agents, good code comments are critical. The harder part is keeping them fresh when you refactor or redesign. Some people even go all the way, treating prompts as the only real “source” and having LLMs generate the actual Java/C/TypeScript code
5. CLI really is a great interface. That’s basically why Claude Code exists as a terminal tool instead of a web UI or IDE plugin. Even macOS apps are increasingly scriptable — maybe that’s a growing trend
6. Building interfaces that feel familiar, like pytest or pandas, helps adoption. Reinventing syntax for no reason is usually a barrier
7. Not sure about organizing code strictly by feature. In practice, backends might be Java/Go/Rust, frontends TypeScript, middleware in Go. You wouldn’t put those in the same folder like “addEmail.” Maybe with LLM coding, a feature-oriented structure could make more sense for microservices, but today it feels messy
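On (2) and (3), here is a rough sketch of how both patterns could look in a single hypothetical CLI command. Every name and flag in it (mycli, create_bucket, the bucket rules) is made up for illustration, not from the post:

```bash
# Hypothetical "create bucket" handler, sketching points 2 and 3:
# echo object details plus likely next steps, and make errors actionable.
create_bucket() {
  local name="$1"
  if [[ "$name" =~ [A-Z] ]]; then
    # Point 3: say what went wrong AND how to fix it.
    echo "error: bucket name '$name' contains uppercase characters" >&2
    echo "fix: use lowercase, e.g. '$(tr '[:upper:]' '[:lower:]' <<<"$name")'" >&2
    return 1
  fi
  # Point 2: object details first, then hints for plausible next steps.
  echo "created bucket '$name' (region=us-east-1, versioning=off)"
  echo "next steps:"
  echo "  mycli upload $name <file>    # add objects"
  echo "  mycli policy $name --show    # review access"
}
```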
Overall, great post and worth a wide read. Would you be ok if I put together some visuals or infographics to share it further?
Thanks!
(1) Have to have catchy titles nowadays (: but I do mean it in the sense that, sure, you can wire up various retrieval systems or even use 1M-context models, but in reality this is far from what you'd expect of a human eng who has "read" the docs and accurately applies them. For AI to truly read docs, it's not just context but also instruction following.
(7) Totally agree, and it does depend on the stack where the optimal path lies. It could be as simple as naming (parallel folder structures for features in different parent folders, as sketched below) or picking the same language for FE+BE (e.g. part of why Next.js can be very AI friendly as a full-stack library).
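To make that concrete, a parallel structure could look something like this (hypothetical feature and folder names, just one way to slice it):

```
backend/features/add_email/     # Go/Java service logic
frontend/features/add_email/    # TypeScript UI for the same feature
middleware/features/add_email/  # any glue in between
```

Same feature name under each parent folder, so an agent can grep one term and land in every layer.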
Feel free to share as long as we are linking back somewhere to this one!
Late to the party, I came here after reading https://blog.sshh.io/p/how-i-use-every-claude-code-feature.
Very insightful tips! Thank you for sharing these! I loved (7) as I have been pioneering this since pre-AI times for my engineers.
And I wanted to discuss (2), "The Successful Output", and how you described what the output should be.
I understand this can be really useful for workflows. My concern is that it will lock individual steps into limited purposes. Take the example case, "deploy.sh": I probably want to use it as a reusable component in different workflows; in one I'll follow it up by testing in production, in another by emailing a specific user, etc.
Did you run into a need like this, maybe since the time of writing?
I haven't been experimenting with Claude Code for too long, so I'm looking for tips on defining workflows that consist of reusable components.
Thanks!
It's important to note you aren't hardwiring the next steps but giving the agent hints on syntax for future actions. I trust the agent to ignore them if they're not part of its goal.
Also, the promptable outputs don't need to be code/explicit syntax -- they could be a suggestion to test in production, etc.
I almost think about it like the new SKILLs paradigm, just that the progressive disclosure is happening in the output of a command.
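As a concrete sketch, the tail end of a hypothetical "deploy.sh" could print something like this (all script names and flags here are made up; the hints are plain suggestions the agent can act on or skip):

```bash
# Tail of a hypothetical deploy.sh: emit promptable hints, not hardwired steps.
# SERVICE and GIT_SHA stand in for values set earlier in the (imagined) script.
SERVICE="${SERVICE:-my-service}"
GIT_SHA="${GIT_SHA:-abc123}"

echo "deployed $SERVICE@$GIT_SHA to staging"
echo "common follow-ups (skip any that aren't part of your goal):"
echo "  ./smoke_test.sh staging $SERVICE   # verify the deploy"
echo "  ./deploy.sh $SERVICE --env prod    # promote to production"
echo "  (or draft the notification email yourself, no script needed)"
```

Because the hints are just text, the same deploy.sh stays reusable across workflows; each calling workflow decides which follow-up, if any, applies.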