I was a bit skeptical about AI coding assistants at first. But since Skills came out, I can really feel the power of agentic workflows. My workflow is still heavily human-in-the-loop, but I can certainly get better results with better custom rules and templates.
I use Cursor, but these features should also work in other general agents, e.g. Claude Code.
1. Plan Mode #
Plan Mode is extremely useful for exploring the trade-offs between different solutions before making any changes.
Here is how I used it:
- Plan
  - Separation of concerns:
    - `SPEC.md`: user story, requirements, etc., to be discussed with the PM.
    - `TECHNICAL_DETAIL.md`: architecture, data flow, API design, complexity analysis, etc., to be discussed with technical roles.
    - `IMPLEMENTATION.md`: implementation plan, updated throughout the implementation process.
  - Use `SPEC.md` and `TECHNICAL_DETAIL.md` to understand the problem / spec by asking:
    - "Given the spec from the Jira ticket, understand the current user story with the following references: …. Provide a flow chart."
    - "Grill me with specs I need to clarify with the product manager. List out possible edge cases."
    - "Explain the current architecture of …, …. Draw a diagram to visualize the flow and architecture."
  - Use `IMPLEMENTATION.md` to break the work into smaller tasks by asking:
    - "Break features into small, focused tasks."
    - "What's the suggested priority of these tasks in order to minimize the risk of breaking existing features?"
    - "What are the required changes for each task?" (to understand the complexity for better estimation).
  - Context: always provide relevant context: specific code references and documents.
- Execute the first few steps for quick validation: validate the approach on the initial steps before committing to the full implementation.
- Implementation: update the plan after every round, based on the actual AI + human implementation.
- Further usage: keep the plan for later use: decision-making reference, QA testing instructions, tech-sharing materials, work log, etc.
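To make the document split concrete, here is a hypothetical skeleton for `IMPLEMENTATION.md`. The task names and log entry are invented for illustration; only the file names come from the workflow above.

```markdown
<!-- IMPLEMENTATION.md (hypothetical skeleton) -->
# Implementation Plan: <feature name>

Refs: SPEC.md, TECHNICAL_DETAIL.md

## Tasks (ordered to minimize risk to existing features)
1. [ ] Add feature flag and no-op wiring (small, safe to ship)
2. [ ] Extend data model + migration
3. [ ] Implement core flow behind the flag
4. [ ] UI changes
5. [ ] Remove flag after QA sign-off

## Log
- <date>: validated tasks 1-2 with the agent; adjusted scope of task 3.
```

Keeping the log section inside the plan is what makes it reusable later as a work log and QA reference.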
2. Agent Skills #
Agent Skills are an open standard for extending AI agents with specialized capabilities.
Skills I used #
- vercel-react-best-practices
- perform code review
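A skill is a folder containing a `SKILL.md` file: YAML frontmatter (`name`, `description`) that tells the agent when to load it, followed by markdown instructions. Below is a minimal sketch of what a code-review skill could look like; the contents are illustrative, not the actual skill used in this post.

```markdown
---
name: code-review
description: Performs a structured code review of a diff or PR. Use when the user asks for a review.
---

# Code Review

When reviewing code:
1. Summarize the change in one paragraph.
2. Check correctness, edge cases, and error handling.
3. Check consistency with existing project conventions.
4. Report findings as blocking / non-blocking, each with a file and line reference.
```

The `description` matters most: the agent uses it to decide when the skill applies, so it should state both what the skill does and when to use it.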
3. Rules #
Project-based rules that provide context automatically in chat.
Use cases: general coding guidelines, code structure, testing style.
Examples: awesome-cursorrules
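In Cursor, project rules live under `.cursor/rules/` as `.mdc` files, with frontmatter that controls when the rule is attached (always, by glob, or on demand). A sketch of a testing-style rule, with guidelines invented for illustration:

```markdown
---
description: Project testing conventions
globs: ["**/*.test.ts"]
alwaysApply: false
---

- One `describe` block per unit under test.
- Prefer table-driven tests over repeated near-identical cases.
- Never mock the module under test, only its dependencies.
```

Scoping rules with `globs` keeps the automatic context small: the rule is only injected when matching files are in play.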
4. Normal Chat / Others #
- code trace
- debugging
- writing tests
- commit message generation
5. MCPs #
Other tools to try #
- Code review in CI: Code Review with Cursor CLI
- Hooks: https://cursor.com/docs/agent/hooks
- Analyze chat history for personal self-insight.
What are still heavily human involved #
- Clarifying requirements: when the product gets complex, parts of the spec can easily be overlooked by both PM and Dev, even with the help of AI.
- Integrating existing company knowledge as context when writing specs should help.
- Debugging: jumping between tickets, UI, logs, and the codebase is still too complex for AI to handle alone.
Thoughts: Bottleneck shifted #
The bottleneck of delivery has shifted from coding to review and other processes (testing, external dependencies, the release process), as stated in several posts (1, 2). New challenges have already emerged: large numbers of low-quality PRs from inexperienced engineers, human QA as a release blocker...