There is a very good chance that you are leaving most of the potential of your AI coding assistant on the table. It’s time to get practical and explore some of the best techniques that top agentic engineers use for AI-assisted coding. These are the individuals who have a real system in place for working with their coding agents.
This article assumes you have a basic understanding of how to use coding agents. Now, we will get specific, delving into the powerful unlocks that can transform your workflow. The best part? None of this requires new tools. It is simply a better way of working.
## 1. PRD-First Development
At the top of the list is PRD-first development. A PRD, or Product Requirements Document, can mean many things, but in this context it is a single markdown document that defines the entire scope of work for your project.
For greenfield development—starting from scratch—the PRD contains everything you need to build to complete your proof of concept or MVP. This single document becomes the north star for your coding agent. It is the source of truth for everything you have to build. From the PRD, you derive all the individual features you will build out with your coding agent. It is crucial not to have your agent do too much at once, or it will fail. Use the PRD to split your project into more granular features, such as implementing the API, designing the user interface, or building the authentication system.
For brownfield development on an existing codebase, the PRD serves to document what you already have and what you want to build next. Either way, you are creating the definitive guide for your project. Many developers miss this step. They dive right into building the first feature without establishing a connection between the different iterations they perform with their coding agent.
The process involves having a conversation with your AI assistant about what you want to build. Once you and the agent are on the same page, you consolidate that conversation into a structured PRD.
Here is a sample structure for a PRD:
```markdown
# Project: Habit Tracker Application

## 1. Target Users
- Individuals looking to build and maintain positive daily habits.
- Users who prefer a simple, clean, and motivating interface.

## 2. Mission
- To provide a frictionless and rewarding habit-tracking experience that encourages consistency.

## 3. Scope (In)
- User authentication (Sign up, Login, Logout).
- Create, Read, Update, Delete (CRUD) for habits.
- A calendar view to visualize habit completion streaks.
- Basic user profile management.

## 4. Scope (Out)
- Social features (sharing, leaderboards).
- Advanced analytics and reporting.
- Mobile-specific applications (web-first).

## 5. Architecture
- **Frontend:** React with Vite
- **Backend:** Node.js with Express
- **Database:** PostgreSQL
- **Authentication:** JWT
```
This document is now your north star. For all subsequent feature development, you will reference the PRD to guide your coding agent.
## 2. Modular Rules Architecture
The next big concept is a modular rules architecture. Most people make their global rules files (`agents.md`, `claude.md`, etc.) far too long. These are the constraints and conventions loaded into your coding agent’s context at the start of every conversation. If this file isn’t lightweight, you will overwhelm the LLM.
Keep your global rule file short and focused on rules that apply universally. This includes things like commands to run, your testing strategy, and your logging strategy.
For task-specific rules—like component guidelines for the frontend, deployment procedures, or API construction patterns—split them into separate markdown documents. Your primary global rules file can then reference these modules. This way, you only load these specific rules into the LLM’s context when they are relevant to the task at hand.
Consider this example of a global rules file:
```markdown
# Global Rules: agents.md

## Core Tech Stack
- Frontend: React, TypeScript
- Backend: Node.js, Express
- Testing: Jest, React Testing Library

## Project Structure
- `src/components`: Reusable UI components
- `src/pages`: Page-level components
- `src/services`: API interaction logic

## Core Commands
- `npm run dev`: Starts the development server.
- `npm run test`: Runs all unit and integration tests.

## Logging Standards
- Use the `pino` logger for all server-side logging.
- Log errors with `level: 'error'` and include a `stack` trace.

## [reference]
- When working on API endpoints, read `./reference/api-development.md`
- When creating new React components, read `./reference/component-design.md`
```
The `[reference]` section is key. It tells the coding agent when to load additional, more detailed context. The `api-development.md` file might be a thousand lines long, but it is only loaded when working on the API. This protects the agent’s precious context window, a resource whose importance is often underestimated.
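For illustration, a reference module like `api-development.md` might open with sections such as these. This is a hypothetical sketch; the route prefix, error shape, and validation rule shown here are invented for the example, and the real contents depend entirely on your project:

```markdown
# API Development Guide

## Endpoint Conventions
- All routes live under `/api/v1` and their logic belongs in `src/services`.
- Use plural nouns for resources: `/api/v1/habits`, not `/api/v1/habit`.

## Error Handling
- Return JSON errors of the shape `{ "error": { "code": "...", "message": "..." } }`.
- Log every 5xx response with `pino` at `level: 'error'`, including the stack trace.

## Validation
- Validate request bodies at the route boundary before touching the database.
```

Because this file is only pulled in when the agent works on API endpoints, it can afford to be far more detailed than anything in the global rules file.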
## 3. Commandify Everything
This technique may seem obvious, but its importance cannot be overstated: commandify everything. Anytime you send a prompt to your coding agent more than twice, it’s an opportunity to turn it into a command or a reusable workflow.
These commands are simply markdown documents that define a process for your agent. Making a git commit, performing a code review, or loading context from your codebase—nearly any part of your development workflow can be turned into a command. This will save you thousands of keystrokes over time and allows you to share these workflows with others.
By packaging repetitive prompts into defined workflows, you create a powerful and efficient system for interacting with your AI.
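As a concrete example, a commit workflow might be captured as a markdown command document like the following. This is a hypothetical sketch; the file location and how your agent invokes it (slash command, file reference, etc.) depend on your particular tool:

```markdown
# Command: Commit Changes

1. Run `npm run test` and stop immediately if any test fails.
2. Run `git status` and `git diff` to review all staged and unstaged changes.
3. Stage only the files relevant to the current change with `git add`.
4. Write a short commit message that summarizes what changed and why.
5. Create the commit. Do NOT push unless explicitly asked.
```

Instead of typing those five instructions every time, you invoke the command, and the process stays consistent across every commit the agent makes.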
## 4. The Context Reset
Another technique related to context management is the context reset. Between your planning and execution phases, you should always restart the conversation window with your coding agent.
This is only possible because you end your planning session by outputting a detailed plan to a document, typically in markdown. This document contains all the context needed for the execution phase. When you start the new conversation to build the solution, you don’t need to prime the agent or explain what to build. You just feed it the plan document. That’s it.
The reason for this is to keep the context as light as possible during the actual coding. This leaves maximum room for the agent to reason about its task and perform self-validation.
The workflow looks like this:
- Plan: Start a conversation, prime the agent on the codebase, and discuss the next feature, using the PRD as a guide.
- Output: Create a structured plan in a markdown file.
- Reset: Wipe the context window or restart your coding agent.
- Execute: Start a new conversation and provide only the plan document as context.
Here’s what a simple plan document might look like:
```markdown
# Plan: Improve Calendar Visuals

## Feature Description
Enhance the UI of the habit tracker's calendar view to be more intuitive and visually appealing.

## User Story
As a user, I want to see a clear and attractive representation of my habit streaks on the calendar so that I feel motivated to continue.

## Context Files to Reference
- `src/components/Calendar.jsx`
- `src/styles/calendar.css`

## Task Breakdown
1. Modify `Calendar.jsx` to apply different CSS classes for completed days, missed days, and future days.
2. Update `calendar.css` with new styles for these classes. Use a green shade for completion and a light gray for future dates.
3. Ensure the component remains responsive on mobile screen sizes.
4. Add a new test case to `Calendar.test.jsx` to verify that the correct classes are applied based on habit data.
```
This plan is comprehensive because it’s the only context the agent will have during execution.
## 5. System Evolution
Perhaps the most important technique is system evolution, the most powerful way to use coding agents: treat every bug as an opportunity to make your agent stronger.
Instead of just encountering a bug, fixing it manually, and moving on, look into your AI agent’s system. What can you fix to ensure this issue doesn’t happen again? This approach is especially powerful when you notice recurring patterns in the errors your agent makes.
Typically, the fix will involve updating one of three areas:
- Global Rules: Your core `agents.md` file.
- Reference Context: Your specialized markdown documents.
- Commands/Workflows: Your defined processes.
Here are a few examples:
- Bug: The agent uses the wrong import style.
- Fix: Add a new one-line rule to your global rules specifying the correct format.
- Bug: The AI forgets to run tests after making changes.
- Fix: Update the template for your structured plan to include a mandatory “Testing” section.
- Bug: The agent doesn’t understand the authentication flow.
- Fix: Create a new `authentication.md` reference document and update your global rules to reference it when working on auth-related tasks.
The process is a form of self-reflection. After you build a feature and validate it, you might notice something is wrong. You then prompt your agent:
“I noticed that XYZ is not working correctly, and I had to make this fix. Please review the rules and commands we used. What could we improve in our process or rules so this issue doesn’t happen again?”
This is more than just a prompt; it’s a mindset. Don’t just fix the bug; fix the system that allowed the bug. Adopting this strategy will take you far, as your coding agent will become progressively more powerful and reliable over time.