Building a Full-Stack App with AI Sub-Agents Explained in 10 Minutes

By 10xdev team August 03, 2025

A major update in the world of AI-assisted development has been the release of AI sub-agents. This new capability allows for the creation of individual, specialized agents, each with its own area of expertise. You can have one that focuses on UI design, another on front-end development, and a third on security.

The best way to demonstrate this is through an example. Starting with just a single Product Requirements Document (PRD), and utilizing several sub-agents, it's possible to generate a complete, full-stack application, including both the backend and the front end.

While it may appear to be a simple app, it is a fully functional full-stack application. This article will show you how we can get from a single PRD markdown file to a complete implementation across both the backend and front end.

Getting Started: Creating and Using Agents

First, let's cover how you can create and utilize agents within your development environment. Inside a project's .claude folder, you can create an agents subdirectory to hold your custom agent definitions.

.
└── .claude/
    └── agents/
        ├── system-architect.md
        ├── ui-designer.md
        ├── python-backend-dev.md
        └── ...

The recommended way to create a new agent is to use the /agents command directly in the AI chat interface. You can create a new agent and save it within the project to share it with collaborators. It's highly recommended to use the "Generate with Claude" option for this.

For instance, to create a fun example, you could specify: "an agent that tells a joke all the time."

The AI will then generate an agent definition file best suited for this task. The more specific you are about the agent's purpose, the better the result. You can select which tools the agent should have access to. For a simple joke agent, read-only tools would suffice.

A newer feature is the ability to specify different models for different agents. For a mundane task, a lighter model like Sonnet or Haiku is efficient. For complex tasks, a more powerful model like Opus is preferable. You can also choose to "Inherit" the model from the main conversation.

Once created, the agent definition is ready. The description section is crucial because it's how the primary AI knows when and how to delegate tasks to this sub-agent.
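As a concrete illustration, a generated definition for the joke agent might look something like the following. The frontmatter fields (name, description, tools, model) follow the documented agent file format; the exact wording and file name are hypothetical:

```markdown
---
name: joke-teller
description: Tells a short programming joke. Use this agent whenever the user
  asks for a joke, an icebreaker, or some comic relief.
tools: Read, Grep, Glob
model: haiku
---

You are a stand-up comedian specializing in programming humor.
Whenever invoked, respond with one short, clean joke related to
software development. Keep it to one or two lines.
```

The body below the frontmatter becomes the agent's system prompt, and the description field is what the primary AI matches against when deciding whether to delegate a task here.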

You can invoke an agent directly using its tag (e.g., @joke) and provide a prompt. More interestingly, the system will automatically pick the best sub-agent from your list based on the task at hand. This is where the true power of sub-agents lies.

Building the Application: From PRD to Reality

Now, let's build an application using a team of sub-agents.

A word of caution: For this demonstration, we will use a setting that dangerously skips permissions. You should not do this for actual projects, as it could allow the AI to perform destructive actions like deleting files or wiping a production database. It is always best to configure settings to specify exactly what actions are permitted.
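Instead of skipping permissions wholesale, a safer setup is an explicit allowlist in the project's settings file. Here is a sketch, assuming a .claude/settings.json file and illustrative rule patterns:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Grep",
      "Bash(npm run test:*)",
      "Bash(curl:*)"
    ],
    "deny": [
      "Bash(rm:*)"
    ]
  }
}
```

With rules like these, the agents can read files and run tests, but destructive shell commands are blocked outright.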

The starting point for any project should be a Product Requirements Document (PRD). This document outlines the project's goals. It's what you would receive from a client or produce based on their requirements, guiding the development process.

Here is a simplified version of our PRD:

Project: YouTube Video Showcase

Objective: Build a single-page, read-only web application to display a list of YouTube videos.

Key Requirements:

- The primary focus is on a clean, responsive, and visually appealing front end.
- The backend will serve static, hardcoded data.
- The application will fetch and display video titles and thumbnails from a predefined JSON file.

The backend will be powered by a static JSON file containing a list of episodes.

[
  {
    "title": "Cloud Code Sub-Agents Explained",
    "thumbnail": "/images/thumb1.jpg"
  },
  {
    "title": "Advanced Development Techniques",
    "thumbnail": "/images/thumb2.jpg"
  }
  ...
]
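Since the entire backend just serves this file, its core logic fits in a few lines. Here is a minimal sketch, using hypothetical sample titles, that validates each entry has the fields the front end expects:

```python
import json
from pathlib import Path

# Hypothetical stand-in for the project's episodes.json content
EPISODES = [
    {"title": "Episode One", "thumbnail": "/images/thumb1.jpg"},
    {"title": "Episode Two", "thumbnail": "/images/thumb2.jpg"},
]


def load_episodes(path: Path) -> list[dict]:
    """Load the episode list and check each entry has the required fields."""
    episodes = json.loads(path.read_text())
    for ep in episodes:
        if not {"title", "thumbnail"} <= ep.keys():
            raise ValueError(f"episode missing required fields: {ep}")
    return episodes


# Write the sample data out and load it back, as the backend would on startup
sample_path = Path("episodes.json")
sample_path.write_text(json.dumps(EPISODES, indent=2))
titles = [e["title"] for e in load_episodes(sample_path)]
```

Validating up front means a malformed entry fails loudly at startup rather than producing a broken card in the UI.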

With the initial setup complete, the process of building the app can begin.

Step 1: The System Architect

The first step is to use a System Architect agent. This is a specialized agent designed to create a high-level plan for an entire application, covering both front-end and backend architecture.

We can tag this agent specifically and instruct it:

"@system-architect, study the PRD document. Design a Python backend with FastAPI to serve the list of episodes from the episodes.json file via a single /episodes GET endpoint. Also, outline a basic structure for the front end."

By tagging the agent, we ensure that this specific specialist is chosen for the task.

Note on Sub-Agent Context: It's important to remember that when a sub-agent is invoked, it does not have the context of the main conversation. It operates on a blank slate, receiving only the context explicitly passed to it. This design keeps agents focused and reduces token usage, but it means you must be careful to provide all necessary information for the task.

The System Architect will produce a detailed design for both the backend and front-end architecture, along with key technical decisions.

Step 2: Parallel Processing with a UI Designer and Tech Writer

With the high-level architecture designed, we can now perform two tasks in parallel:

1. Write the detailed technical specification to a file.
2. Design the UI/UX wireframes for the application.

For this, we can use a UI Designer agent, which is prompted to create beautiful and intuitive user interfaces, often using ASCII for wireframing to help with implementation later.

The prompt would look something like this:

"Based on the system architect's design, formalize the specification, outline the project structure, and write this to an API_SPEC.md file.

In parallel, use the @ui-designer to design the UI wireframes for the project and write it to a separate UI_WIREFRAME.md file.

Use sub-agents to accomplish these tasks in parallel."

Here, something interesting happens. While we explicitly tagged the UI Designer, we didn't specify an agent for writing the specification. The AI intelligently selected a PRD Writer agent, determining it was better suited for creating detailed technical documentation than the System Architect. The PRD Writer is more like a technical writer, skilled at documenting technical considerations, API endpoints, status codes, schemas, and project structure.

Meanwhile, the UI Designer gets to work on the visual plan. The resulting wireframe might look like this:

Desktop View

+------------------------------------------------------+
| [Logo] Mastery Video Library                         |
|        Learn Advanced Development Techniques         |
+------------------------------------------------------+
|                                                      |
|  +-----------+  +-----------+  +-----------+         |
|  | [Thumb]   |  | [Thumb]   |  | [Thumb]   |         |
|  | Video 1   |  | Video 2   |  | Video 3   |         |
|  +-----------+  +-----------+  +-----------+         |
|                                                      |
|  +-----------+  +-----------+  +-----------+         |
|  | [Thumb]   |  | [Thumb]   |  | [Thumb]   |         |
|  | Video 4   |  | Video 5   |  | Video 6   |         |
|  +-----------+  +-----------+  +-----------+         |
|                                                      |
+------------------------------------------------------+
| (c) 2025 Publication Name                            |
+------------------------------------------------------+

Mobile View

+------------------+
| [Logo] Mastery   |
| Video Library    |
+------------------+
| +--------------+ |
| | [Thumb]      | |
| | Video 1      | |
| +--------------+ |
| +--------------+ |
| | [Thumb]      | |
| | Video 2      | |
| +--------------+ |
| ...              |
+------------------+

This process yields two comprehensive documents: a detailed API specification and a visual guide for the front end.

Step 3: Unleashing the Full Team to Build the MVP

Now, we can unleash the full power of our sub-agent team. By carefully crafting our agents, we can ask the AI to build the entire application with a single prompt, without tagging any specific agent.

"Implement an MVP version that works end-to-end. Base this on the API specification and the UI wireframes. Include simple, minimal tests to ensure functionality. Use sub-agents to accomplish this task in parallel."

With this prompt, the AI analyzes the request and assembles a plan. It correctly identifies the need for two specialists working in parallel:

1. A Python Backend Dev to build the FastAPI server.
2. A React TypeScript Specialist to build the Next.js front end.

Each of these agents has been prompted with best practices for their respective domains, such as using modern type hints in Python and following established conventions for NPM packages and TypeScript.

The AI team gets to work. The file structure begins to populate with a backend folder containing a FastAPI application and a frontend folder containing a standard Next.js app created with create-next-app.

The backend code will feature a clean structure, with routers for different endpoints and a main function that sets up the server, including a useful /health endpoint. The front end will have components for the episode cards, loading states, and error states, all written in TypeScript.

After some time, the agents will complete their tasks, including running simple tests to verify their work. The Python agent will run curl requests against its own endpoints, and the front-end agent will install dependencies and run tests to ensure it can interface with the backend.

The Final Result

The MVP is now complete. The AI provides instructions on how to run the application:

To run the backend:

```bash
cd backend
pip install -r requirements.txt
python main.py
```

To run the front end:

```bash
cd frontend
npm install
npm run dev
```

You can test the backend directly. FastAPI provides a Swagger UI for testing endpoints visually. Navigating to http://localhost:8000/docs in a browser allows you to execute the /episodes endpoint and see the JSON response.

Opening the front-end application at http://localhost:3000 reveals the final product: a clean, responsive web page displaying the video library, just as described in the wireframes. The loading and error states are also functional; killing the backend server and reloading the page will display a "Failed to load videos" message.

After a bit of final debugging to ensure assets like thumbnails load correctly, the application is fully functional.

This process demonstrates the incredible potential of AI sub-agents in modern development. While the technology is still in its early stages, it's clear that as patterns emerge, we will learn how to best utilize coordinated teams of AI specialists for even larger and more complex projects.

