How I Built My MCP CMS With Agent Flows
tech
MCP
AI Agents
Next.js
Supabase


I built an MCP CMS inside Next.js to unify content, tools, and AI workflows into one fast, controlled publishing system.

Uygar Duzgun
Mar 22, 2026
11 min read

I did not want another generic CMS bolted on top of my site. In my experience, that setup always creates friction. I wanted an MCP CMS that felt like part of the product, not an external system I had to fight every day.

This article explains how I built my MCP CMS with Next.js, Supabase, MCP, and agent flows. I cover the architecture, the tool layer, the admin dashboard, and the content pipeline so you can see how the whole system works in practice.

Why I Built An MCP CMS Instead Of Using A Generic Headless Stack

I built the site and the CMS in the same Next.js app because I wanted one codebase, one data model, and one publishing flow. I have used enough disconnected stacks to know how fast they turn into maintenance debt. When your front end, CMS, and automation layer live in separate places, every change costs more time than it should.

My MCP CMS keeps the public site, admin UI, API routes, and AI tooling inside one product boundary. That means I can change SEO metadata, localization, or article logic without jumping across three systems. I also avoid the usual “sync problem” where content lives in one tool and the real logic lives somewhere else.

In practice, this gave me faster iteration and fewer broken edges. It also made the system easier to observe, because every write action flows through the same layer. If you want more context on how I think about operational content systems, my post on Multi-Agent Content Pipeline in Next.js With Search Console shows the research side of that workflow.

The core idea behind the system

The core idea is simple: the CMS should behave like a product, not a form builder. I wanted typed content, controlled writes, and an automation layer that could help me publish faster without losing oversight. That is why I kept the architecture strict and the boundaries clear.

What I changed from a traditional CMS

A traditional CMS usually separates content editing from the product codebase. That sounds flexible, but it often slows down teams once the site becomes more custom. My MCP CMS removes that split. I can now manage content, SEO, and workflow logic in one place, which gives me more leverage with less overhead.

How The MCP CMS Uses MCP As The Control Layer

The most important decision was placing MCP in the middle of the system. I did not want AI features trapped inside one chat window or one dashboard. I wanted a reusable command layer that could talk to the same backend rules from different clients.

That is why I built both a standalone MCP server and an in-app MCP route. The tools exposed through the MCP CMS include blog CRUD, publishing, SEO analysis, article generation, research, image generation, and sitemap checks. In other words, the agent layer does not guess what to do. It calls real tools with real state.

This approach lines up with the Model Context Protocol specification and with how I like to structure internal systems: one contract, multiple clients, no duplication. It also made it easy to reuse the same actions inside the admin dashboard and from external MCP clients.
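As a rough sketch, that “one contract, multiple clients” idea can be expressed as a shared tool registry that every surface dispatches through. The type names and the create_post handler below are illustrative stand-ins, not the project’s actual code:

```typescript
// A minimal typed tool contract shared by every client.
type ToolResult = { ok: boolean; message: string };

type Tool<Input> = {
  name: string;
  run: (input: Input) => ToolResult;
};

// One registry; the dashboard, the chat UI, and external MCP clients
// all dispatch through the same map, so behavior never diverges.
const registry = new Map<string, Tool<any>>();

function registerTool<I>(tool: Tool<I>): void {
  registry.set(tool.name, tool);
}

function callTool(name: string, input: unknown): ToolResult {
  const tool = registry.get(name);
  if (!tool) return { ok: false, message: `unknown tool: ${name}` };
  return tool.run(input);
}

// Illustrative registration of one content mutation.
registerTool({
  name: "create_post",
  run: (input: { title: string }) => ({
    ok: true,
    message: `draft created: ${input.title}`,
  }),
});
```

Because every client goes through callTool, adding a new surface (another dashboard, another MCP client) never duplicates the action logic.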

Tools I exposed through MCP

The tool surface is broad on purpose. I wanted the agents to do useful work, not just chat.

create_post, update_post, publish_post, and rewrite_post handle content mutations
content_pipeline runs the full autonomous workflow
brave_search and scrape_url support live research
YouTube research and sitemap checks feed the SEO process
image generation adds media support without leaving the workflow

That tool design is the backbone of the MCP CMS. It keeps the system composable and makes every operation observable.

Why this architecture matters in real work

In my work building websites and content systems, the biggest bottleneck is rarely writing. The bottleneck is coordination. MCP reduces that coordination cost because the same action can be called from the dashboard, the chat interface, or another agent flow. As a result, I spend less time repeating steps and more time improving output.

The Admin Dashboard Turns The MCP CMS Into A Control Room

I did not want the admin area to feel like a boring CRUD screen. I built it as a control room. It has post management for drafts, translations, images, SEO, and publishing, plus an operational layer where I can run AI workflows and watch them move.

Inside the dashboard, the MCP chat exposes the same content tools as the rest of the system. I can list posts, inspect a draft, trigger SEO analysis, launch research, or run the full pipeline with a natural language request. The important part is that the chat is not magic. It is grounded in the same typed actions the rest of the MCP CMS uses.

That design saves time in practice. I do not need to rebuild interfaces for every new workflow. I just expose another tool or another control path. If you want to see how I think about automation at the system level, How I Built My CMS With MCP and Agent Flows is the broader article around this setup.

What I can do from the dashboard

The dashboard lets me move quickly without losing control.

Manage drafts, translations, and publishing states
Run SEO checks before an article goes live
Trigger research and content generation from one place
Review agent output before saving anything final
Keep all actions inside authenticated product boundaries
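One way to picture those publishing states is a small transition table that the dashboard enforces. The state names and allowed moves below are my assumptions for illustration, not the actual schema:

```typescript
// Hypothetical publishing states a post can occupy.
type PostState = "draft" | "seo_review" | "published";

// Which moves the dashboard allows from each state.
const transitions: Record<PostState, PostState[]> = {
  draft: ["seo_review"],
  seo_review: ["draft", "published"], // can bounce back for edits
  published: ["draft"],               // unpublishing returns to draft
};

function canTransition(from: PostState, to: PostState): boolean {
  return transitions[from].includes(to);
}
```

Encoding the moves as data means both the UI and the MCP tools can check the same table before mutating anything.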

Why I use a control room instead of a simple editor

A simple editor works when you only need to write and publish. I needed more. I wanted a place where I could observe workflows, approve changes, and intervene when a model drifted off track. That is the real value of the MCP CMS: it gives me operational control, not only content entry.

How The MCP CMS Agent Flow Actually Works

The agent flow is not one giant prompt. I split it into stages so I can control quality at each step. That matters because content work is messy. Research, SEO, structure, and editing each need different reasoning.

Here is the flow I use most often:

If the topic starts from YouTube, I extract the transcript first
Search Console snapshots identify real opportunities and content gaps
Brave search and scraping collect live research
The Research Agent validates the topic and proposes a focus keyword
The SEO Agent shapes the title, headings, keyword placement, and internal links
The Writer Agent drafts the article using research and constraints
The Editor Agent scores the draft and asks for revisions if needed
Image generation and tutorial discovery run near the end
The Publisher Agent cleans the markdown and saves the draft

I tested this flow repeatedly because I did not want a fake demo pipeline. I wanted a system where one stage can push feedback back to another stage. That feedback loop is one reason the MCP CMS works better than a single-pass writing assistant.
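The staged flow with its revision loop can be sketched like this. The writer and editor functions are deliberately trivial stand-ins for the real agents, and the scoring rule is invented purely for illustration:

```typescript
type Draft = { body: string };
type Review = { score: number; feedback?: string };

function writerAgent(topic: string, feedback?: string): Draft {
  // Stand-in for the real Writer Agent.
  return { body: feedback ? `${topic} (revised: ${feedback})` : topic };
}

function editorAgent(draft: Draft): Review {
  // Invented quality rule: very short drafts are sent back for expansion.
  return draft.body.length < 20
    ? { score: 4, feedback: "expand the draft" }
    : { score: 9 };
}

function runPipeline(topic: string, maxRevisions = 2): Draft {
  let draft = writerAgent(topic);
  for (let i = 0; i < maxRevisions; i++) {
    const review = editorAgent(draft);
    if (review.score >= 8) break;                // quality gate passed
    draft = writerAgent(topic, review.feedback); // feedback loop
  }
  return draft;
}
```

The point is structural: the editor stage can reject a draft and route concrete feedback back to the writer stage, instead of everything happening in one pass.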

Why I split the flow into stages

Each stage solves a different problem. Research reduces hallucination. SEO sharpens structure. Editing protects quality. Publishing cleans the final output. Splitting those tasks makes the whole system more reliable and easier to improve over time.

How I keep the agents from drifting

I keep the agents on a short leash with typed inputs, explicit outputs, and quality checks. That gives me a better result than a vague prompt chain. It also makes failures easier to diagnose, which matters when you rely on automation for production work.
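A minimal example of that short leash is validating an agent’s output against explicit constraints before it can move to the next stage. The field names and thresholds here are assumptions, not the system’s real schema:

```typescript
// Hypothetical shape of what a writing agent must return.
type AgentDraft = {
  title: string;
  focusKeyword: string;
  wordCount: number;
};

// Returns a list of violations; an empty list means the draft may proceed.
function validateDraft(d: AgentDraft): string[] {
  const errors: string[] = [];
  if (!d.title.toLowerCase().includes(d.focusKeyword.toLowerCase())) {
    errors.push("focus keyword missing from title");
  }
  if (d.wordCount < 800) errors.push("draft is too short");
  return errors;
}
```

Explicit checks like these are also what make failures diagnosable: a rejected draft tells you which rule it broke instead of silently drifting.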

Search Console Makes The MCP CMS Smarter

One of the strongest parts of the system is that I do not rely on prompts alone. I wire Search Console data into the research process, so the agents work from real demand signals instead of assumptions. That improves the topic selection step and helps me prioritize content that can actually rank.

The Search Console layer stores snapshots in Supabase and calculates keyword opportunities, low CTR pages, and content gaps. I use those insights inside the MCP CMS to steer both research and SEO. That means the system is not just generating articles. It is helping me make better publishing decisions.
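The low-CTR calculation over stored snapshot rows might look roughly like this; the row shape and the thresholds are assumptions rather than the real Supabase schema:

```typescript
// Assumed shape of one stored Search Console snapshot row.
type GscRow = {
  page: string;
  impressions: number;
  clicks: number;
  position: number;
};

// Pages that get seen but rarely clicked are the cheapest wins:
// the content already ranks, only the title or snippet is weak.
function lowCtrPages(
  rows: GscRow[],
  minImpressions = 500,
  maxCtr = 0.02
): GscRow[] {
  return rows
    .filter((r) => r.impressions >= minImpressions)
    .filter((r) => r.clicks / r.impressions < maxCtr)
    .sort((a, b) => b.impressions - a.impressions);
}
```

The same pattern (filter snapshot rows by a demand signal, rank by potential) extends naturally to keyword opportunities and content gaps.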

Google’s own Search Console documentation backs up why this matters: search performance data should guide optimization, not intuition alone. I use that principle directly in the pipeline.

What the data layer gives me

Real query data instead of guesswork
Better topic selection based on demand
Clearer SEO priorities for existing pages
Faster identification of weak content opportunities

Why data beats intuition here

I like creative work, but content strategy needs evidence. When I use Search Console signals, I can see which pages need help and which topics deserve new articles. That makes the MCP CMS much more useful than a static admin panel.

Why I Split Read Paths And Write Paths

I intentionally separated the read side and the write side of the blog. Public pages only read through the content layer, while admin actions write through the mutation layer. That keeps rendering fast and publishing safe.

This split matters because each side has a different job. The read layer needs caching, metadata, and clean fallback behavior. The write layer needs authentication, validation, and controlled mutations. In the MCP CMS, both layers share the same contracts, but they never blur together.

That separation also makes future interfaces easier to add. If I build another dashboard, or if I connect another MCP client, I do not need to redesign the whole system. I just point new tools at the same write boundary.
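A stripped-down sketch of that read/write boundary, with an in-memory map standing in for Supabase and a session object standing in for real authentication:

```typescript
type Post = { slug: string; title: string; published: boolean };

// Stand-in for the database.
const store = new Map<string, Post>();

// Read path: public, no auth, only ever sees published content.
function getPublishedPost(slug: string): Post | undefined {
  const post = store.get(slug);
  return post?.published ? post : undefined;
}

// Write path: requires an authenticated session and validates input
// before mutating; nothing on the read side can reach this code.
function savePost(session: { userId?: string }, post: Post): boolean {
  if (!session.userId) return false; // reject unauthenticated writes
  if (!post.slug || !post.title) return false;
  store.set(post.slug, post);
  return true;
}
```

Both functions share the same Post contract, but a new client can only ever be pointed at one side of the boundary.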

What this separation protects

public performance
publishing reliability
authenticated content changes
cleaner debugging when something breaks

Why I prefer this structure

I have seen systems fail because everything talks to everything. This architecture reduces that risk. It gives the MCP CMS a stable core and keeps the product easy to evolve.

Real Results And Practical Tradeoffs

The biggest result was not a fancy dashboard. It was leverage. I can now move from idea to researched outline to draft to SEO review without bouncing between tools. That saves time and keeps the context intact.

I also get better visibility into what the system is doing. I can watch the pipeline, inspect agent decisions, and adjust the flow when needed. That matters more than raw automation speed. A good MCP CMS should make your process stronger, not more opaque.

There are tradeoffs, though. A custom system takes more engineering effort than a ready-made CMS. You own the bugs, the features, and the maintenance. For me, that cost is worth it because the workflow fits my site and my content strategy.

What I gained

faster content operations
tighter SEO control
reusable agent tools
cleaner publishing logic
better observability

What I still have to manage

custom maintenance
edge cases in agent output
ongoing improvements to tool design

How I Think About Content Operating Systems Now

This project changed how I think about content infrastructure. I do not see CMS software as a place to store posts anymore. I see it as an operating system for content creation, review, and publishing. That mindset shift is why this MCP CMS feels useful instead of decorative.

If you are building something similar, start with the workflow first. Then define the write boundary. After that, add MCP tools only for actions that deserve automation. That order keeps the system sane.

I also recommend reading my related articles on AI agents, scaling e-commerce with Next.js, and my SEO dashboard. Those pieces explain the building blocks behind this setup and show how the pieces connect.

Conclusion

The main lessons from this build are simple:

I built the MCP CMS inside the same Next.js app as the public site
MCP gives me a reusable control layer for real backend actions
Search Console data makes the system more strategic and less guess-driven
The admin dashboard acts like a control room, not a basic editor
Splitting read and write paths keeps the system faster and safer

I built this MCP CMS to give myself leverage, better visibility, and a cleaner publishing flow. If you are designing your own content system, start with the workflow and then shape the tools around it.

If you want to go deeper, read the related posts above or leave a comment describing how you would structure your own CMS.

Frequently Asked Questions

What is an MCP CMS?
An MCP CMS is a content management system that uses Model Context Protocol as a control layer for tools and workflows. In my setup, MCP exposes real actions like drafting, updating, publishing, and SEO checks through typed interfaces, so the system stays consistent across the dashboard and agents.
Why build a custom MCP CMS instead of using WordPress or a headless CMS?
I built a custom MCP CMS because I wanted one product boundary, one write path, and tighter automation. WordPress and headless platforms can work well, but custom workflows often need deeper integration. My setup removes sync issues and lets agents operate directly on the same backend rules.
How does the MCP CMS improve content quality?
The MCP CMS improves quality by splitting the workflow into research, SEO, writing, editing, and publishing stages. Each stage checks the work before the next one starts. That reduces weak drafts, improves internal linking, and makes the final article easier to refine.
Is Search Console data important in an MCP CMS?
Yes. Search Console data gives the CMS real performance signals, which helps prioritize topics and fix weak pages. I use it to find content gaps, low CTR pages, and keyword opportunities. That makes the system more strategic than a prompt-only writing setup.