Automate 404 Redirects on Vercel with AI Agents
tech
AI
Automation
Cloud
Next.js

I used Vercel logs, Claude Code, and bulk 301 rules to automate 404 redirects after a WordPress-to-Next.js migration and protect rankings.

Uygar Duzgun
Mar 24, 2026
16 min read

I automated 404 redirects to clean up a WordPress-to-Next.js migration, and it saved me hours of manual fixes. The goal was simple: find broken URLs fast, map them to the right destinations, and protect rankings before search traffic dropped. In this case study, I’ll show you the detection pipeline, how I used Claude Code to analyze redirect opportunities, and how I deployed and validated the fixes safely.

Why 404s explode after a WordPress-to-Next.js migration

A WordPress-to-Next.js migration almost always creates URL drift. Old category archives, dated permalinks, attachment pages, pagination paths, and trailing-slash variants all start producing 404s the moment your routing logic changes. If you do not catch them quickly, you lose link equity, confuse crawlers, and create a mess for users arriving from Google, social shares, and old backlinks.

Common causes of broken legacy URLs

The biggest source of broken URLs is usually not one dramatic mistake. It is dozens of small routing differences.

Typical examples include:

`/2023/08/my-post/` moving to `/blog/my-post`
`/category/news/page/2/` losing its pagination path
attachment URLs with no real replacement page
old author archives or tag pages that no longer exist
mixed trailing slash rules between WordPress and Next.js

I saw this firsthand during migration work tied to content systems and SEO cleanup. WordPress happily generates many URL shapes. Next.js gives you more control, but that also means you must define the redirect behavior yourself. If you skip that step, 404 monitoring becomes a fire drill instead of a controlled process.

During one migration, I found that the oldest WordPress category URLs kept surfacing in Google Search Console even after the new site had launched. That is why I treat every migration as a URL mapping problem, not just a design or framework change. I used that same mindset to automate 404 redirects instead of manually patching each broken path.

Why manual redirect cleanup does not scale

Manual cleanup looks manageable when you have 10 broken URLs. It falls apart at 100, and it becomes impossible when logs keep surfacing new variants every week. You can patch obvious cases by hand, but you will miss patterns like slug changes, category rewrites, and old feed URLs.

That is why I prefer a pipeline. I want to automate 404 redirects for recurring patterns, then let human review handle edge cases. In practice, that gives me speed without sacrificing accuracy.

How I automate 404 redirects with a detection pipeline

The detection pipeline matters more than the redirects themselves. If you collect bad data, you build bad rules. My workflow starts with Vercel logs, then ranks URLs by impact, then clusters them into pattern groups that I can evaluate quickly.

Pulling 404 logs from Vercel via CLI

I use Vercel as the deployment layer, so I start there. The first step is to pull recent request data and isolate 404 responses from production traffic. The exact CLI commands can vary depending on your project setup, but the idea stays the same: export logs, filter on status code, and save the results in a format Claude Code can parse.

A simple process looks like this:

Export recent production logs from Vercel.
Filter for 404 responses.
Deduplicate by URL.
Count hits per URL.
Join the results with referrer and user-agent data if available.

That last step matters. A URL with 3 hits from random bots is not as important as one with 200 hits from Googlebot or an active backlink. This is where 404 monitoring turns into SEO triage.
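The steps above can be sketched as a small script. This is a minimal sketch, assuming you have already exported logs to a JSON-lines file (via the Vercel CLI or a log drain); the `path` and `statusCode` field names are assumptions and should be adjusted to match your actual export format.

```typescript
interface LogEntry {
  path: string;
  statusCode: number;
}

// Tally 404 hits per URL from exported log lines (JSON lines format).
function tally404s(lines: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const line of lines) {
    let entry: LogEntry | null = null;
    try {
      entry = JSON.parse(line);
    } catch {
      // Skip malformed lines rather than failing the whole run.
    }
    if (!entry || entry.statusCode !== 404) continue;
    counts.set(entry.path, (counts.get(entry.path) ?? 0) + 1);
  }
  return counts;
}
```

Sorting the resulting map by count descending gives the first draft of the triage list; joining in referrer and user-agent data happens in the next pass.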

Recommended reading

If you want more context on deployment decisions, I recommend my breakdown of Vercel deployment and preview workflow.

I also like keeping the log export process repeatable. In my own stack, that means treating Vercel logs as a source of truth, then feeding them into an analysis step instead of manually scanning dashboards. It keeps the workflow fast and makes it much easier to automate 404 redirects at scale.

Identifying the highest-impact URLs from logs

Once I have the raw log data, I rank it by business impact. I care about volume, referrer quality, and whether the missing URL sits on a page that still receives authority.

I usually sort the list by:

hit count
organic referrer presence
backlinks or inbound mentions
whether the missing URL maps to an obvious replacement
whether the path belongs to a high-value content cluster

This step cuts the problem down fast. In one migration, the top 20 broken URLs accounted for most of the meaningful search traffic loss. The long tail still mattered, but the top slice gave me the fastest ROI.

For example, a dead `/blog/` permalink with 180 requests from Googlebot matters more than a random `/wp-content/uploads/` asset hit. I prioritize the URL that can recover traffic or authority, then I use the lower-impact list only if it reveals a broader pattern. That ranking is what lets me automate 404 redirects without overfitting rules to noise.
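That ranking can be expressed as a simple scoring function. The weights below are illustrative assumptions, not a tested formula; the point is only that crawler and backlink signals should outweigh raw hit counts.

```typescript
interface BrokenUrl {
  path: string;
  hits: number;          // total 404 hits
  googlebotHits: number; // hits attributed to Googlebot
  hasBacklink: boolean;  // known inbound links to this path
}

// Rough impact score: hits matter, but crawler and backlink
// signals matter far more. Tune the weights to your own data.
function impactScore(u: BrokenUrl): number {
  return u.hits + u.googlebotHits * 5 + (u.hasBacklink ? 500 : 0);
}

function rankByImpact(urls: BrokenUrl[]): BrokenUrl[] {
  return [...urls].sort((a, b) => impactScore(b) - impactScore(a));
}
```

With this scoring, a `/blog/` permalink with 180 Googlebot hits and a backlink outranks a `/wp-content/uploads/` asset with more raw traffic, which matches the triage described above.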

Grouping broken URLs by pattern

After ranking, I group broken URLs by pattern. This is where the work becomes efficient.

Example grouping:

`/blog/old-post-title/` → new slug format
`/category/x/page/2/` → archive pagination
`/2022/11/title/` → date-based permalink migration
`/tag/product-updates/` → tag archive removal
`/feed/` → obsolete WordPress feed endpoint

When I cluster URLs this way, I can solve 50 broken paths with 5 redirect rules. That is the difference between cleanup and a real migration system. It also creates the right input for Claude Code, which is where I can automate 404 redirects with much less manual sorting.
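The example grouping above maps naturally onto a few ordered regex families. These patterns are starting points that mirror common WordPress URL shapes, not a complete taxonomy; anything unmatched falls into a bucket for manual review.

```typescript
// Ordered regex families for common WordPress URL shapes.
const families: Array<{ name: string; pattern: RegExp }> = [
  { name: "date-permalink", pattern: /^\/\d{4}\/\d{2}\/[^/]+\/?$/ },
  { name: "category-pagination", pattern: /^\/category\/[^/]+\/page\/\d+\/?$/ },
  { name: "tag-archive", pattern: /^\/tag\/[^/]+\/?$/ },
  { name: "feed", pattern: /^\/feed\/?$/ },
];

// Cluster broken paths into families; unmatched paths get human review.
function groupByFamily(paths: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const path of paths) {
    const family =
      families.find((f) => f.pattern.test(path))?.name ?? "unmatched";
    groups.set(family, [...(groups.get(family) ?? []), path]);
  }
  return groups;
}
```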

Using Claude Code to analyze redirect opportunities

Claude Code helps me move from raw logs to redirect candidates much faster than manual review alone. I do not ask it to make final decisions. I use it to surface patterns, suggest likely destination URLs, and flag cases that need human judgment.

Recommended reading

I have built similar multi-agent systems before, and the same logic applies here. If you want to see the broader approach, I wrote about agent flows for building AI-assisted systems.

Prompting AI to detect URL patterns

My prompt is direct. I give Claude Code a table or CSV with broken URLs, hit counts, referrers, and any known content structure. Then I ask it to group the URLs into likely redirect families and explain why each family belongs together.

The best outputs usually include:

exact pattern matches
slug-only variations
category or archive rewrites
old permalink structure translations
URLs that should stay 404 because they have no real replacement

This step saves time because Claude Code can scan hundreds of rows in seconds. However, I still inspect every group before I trust it. AI speeds up pattern detection, but it does not know your site history unless you give it enough context.

Separating true redirects from dead links

Not every 404 deserves a redirect. Some URLs are dead ends, crawler noise, or outdated endpoints that should stay gone. Redirecting everything creates clutter and can lead to irrelevant destinations.

I split candidates into three buckets:

True redirects — the old page clearly maps to a new equivalent.
Soft matches — the old page has a related destination, but not a perfect one.
Dead links — no useful replacement exists, so I leave the 404 in place.

That distinction matters for SEO. Redirecting a removed utility URL to the homepage sends the wrong signal. In contrast, redirecting an old post to the closest updated article preserves user intent and link value. That is how I automate 404 redirects without damaging relevance.

Human review before deployment

Claude Code gets me to a draft map quickly, but I always do human review before I ship anything. I check category structure, slug changes, and whether the destination page actually satisfies the original query.

This is also where production judgment matters. If a broken URL receives backlinks, I weigh that differently than if it only appears in old bot logs. My rule is simple: AI can propose, but I approve. That is the safest way to automate 404 redirects in a live migration.

Generating bulk 301 redirects

Once the redirect map looks correct, I turn it into bulk 301 rules. The key is to keep the rules readable and maintainable. If you cannot explain the rule in one sentence, it is probably too broad.

Building the redirect map

I build the map in a spreadsheet or JSON file first. That gives me a clean review layer before I touch deployment config.

A good redirect map includes:

source path
destination path
redirect type
reason for the mapping
notes on edge cases or exclusions

In practice, I often end up mapping the top broken URLs in minutes once the pattern clusters are clear. That is where the time savings appear. Instead of writing one-off fixes, I can automate 404 redirects across whole URL families.
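The shape of that review layer is simple. The field names below are my own convention for the spreadsheet/JSON stage, not a Vercel schema; `permanent` maps to 301 versus 307 later.

```typescript
// One row of the redirect map I review before touching config.
interface RedirectEntry {
  source: string;       // old WordPress path
  destination: string;  // new Next.js path
  permanent: boolean;   // true -> 301, false -> temporary redirect
  reason: string;       // one-sentence justification
  notes?: string;       // edge cases or exclusions
}

const redirectMap: RedirectEntry[] = [
  {
    source: "/2023/08/my-post/",
    destination: "/blog/my-post",
    permanent: true,
    reason: "Date-based permalink moved to the /blog slug format.",
  },
];
```

The `reason` field enforces the one-sentence rule: if I cannot fill it in plainly, the rule is too broad.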

Writing vercel.json redirect rules

For Vercel projects, I prefer explicit redirect rules in `vercel.json` when the list stays manageable. The rules should be easy to read, easy to test, and easy to remove later if the site structure changes again.

A rule set usually needs to account for:

exact matches
wildcard or pattern-based paths
trailing slash normalization
category path changes
legacy WordPress permalinks

I also keep an eye on redirect chains. If `/old-page` goes to `/new-page` and then `/new-page` goes elsewhere, I fix the chain before launch. Clean Next.js 301 redirects beat messy multi-hop logic every time.
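A sketch of turning the reviewed map into `vercel.json` entries, with chain collapsing built in: `source`, `destination`, and `permanent` are the actual `vercel.json` redirect keys, while the chain check is my own safeguard, not part of Vercel's config.

```typescript
interface Rule {
  source: string;
  destination: string;
  permanent: boolean;
}

// Produce the `redirects` array for vercel.json, collapsing chains:
// if a destination is itself a redirect source, point the original
// rule straight at the final target.
function toVercelRedirects(map: Rule[]): { redirects: Rule[] } {
  const bySource = new Map<string, Rule>();
  for (const r of map) bySource.set(r.source, r);

  const redirects = map.map((r) => {
    let dest = r.destination;
    const seen = new Set<string>(); // guard against redirect loops
    while (bySource.has(dest) && !seen.has(dest)) {
      seen.add(dest);
      dest = bySource.get(dest)!.destination;
    }
    return { ...r, destination: dest };
  });
  return { redirects };
}
```

`JSON.stringify(toVercelRedirects(map), null, 2)` then gives a block you can paste into `vercel.json` and review in the pull request.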

Recommended reading

For more detail on the hosting side, I recommend Next.js redirect rule examples if you want to compare deployment constraints before you ship changes.

Handling trailing slashes, categories, and post slugs

Trailing slashes cause more pain than most people expect. WordPress often normalizes them one way, while Next.js or your deployed routing may prefer another. I handle that by standardizing destination URLs and then mapping the old variants into a single canonical path.

Category paths deserve the same attention. If the old site used `/category/news/` but the new site uses `/blog/news/`, I do not leave it ambiguous. I write the rule once and make the destination explicit.

I also treat slug changes carefully. If the content moved and the topic stayed the same, I redirect to the closest equivalent. If the article changed topic completely, I leave it alone or point it to a stronger category page. That discipline is what lets me automate 404 redirects without creating relevance problems.
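Normalizing trailing-slash variants is easy to automate before any rules are written. This sketch assumes the new site serves paths without a trailing slash; flip the logic if your Next.js config uses `trailingSlash: true`.

```typescript
// Collapse trailing-slash variants into one canonical form.
function canonicalPath(path: string): string {
  if (path === "/") return "/";      // the root stays as-is
  return path.replace(/\/+$/, "");   // strip trailing slashes
}

// Map every old variant to the same canonical target.
function canonicalize(sources: string[]): Map<string, string> {
  const out = new Map<string, string>();
  for (const s of sources) out.set(s, canonicalPath(s));
  return out;
}
```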

Deploying and validating fixes

Shipping redirects is not the finish line. I always validate in production, because a rule that looks good in a spreadsheet can still behave badly under real traffic.

Pushing redirects to production

I deploy the updated redirect rules with the normal Vercel workflow. That keeps the rollout fast and reversible. If I need to compare behavior across environments, I check preview deploys before merging to production.

Recommended reading

I prefer this because it keeps the migration tight. I can review the redirect map, deploy, and verify without waiting on a separate release process. If you want a broader look at that hosting model, see Vercel deployment and preview workflow.

Verifying status codes and destination URLs

After deployment, I test both the HTTP status code and the final destination. A redirect that returns 301 but lands on the wrong page still fails the job.

My validation checklist is simple:

confirm the source URL returns 301
confirm the destination URL loads a 200
check that there is no redirect chain
verify canonical tags on the target page
test both trailing-slash and non-trailing-slash variants

This is where automation helps again. I can batch-check a list of URLs instead of clicking each one by hand. That makes it easier to automate 404 redirects while still keeping quality control.
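The batch check can be sketched with an injectable fetcher, so the logic is testable without network access. The `RedirectCheck` and `Fetcher` names are my own; in production you would wrap the global `fetch` with `redirect: "manual"` so each hop stays visible instead of being followed silently.

```typescript
interface RedirectCheck {
  source: string;
  expectedDestination: string;
}

type Fetcher = (url: string) => Promise<{ status: number; location?: string }>;

// Verify one rule: source must 301 straight to the expected
// destination, and the destination must answer 200 (no chain).
async function verifyRedirect(
  check: RedirectCheck,
  fetcher: Fetcher,
): Promise<{ ok: boolean; reason: string }> {
  const res = await fetcher(check.source);
  if (res.status !== 301) {
    return { ok: false, reason: `expected 301, got ${res.status}` };
  }
  if (res.location !== check.expectedDestination) {
    return { ok: false, reason: `landed on ${res.location ?? "nowhere"}` };
  }
  const dest = await fetcher(check.expectedDestination);
  if (dest.status !== 200) {
    return { ok: false, reason: `destination returned ${dest.status}` };
  }
  return { ok: true, reason: "301 -> 200, no chain" };
}

// A production fetcher (hypothetical base URL) might look like:
// const live: Fetcher = async (path) => {
//   const res = await fetch(new URL(path, "https://example.com"), {
//     redirect: "manual",
//   });
//   return { status: res.status, location: res.headers.get("location") ?? undefined };
// };
```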

Spot-checking Search Console and server logs

After deployment, I watch Search Console and server logs to make sure the fixes are working. Search Console tells me which URLs still surface as errors or excluded pages. Logs tell me whether bots and users are hitting the new destinations cleanly.

Recommended reading

I also compare the new data with my original 404 list. If a URL still appears, I check whether I missed a variant, a case-sensitive path, or an alternate referrer. That feedback loop is exactly why I built Search Console-aware multi-agent workflows in Next.js.

What the migration taught me

The biggest lesson was not that AI can write redirect rules. It can. The real lesson was that pattern recognition, ranking, and verification all matter more than the final rule format.

Which redirect patterns delivered the biggest wins

The biggest wins came from the obvious patterns:

old blog posts with changed slugs
WordPress category archives with new structure
pagination URLs that still received crawl traffic
legacy attachment and feed URLs with clear replacements

Those patterns accounted for most of the traffic recovery. The long tail mattered less than I expected. That is useful because it means you can spend your time where the payoff is highest and still automate 404 redirects effectively.

Where AI helped most

Claude Code helped most with clustering and explanation. It was excellent at finding repeated URL shapes and highlighting the probable source of each pattern. That let me move from a messy export to a structured map much faster.

It did not replace judgment. It simply made the first pass faster, which is exactly where AI should help in production workflows. The result was a cleaner redirect plan and fewer wasted rules.

Where manual judgment was still necessary

I still needed manual judgment for edge cases. That included topic changes, merged articles, and URLs that had backlinks but no natural replacement. I also reviewed anything that might create a poor user experience, even if the pattern looked tempting.

That is the difference between a script and a reliable migration process. Scripts can move data. Judgment protects the site.

A repeatable workflow for future site migrations

Once this process worked, I turned it into a repeatable system. That matters because migrations never end. Old URLs keep showing up, and new content changes can reintroduce broken paths later.

Weekly 404 review process

I now review 404s on a weekly cadence during and after migrations. That keeps the backlog small and prevents surprises from building up.

My weekly loop looks like this:

Export the latest 404 logs.
Rank URLs by impact.
Cluster new patterns.
Add or adjust redirect rules.
Revalidate the top changes.

This keeps the site clean without overengineering the process. It is a practical way to automate 404 redirects while still staying close to the data.

When to automate vs when to archive

Not every URL should live forever. Some pages belong in the archive, some belong in redirects, and some should disappear.

I automate when:

the old page has a clear replacement
many URLs share the same pattern
the topic still matters to users or search engines

I archive or leave 404s when:

the page had no meaningful traffic
the content is obsolete
redirecting would mislead users
there is no relevant destination

That decision matrix keeps the site healthier than blindly redirecting everything. It also prevents a redirect swamp later.

Checklist for SEO-safe migrations

Before I call a migration done, I run this checklist:

export 404s from production logs
map the top broken URLs first
cluster patterns before writing rules
test each redirect in production and preview
check Search Console after deployment
remove chains and irrelevant targets

If you follow that process, you can automate 404 redirects without turning your site structure into a maintenance problem.

FAQ

How do I find 404s in Vercel?

I export production logs from Vercel, filter for HTTP 404 responses, and rank the URLs by hit count and referrer quality. That gives me a clean list of broken paths to review before I write redirect rules.

Should all broken URLs get redirected?

No. I only redirect URLs with a relevant replacement or a strong SEO reason. If a URL has no clear destination, I leave it as a 404 rather than sending users to an unrelated page.

Is 301 always the right redirect?

For permanent migration changes, yes, a 301 is usually the right choice. It passes signals to the new URL and matches the intent of a site move. I only use other codes when the change is temporary or operational.

How do I avoid redirect chains?

I test the final destination of every rule and remove any intermediate hops. If an old URL points to a page that later redirects again, I collapse the rule so the source goes straight to the final canonical URL.

How do I bulk create 301 redirects in Next.js?

I first build a redirect map from the 404 logs, then convert the highest-value patterns into explicit rules. In Vercel-based Next.js setups, that usually means adding structured redirect entries and testing them before production rollout.

Can AI agents help manage redirect mapping?

Yes. AI agents can cluster broken URLs, suggest likely destinations, and surface edge cases fast. I still review the output myself, but AI speeds up the first pass and helps me automate 404 redirects with less manual effort.

Conclusion

A WordPress-to-Next.js migration gets messy fast if you ignore broken URLs. The workflow I used kept the cleanup focused: pull logs, rank impact, cluster patterns, generate bulk 301 rules, and validate everything before moving on.

The big wins came from the top broken URLs, not the long tail. Claude Code helped me find the patterns faster, and manual review kept the rules accurate.

If you are doing a migration now, start with your logs, map the top 20 broken URLs, and then build from there. That is the fastest way to automate 404 redirects without losing control of your SEO.

Recommended Articles

How I Built My MCP CMS With Agent Flows
I built an MCP CMS inside Next.js to unify content, tools, and AI workflows into one fast, controlled publishing system.
11 min read

Multi-Agent Content Pipeline in Next.js With Search Console
A practical look at a multi-agent content pipeline in Next.js, with Search Console, web research, revision loops, and publishing.
12 min read

Vercel vs Stormkit: Proven Deployment Guide for Teams
I break down Vercel vs Stormkit so you can choose the right deployment platform for pricing, previews, control, and portability.
15 min read