9 min read

Written by Tomáš Mikeš

Migrating a Laravel website to Next.js 16 + .NET 9 with zero SEO loss

A dental clinic with a two-language Laravel site needed to reach patients in six languages without losing Google rankings. Here's how we did it in three weeks — and how the redirect AI matcher covered 97.6% of old URLs.

Next.js · .NET · SEO · Migration · AI

The brief sounded routine at first: a successful Prague dental clinic wanted to rebuild their website. Their Laravel setup was five years old, technically fine, but limiting the business. They were getting inquiries in German, French, Russian and Arabic — but the site only spoke Czech and English.

The hidden constraint was the one that kills most migrations: we could not lose any Google rankings. They had five years of SEO equity on treatment-specific landing pages that were already ranking on page one. A typical “let's just rebuild it and redirect everything to the homepage” approach would have cost them months of organic traffic recovery.

The old site, measured

Before writing a single line of code, we ran a full audit of the Laravel site. Numbers, not opinions.

  • 539 indexed URLs across two locales. That was the baseline we had to preserve — every one of them a potential entry point from Google.
  • Fallback meta description on treatment detail pages saying “Stránka nebyla nalezena” (“Page not found” in Czech). A clear bug from a long-ago template tweak — but one we had to be careful not to carry forward.
  • Canonical URLs that contained /public/ — a classic Laravel symptom when the server is not configured to hide the public directory. Google had been politely confused for years.
  • Homepage-only JSON-LD. No structured data on treatment pages, FAQ, doctors, clinics, pricing — everywhere rich results could have shown up.
  • Hreflang limited to CS+EN, with 1,054 tags across 539 URLs. Incomplete and in some places incorrect.

The stack we landed on was deliberately boring for us, but bold for the client: Next.js 16 for the public website, .NET 9 for the backend API and admin portal, PostgreSQL on Azure, and Anthropic Claude for the translation pipeline. The whole thing ships as a single Docker image on Azure Container Registry — one .NET process hosts the API, serves the static React admin, and via a YARP reverse proxy routes root traffic to the Next.js SSR process.
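The YARP piece is simpler than it sounds. Here is a minimal sketch of the routing section in `appsettings.json`, assuming the Next.js SSR process listens on port 3000 inside the container; the route and cluster names are illustrative, and the real API and admin endpoints are mapped by ASP.NET before this catch-all applies:

```json
{
  "ReverseProxy": {
    "Routes": {
      "nextjs-ssr": {
        "ClusterId": "nextjs",
        "Match": { "Path": "{**catch-all}" }
      }
    },
    "Clusters": {
      "nextjs": {
        "Destinations": {
          "ssr": { "Address": "http://localhost:3000/" }
        }
      }
    }
  }
}
```

Because ASP.NET endpoint routing wins over the proxy catch-all, `/api/*` and the static admin bundle never leave the .NET process; everything else is forwarded to Next.js.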

The redirect problem

The migration's success was going to be decided by one specific engineering decision: how we'd map 539 old URLs onto the 1,333 new ones (six languages multiplied across an expanded set of content types). Too many of those old URLs drove revenue to leave the mapping on autopilot, and there were far too many to map by hand.

So we built a tiny AI pipeline — not a product, a workflow:

  1. Ingest the old sitemap. Pull every indexed URL with its title, meta description and first paragraph of content.
  2. Ingest the new sitemap. Same for the new site, as generated from our database.
  3. Batch match with Claude. For each old URL, ask the model to find the best-matching new URL based on semantic overlap of content. Return one of: high-confidence / medium-confidence / no-match.
  4. Human-review edge cases. High-confidence matches went straight into the redirect table. Medium required approval. No-match triggered a decision: create content, or redirect to the closest category page.
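The triage in steps 3 and 4 boils down to routing each match by confidence. A minimal TypeScript sketch, with types and names as illustrative assumptions rather than the production code:

```typescript
// Illustrative types: the real pipeline stores more context per match.
type Confidence = "high" | "medium" | "none";

interface MatchResult {
  oldUrl: string;
  newUrl: string | null; // null when the model found no match
  confidence: Confidence;
}

interface Triage {
  autoApply: MatchResult[];   // straight into the redirect table
  needsReview: MatchResult[]; // editor approves or corrects
  orphaned: MatchResult[];    // create content, or redirect to a category page
}

function triageMatches(results: MatchResult[]): Triage {
  const triage: Triage = { autoApply: [], needsReview: [], orphaned: [] };
  for (const r of results) {
    if (r.confidence === "high" && r.newUrl) triage.autoApply.push(r);
    else if (r.confidence === "medium" && r.newUrl) triage.needsReview.push(r);
    else triage.orphaned.push(r);
  }
  return triage;
}
```

The important design choice is that "no match" is a first-class outcome, not an error: it feeds the content-planning decision instead of silently falling back to the homepage.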

Result: 97.6% of old URLs got a permanent redirect in the new system, with the remaining 2.4% manually reviewed. No blanket redirects to the homepage. No broken equity transfer chains.

Chain flattening — the unsexy detail that matters

When a CMS-backed site runs for a while, URL changes accumulate. Article "A" gets its slug updated to "B", then someone edits it again to "C". A naive redirect system creates a chain: A → B → C. Browsers follow it, but every extra hop slows users down, wastes crawl budget, and risks losing link equity along the way.

We built chain flattening into the admin: any time a new redirect is created from X to Y, the system checks if Y itself already redirects somewhere, and if so, points X directly to the final destination. The user editing content never sees this — but the crawler always gets a single 308.
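The flattening logic itself fits in a few lines. Here is a sketch over an in-memory map, assuming the real version runs the same walk against the redirects table:

```typescript
// Chain-flattening sketch: redirects maps source path -> destination path.
function addRedirect(
  redirects: Map<string, string>,
  from: string,
  to: string,
): void {
  // Follow any existing chain from `to` down to its final destination,
  // with a visited set to guard against accidental cycles.
  let target = to;
  const seen = new Set<string>([from]);
  while (redirects.has(target) && !seen.has(target)) {
    seen.add(target);
    target = redirects.get(target)!;
  }
  // Point the new source directly at the final hop: A -> C, never A -> B -> C.
  redirects.set(from, target);
  // Re-point any older redirects that ended at `from`, so no chain survives.
  for (const [src, dst] of redirects) {
    if (dst === from) redirects.set(src, target);
  }
}
```

The second loop is what keeps the invariant alive over time: when a slug changes again later, yesterday's redirects are re-pointed at the new final destination instead of growing into a chain.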

Paired with a 12-month retention policy (redirects persist for a year after the slug change, plenty of time for Google to reindex) and in-memory caching with a 3-minute TTL, redirect resolution in the middleware adds approximately zero latency to the request.
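The caching side can be sketched as a read-through cache with a TTL; `loadRedirect` stands in for the actual database lookup, and the names are illustrative:

```typescript
const TTL_MS = 3 * 60 * 1000; // 3-minute TTL, matching the middleware config

interface CacheEntry {
  value: string | null; // destination path, or null meaning "no redirect"
  expiresAt: number;
}

const cache = new Map<string, CacheEntry>();

async function resolveRedirect(
  path: string,
  loadRedirect: (p: string) => Promise<string | null>,
  now: () => number = Date.now,
): Promise<string | null> {
  const hit = cache.get(path);
  if (hit && hit.expiresAt > now()) return hit.value; // fresh hit: no I/O
  const value = await loadRedirect(path);             // miss or stale: reload
  cache.set(path, { value, expiresAt: now() + TTL_MS });
  return value;
}
```

Note that a negative result ("this path has no redirect") is cached too; the overwhelming majority of requests hit exactly that case, and caching it is what makes the middleware effectively free.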

SEO as a first-class API

On the old site, SEO was something the template did badly. On the new site, we made it part of the data contract — every entity (treatment, doctor, clinic, article, book, FAQ) carries structured metadata that ends up both in Next.js's metadata API and in JSON-LD.

We ship 17 types of structured data: MedicalOrganization alongside LocalBusiness (dual because the clinic has both clinical and retail operations), Physician for each doctor, MedicalProcedure for each treatment, FAQPage for Q&A, JobPosting for careers, BreadcrumbList everywhere, Event for clinic events, AggregateRating pulled automatically from imported Google reviews — that one alone lights up star ratings in search results.
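As an example of how one of those types is produced, here is a hedged sketch of a `Physician` builder; the Schema.org property names are real, but the input shape is an assumption about the data model, not the actual entity:

```typescript
// Illustrative input type: the real entity carries far more metadata.
interface Doctor {
  name: string;
  specialty: string;
  url: string;
}

function physicianJsonLd(doctor: Doctor): Record<string, unknown> {
  return {
    "@context": "https://schema.org",
    "@type": "Physician",
    name: doctor.name,
    medicalSpecialty: doctor.specialty,
    url: doctor.url,
  };
}
```

In a Next.js page this object would be serialized into a `<script type="application/ld+json">` tag alongside the metadata API output, so both consumers read from the same entity.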

All of it is verified in Google's Rich Results Test before deploy. A failing JSON-LD blocks the production push via a small script in the CI pipeline.

The AI translation pipeline

With six target languages, manual translation was off the table — it would have cost more than the rest of the project combined and created a permanent coordination tax every time the editor changed a word.

Instead: every entity has a *Translation table keyed by locale. When the editor saves a Czech change, a TranslationBackgroundJob queues up a batch request to the Anthropic Claude Batch API with a domain-specific prompt (dental terminology, formal register, preserve markup). The job polls the batch every minute until it completes, then applies the translations.
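The polling loop is the only control flow worth showing; the Anthropic Batch API call is stubbed here as `checkStatus` so the sketch stands alone:

```typescript
// Simplified status set; the real batch API reports more states.
type BatchStatus = "in_progress" | "ended" | "failed";

async function pollUntilDone(
  checkStatus: () => Promise<BatchStatus>,
  intervalMs: number,
  maxAttempts: number,
): Promise<BatchStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status !== "in_progress") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("batch polling timed out");
}
```

In the background job this runs with a one-minute interval; a hard attempt cap keeps a stuck batch from pinning the job forever.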

Two details made this production-grade rather than a toy:

  • Translation audit dashboard in the admin. Editors see at a glance which entities have complete translations and which are partial or stale. This makes the usual “let me just check all 6 languages” step of content editing unnecessary.
  • Glossary override. Specific terms (doctor titles, treatment names) are forced to specific translations. The model respects the glossary even across batches.
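One cheap way to keep the glossary honest is a post-check on each returned string: if a forced source term appeared in the Czech original but its required translation is missing from the output, flag it. A sketch, with names as illustrative assumptions:

```typescript
// glossary maps a source-language term to the translation it must become.
function glossaryViolations(
  translated: string,
  glossary: Map<string, string>,
  source: string,
): string[] {
  const violations: string[] = [];
  for (const [srcTerm, targetTerm] of glossary) {
    if (source.includes(srcTerm) && !translated.includes(targetTerm)) {
      violations.push(srcTerm);
    }
  }
  return violations;
}
```

Flagged strings can then be re-queued or surfaced in the audit dashboard instead of silently shipping a wrong treatment name.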

After three weeks of production use, the translation quality audit found fewer than 0.5% of strings needed editor correction. Manual translation is dead for this kind of content.

What we'd do again

Three things, if we were starting over:

  • AI sitemap matching first, design second. We ran the redirect match in week one. Knowing the 2.4% of orphaned URLs early meant we could design those content pieces for the new site before they became a rush job.
  • Single Docker image with YARP reverse proxy. Fewer containers, cheaper hosting, one CI pipeline. For small-to-medium traffic, splitting frontend and backend into separate deployments just adds operational friction.
  • Structured data as a contract. Treating Schema.org types as part of the domain model — not something the template adds at the last minute — removes a whole category of “why isn't this showing up in Google” debugging.

What we'd reconsider

The single biggest debate was whether to put Czech content in the database with translations, or put the source in Markdown files. We went with the database (tied to the admin UI) because the client wanted non-developers to edit copy. If the content updates had been less frequent, Markdown + Git would have been simpler.

Overall: the project hit all three must-haves — six languages, zero SEO loss, and handed off to a client team that can edit everything without a developer — within three weeks and 116 commits. The detailed numbers are in the full case study.

Working on something similar?

Book a 30-minute technical call. No sales process — direct architectural feedback.
