We have been building with Next.js since version 9. When App Router landed, we were skeptical. It felt like a completely different framework bolted onto the one we already knew. Two years and dozens of production deploys later, we have enough scars and wins to say this with confidence: both routers are viable, but we do not treat them as interchangeable. We pick based on project constraints, team experience, and third-party dependencies, not hype.
What App Router actually changed
App Router is not just a new file structure. It is a different mental model for how React, data fetching, and routing fit together.
The biggest shift is React Server Components (RSC). With Pages Router, every component in pages/ is a client component by default. Data fetching happens through getServerSideProps, getStaticProps, or client-side hooks like useEffect and SWR. That usually means:
- More JavaScript shipped to the browser
- Extra client-side waterfalls when components need additional data
- A clear but rigid separation between page-level data and component-level data
With App Router, components in app/ are server components by default. That changes how we structure code:
- We fetch data directly inside server components using `async` components and plain `fetch` or SDK calls.
- We can compose data fetching at the component level instead of pushing everything into a single page-level function.
- We ship less client-side JavaScript, because server components never reach the browser.
In practice, this means we can build a product listing page where:
- The main layout and product grid are server components that fetch data from MedusaJS and Sanity on the server.
- Only interactive pieces (filters, cart drawer, search box) are client components.
The result: smaller bundles, fewer client-side requests, and simpler data flows.
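A minimal sketch of that split, assuming a Medusa store endpoint and a `Filters` client component that exist only for illustration:

```tsx
// app/(store)/products/page.tsx — a sketch; the Medusa URL and Filters import
// are placeholders for whatever the project actually uses.
import { Filters } from "./filters"; // a client component marked with "use client"

export default async function ProductsPage() {
  // Runs only on the server; this code and the raw response never ship to the browser.
  const res = await fetch("https://medusa.example.com/store/products", {
    next: { revalidate: 300 },
  });
  const { products } = await res.json();

  return (
    <main>
      <Filters /> {/* interactive, hydrated on the client */}
      <ul>
        {products.map((p: { id: string; title: string }) => (
          <li key={p.id}>{p.title}</li>
        ))}
      </ul>
    </main>
  );
}
```

The product grid stays a server component; only `Filters` contributes client-side JavaScript.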
The second big change is nested layouts. In Pages Router, layouts are usually implemented with _app.tsx and _document.tsx, plus some manual composition. It works, but it is coarse-grained. With App Router, we get:
- `layout.tsx` files at any route segment
- Shared UI that persists across navigation (headers, sidebars, dashboards)
- The ability to nest layouts deeply for complex apps
For example, in a multi-tenant dashboard we might have:
- A root layout for global chrome (navigation, theme, analytics)
- A tenant layout for tenant-specific navigation and context
- A section layout for a specific feature area (orders, products, content)
Navigation between pages inside those layouts feels faster because React can keep the layout tree mounted while only swapping the leaf segments.
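The tenant level of that hierarchy can be sketched like this, with illustrative route names; note that in Next.js 15, `params` is a Promise that server components await:

```tsx
// app/[tenant]/layout.tsx — a sketch of the tenant-level layout.
import type { ReactNode } from "react";

export default async function TenantLayout({
  children,
  params,
}: {
  children: ReactNode;
  params: Promise<{ tenant: string }>;
}) {
  const { tenant } = await params;
  return (
    <section>
      <nav>Tenant: {tenant}</nav>
      {/* Leaf segments swap in here; this layout stays mounted across navigation. */}
      {children}
    </section>
  );
}
```

The root layout and section layouts follow the same pattern at their own segment, each wrapping the `children` of the segments below it.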
App Router also brings streaming and suspense-first thinking. We can stream parts of the UI as data becomes available using Suspense boundaries. On a content-heavy page backed by Sanity, we can render the shell immediately, then stream in slower sections (for example, related content or heavy queries) without blocking the entire page.
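The streaming pattern looks roughly like this; `fetchRelated` is a stand-in for a heavy GROQ query against Sanity:

```tsx
// app/articles/[slug]/page.tsx — a sketch of streaming with a Suspense boundary.
import { Suspense } from "react";

async function fetchRelated(): Promise<string[]> {
  // Placeholder for a slow Sanity query.
  return ["Related post A", "Related post B"];
}

async function RelatedContent() {
  const related = await fetchRelated();
  return (
    <ul>
      {related.map((title) => (
        <li key={title}>{title}</li>
      ))}
    </ul>
  );
}

export default function ArticlePage() {
  return (
    <article>
      <h1>The shell renders immediately</h1>
      <Suspense fallback={<p>Loading related content…</p>}>
        {/* Streams in when its data resolves, without blocking the rest of the page. */}
        <RelatedContent />
      </Suspense>
    </article>
  );
}
```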
All of this is powerful, but the learning curve is real. Teams need to understand:
- The difference between server and client components
- What can and cannot run on the server (for example, browser APIs like `window` and `document`)
- How to structure shared code between server and client
- How caching and `fetch` options affect behavior
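The server/client boundary itself is just a directive at the top of a file. A minimal client component, with an illustrative callback prop:

```tsx
// search-box.tsx — the "use client" directive marks everything in this module
// (and what it imports) as client code.
"use client";

import { useState } from "react";

export function SearchBox({ onSearch }: { onSearch?: (query: string) => void }) {
  const [query, setQuery] = useState("");
  // State, event handlers, and browser APIs are fine here;
  // in a server component they would fail.
  return (
    <input
      value={query}
      onChange={(e) => setQuery(e.target.value)}
      onKeyDown={(e) => e.key === "Enter" && onSearch?.(query)}
    />
  );
}
```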
When teams treat App Router like "Pages Router but in a different folder," they usually end up frustrated. Once the mental model clicks, though, it becomes easier to reason about data flow and performance.
Where Pages Router still wins
We still ship and maintain production apps on Pages Router, and we are not in a hurry to rewrite them.
Pages Router wins on simplicity. The rules are straightforward:
- One file per route in `pages/`
- `getServerSideProps` for per-request data
- `getStaticProps` and `getStaticPaths` for static generation
- Everything is a client component
For many teams, especially those with mixed experience levels, this is easier to teach and easier to debug. There is less magic, fewer modes, and fewer edge cases around server vs client boundaries.
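The whole model fits in one file. A sketch, with a placeholder Medusa endpoint:

```tsx
// pages/products/[id].tsx — the Pages Router model: one exported data function,
// one default-exported page component.
import type { GetServerSideProps } from "next";

type Props = { title: string };

export const getServerSideProps: GetServerSideProps<Props> = async (ctx) => {
  const res = await fetch(
    `https://medusa.example.com/store/products/${ctx.params?.id}`
  );
  const { product } = await res.json();
  return { props: { title: product.title } };
};

export default function ProductPage({ title }: Props) {
  // Rendered on the server per request, then hydrated on the client.
  return <h1>{title}</h1>;
}
```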
Pages Router also has better ecosystem coverage for older or less maintained libraries. Some examples we still see in the wild:
- Auth libraries that assume `pages/api` and `getServerSideProps`
- Analytics or A/B testing tools that hook into `_app.tsx`
- UI kits that rely heavily on client-only patterns and global side effects
You can usually make these work with App Router, but it can involve extra glue code, wrappers, or workarounds. On a tight deadline, that friction matters.
For existing codebases, migration has real costs:
- Rewriting routing from `pages/` to `app/`
- Refactoring data fetching from `getServerSideProps` into server components
- Splitting components into server and client variants
- Reworking API routes if you want to adopt new patterns
If a Pages Router app is stable, fast, and maintainable, we do not recommend a big-bang migration. We continue to maintain several Pages Router projects that handle real traffic and revenue. They are not "legacy" in any meaningful sense; they are just built on an older but still supported model.
What we actually pick for new projects
For most new projects, we start with App Router.
The reasons are practical:
- Server Components let us keep heavy data fetching and transformation on the server, which is ideal for MedusaJS storefronts and Sanity-driven content sites.
- Parallel routes make complex UIs (dashboards, multi-panel layouts, preview experiences) easier to express without hacks.
- Intercepting routes are useful for patterns like modals over lists, in-place previews, and nested flows without losing URL fidelity.
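The modal-over-list pattern combines a parallel route slot with an intercepting segment. A sketch of the file structure, with illustrative route names:

```
app/
  layout.tsx              # renders both children and the @modal slot
  @modal/
    default.tsx           # renders nothing when no modal is open
    (.)photos/[id]/
      page.tsx            # intercepts /photos/[id] during client navigation → modal
  photos/
    page.tsx              # the list
    [id]/
      page.tsx            # full page on direct navigation or refresh
```

Clicking a photo in the list opens the modal while the URL becomes `/photos/[id]`; reloading that URL renders the full detail page instead.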
When we build a new ecommerce storefront on MedusaJS, our default is:
- App Router in `app/`
- Server components for product listing, product detail, category navigation, and CMS-driven sections
- Client components for cart interactions, filters, and account UI
When we build a content-heavy site on Sanity, we use:
- Server components to query Sanity directly in route segments
- Streaming for slower queries or personalization
- Draft preview flows using parallel and intercepting routes
That said, we are pragmatic, not dogmatic. If a project depends on a library that:
- Has no clear App Router support
- Assumes `pages/` and `getServerSideProps`
- Would require significant patching or forking
we will pick Pages Router without guilt. Shipping a stable product on a known stack beats forcing App Router into a context where it fights the tools you need.
Our rule of thumb:
- If we control most of the stack and can choose libraries freely: App Router.
- If the project is constrained by a specific vendor SDK or legacy integration: Pages Router is still on the table.
The caching story
Caching has been the messiest part of App Router.
In early Next.js 13 and 14, the default caching behavior around fetch in server components was confusing and often too aggressive. We saw issues like:
- Data not updating when expected because `fetch` was cached by default
- Developers having to sprinkle `cache: "no-store"` everywhere just to be safe
- ISR behavior that was hard to reason about across routes and layouts
With Next.js 15, the situation is much better. The caching model is more opt-in and explicit, which aligns with how we like to work.
Our approach today:
- We treat no caching as the baseline for dynamic data. If something should always be fresh (for example, cart state from MedusaJS, authenticated user data), we use `cache: "no-store"` or route handlers that bypass caching.
- For content that can be cached, we prefer explicit revalidation using `revalidateTag` and on-demand ISR.
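One way to keep those decisions explicit is a small helper whose return value is spread into `fetch` options. The helper name, the `kind` values, and the default revalidate window are our own conventions, not a Next.js API:

```typescript
// Hypothetical helper centralizing our cache policy per data kind.
type CachePolicy =
  | { cache: "no-store" }
  | { next: { revalidate: number; tags: string[] } };

function cachePolicy(
  kind: "dynamic" | "content",
  tags: string[] = [],
  revalidate = 300
): CachePolicy {
  // Dynamic data (cart state, authenticated user data) is never cached;
  // cacheable content gets explicit ISR plus tags for on-demand invalidation.
  return kind === "dynamic"
    ? { cache: "no-store" }
    : { next: { revalidate, tags } };
}
```

Usage is then `fetch(url, cachePolicy("content", ["product:prod_123"], 60))`, which makes the policy visible at every call site instead of relying on framework defaults.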
A typical pattern for a Sanity-backed marketing page:
- The route segment fetches content with `fetch` configured for ISR and a conservative `revalidate` value.
- When content changes in Sanity, a webhook hits a Next.js route handler that calls `revalidateTag` for the relevant tag.
- We group related queries under the same tag so we can invalidate them together.
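The webhook receiver is a short route handler. The secret header name and the payload shape are our conventions, not anything Sanity or Next.js prescribes:

```tsx
// app/api/revalidate/route.ts — sketch of an on-demand revalidation endpoint.
import { revalidateTag } from "next/cache";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // Reject calls that do not carry the shared secret configured in the webhook.
  if (req.headers.get("x-webhook-secret") !== process.env.SANITY_WEBHOOK_SECRET) {
    return NextResponse.json({ ok: false }, { status: 401 });
  }
  const { tags } = (await req.json()) as { tags: string[] };
  // Invalidates every cached fetch grouped under each tag.
  tags.forEach((tag) => revalidateTag(tag));
  return NextResponse.json({ ok: true, revalidated: tags });
}
```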
For MedusaJS storefronts:
- Product listing pages might use ISR with tags per category or collection.
- Product detail pages can be cached with tags per product ID.
- Cart and checkout flows are never cached.
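A consistent tag-naming convention is what makes that scheme workable. The prefixes below are our own, not a Medusa or Next.js API:

```typescript
// Hypothetical tag-naming helpers for MedusaJS entities.
function productTag(productId: string): string {
  return `product:${productId}`;
}

function categoryTag(categoryHandle: string): string {
  return `category:${categoryHandle}`;
}

// All tags a product detail page is grouped under, so editing either the
// product or one of its categories invalidates the cached page.
function productPageTags(productId: string, categoryHandles: string[]): string[] {
  return [productTag(productId), ...categoryHandles.map(categoryTag)];
}
```

A product detail fetch then passes `{ next: { tags: productPageTags("prod_123", ["shirts"]) } }`, and the revalidation webhook only needs to know the same naming scheme.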
The key is that we do not rely on implicit defaults. Every important fetch call is configured intentionally. We would rather be slightly conservative on caching than debug stale data in production.