Next.js Core Web Vitals 2026: Why LCP Isn't Just Your Images
We had a 3.2s LCP and 23% first-session drop-off. Here's what the Next.js App Router fixes actually looked like — load time down 40%, repeat orders up 74%.

The vendor portal I inherited had a 3.2 second LCP. That number had been sitting in a Notion doc labelled "known issues" for two quarters. Everyone knew it was bad. Nobody could tell you exactly why. If you've already added next/image, turned on compression, and you're still stuck in the high 70s on Lighthouse, the problem is almost certainly not your images.
When we finally fixed it — really fixed it, not just ran Lighthouse and called it a day — page load dropped 40%. Repeat orders went up 74%. Average order value climbed 12%. I'm not claiming performance caused all of that. But 23% of vendors were dropping off in their first session. When you stop losing a quarter of your users before they've done anything, everything else you're working on gets a fairer shot.
Most Core Web Vitals content is written for marketing sites. It tells you to compress images and defer scripts. That's fine for a WordPress blog. For a data-heavy Next.js App Router application, the standard checklist doesn't get you past 75. Google's published thresholds (LCP under 2.5s, CLS under 0.1) are the floor, not the target. If your users are power users who touch your product dozens of times a day, they notice every stutter.
LCP in a Next.js app is probably not your images
Open Chrome DevTools, run a performance trace, and look at what the LCP element actually is before you touch a single image. If it's text or a data-driven component, image optimization is the wrong fix.
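If you'd rather capture it from code, a PerformanceObserver can report the LCP candidate directly. A minimal sketch (describeLcp is a hypothetical helper, and the entry interface is narrowed to just the fields used here):

```typescript
// Log LCP candidates as the browser reports them. LcpLike is a narrowed
// sketch of LargestContentfulPaint, not the full platform type.
interface LcpLike {
  startTime: number
  size: number
  url?: string
  element?: { tagName: string } | null
}

export function describeLcp(entry: LcpLike): string {
  const tag = entry.element ? entry.element.tagName.toLowerCase() : "unknown"
  const src = entry.url ? ` (${entry.url})` : ""
  return `LCP candidate: <${tag}>${src} at ${Math.round(entry.startTime)}ms, size ${entry.size}`
}

// Browser-only registration; skipped outside the browser.
if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(describeLcp(entry as unknown as LcpLike))
    }
  }).observe({ type: "largest-contentful-paint", buffered: true })
}
```

If what gets logged is a <div> full of fetched numbers rather than an <img>, image work won't move the metric.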
On ShipGlobal, the LCP element was a dashboard stats card: a <div> with numbers pulled from three separate API calls. The browser was waiting for all three before it could paint anything meaningful in the viewport. The hero image was fine. The data was the bottleneck.
Some of the things that killed our LCP had nothing to do with assets:
- A waterfall of three sequential API calls on first render, each waiting for the previous
- A useEffect that fetched critical above-the-fold data client-side instead of server-side
- A large "use client" boundary at the page level that forced the entire page to hydrate before any data could render
The fix wasn't clever, but it wasn't just "move to Promise.all" either. First, fetchRevenue had to be decoupled from the orders response: the original call passed orders.period as a dependency, making true parallelization impossible until that contract changed. Once decoupled, we moved all three fetches to the server with Promise.all and streamed secondary content below the fold with Suspense. The dashboard stats rendered in the first paint instead of the third.
```tsx
// Before: client-side waterfall
const [stats, setStats] = useState(null)

useEffect(() => {
  fetchOrderCount().then(async (orders) => {
    const revenue = await fetchRevenue(orders.period)
    const shipments = await fetchShipments()
    setStats({ orders, revenue, shipments })
  })
}, [])
```

```tsx
// After: server-side parallel fetch
// Note: refactored fetchRevenue to accept a date range from URL params instead of chaining off orders
async function DashboardStats() {
  const [orders, revenue, shipments] = await Promise.all([
    fetchOrderCount(),
    fetchRevenue(),
    fetchShipments(),
  ])
  return <StatsCard orders={orders} revenue={revenue} shipments={shipments} />
}
```
The useEffect version waited sequentially for three round trips. The server version waited for the slowest of three parallel requests, and the result arrived in the initial HTML payload, not after hydration.
The "use client" boundary that's silently inflating your bundle
The most common hidden LCP regression in Next.js App Router apps: a page-level "use client" that got added for one interactive element and never revisited. Teams added it for a dropdown, a toast, a modal. The entire component tree beneath that boundary ships as client JavaScript and hydrates before rendering.
Push the boundary down to the smallest component that actually needs interactivity. A search input is a client component. The page layout, the data table, the navigation: those don't need to be. If Server Components and the App Router are still coming together for you, mastering React and Next.js in 2026 has the right sequence to build that intuition before performance work makes sense.
```tsx
// Before: entire page is a client component
"use client"

export default function OrdersPage() {
  // 800 lines of component, all shipped to the client
}
```

```tsx
// After: only the search is a client component
export default async function OrdersPage() {
  const orders = await fetchOrders()
  return (
    <main>
      <OrderSearch /> {/* "use client" */}
      <OrderTable orders={orders} /> {/* server component, no JS shipped */}
    </main>
  )
}
```
On ShipGlobal, moving to proper island architecture cut the client JavaScript bundle by roughly 35%. That reduced Time to Interactive directly, which improved INP scores as well. If you're still working out which components belong on the server vs. the client, building production-ready React apps covers the hook and component architecture patterns that make these boundaries easier to enforce consistently.
The API waterfall and boundary fixes got us most of the way. The last LCP gains came from images — and not the fixes you'd expect.
Images: the priority attribute is probably doing more harm than good
If you're already using next/image, the remaining gains are in details most teams skip.
The biggest remaining issue was priority abuse. Some teams (including ours) mark multiple images as priority to ensure they preload. The problem: priority adds a <link rel="preload"> tag for each image. With three or four of them, the browser splits preload bandwidth across resources it doesn't need immediately.
One priority={true}, on the LCP candidate. Everything else loads lazily.
The second issue was missing sizes on responsive images. Without sizes, Next.js generates a srcset but the browser defaults to 100vw as the assumed display width. On a 375px Retina screen at 2x DPR, that targets a 750px image, which for a narrow content column can be 2–3× more than necessary.
```tsx
<Image
  src="/hero.webp"
  alt="Dashboard preview"
  width={1200}
  height={630}
  priority
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 800px"
/>
```
The sizes attribute tells the browser exactly which image to download at each viewport width. On mobile, this alone can save hundreds of kilobytes per page load.
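To build intuition for what the browser does with srcset and sizes, here is a rough model of candidate selection. pickSrcsetWidth is an illustrative helper, not a browser API, and real browsers can deviate (for example by reusing a larger cached candidate):

```typescript
// Rough model of srcset selection: the smallest candidate at least as
// wide as (CSS display width × device pixel ratio). Treat this as
// intuition, not a spec implementation.
export function pickSrcsetWidth(
  candidates: number[], // widths in the generated srcset, e.g. Next.js deviceSizes
  cssWidthPx: number,   // display width resolved from the sizes attribute
  dpr: number,
): number {
  const needed = cssWidthPx * dpr
  const sorted = [...candidates].sort((a, b) => a - b)
  for (const width of sorted) {
    if (width >= needed) return width
  }
  return sorted[sorted.length - 1] // nothing large enough: ship the largest
}
```

With candidates [640, 750, 828, 1080] on a 375px viewport at 2x DPR, a sizes of 100vw resolves to 375 CSS pixels and selects the 750px file; a 50vw column resolves to 187.5 CSS pixels and selects the 640px file instead.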
CLS at 0.18: why the product felt broken in ways vendors couldn't explain
CLS is deceptively hard to debug because it often doesn't appear in Lighthouse. Lighthouse measures CLS on a simulated load with a clean cache. Real CLS happens when:
- A user has slow network and fonts load late
- A banner or cookie consent appears after the initial paint
- A data-driven component changes height after content loads
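Field CLS is also accumulated differently from a single lab run: shifts are grouped into session windows (shifts less than 1s apart, each window capped at 5s) and the worst window sets the score, so a late jump on an otherwise stable page still counts. A sketch of that grouping logic, assuming entries arrive in time order:

```typescript
interface ShiftEntry {
  value: number
  startTime: number       // ms since navigation
  hadRecentInput: boolean // shifts right after user input are excluded
}

// Session-window grouping behind field CLS: sum shifts that are <1s
// apart into windows capped at 5s; CLS is the largest window total.
export function computeCls(entries: ShiftEntry[]): number {
  let cls = 0
  let windowValue = 0
  let windowStart = 0
  let lastTime = -Infinity
  for (const e of entries) {
    if (e.hadRecentInput) continue
    // Gap over 1s, or window over 5s long: start a new session window.
    if (e.startTime - lastTime > 1000 || e.startTime - windowStart > 5000) {
      windowStart = e.startTime
      windowValue = 0
    }
    windowValue += e.value
    lastTime = e.startTime
    cls = Math.max(cls, windowValue)
  }
  return cls
}
```

This is why a cookie banner that appears four seconds in can dominate your field score while never showing up in a Lighthouse run.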
On ShipGlobal, our CLS was 0.18. In practice, vendors saw the order table jump down when the pagination bar loaded. A small thing. It happened on every page visit. Multiply that by 30 visits a day per vendor across hundreds of vendors and it's a constant source of friction that nobody files a bug report about. Nobody files a bug that says "the page jumped." They just quietly stop using the product.
Fonts cause layout shift even when they load "correctly"
Fallback fonts have different metrics than your custom font. When Inter loads, text that was rendering in Arial reflows to match Inter's line height, letter spacing, and word spacing. Paragraphs shift. Buttons resize. That's your CLS.
next/font handles the loading. The real fix is font metric override: CSS descriptors that make your fallback font match your custom font's dimensions closely enough that the reflow is imperceptible.
```tsx
import { Inter } from "next/font/google"

const inter = Inter({
  subsets: ["latin"],
  display: "swap",
  fallback: ["system-ui", "Arial"],
  adjustFontFallback: true, // Next.js calculates override metrics automatically
})
```
adjustFontFallback: true generates size-adjust, ascent-override, descent-override, and line-gap-override for the fallback. The visual difference between fallback and loaded font becomes small enough that layout doesn't shift meaningfully.
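Conceptually, what gets emitted is a second @font-face for the fallback with override descriptors. The percentages below are illustrative stand-ins, since the real values are computed from Inter's actual font metrics:

```css
/* Illustrative output; actual percentages are derived from the loaded font */
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;     /* scale fallback glyphs toward Inter's widths */
  ascent-override: 90%;  /* match Inter's ascent */
  descent-override: 22%; /* match Inter's descent */
  line-gap-override: 0%; /* match Inter's line gap */
}
```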
Dynamic content: reserving space before you know the size
The hardest CLS to fix is from content whose size you don't know yet: banners, notification bars, data-driven cards, ad slots. The naive solution is to avoid adding things dynamically. The real solution is to reserve space.
For fixed-height elements like banners, use a min-height wrapper even when the content is empty:
```tsx
<div style={{ minHeight: "48px" }}>
  {banner && <Banner message={banner.message} />}
</div>
```
For data-driven content where you don't know the final height, skeleton loaders with accurate proportions are better than no loaders. A skeleton that's 80px tall and content that's 120px tall still causes a shift.
On ShipGlobal, the largest CLS contributor was the order stats row: four cards that loaded with real data after the page rendered. Each card had a different final height depending on the number inside. We fixed it by setting a fixed card height and truncating overflowing numbers, then exposing a tooltip for the full value. CLS went from 0.18 to 0.04. The page stopped moving.
Why performance degrades after you fix it — and how to break the cycle
Performance regresses because fixes are measured in one place and regressions happen in another: Lighthouse scores look good locally, then break in production. You need to measure in both places, for different reasons.
Locally, Lighthouse tells you what's theoretically possible. Production RUM (real user monitoring) tells you what's actually happening.
For production monitoring, the Web Vitals JS library piped into your analytics is the minimum viable setup:
```typescript
import { onLCP, onINP, onCLS } from "web-vitals"
import type { Metric } from "web-vitals"

function sendToAnalytics(metric: Metric) {
  const payload = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating, // "good", "needs-improvement", "poor"
    page: window.location.pathname,
  })
  // Quick start: navigator.sendBeacon("/api/vitals", payload) — sends as text/plain
  navigator.sendBeacon("/api/vitals", new Blob([payload], { type: "application/json" }))
}

onLCP(sendToAnalytics)
onINP(sendToAnalytics)
onCLS(sendToAnalytics)
```
One caveat: fire this on a sample of sessions (10–20%) rather than every user, or you'll flood your analytics endpoint on high-traffic pages. Add a Math.random() < 0.1 guard around the beacon call in production.
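One refinement worth considering: a per-metric Math.random() guard can sample a session's LCP but drop its CLS. Hashing a stable session id keeps each session all-in or all-out. Here inSample and its FNV-1a hash are an illustrative choice, not a library API:

```typescript
// Deterministic per-session sampling: hash a stable session id so a
// session is either fully sampled or fully skipped, instead of taking
// an independent coin flip per metric.
export function inSample(sessionId: string, rate: number): boolean {
  // FNV-1a 32-bit hash, mapped onto [0, 1]
  let hash = 0x811c9dc5
  for (let i = 0; i < sessionId.length; i++) {
    hash ^= sessionId.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) >>> 0
  }
  return hash / 0xffffffff < rate
}

// Usage: guard every beacon with the same id and rate, e.g.
// if (inSample(sessionId, 0.1)) navigator.sendBeacon(...)
```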
This gives you per-page breakdown. You want to know that your homepage LCP is 1.8s but your order history page is 3.4s, because those are different problems with different fixes.
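On the receiving side, a small App Router route handler is enough. The path and the logging sink here are placeholders; swap console.log for your real analytics pipeline:

```typescript
// Hypothetical app/api/vitals/route.ts — receives the beacon payload.
interface VitalsPayload {
  name: string
  value: number
  rating: "good" | "needs-improvement" | "poor"
  page: string
}

export async function POST(request: Request): Promise<Response> {
  const metric = (await request.json()) as VitalsPayload
  // Placeholder sink: forward to your analytics pipeline in production.
  console.log(`[vitals] ${metric.page} ${metric.name}=${metric.value} (${metric.rating})`)
  // 204: sendBeacon never reads the response body
  return new Response(null, { status: 204 })
}
```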
The second piece is a performance budget in CI. Not a hard block (that creates friction), but a warning that goes to Slack or fails a check:
```javascript
// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      assertions: {
        "largest-contentful-paint": ["warn", { maxNumericValue: 2500 }],
        "cumulative-layout-shift": ["error", { maxNumericValue: 0.1 }],
        "total-blocking-time": ["warn", { maxNumericValue: 300 }],
      },
    },
  },
}
```
Make CLS a hard error. Make LCP a warning. Layout stability is non-negotiable. Load time is something to improve over time.
The budget and the monitoring tell you where you are. The process is what actually moves the number.
A workflow that compounds instead of one that spikes
Single performance fixes don't compound. A workflow does. It doesn't need to be elaborate.
The loop we settled on after the ShipGlobal work:
- One baseline measurement per sprint on three key pages (home, a data-heavy listing page, a form flow)
- One performance task per sprint, focused on the current worst-performing metric on the worst-performing page
- One regression check in PR review: if a PR adds a new "use client" boundary at a high level, it gets flagged
That's it. No performance sprints. No big-bang optimization projects. Small, consistent, measured.
In six months of running this loop, we went from a team that did occasional performance "fixes" to a team where performance kept improving passively because the worst regressions never made it to production. For the broader Next.js and React patterns that make this kind of iteration sustainable at scale, building scalable web apps in 2026 is a useful companion read.
What the numbers actually mean — and the metric that didn't show up in Lighthouse
The 40% load time improvement, 74% repeat order increase, 12% AOV growth: I want to be honest about attribution. We also redesigned the portal, improved mobile layouts, and fixed navigation architecture in the same period. Performance wasn't the only variable.
The metric I'm most proud of didn't show up in Lighthouse at all: support tickets from vendors dropped to near zero. That wasn't purely a performance win. The React migration cleaned up brittle UI, and rethinking the information architecture meant vendors could find what they needed without calling support. But a fast, stable UI that doesn't jump around removes an entire category of frustration before it becomes a ticket. CLS at 0.18 means vendors watch content shift on every page load. That's not a bug they can articulate. It just makes the product feel broken in a way they can't explain.
The 23% first-session drop-off is the number I'll stake a claim on. When your LCP is 3.2 seconds, a quarter of your users have decided to close the tab before they've seen a single piece of your UI. When it drops to 1.9 seconds, those people stay. What they do once they stay is a product problem, not a performance problem.
Find your equivalent of the 23% number. It's in your analytics: session duration by page load time bucket, conversion rate by connection speed, bounce rate on your heaviest pages. The data is there. Use it to make the argument, because performance is a product problem disguised as a technical one, and nobody funds a Lighthouse score.
That Notion doc still exists. It's mostly empty now.
If your team has a performance number that's been sitting in a backlog for too long, I work with engineering teams on Next.js performance, architecture, and the delivery practices that make improvements stick. Browse my projects or reach out to talk through your situation.
Published: Fri Mar 27 2026