Work with Shubham
Connect with Shubham Jha
Available for senior engineering roles, technical consulting, and product advisory. I specialise in React, Next.js, and full-stack architecture for global-scale platforms.
Start a project
A 94 Lighthouse score and an 11-second mobile load time are not contradictory. They happen together constantly — because Lighthouse runs against a simulated fast connection, and your real traffic doesn't.
A client's B2B dashboard had exactly this problem. Desktop audit: clean. Mobile users — about 60% of their traffic — were bouncing at nearly double the desktop rate. On a mid-range Android on 4G, Time to Interactive was 11 seconds.
The culprit was the architecture. Every component was a Client Component. The entire route tree was 'use client'. They'd built a scalable web app on paper — and shipped it like a glorified React SPA: 380KB of JavaScript to every first-time visitor before they could read a single word.
We spent the next two weeks re-architecting. Server Components for data and layout, Client Components only where interactivity was required, caching at the right layers. The bundle dropped from 380KB to 94KB. Mobile TTI went from 11 seconds to 2.3. Bounce rate on mobile dropped 41%.
The architecture hadn't changed what the app did. It changed what it cost users to use it.
The single most expensive architectural mistake in Next.js apps is treating the App Router like it's still the Pages Router with a new folder structure.
The App Router's default is a Server Component. That means: no JavaScript sent to the browser, direct access to databases and file systems, zero hydration cost. You opt into client-side behaviour with 'use client'; when you do, you're making a deliberate choice to ship JavaScript to the browser and accept the complexity that comes with it.
Most teams invert this. They 'use client' everything at the top of the tree, then wonder why their bundle is large and their INP score is poor.
The correct mental model is layers:
Server Layer (zero client JS)
├── Layout components
├── Data fetching (direct DB calls, fetch with cache)
├── Static content and SEO metadata
└── Server Actions for mutations
Client Layer (deliberate JS)
├── Interactive islands (forms, modals, dropdowns)
├── Browser API integrations (geolocation, clipboard)
├── Real-time subscriptions (WebSockets, SSE)
└── Client-only state (animations, local UI)
A product page ends up looking like this:
```tsx
// app/product/[id]/page.tsx — Server Component. Zero client JS.
export default async function ProductPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params; // Next.js 15: params is a Promise
  const product = await getProduct(id); // direct DB call, no API round-trip
  return (
    <main>
      <ProductDetails product={product} />       {/* Server Component */}
      <ProductImages images={product.images} />  {/* Server Component */}
      <AddToCartButton productId={product.id} /> {/* 'use client' — isolated */}
    </main>
  );
}
```
ProductDetails and ProductImages never touch the browser. AddToCartButton opts into client-side JavaScript because it needs interactivity. The 'use client' boundary is surgical: component-level, not page-level.
Your initial JavaScript payload stays small. Server-rendered HTML arrives fast. Interactive pieces hydrate on top of already-visible content. In the client scenario above, this was the difference between an 11-second TTI and a 2.3-second one.
Fetch data in Server Components, not in useEffect hooks. Keep 'use client' at the leaf of the component tree, not the root. Use Server Actions for mutations where the logic is self-contained — a separate API route just adds a round-trip you don't need.
Next.js 15 introduced Partial Prerendering (PPR) — currently experimental, but the most significant performance primitive in the App Router since Server Components themselves. PPR lets you prerender the static shell of a route at build time and stream dynamic content into it at request time, without splitting into separate routes.
```ts
// next.config.ts — opt in to PPR
export default {
  experimental: {
    ppr: 'incremental', // enable per-route with export const experimental_ppr = true
  },
}
```
```tsx
// app/product/[id]/page.tsx
import { Suspense } from 'react'

export const experimental_ppr = true

export default async function ProductPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params
  const product = await getProduct(id) // static product data — prerendered at build
  return (
    <main>
      <ProductDetails product={product} /> {/* prerendered — arrives from edge cache */}
      <Suspense fallback={<PriceSkeleton />}>
        <LivePrice productId={id} /> {/* dynamic — streamed per-request */}
      </Suspense>
      <Suspense fallback={<ReviewsSkeleton />}>
        <ReviewFeed productId={id} /> {/* dynamic — streamed per-request */}
      </Suspense>
    </main>
  )
}
```
The static shell — layout, product details, images — arrives from the edge cache in under 10ms. Dynamic content streams in from the origin as its queries complete. Users see a fully-rendered skeleton instantly, with live data filling in. This is the architecture ceiling for perceived performance in a Next.js app in 2026.
Server Components handle data delivery. What happens once that data reaches the client — how it's stored, shared, and updated — is a separate problem with different tools.
Most React bugs I've debugged trace to the same root cause: the wrong kind of state in the wrong place.
Before writing a single hook, categorize the state you need. Most bugs trace back to this: server state copied into local state, global state used where component state would've been fine. The category determines the tool.
Server state is data that lives on a server and is temporarily cached on the client. User profiles, product lists, feed items. It has a lifecycle (loading, stale, revalidating) that's fundamentally different from local state. This belongs in TanStack Query, not in useState. Copying server state into local state creates a second source of truth that will drift.
UI state is local to a component or subtree: modal open/closed, active tab, hover state. This lives in useState or useReducer. It doesn't cross component boundaries and it doesn't need to be persisted.
Shared app state crosses component boundaries without a parent-child relationship: active user session, current theme, notification count. This lives in Zustand or Context. Keep it as lean as possible. Global state is the hardest kind to trace.
```tsx
// Wrong: server state duplicated into local state — will drift
const [user, setUser] = useState<User | null>(null);
useEffect(() => {
  fetchUser(userId).then(setUser);
}, [userId]);

// Correct: TanStack Query owns the lifecycle
const { data: user, isLoading, isError } = useQuery({
  queryKey: ['user', userId],
  queryFn: () => fetchUser(userId),
  staleTime: 60_000, // data treated as fresh for 60 seconds — no refetch in that window
});
```
The useState version gives you none of the things you'll eventually need: no caching, no deduplication across components, no background revalidation, no cancellation on unmount. The useQuery version gives you all four, and every component that queries the same key shares the same cache.
The full state stack for a 2026 Next.js app:
| State Type | Home | Why |
|---|---|---|
| Data from the server | Server Components or TanStack Query | Single source of truth |
| Forms and validation | React Hook Form + Zod | Uncontrolled inputs, schema validation |
| Global client state | Zustand (lean) | Simple API, no boilerplate |
| Local UI state | useState / useReducer | No overhead for ephemeral state |
| Memoization | React Compiler (React 19) | Automatic; manual only for edge cases |
For a deep dive into how hooks fit into this model, this guide on React hooks and TypeScript patterns covers each layer in detail.
Performance isn't a build step — it's an architectural constraint. By the time you're running Lighthouse audits before launch, most performance problems are already baked in.
The metrics that matter in 2026 are the three Core Web Vitals: Largest Contentful Paint (LCP, good is 2.5s or under), Interaction to Next Paint (INP, 200ms or under), and Cumulative Layout Shift (CLS, 0.1 or under).
Most Next.js apps with unoptimized architectures sit in the "needs improvement" band on mobile: technically functional, but sluggish on mid-range hardware. The usual cause isn't a missing useMemo. It's structural: too much JavaScript in the initial bundle, layouts that shift as images load, or click handlers that do expensive work synchronously.
For a deep dive into fixing each metric in a real Next.js app — including what happens when your LCP element isn't an image at all — this post on Core Web Vitals optimization covers the specific changes that moved numbers.
Image optimization is the single fastest way to improve LCP and CLS. Always use next/image:
```tsx
import Image from 'next/image';

// Bad: raw <img> — no lazy loading, no size optimization, causes CLS
<img src="/hero.png" alt="Hero" />

// Good: next/image — optimized formats, lazy loading, prevents CLS via reserved space
<Image
  src="/hero.png"
  alt="Product hero shot"
  width={1200}
  height={630}
  priority // only for above-the-fold images
  placeholder="blur"
  blurDataURL={product.blurHash} // generate with plaiceholder or @unpic/placeholder
/>
```
The priority prop tells Next.js to preload the image; use it only for the largest above-the-fold element. The explicit width and height reserve layout space before the image loads, which is what prevents CLS; placeholder="blur" fills that reserved space with a low-resolution preview instead of a blank box.
Every 'use client' directive adds JavaScript to the bundle. The architectural pattern that solves this is component splitting: isolating interactive behaviour into the smallest possible Client Component, leaving the rest of the tree server-rendered.
For heavy components that aren't needed on initial load, lazy loading is zero-cost:
```tsx
import { lazy, Suspense } from 'react';

const HeavyChartDashboard = lazy(() => import('./HeavyChartDashboard'));

function AnalyticsPage() {
  return (
    <Suspense fallback={<ChartSkeleton />}>
      <HeavyChartDashboard />
    </Suspense>
  );
}
```
lazy() splits HeavyChartDashboard into a separate chunk. The initial bundle never includes it. Users who don't navigate to the analytics view never download it at all.
Next.js provides four caching layers: Request Memoization, the Data Cache, the Full Route Cache, and the Router Cache. Most teams use none of them deliberately.
The practical defaults:
```ts
// Cached indefinitely — static data (revalidate manually on update)
const products = await fetch('/api/products', { cache: 'force-cache' });

// Cached for 60 seconds — semi-static data
const trending = await fetch('/api/trending', { next: { revalidate: 60 } });

// Never cached — personalized or always-fresh data
const cart = await fetch('/api/user/cart', { cache: 'no-store' });
```
Getting these right means your server isn't doing redundant work on every request. It also means your Time to First Byte stays low, which directly affects LCP.
When a mutation happens — a user creates a record, submits a form, deletes an item — you need to invalidate the right cache without blowing away everything. Server Actions integrate with revalidatePath and revalidateTag for surgical invalidation:
```ts
// app/actions/createPost.ts
'use server';
import { revalidatePath, revalidateTag } from 'next/cache';

export async function createPost(data: PostInput) {
  await db.post.create({ data });
  revalidatePath('/blog'); // re-render the blog listing page
  revalidateTag('posts');  // invalidate all fetches tagged 'posts'
}
```
The caching layer stays consistent without a full cache flush. Users on the blog page see fresh data on their next visit without everyone else's cached responses being thrown away.
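For revalidateTag('posts') to have anything to invalidate, the original fetch has to carry the tag. A minimal sketch, assuming a hypothetical getPosts helper and endpoint:

```typescript
// Hypothetical data helper — the 'posts' tag is what links this fetch
// to revalidateTag('posts') in the Server Action above
export async function getPosts() {
  const res = await fetch('https://api.example.com/posts', {
    next: { tags: ['posts'], revalidate: 3600 }, // cached, tagged, revalidated hourly at most
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Tags are the invalidation vocabulary: one tag can cover many fetches across many routes, so a single revalidateTag call refreshes exactly the data a mutation touched and nothing else.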
Caching controls what the server fetches. Suspense streaming controls what the browser sees while those fetches resolve.
A common dashboard problem: the user header loads in 20ms, but the revenue stats query takes 400ms and the activity feed takes 600ms. Without streaming, the entire page waits for 600ms before painting anything. With Suspense streaming, the header renders immediately, the revenue stats arrive at 400ms, the activity feed at 600ms. The user sees content in 20ms — instead of watching a blank screen for 600ms.
```tsx
// Without streaming: everyone waits for the slowest query
export default async function DashboardPage() {
  const [user, revenue, activity] = await Promise.all([
    getUser(),     // ~20ms
    getRevenue(),  // ~400ms
    getActivity(), // ~600ms — the whole page waits for this
  ])
  return <Dashboard user={user} revenue={revenue} activity={activity} />
}
```

```tsx
// With streaming: header paints at 20ms, components stream in as queries complete
import { Suspense } from 'react'

export default async function DashboardPage() {
  const user = await getUser() // fast — render immediately
  return (
    <main>
      <UserHeader user={user} />
      <Suspense fallback={<RevenueSkeleton />}>
        <RevenueStats /> {/* async Server Component — streams in at ~400ms */}
      </Suspense>
      <Suspense fallback={<ActivitySkeleton />}>
        <ActivityFeed /> {/* async Server Component — streams in at ~600ms */}
      </Suspense>
    </main>
  )
}

// Each component fetches its own data — no prop drilling, no coordination needed
async function RevenueStats() {
  const revenue = await getRevenue()
  return <RevenueCard data={revenue} />
}
```
Three things to get right when adding Suspense boundaries:
Skeleton accuracy matters for CLS. A skeleton that's 80px tall and loaded content that's 160px tall still causes layout shift when the real component mounts. Measure the loaded height and match it in the skeleton.
Coarse boundaries beat fine-grained ones. A Suspense boundary per data item creates a visible "popcorn" loading effect. Wrap logical sections instead — a stats row, a full feed, a sidebar panel.
Order determines streaming priority. HTTP/2 streams content in document order. Wrap your most important above-the-fold data in the first Suspense boundary so it arrives first.
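Put together, a height-matched skeleton is only a few lines. A sketch, assuming Tailwind utility classes; the h-40 height is an illustrative measured value for the loaded component, not a universal constant:

```typescript
// Hypothetical skeleton for RevenueStats — h-40 matches the measured
// height of the loaded component, so swapping them causes zero layout shift
export function RevenueSkeleton() {
  return (
    <div
      aria-hidden // decorative — hide from screen readers
      className="h-40 w-full animate-pulse rounded-lg bg-muted"
    />
  );
}
```

The discipline is the measurement, not the markup: render the real component, read its height in devtools, and pin the skeleton to that number.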
This pattern changed one dashboard from a 2.8s LCP to a 0.9s LCP. The database queries didn't get faster. The page just stopped making users wait for the slowest one before showing them anything.
Getting caching right keeps TTFB low as the app scales. What it can't do is catch a correctly-cached response landing in a type that allows impossible states — that's a different problem, and it needs to be solved before a user hits it at runtime.
TypeScript in most production codebases is a typed façade over untyped logic. Interfaces on props, a typed useState, and then any everywhere things get complex. That's not type safety. It's documentation that lies.
TypeScript's job is making impossible states unrepresentable — not annotating what can go wrong, but making wrong combinations inexpressible before they get written.
Discriminated unions over boolean flags
Three boolean flags give you eight possible states. Only three are valid. You're shipping the other five as potential bugs. The fix:
```ts
// The problem: three flags, eight possible states, only three are valid
interface UserState {
  data: User | null;
  isLoading: boolean;
  error: string | null;
}
// isLoading: true, data: User, error: "..." ← valid TypeScript, impossible state
```

```tsx
// The fix: make each valid state a separate variant
type UserState =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: User }
  | { status: 'error'; error: string };

function UserProfile({ state }: { state: UserState }) {
  if (state.status === 'loading') return <Spinner />;
  if (state.status === 'error') return <ErrorBanner message={state.error} />;
  if (state.status === 'success') return <Profile user={state.data} />;
  return null;
}
```
TypeScript now knows state.data only exists when status === 'success'. Accessing it in the loading branch is a compile error, not a runtime crash. TypeScript eliminates the bug before it's written.
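The same pattern extends to exhaustiveness. A sketch of a switch with a never check, written as a plain string-returning helper (describeState is hypothetical) so the idea stands apart from JSX:

```typescript
type UserState =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: { name: string } }
  | { status: 'error'; error: string };

export function describeState(state: UserState): string {
  switch (state.status) {
    case 'idle':
      return 'waiting';
    case 'loading':
      return 'loading';
    case 'success':
      return `loaded ${state.data.name}`;
    case 'error':
      return `failed: ${state.error}`;
    default: {
      // If a new variant is ever added to UserState without a case here,
      // this assignment stops compiling — the compiler finds the gap for you
      const exhaustive: never = state;
      return exhaustive;
    }
  }
}
```

Adding a fifth variant to the union now produces a compile error at the `never` assignment instead of a silent fall-through at runtime.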
Type your API boundaries strictly
The most dangerous any in a codebase is the one at an API boundary:
```ts
// Dangerous: any from fetch is silent about shape changes
const res = await fetch('/api/user');
const user: any = await res.json(); // TypeScript has left the room
```

```ts
// Safe: validate at the boundary, get typed output
import { z } from 'zod';

const UserSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  plan: z.enum(['free', 'pro', 'enterprise']),
});

async function getUser(id: string): Promise<z.infer<typeof UserSchema>> {
  const res = await fetch(`/api/user/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return UserSchema.parse(await res.json()); // throws if shape is wrong
}
```
Zod validates at runtime and infers the TypeScript type statically. The type is derived from the validator. They can't drift apart. When the API changes shape, your TypeScript errors tell you where to fix it.
Typed environment variables
Environment variables are untyped strings by default. A missing NEXT_PUBLIC_API_URL fails silently at runtime, not at build time:
```ts
// src/lib/env.ts — validate all env vars at startup
import { z } from 'zod';

const envSchema = z.object({
  NEXT_PUBLIC_API_URL: z.string().url(),
  DATABASE_URL: z.string().min(1),
  NEXTAUTH_SECRET: z.string().min(32),
});

export const env = envSchema.parse(process.env);
// If any variable is missing or the wrong type, the app fails at startup — not in prod at 2am
```
TypeScript enforces the shape of data within your codebase. What it can't enforce is visual consistency across your UI. That's where a design system comes in. The two work together: TypeScript makes the wrong component API unwritable; the design system makes the wrong visual decision unreachable.
A design system isn't just a component library — it's the constraints that make the right decision the default one. The teams I've seen ship fast without accumulating UI debt have the same ingredients: typed component APIs, documented state patterns, and a token file for spacing, typography, and color. Without those, you're making the same decisions independently across the codebase, and the inconsistencies accumulate in ways users notice before you do.
Here's what that looks like for a button:
```tsx
// components/ui/Button.tsx
type ButtonVariant = 'primary' | 'secondary' | 'ghost' | 'destructive';
type ButtonSize = 'sm' | 'md' | 'lg';

interface ButtonProps {
  variant?: ButtonVariant;
  size?: ButtonSize;
  loading?: boolean;
  disabled?: boolean;
  children: React.ReactNode;
  onClick?: () => void;
  type?: 'button' | 'submit' | 'reset';
}

export function Button({
  variant = 'primary',
  size = 'md',
  loading = false,
  disabled = false,
  children,
  onClick,
  type = 'button',
}: ButtonProps) {
  return (
    <button
      type={type}
      onClick={onClick}
      disabled={disabled || loading}
      aria-busy={loading}
      className={cn(buttonVariants({ variant, size }))}
    >
      {loading ? <Spinner size="sm" /> : children}
    </button>
  );
}
```
Every variant is typed. The component handles loading state explicitly (no relying on the caller to disable the button manually). Accessibility is baked in (aria-busy). The API is stable; adding a new variant doesn't require touching every call site.
buttonVariants({ variant: 'destructive' }) resolves to Tailwind classes like bg-destructive text-destructive-foreground, which resolve to CSS custom properties defined once:
```css
/* globals.css — one variable, every component */
:root {
  --primary: 222 47% 11%;
  --destructive: 0 84% 60%;
  --secondary: 210 40% 96%;
  --radius: 0.5rem;
}
```
When the brand color changes, the token changes — and every component that references it updates without touching a single component file. That's the leverage a design system adds over a component library: the component API stays stable, only the token changes, and it changes everywhere at once.
What separates a real design system from a component folder: typed variants so invalid values don't compile, documented state patterns for every data-driven component (loading, empty, error, success), a token file for spacing, typography, and color, and accessibility baked into primitives — focus rings, aria attributes, keyboard navigation. Teams that skip these tend to add them back as fire drills when the first accessibility audit lands.
For most teams, shadcn/ui plus a token file gets you 80% of the way there without the overhead of building from scratch. The components are yours to own: copy, modify, extend. No black-box dependency that upgrades and breaks your UI.
A consistent UI builds trust. Security defaults are what keep it from being undermined: hijacked links, clickjacked pages, injected content. The interface and the infrastructure have to hold together.
Security in a web app is mostly about defaults. The teams that get breached aren't doing exotic things wrong. They're missing boring defaults.
HTTP security headers
Add these to next.config.ts. They cost nothing and prevent a class of attacks:
```ts
// next.config.ts
const securityHeaders = [
  { key: 'X-DNS-Prefetch-Control', value: 'on' },
  { key: 'X-Frame-Options', value: 'SAMEORIGIN' },
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  {
    key: 'Permissions-Policy',
    value: 'camera=(), microphone=(), geolocation=()',
  },
  {
    key: 'Content-Security-Policy',
    value: [
      "default-src 'self'",
      "script-src 'self' 'unsafe-inline'", // tighten this for production
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: https:",
    ].join('; '),
  },
];

const nextConfig = {
  headers: async () => [
    { source: '/(.*)', headers: securityHeaders },
  ],
};

export default nextConfig;
```
The 'unsafe-inline' comment is the one that always bites teams mid-sprint. The moment analytics, a chat widget, or an error monitoring script gets added, you're forced to choose: keep 'unsafe-inline' (which lets any inline JavaScript run, defeating the point of CSP) or switch to nonces. Nonces are the right answer — a random value generated per request, injected into your CSP header and into any script tag you explicitly allow. Everything else gets blocked.
Next.js supports this via middleware:
```ts
// middleware.ts — generate a nonce per request
import { NextRequest, NextResponse } from 'next/server'

export function middleware(request: NextRequest) {
  const nonce = Buffer.from(crypto.randomUUID()).toString('base64')
  const csp = [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}' https://plausible.io`,
    "style-src 'self' 'unsafe-inline'",
    "img-src 'self' data: https:",
  ].join('; ')

  // Forward the nonce on the *request* headers so headers() in your layout can read it
  const requestHeaders = new Headers(request.headers)
  requestHeaders.set('x-nonce', nonce)

  const response = NextResponse.next({ request: { headers: requestHeaders } })
  response.headers.set('Content-Security-Policy', csp)
  return response
}
```
Your root layout reads x-nonce from headers and passes it to any third-party <Script nonce={nonce}> tag. Scripts without a valid nonce get blocked by the browser. The CSP stays strict, vendor scripts work, and you don't have to gut the header every time a new tool gets added.
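A sketch of what that layout wiring can look like, assuming Next.js 15's async headers() and that the middleware forwards x-nonce on the request headers; the Plausible URL is carried over from the middleware example:

```typescript
// app/layout.tsx — read the per-request nonce and pass it to allowed scripts
import { headers } from 'next/headers';
import Script from 'next/script';

export default async function RootLayout({ children }: { children: React.ReactNode }) {
  // headers() is async in Next.js 15
  const nonce = (await headers()).get('x-nonce') ?? undefined;

  return (
    <html lang="en">
      <body>
        {children}
        {/* This script runs because it carries the nonce; anything without one is blocked */}
        <Script
          src="https://plausible.io/js/script.js"
          nonce={nonce}
          strategy="afterInteractive"
        />
      </body>
    </html>
  );
}
```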
Safe external links
Any <a target="_blank"> without rel="noopener noreferrer" gives the linked page access to your window.opener: a classic attack vector. Enforce this at the component level:
```tsx
// components/ui/ExternalLink.tsx
interface ExternalLinkProps {
  href: string;
  children: React.ReactNode;
}

export function ExternalLink({ href, children }: ExternalLinkProps) {
  return (
    <a href={href} target="_blank" rel="noopener noreferrer">
      {children}
    </a>
  );
}
// Never use <a target="_blank"> directly — use this instead
```
Input validation at system boundaries
Validate everything that enters your system. zod at the API layer, React Hook Form + zod at the form layer. Never trust client-sent data in Server Actions:
```ts
// app/actions/updateProfile.ts
'use server';
import { z } from 'zod';
import { auth } from '@/lib/auth';

const UpdateProfileSchema = z.object({
  name: z.string().min(1).max(100),
  bio: z.string().max(500).optional(),
});

export async function updateProfile(formData: FormData) {
  const session = await auth(); // always verify auth in Server Actions
  if (!session) throw new Error('Unauthorized');

  const result = UpdateProfileSchema.safeParse({
    name: formData.get('name'),
    bio: formData.get('bio'),
  });
  if (!result.success) return { error: result.error.flatten() };

  await db.user.update({ where: { id: session.user.id }, data: result.data });
  return { success: true };
}
```
Auth check, schema validation, typed result. Any input that doesn't match the schema never reaches the database.
Security defaults prevent things from going wrong. Observability tells you when they go wrong anyway, and how long they've been broken before the first complaint arrives.
The question that always comes up after a production incident: "how long was this broken before someone told us?"
That's what frontend observability is for. Error tracking, performance monitoring, instrumenting the flows that drive revenue — not because these are interesting to set up, but because without them, bugs live in production for days before a user complaint surfaces them.
Error boundaries
Every async data boundary in your app should have a fallback. Next.js provides this via error.tsx at the route level:
```tsx
// app/dashboard/error.tsx
'use client'; // error boundaries must be client components
import { useEffect } from 'react';

interface ErrorProps {
  error: Error & { digest?: string };
  reset: () => void;
}

export default function DashboardError({ error, reset }: ErrorProps) {
  useEffect(() => {
    // Send to error tracking (Sentry, Datadog, etc.)
    reportError(error);
  }, [error]);

  return (
    <div role="alert">
      <h2>Something went wrong loading your dashboard.</h2>
      <p>Error ID: {error.digest}</p>
      <button onClick={reset}>Try again</button>
    </div>
  );
}
```
The digest field is a server-side error ID that links the user-visible error to the server log. When a user reports an issue, you have a trace ID to search on.
Key flow monitoring
Instrument the flows that matter most to your product:
```ts
// lib/analytics.ts — thin wrapper over your analytics provider
export function trackEvent(event: string, properties?: Record<string, unknown>) {
  if (typeof window === 'undefined') return;
  // PostHog, Segment, Plausible — implementation detail
  analytics.track(event, {
    ...properties,
    timestamp: Date.now(),
    url: window.location.pathname,
  });
}

// Usage in a conversion-critical flow
async function handleCheckout() {
  trackEvent('checkout_initiated', { plan, billing_cycle });
  try {
    await createSubscription(plan);
    trackEvent('checkout_succeeded', { plan });
  } catch (err) {
    trackEvent('checkout_failed', { plan, error: getErrorMessage(err) });
    throw err;
  }
}
```
Three events per conversion flow: initiated, succeeded, failed. This is the minimum that lets you build a funnel, calculate conversion rate, and know when something breaks before your revenue dashboard does.
Real user monitoring for Core Web Vitals
Lighthouse gives you a score against a simulated connection. Real user monitoring (RUM) gives you the actual distribution across your traffic — the p75 LCP on a mid-range Android in Southeast Asia, not your MacBook on Wi-Fi. These are different numbers, sometimes by 3×.
The web-vitals library pipes directly into any analytics endpoint:
```ts
// app/web-vitals.ts — wire up once in your root layout
import { onLCP, onINP, onCLS } from 'web-vitals'
import type { Metric } from 'web-vitals'

function sendToAnalytics(metric: Metric) {
  // Sample 10% of sessions to avoid flooding your endpoint on high-traffic pages
  if (Math.random() > 0.1) return
  navigator.sendBeacon(
    '/api/vitals',
    new Blob(
      [JSON.stringify({
        name: metric.name,
        value: metric.value,
        rating: metric.rating,
        page: window.location.pathname,
      })],
      { type: 'application/json' }
    )
  )
}

onLCP(sendToAnalytics)
onINP(sendToAnalytics)
onCLS(sendToAnalytics)
```
The rating field ("good", "needs-improvement", "poor") lets you slice by threshold without doing the math yourself. The page field is the lever that matters most: you want to know that your homepage LCP is 1.4s but your order history page is 3.8s, because those are different problems with different fixes.
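On the receiving end, /api/vitals can be a plain Route Handler. A minimal sketch; where the metric ultimately goes is an assumption, with console.log standing in for a real logging pipeline:

```typescript
// app/api/vitals/route.ts — hypothetical endpoint receiving the sendBeacon payloads
export async function POST(request: Request): Promise<Response> {
  const metric = (await request.json()) as {
    name: string;
    value: number;
    rating: string;
    page: string;
  };

  // Forward to your warehouse or logging pipeline here; console is a stand-in
  console.log(`[vitals] ${metric.page} ${metric.name}=${metric.value} (${metric.rating})`);

  // Beacons never read the response body — an empty 204 is all that's needed
  return new Response(null, { status: 204 });
}
```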
Pair this with a performance budget in CI — a warning, not a block — so regressions surface in pull requests before they reach production users:
```js
// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      assertions: {
        'largest-contentful-paint': ['warn', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['warn', { maxNumericValue: 300 }],
      },
    },
  },
}
```
Make CLS a hard error. CLS regressions are invisible to developers (they're subtle on a fast machine) but immediately noticeable to users. Everything else stays a warning — load time improves over time, but a layout that jumps is non-negotiable.
Before shipping a Next.js app, run through this list. Each item maps to a section above.
Architecture
- 'use client' boundary is at the leaf, not the root
- No useEffect used for data fetching — Server Components or TanStack Query used instead
- No page-level 'use client' directives where component-level would suffice

State

- Server state lives in Server Components or TanStack Query, never copied into useState

Performance

- Every image uses next/image with explicit width, height, and alt
- priority prop only on the above-the-fold LCP element
- Heavy, non-critical components split with lazy() + Suspense
- Every fetch has a deliberate caching strategy (force-cache, revalidate, no-store)

TypeScript

- No any at API boundaries — Zod used for runtime validation
- No React.FC — explicit function signatures used

Security

- Security headers configured in next.config.ts
- External links carry rel="noopener noreferrer"
- No .env committed to version control

Observability

- error.tsx exists for every major route segment
- Errors surface a digest ID users can reference

The rebuilt app shipped two weeks after the audit. Bundle was 94KB. TTI on mobile was 2.3 seconds. Bounce rate among mobile users dropped from 71% to 34%.
None of those changes were algorithmic breakthroughs. They were defaults: Server Components where the server should be doing work, caching where caching made sense, TypeScript enforcing the shape of data at boundaries, images that didn't cause layout shift. The architectural patterns weren't clever. They were boring — in the way that production systems are supposed to be boring.
Boring architecture is a feature. The kind of scalable that survives a Product Hunt launch isn't heroic — it's correct by default.
If you're building a Next.js product and want to pressure-test your architecture or set up production-ready defaults from the start, reach out here.
For the hook and TypeScript patterns that fit into this architecture, this React hooks guide goes deeper on the client layer.
When should a component be a Client Component?

Default to Server Components. Opt into 'use client' when you need browser APIs, event handlers, useState, or useEffect. If your component only fetches data and renders it, it should be a Server Component. The question to ask is 'does this genuinely need the browser?', not 'does this have state?'

How should authentication be handled?

Use NextAuth.js v5 or Clerk. Auth decisions happen in middleware.ts before the route renders. Server Components can read the session directly without an API round-trip. Never check auth on the client side alone.

App Router or Pages Router for a new project?

App Router. The Pages Router is in maintenance mode. Server Components, Server Actions, streaming, and the caching model are App Router-only features. New projects should start with App Router. Migrations from Pages Router are incremental; both can coexist.

Which fetch caching option should I use?

It depends on how stale the data can be. Static content: force-cache. Data that changes every few minutes: { next: { revalidate: 60 } }. Personalized or always-fresh data: no-store. Use revalidatePath or revalidateTag in Server Actions to invalidate cached data after a mutation.

How do I keep the client bundle small?

Three practices: keep 'use client' boundaries surgical — never at the page level when component level works. Lazy-load heavy components with lazy() and Suspense. Audit your bundle periodically with @next/bundle-analyzer. When a package unexpectedly appears in the client bundle, the cause is almost always an accidental import in a Client Component.

Is TypeScript worth it for a small team?

Yes, and it pays back faster in small teams than large ones. With fewer people, there is less implicit knowledge shared verbally. TypeScript makes the implicit explicit: component APIs, server action signatures, state shapes. The biggest wins are at API boundaries — catching shape mismatches at build time instead of at 11pm when a user reports broken data.

How do I shrink a bundle that's already too large?

Four levers in order of impact: (1) Audit 'use client' boundaries — a page-level directive is almost always wrong. Move it to the individual interactive component. Each unnecessary 'use client' pulls its entire subtree into the client bundle. (2) Run @next/bundle-analyzer after every significant dependency addition. A single library import in a Client Component can balloon the bundle if the library isn't tree-shakeable. (3) Lazy-load non-critical components with React.lazy() and Suspense — charts, modals, and rich editors are good candidates. (4) Check for duplicate dependencies with 'npm ls <package>' — multiple versions of the same library (React, date-fns) can silently double your bundle size.
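The analyzer mentioned above is a one-time wiring in next.config.ts. A sketch, assuming the @next/bundle-analyzer package; run it with `ANALYZE=true next build`:

```typescript
// next.config.ts — wrap the existing config so ANALYZE=true opens the report
import bundleAnalyzer from '@next/bundle-analyzer';

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === 'true', // off by default, so normal builds are unaffected
});

export default withBundleAnalyzer({
  // ...your existing Next.js config goes here
});
```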
Published: Fri Apr 10 2026