Work with Shubham
Connect with Shubham Jha
Available for senior engineering roles, technical consulting, and product advisory. I specialise in React, Next.js, and full-stack architecture for global-scale platforms.
A 94 Lighthouse score and an 11-second mobile load time are not contradictory. They happen together constantly — because Lighthouse runs against a simulated fast connection, and your real traffic doesn't.
A client's B2B dashboard had exactly this problem. Desktop audit: clean. Mobile users — about 60% of their traffic — were bouncing at nearly double the desktop rate. On a mid-range Android on 4G, Time to Interactive was 11 seconds.
The culprit was the architecture. Every component was a Client Component. The entire route tree was 'use client'. They'd built a scalable web app on paper — and shipped it like a glorified React SPA: 380KB of JavaScript to every first-time visitor before they could read a single word.
We spent the next two weeks re-architecting. Server Components for data and layout, Client Components only where interactivity was required, caching at the right layers. The bundle dropped from 380KB to 94KB. Mobile TTI went from 11 seconds to 2.3. Bounce rate on mobile dropped 41%.
The architecture hadn't changed what the app did. It changed what it cost users to use it.
The single most expensive architectural mistake in Next.js apps is treating the App Router like it's still the Pages Router with a new folder structure.
The App Router's default is a Server Component. That means: no JavaScript sent to the browser, direct access to databases and file systems, zero hydration cost. You opt into client-side behaviour with 'use client'; when you do, you're making a deliberate choice to ship JavaScript to the browser and accept the complexity that comes with it.
Most teams invert this. They 'use client' everything at the top of the tree, then wonder why their bundle is large and their INP score is poor.
The correct mental model is layers:
Server Layer (zero client JS)
├── Layout components
├── Data fetching (direct DB calls, fetch with cache)
├── Static content and SEO metadata
└── Server Actions for mutations
Client Layer (deliberate JS)
├── Interactive islands (forms, modals, dropdowns)
├── Browser API integrations (geolocation, clipboard)
├── Real-time subscriptions (WebSockets, SSE)
└── Client-only state (animations, local UI)
A product page ends up looking like this:
// app/product/[id]/page.tsx — Server Component. Zero client JS.
export default async function ProductPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params; // params is a Promise in Next 15+
  const product = await getProduct(id); // direct DB call, no API round-trip
  return (
    <main>
      <ProductDetails product={product} /> {/* Server Component */}
      <ProductImages images={product.images} /> {/* Server Component */}
      <AddToCartButton productId={product.id} /> {/* 'use client' — isolated */}
    </main>
  );
}
ProductDetails and ProductImages never touch the browser. AddToCartButton opts into client-side JavaScript because it needs interactivity. The 'use client' boundary is surgical: component-level, not page-level.
Your initial JavaScript payload stays small. Server-rendered HTML arrives fast. Interactive pieces hydrate on top of already-visible content. In the client scenario above, this was the difference between an 11-second TTI and a 2.3-second one.
Fetch data in Server Components, not in useEffect hooks. Keep 'use client' at the leaf of the component tree, not the root. Use Server Actions for mutations where the logic is self-contained — a separate API route just adds a round-trip you don't need.
Most React bugs I've debugged trace to the same root cause: the wrong kind of state in the wrong place.
Before writing a single hook, categorize the state you need — server state, UI state, or shared app state — because the category determines the tool. Server state copied into local state, or global state used where component state would have been fine, is exactly where those bugs come from.
Server state is data that lives on a server and is temporarily cached on the client. User profiles, product lists, feed items. It has a lifecycle (loading, stale, revalidating) that's fundamentally different from local state. This belongs in TanStack Query, not in useState. Copying server state into local state creates a second source of truth that will drift.
UI state is local to a component or subtree: modal open/closed, active tab, hover state. This lives in useState or useReducer. It doesn't cross component boundaries and it doesn't need to be persisted.
Shared app state crosses component boundaries without a parent-child relationship: active user session, current theme, notification count. This lives in Zustand or Context. Keep it as lean as possible. Global state is the hardest kind to trace.
// Wrong: server state duplicated into local state — will drift
const [user, setUser] = useState<User | null>(null);
useEffect(() => {
  fetchUser(userId).then(setUser);
}, [userId]);

// Correct: TanStack Query owns the lifecycle
const { data: user, isLoading, isError } = useQuery({
  queryKey: ['user', userId],
  queryFn: () => fetchUser(userId),
  staleTime: 60_000, // data is considered fresh for 60 seconds before refetching
});
The useState version gives you none of the things you'll eventually need: no caching, no deduplication across components, no background revalidation, no cancellation on unmount. The useQuery version gives you all four, and every component that queries the same key shares the same cache.
The full state stack for a 2026 Next.js app:
| State Type | Home | Why |
|---|---|---|
| Data from the server | Server Components or TanStack Query | Single source of truth |
| Forms and validation | React Hook Form + Zod | Uncontrolled inputs, schema validation |
| Global client state | Zustand (lean) | Simple API, no boilerplate |
| Local UI state | useState / useReducer | No overhead for ephemeral state |
| Memoization | React Compiler (React 19) | Automatic; manual only for edge cases |
For a deep dive into how hooks fit into this model, this guide on React hooks and TypeScript patterns covers each layer in detail.
TypeScript plays into this architecture too — though probably not in the role your current codebase gives it.
Performance isn't a build step — it's an architectural constraint. By the time you're running Lighthouse audits before launch, most performance problems are already baked in.
The metrics that matter in 2026 are the three Core Web Vitals: Largest Contentful Paint (LCP, loading), Interaction to Next Paint (INP, responsiveness), and Cumulative Layout Shift (CLS, visual stability).
Most Next.js apps with unoptimized architectures sit in the "needs improvement" band on mobile: technically functional, but sluggish on mid-range hardware. The usual cause isn't a missing useMemo. It's structural: too much JavaScript in the initial bundle, layouts that shift as images load, or click handlers that do expensive work synchronously.
For a deep dive into fixing each metric in a real Next.js app — including what happens when your LCP element isn't an image at all — this post on Core Web Vitals optimization covers the specific changes that moved numbers.
Image optimization is the single fastest way to improve LCP and CLS. Always use next/image:
import Image from 'next/image';

// Bad: raw <img> — no lazy loading, no size optimization, causes CLS
<img src="/hero.png" alt="Hero" />

// Good: next/image — optimized formats, lazy loading, prevents CLS via reserved space
<Image
  src="/hero.png"
  alt="Product hero shot"
  width={1200}
  height={630}
  priority // only for above-the-fold images
  placeholder="blur"
  blurDataURL={product.blurHash} // generate with plaiceholder or @unpic/placeholder
/>
The priority prop tells Next.js to preload the image; use it only for the largest above-the-fold element. The explicit width and height reserve layout space before the image loads, which prevents CLS; placeholder="blur" fills that reserved space with a low-resolution preview instead of a blank box.
Every 'use client' directive adds JavaScript to the bundle. The architectural pattern that solves this is component splitting: isolating interactive behaviour into the smallest possible Client Component, leaving the rest of the tree server-rendered.
For heavy components that aren't needed on initial load, lazy loading takes them out of the initial bundle entirely:
import { lazy, Suspense } from 'react';

const HeavyChartDashboard = lazy(() => import('./HeavyChartDashboard'));

function AnalyticsPage() {
  return (
    <Suspense fallback={<ChartSkeleton />}>
      <HeavyChartDashboard />
    </Suspense>
  );
}
lazy() splits HeavyChartDashboard into a separate chunk. The initial bundle never includes it. Users who don't navigate to the analytics view never download it at all.
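In Next.js specifically, next/dynamic wraps the same code-splitting mechanism and adds a built-in loading state plus the option to skip server rendering for browser-only components. A sketch, with illustrative component names:

```typescript
import dynamic from 'next/dynamic';

// Same chunk splitting as lazy(), with a loading fallback built in
const HeavyChartDashboard = dynamic(() => import('./HeavyChartDashboard'), {
  loading: () => <ChartSkeleton />,
});

// ssr: false for components that touch window or document at render time.
// With the App Router, ssr: false is only permitted inside Client Components.
const MapWidget = dynamic(() => import('./MapWidget'), { ssr: false });
```

Either approach works; next/dynamic is the more idiomatic choice inside a Next.js codebase because the loading state lives next to the import.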
Next.js provides four caching layers: Request Memoization, the Data Cache, the Full Route Cache, and the Router Cache. Most teams use none of them deliberately.
The practical defaults:
// Cached indefinitely — static data (revalidate manually on update)
const data = await fetch('/api/products', { cache: 'force-cache' });
// Cached for 60 seconds — semi-static data
const data = await fetch('/api/trending', { next: { revalidate: 60 } });
// Never cached — personalized or always-fresh data
const data = await fetch('/api/user/cart', { cache: 'no-store' });
Getting these right means your server isn't doing redundant work on every request. It also means your Time to First Byte stays low, which directly affects LCP.
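Tag-based invalidation only works if fetches opt into a tag in the first place. The tagging side looks like this — the URL and tag name are illustrative:

```typescript
// Tag a cached fetch so a Server Action can later invalidate exactly this
// data with revalidateTag('posts'), without touching any other cached response.
const res = await fetch('https://api.example.com/posts', {
  next: { revalidate: 3600, tags: ['posts'] },
});
const posts = await res.json();
```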
When a mutation happens — a user creates a record, submits a form, deletes an item — you need to invalidate the right cache without blowing away everything. Server Actions integrate with revalidatePath and revalidateTag for surgical invalidation:
// app/actions/createPost.ts
'use server';
import { revalidatePath, revalidateTag } from 'next/cache';

export async function createPost(data: PostInput) {
  await db.post.create({ data });
  revalidatePath('/blog'); // re-render the blog listing page
  revalidateTag('posts'); // invalidate all fetches tagged 'posts'
}
The caching layer stays consistent without a full cache flush. Users on the blog page see fresh data on their next visit without everyone else's cached responses being thrown away.
Getting caching right keeps TTFB low as the app scales. What it can't do is catch a correctly-cached response landing in a type that allows impossible states — that's a different problem, and it needs to be solved before a user hits it at runtime.
TypeScript in most production codebases is a typed façade over untyped logic. Interfaces on props, a typed useState, and then any everywhere things get complex. That's not type safety. It's documentation that lies.
TypeScript's job is making impossible states unrepresentable — not annotating what can go wrong, but making wrong combinations inexpressible before they get written.
Discriminated unions over boolean flags
Three boolean flags give you eight possible states. Only three are valid. You're shipping the other five as potential bugs. The fix:
// The problem: three flags, eight possible states, only three are valid
interface UserState {
  data: User | null;
  isLoading: boolean;
  error: string | null;
}
// isLoading: true, data: User, error: "..." ← valid TypeScript, impossible state

// The fix: make each valid state a separate variant
type UserState =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: User }
  | { status: 'error'; error: string };

function UserProfile({ state }: { state: UserState }) {
  if (state.status === 'loading') return <Spinner />;
  if (state.status === 'error') return <ErrorBanner message={state.error} />;
  if (state.status === 'success') return <Profile user={state.data} />;
  return null;
}
TypeScript now knows state.data only exists when status === 'success'. Accessing it in the loading branch is a compile error, not a runtime crash. TypeScript eliminates the bug before it's written.
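The same union also buys compile-time exhaustiveness: a `never` check in the default branch turns "forgot to handle a variant" into a build failure instead of a silent fallthrough. A minimal sketch, with an illustrative `data` shape:

```typescript
type UserState =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: { name: string } }
  | { status: 'error'; error: string };

function label(state: UserState): string {
  switch (state.status) {
    case 'idle':
      return 'Waiting';
    case 'loading':
      return 'Loading…';
    case 'success':
      return `Hello, ${state.data.name}`;
    case 'error':
      return `Failed: ${state.error}`;
    default: {
      // If a new variant is added to UserState and not handled above,
      // this assignment stops compiling — the gap is caught at build time.
      const unreachable: never = state;
      return unreachable;
    }
  }
}
```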
Type your API boundaries strictly
The most dangerous any in a codebase is the one at an API boundary:
// Dangerous: any from fetch is silent about shape changes
const res = await fetch('/api/user');
const user: any = await res.json(); // TypeScript has left the room

// Safe: validate at the boundary, get typed output
import { z } from 'zod';

const UserSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  plan: z.enum(['free', 'pro', 'enterprise']),
});

async function getUser(id: string): Promise<z.infer<typeof UserSchema>> {
  const res = await fetch(`/api/user/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return UserSchema.parse(await res.json()); // throws if shape is wrong
}
Zod validates at runtime and infers the TypeScript type statically. The type is derived from the validator. They can't drift apart. When the API changes shape, your TypeScript errors tell you where to fix it.
Typed environment variables
Environment variables are untyped strings by default. A missing NEXT_PUBLIC_API_URL fails silently at runtime, not at build time:
// src/lib/env.ts — validate all env vars at startup
import { z } from 'zod';

const envSchema = z.object({
  NEXT_PUBLIC_API_URL: z.string().url(),
  DATABASE_URL: z.string().min(1),
  NEXTAUTH_SECRET: z.string().min(32),
});

export const env = envSchema.parse(process.env);
// If any variable is missing or the wrong type, the app fails at startup — not in prod at 2am
TypeScript enforces the shape of data within your codebase. What it can't enforce is visual consistency across your UI. That's where a design system comes in. The two work together: TypeScript makes the wrong component API unwritable; the design system makes the wrong visual decision unreachable.
A design system isn't just a component library — it's the constraints that make the right decision the default one. The teams I've seen ship fast without accumulating UI debt have the same ingredients: typed component APIs, documented state patterns, and a token file for spacing, typography, and color. Without those, you're making the same decisions independently across the codebase, and the inconsistencies accumulate in ways users notice before you do.
Here's what that looks like for a button:
// components/ui/Button.tsx
type ButtonVariant = 'primary' | 'secondary' | 'ghost' | 'destructive';
type ButtonSize = 'sm' | 'md' | 'lg';

interface ButtonProps {
  variant?: ButtonVariant;
  size?: ButtonSize;
  loading?: boolean;
  disabled?: boolean;
  children: React.ReactNode;
  onClick?: () => void;
  type?: 'button' | 'submit' | 'reset';
}

export function Button({
  variant = 'primary',
  size = 'md',
  loading = false,
  disabled = false,
  children,
  onClick,
  type = 'button',
}: ButtonProps) {
  return (
    <button
      type={type}
      onClick={onClick}
      disabled={disabled || loading}
      aria-busy={loading}
      className={cn(buttonVariants({ variant, size }))}
    >
      {loading ? <Spinner size="sm" /> : children}
    </button>
  );
}
Every variant is typed. The component handles loading state explicitly (no relying on the caller to disable the button manually). Accessibility is baked in (aria-busy). The API is stable; adding a new variant doesn't require touching every call site.
buttonVariants({ variant: 'destructive' }) resolves to Tailwind classes like bg-destructive text-destructive-foreground, which resolve to CSS custom properties defined once:
/* globals.css — one variable, every component */
:root {
  --primary: 222 47% 11%;
  --destructive: 0 84% 60%;
  --secondary: 210 40% 96%;
  --radius: 0.5rem;
}
When the brand color changes, the token changes — and every component that references it updates without touching a single component file. That's the leverage a design system adds over a component library: the component API stays stable, only the token changes, and it changes everywhere at once.
What separates a real design system from a component folder: typed variants so invalid values don't compile, documented state patterns for every data-driven component (loading, empty, error, success), a token file for spacing, typography, and color, and accessibility baked into primitives — focus rings, aria attributes, keyboard navigation. Teams that skip these tend to add them back as fire drills when the first accessibility audit lands.
For most teams, shadcn/ui plus a token file gets you 80% of the way there without the overhead of building from scratch. The components are yours to own: copy, modify, extend. No black-box dependency that upgrades and breaks your UI.
A consistent UI builds trust. Security defaults are what keep it from being undermined: hijacked links, clickjacked pages, injected content. The interface and the infrastructure have to hold together.
Security in a web app is mostly about defaults. The teams that get breached aren't doing exotic things wrong. They're missing boring defaults.
HTTP security headers
Add these to next.config.ts. They cost nothing and prevent a class of attacks:
// next.config.ts
const securityHeaders = [
  { key: 'X-DNS-Prefetch-Control', value: 'on' },
  { key: 'X-Frame-Options', value: 'SAMEORIGIN' },
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  {
    key: 'Permissions-Policy',
    value: 'camera=(), microphone=(), geolocation=()',
  },
  {
    key: 'Content-Security-Policy',
    value: [
      "default-src 'self'",
      "script-src 'self' 'unsafe-inline'", // tighten this for production
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: https:",
    ].join('; '),
  },
];

const nextConfig = {
  headers: async () => [
    { source: '/(.*)', headers: securityHeaders },
  ],
};

export default nextConfig;
The 'unsafe-inline' comment is the one that always bites teams mid-sprint. The moment analytics, a chat widget, or an error monitoring script gets added, you're forced to choose: keep 'unsafe-inline' (which lets any inline JavaScript run, defeating the point of CSP) or switch to nonces. Nonces are the right answer — a random value generated per request, injected into your CSP header and into any script tag you explicitly allow. Everything else gets blocked.
Next.js supports this via middleware:
// middleware.ts — generate a nonce per request
import { NextResponse } from 'next/server';

export function middleware() {
  const nonce = Buffer.from(crypto.randomUUID()).toString('base64');
  const csp = [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}' https://plausible.io`,
    "style-src 'self' 'unsafe-inline'",
    "img-src 'self' data: https:",
  ].join('; ');

  const response = NextResponse.next();
  response.headers.set('Content-Security-Policy', csp);
  response.headers.set('x-nonce', nonce); // read this in your layout
  return response;
}
Your root layout reads x-nonce from headers and passes it to any third-party <Script nonce={nonce}> tag. Scripts without a valid nonce get blocked by the browser. The CSP stays strict, vendor scripts work, and you don't have to gut the header every time a new tool gets added.
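The layout side of that handshake, sketched for Next 15 (where headers() is async). The script URL is illustrative:

```typescript
// app/layout.tsx — forward the per-request nonce to allowed third-party scripts
import { headers } from 'next/headers';
import Script from 'next/script';

export default async function RootLayout({ children }: { children: React.ReactNode }) {
  const nonce = (await headers()).get('x-nonce') ?? undefined;

  return (
    <html lang="en">
      <body>
        {children}
        {/* This script carries the nonce, so the strict CSP lets it run */}
        <Script
          src="https://plausible.io/js/script.js"
          nonce={nonce}
          strategy="afterInteractive"
        />
      </body>
    </html>
  );
}
```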
Safe external links
Any <a target="_blank"> without rel="noopener noreferrer" historically gave the linked page access to your window.opener — a classic tabnabbing vector. Modern browsers now imply noopener for target="_blank", but the explicit rel costs nothing, covers older browsers, and noreferrer additionally stops the Referer header from leaking your URLs. Enforce it at the component level:
// components/ui/ExternalLink.tsx
interface ExternalLinkProps {
  href: string;
  children: React.ReactNode;
}

export function ExternalLink({ href, children }: ExternalLinkProps) {
  return (
    <a href={href} target="_blank" rel="noopener noreferrer">
      {children}
    </a>
  );
}

// Never use <a target="_blank"> directly — use this instead
Input validation at system boundaries
Validate everything that enters your system. zod at the API layer, React Hook Form + zod at the form layer. Never trust client-sent data in Server Actions:
// app/actions/updateProfile.ts
'use server';
import { z } from 'zod';
import { auth } from '@/lib/auth';

const UpdateProfileSchema = z.object({
  name: z.string().min(1).max(100),
  bio: z.string().max(500).optional(),
});

export async function updateProfile(formData: FormData) {
  const session = await auth(); // always verify auth in Server Actions
  if (!session) throw new Error('Unauthorized');

  const result = UpdateProfileSchema.safeParse({
    name: formData.get('name'),
    bio: formData.get('bio'),
  });
  if (!result.success) return { error: result.error.flatten() };

  await db.user.update({ where: { id: session.user.id }, data: result.data });
  return { success: true };
}
Auth check, schema validation, typed result. Any input that doesn't match the schema never reaches the database.
Security defaults prevent things from going wrong. Observability tells you when they do anyway, often for longer than anyone expected before the first complaint arrives.
The question that always comes up after a production incident: "how long was this broken before someone told us?"
That's what frontend observability is for. Error tracking, performance monitoring, instrumenting the flows that drive revenue — not because these are interesting to set up, but because without them, bugs live in production for days before a user complaint surfaces them.
Error boundaries
Every async data boundary in your app should have a fallback. Next.js provides this via error.tsx at the route level:
// app/dashboard/error.tsx
'use client'; // error boundaries must be client components
import { useEffect } from 'react';

interface ErrorProps {
  error: Error & { digest?: string };
  reset: () => void;
}

export default function DashboardError({ error, reset }: ErrorProps) {
  useEffect(() => {
    // Send to error tracking (Sentry, Datadog, etc.)
    reportError(error);
  }, [error]);

  return (
    <div role="alert">
      <h2>Something went wrong loading your dashboard.</h2>
      <p>Error ID: {error.digest}</p>
      <button onClick={reset}>Try again</button>
    </div>
  );
}
The digest field is a server-side error ID that links the user-visible error to the server log. When a user reports an issue, you have a trace ID to search on.
Key flow monitoring
Instrument the flows that matter most to your product:
// lib/analytics.ts — thin wrapper over your analytics provider
export function trackEvent(event: string, properties?: Record<string, unknown>) {
  if (typeof window === 'undefined') return;
  // PostHog, Segment, Plausible — implementation detail
  analytics.track(event, {
    ...properties,
    timestamp: Date.now(),
    url: window.location.pathname,
  });
}

// Usage in a conversion-critical flow
async function handleCheckout() {
  trackEvent('checkout_initiated', { plan, billing_cycle });
  try {
    await createSubscription(plan);
    trackEvent('checkout_succeeded', { plan });
  } catch (err) {
    trackEvent('checkout_failed', { plan, error: getErrorMessage(err) });
    throw err;
  }
}
Three events per conversion flow: initiated, succeeded, failed. This is the minimum that lets you build a funnel, calculate conversion rate, and know when something breaks before your revenue dashboard does.
Before shipping a Next.js app, run through this list. Each item maps to a section above.
Architecture

- 'use client' boundary is at the leaf, not the root
- No useEffect for data fetching — Server Components or TanStack Query used instead
- No page-level 'use client' directives where component-level would suffice

State

- No server state mirrored into useState — the query layer owns it

Performance

- next/image with explicit width, height, and alt
- priority prop only on the above-the-fold LCP image
- Heavy components split with lazy() + Suspense
- Caching strategy chosen per fetch (force-cache, revalidate, no-store)

TypeScript

- No any at API boundaries — Zod used for runtime validation
- No React.FC — explicit function signatures used

Security

- Security headers configured in next.config.ts
- External links use rel="noopener noreferrer"
- No .env committed to version control

Observability

- error.tsx exists for every major route segment
- Errors surface a digest ID users can reference

When should I use Server Components vs Client Components?
Default to Server Components. Opt into 'use client' when you need browser APIs, event handlers, useState, or useEffect. If your component only fetches data and renders it, it should be a Server Component. The question to ask is "does this genuinely need the browser?", not "does this have state?"
How do I handle authentication in the Next.js App Router?
Use NextAuth.js v5 or Clerk. Auth decisions (session reading, route protection) happen in middleware.ts before the route renders. Server Components can read the session directly without an API round-trip. Never check auth on the client side alone.
Should I use the App Router or Pages Router for a new project in 2026?
App Router. The Pages Router is in maintenance mode. Server Components, Server Actions, streaming, and the caching model are App Router-only features. New projects should start with App Router. Migrations from Pages Router are incremental; both can coexist.
What's the right caching strategy for dynamic data?
Depends on how stale the data can be. Static content: force-cache. Data that changes every few minutes: { next: { revalidate: 60 } }. Personalized or always-fresh data: no-store. Use revalidatePath or revalidateTag in Server Actions to invalidate cached data after a mutation.
How do I prevent bundle bloat as the app grows?
Keep 'use client' boundaries surgical — never at the page level when component level works. Lazy-load heavy components with lazy() + Suspense. Audit your bundle periodically with @next/bundle-analyzer. When a package unexpectedly shows up in the client bundle, it's almost always an accidental import in a Client Component.
Is TypeScript worth the overhead for a small team?
Yes, and it pays back faster in small teams than large ones. With fewer people, there's less implicit knowledge shared verbally. TypeScript makes the implicit explicit: component APIs, server action signatures, state shapes. The biggest wins are at API boundaries: catching shape mismatches at build time instead of at 11pm when a user reports broken data.
The rebuilt app shipped two weeks after the audit. Bundle was 94KB. TTI on mobile was 2.3 seconds. Bounce rate among mobile users dropped from 71% to 34%.
None of those changes were algorithmic breakthroughs. They were defaults: Server Components where the server should be doing work, caching where caching made sense, TypeScript enforcing the shape of data at boundaries, images that didn't cause layout shift. The architectural patterns weren't clever. They were boring — in the way that production systems are supposed to be boring.
Boring architecture is a feature. The kind of scalable that survives a Product Hunt launch isn't heroic — it's correct by default.
If you're building a Next.js product and want to pressure-test your architecture or set up production-ready defaults from the start, reach out here.
For the hook and TypeScript patterns that fit into this architecture, this React hooks guide goes deeper on the client layer.
Published: Fri Feb 27 2026