

How streaming SSR and selective hydration changed the way we think about perceived performance.
When React Server Components landed in production builds of Next.js, most teams treated them as a performance optimization. Load less JavaScript, hydrate less on the client, ship faster. True — but that framing misses the deeper shift that RSCs enable.
For years, building a React app meant: write components, ship JavaScript, hydrate on the client, fetch data after mount. The network waterfall was the enemy. You'd fight it with prefetching, suspense boundaries, skeleton screens. All band-aids on a fundamentally client-heavy model.
```jsx
// Old world — data fetching on the client
import { useState, useEffect } from 'react';

export default function Page() {
  const [data, setData] = useState(null);
  useEffect(() => {
    fetch('/api/data').then(r => r.json()).then(setData);
  }, []);
  if (!data) return <Skeleton />;
  return <Content data={data} />;
}
```

Server Components flip the default. Components run on the server, data is fetched before anything reaches the client, and only the interactive parts ship as JavaScript. The result isn't just faster — it's architecturally cleaner.
The key insight: zero-latency UI isn't about faster networks. It's about eliminating the round trips that make latency visible in the first place.
```jsx
// New world — data fetched server-side, zero client JS
async function Page() {
  const data = await db.query('SELECT * FROM posts');
  return <Content data={data} />;
}
```

```jsx
// Only this needs to hydrate — a separate file, marked as a Client Component
'use client';
import { useState } from 'react';

export function LikeButton({ postId }) {
  const [liked, setLiked] = useState(false);
  return <button onClick={() => setLiked(true)}>...</button>;
}
```

The real power isn't in eliminating hydration — it's in making it surgical. You ship JavaScript only for the parts that need it. A blog post page might have 50 components. Maybe 3 are interactive. With RSC, only those 3 hydrate. The rest are static HTML, painted instantly by the browser.
This changes how you think about component boundaries. Instead of "should this be a component?" you ask "should this be interactive?" The answer is usually no — and that's the whole point.
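In practice, that boundary question shows up directly in how a page composes. A minimal sketch, assuming a hypothetical Next.js App Router page with invented `db.query` and `LikeButton` helpers (not from the original examples' codebase):

```jsx
// Everything in this file is a Server Component by default.
// Only <LikeButton>, marked 'use client' in its own file, ships JavaScript.
import { LikeButton } from './LikeButton';

export default async function PostPage({ params }) {
  const post = await db.query('SELECT * FROM posts WHERE id = $1', [params.id]);
  return (
    <article>
      <h1>{post.title}</h1>            {/* static HTML, never hydrates */}
      <p>{post.body}</p>               {/* static HTML, never hydrates */}
      <LikeButton postId={post.id} />  {/* the only interactive island */}
    </article>
  );
}
```

The import boundary is the hydration boundary: the moment a component file says `'use client'`, it and its imports join the client bundle, and everything else stays on the server.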
Time to Interactive improves, because far less JavaScript ships. First Contentful Paint improves, because content arrives as ready-to-paint HTML. But the most dramatic change is in how the app feels. No flash of loading state. No layout shift as data populates. Content is just there — because it was rendered before the response even left the server.
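The streaming half of the story (from the subtitle) is what keeps slow data from blocking that first paint. A sketch, assuming hypothetical `PostBody`, `Comments`, and `CommentsSkeleton` components: wrap the slow part in `Suspense`, and the shell streams immediately while the rest fills in.

```jsx
import { Suspense } from 'react';

// The page shell streams to the browser right away;
// <Comments> (an async Server Component) streams in when its data resolves.
export default function PostPage() {
  return (
    <article>
      <PostBody />
      <Suspense fallback={<CommentsSkeleton />}>
        <Comments />
      </Suspense>
    </article>
  );
}
```

The skeleton here is not a band-aid on a client waterfall — it is a server-chosen fallback that gets replaced in the same response stream, with no extra round trip.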