Performance optimization is one of those topics where the internet is flooded with advice, but much of it is outdated, marginal, or flat-out wrong. This article focuses on 15 techniques that produce measurable, meaningful improvements in real-world applications. Each technique includes before-and-after code examples so you can apply them immediately.
We are not talking about micro-benchmarks or premature optimization. These are techniques that affect user-facing metrics: Time to Interactive, Largest Contentful Paint, Total Blocking Time, and runtime responsiveness. Let us begin.
1 Bundle Splitting and Tree Shaking
The fastest JavaScript is JavaScript you never ship. Bundle splitting separates your code into smaller chunks that load on demand. Tree shaking removes unused exports from your final bundle. Together, they can reduce initial load by 40-70%.
Tree Shaking: Use Named Exports
Tree shaking relies on static analysis of ES module import and export statements. It cannot work with CommonJS require() calls, which are resolved at runtime.
// Before: imports the ENTIRE library
import _ from 'lodash';
// Only uses one function
const result = _.debounce(fn, 300);
// Bundle: +71KB minified

// After: import only what you need
import { debounce } from 'lodash-es';
// Or import the specific module
import debounce from 'lodash/debounce';
const result = debounce(fn, 300);
// Bundle: ~1KB minified
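Tree shaking also works best when the bundler knows your own modules are free of import-time side effects. Bundlers such as webpack and esbuild read a sideEffects field in package.json for this; a minimal sketch, with placeholder paths standing in for anything that must always be kept (global CSS, polyfills):
{
  "name": "my-app",
  "sideEffects": ["*.css", "./src/polyfills.js"]
}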
Vendor Splitting
Separate your vendor libraries from your application code. Vendor code changes less frequently, so users can cache it independently:
// vite.config.js
export default {
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          if (id.includes('node_modules')) {
            // Split large libraries into their own chunks
            if (id.includes('react')) return 'vendor-react';
            if (id.includes('chart.js')) return 'vendor-charts';
            if (id.includes('d3')) return 'vendor-d3';
            return 'vendor'; // everything else
          }
        }
      }
    }
  }
};
2 Lazy Loading and Code Splitting
Code splitting defers loading code until it is actually needed. For a single-page application with many routes, this means users download only the code for the page they are viewing.
Route-Based Code Splitting (React)
// Before: all routes in one bundle
import Home from './pages/Home';
import Dashboard from './pages/Dashboard';
import Settings from './pages/Settings';
import Reports from './pages/Reports';
import Admin from './pages/Admin';
// Users download ALL pages even if they only visit Home

// After: route-level lazy imports
import { lazy, Suspense } from 'react';
const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));
const Reports = lazy(() => import('./pages/Reports'));
// Each page loads only when the user navigates to it
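The lazy components then need a Suspense boundary so something renders while a chunk loads. A minimal sketch assuming React Router v6 (the routes and fallback markup are placeholders for your own setup):
import { Routes, Route } from 'react-router-dom';

function App() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/settings" element={<Settings />} />
      </Routes>
    </Suspense>
  );
}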
Prefetching for Perceived Performance
Lazy loading can introduce a delay when users navigate. Prefetching eliminates this by loading chunks in the background when the user is likely to need them:
// Prefetch on hover — the user is likely about to click
function NavLink({ to, children, importFn }) {
  const handleMouseEnter = () => {
    // Start loading the chunk when user hovers
    importFn();
  };
  return (
    <Link
      to={to}
      onMouseEnter={handleMouseEnter}
      onFocus={handleMouseEnter}
    >
      {children}
    </Link>
  );
}

// Usage
<NavLink
  to="/dashboard"
  importFn={() => import('./pages/Dashboard')}
>
  Dashboard
</NavLink>
3 Web Workers for Heavy Computation
JavaScript runs on a single main thread that also handles rendering. Any task that runs longer than about 50ms delays input handling and rendering, which users perceive as jank. Web Workers run JavaScript on a separate thread, keeping the UI responsive.
// Before: blocks the UI for 2+ seconds
function processData(data) {
  const results = [];
  for (let i = 0; i < data.length; i++) {
    results.push(heavyCalculation(data[i]));
  }
  return results;
}
// UI freezes during execution
const result = processData(bigArray);

// After: move the work to a Worker
// worker.js
self.addEventListener('message', (e) => {
  const results = e.data.map(heavyCalculation);
  self.postMessage(results);
});

// main.js — UI stays responsive
const worker = new Worker('./worker.js');
worker.postMessage(bigArray);
worker.onmessage = (e) => {
  displayResults(e.data);
};
Modern Worker Pattern with Comlink
// worker.js — expose functions like a normal module
import { expose } from 'comlink';

const api = {
  processCSV(csvText) {
    const rows = csvText.split('\n').map(r => r.split(','));
    // ... heavy processing ...
    return processedData;
  },
  generateReport(data) {
    // ... complex aggregation ...
    return report;
  }
};

expose(api);

// main.js — use it like a normal async module
import { wrap } from 'comlink';
const worker = wrap(new Worker('./worker.js', { type: 'module' }));
// Looks like a normal function call — runs in a Worker
const data = await worker.processCSV(csvContent);
const report = await worker.generateReport(data);
4 requestAnimationFrame Best Practices
Any visual update that occurs outside the browser's render cycle wastes work or produces
janky animations. requestAnimationFrame (rAF) synchronizes your updates with
the browser's paint cycle.
// Before: setInterval is NOT synced with the render cycle
setInterval(() => {
  element.style.transform = `translateX(${x++}px)`;
}, 16); // ~60fps... maybe

// Before: on scroll, fires 100+ times per second
window.addEventListener('scroll', () => {
  header.style.opacity = 1 - scrollY / 200;
});

// After: rAF syncs with the display refresh rate
function animate() {
  element.style.transform = `translateX(${x++}px)`;
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);

// After: throttle scroll work to rAF
let ticking = false;
window.addEventListener('scroll', () => {
  if (!ticking) {
    requestAnimationFrame(() => {
      header.style.opacity = 1 - scrollY / 200;
      ticking = false;
    });
    ticking = true;
  }
});
5 Memory Leak Prevention
Memory leaks cause applications to slow down over time and eventually crash. In single-page applications that run for hours, even small leaks compound into major problems. Here are the most common patterns and how to fix them.
Event Listener Leaks
// Before: component mounts and adds a listener, but never removes it
function setupWidget(el) {
  window.addEventListener('resize', () => updateSize(el));
  // el is now retained in the closure FOREVER,
  // even after the widget is destroyed
}

// After: AbortController handles the cleanup
function setupWidget(el) {
  const controller = new AbortController();
  window.addEventListener('resize', () => updateSize(el), {
    signal: controller.signal
  });
  // Cleanup: one call removes ALL listeners on this signal
  return () => controller.abort();
}
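A usage sketch, assuming whatever owns the widget calls the returned cleanup when the widget is destroyed (widgetEl is a placeholder for the widget's root element):
const teardown = setupWidget(widgetEl);
// ...later, when the widget is removed from the page:
teardown(); // a single abort() detaches every listener tied to this signal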
Closure Leaks
// LEAK risk: the huge dataset can be kept alive by the closure's scope
// even though only 'summary' is needed
function processData(hugeDataset) {
  const summary = computeSummary(hugeDataset);
  // Depending on the engine, the scope captured here can keep
  // 'hugeDataset' reachable for as long as the returned function lives
  return () => {
    console.log('Summary:', summary);
  };
}

// FIXED: extract what you need before creating the closure
function processData(hugeDataset) {
  const summary = computeSummary(hugeDataset);
  // hugeDataset is no longer referenced after this function returns
  return createLogger(summary); // separate function, no closure over hugeDataset
}

function createLogger(summary) {
  return () => console.log('Summary:', summary);
}
WeakRef and FinalizationRegistry
// Cache that does not prevent garbage collection
class WeakCache {
  #cache = new Map();
  #registry = new FinalizationRegistry((key) => {
    this.#cache.delete(key);
  });

  set(key, value) {
    const ref = new WeakRef(value);
    this.#cache.set(key, ref);
    this.#registry.register(value, key);
  }

  get(key) {
    const ref = this.#cache.get(key);
    if (!ref) return undefined;
    const value = ref.deref();
    if (!value) { this.#cache.delete(key); }
    return value;
  }
}
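A usage sketch (render and refetchUser are placeholder functions). Values must be objects, since WeakRef cannot hold primitives, and an entry can vanish whenever its value is garbage collected, so callers always handle a miss:
const cache = new WeakCache();
cache.set('user:42', { id: 42, name: 'Alice' });

const user = cache.get('user:42');
if (user) {
  render(user);    // value is still alive
} else {
  refetchUser(42); // value was collected (or never cached), so rebuild it
}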
6 DOM Manipulation Optimization
The DOM is often the biggest bottleneck in web applications. Every DOM modification can trigger style recalculation, layout (reflow), and paint. Minimizing and batching DOM operations is critical.
// Before: each appendChild mutates the live DOM
for (const item of items) {
  const li = document.createElement('li');
  li.textContent = item.name;
  list.appendChild(li);
}
// 1000 items = 1000 separate DOM mutations,
// each of which can trigger layout work

// After: DocumentFragment builds off-DOM, inserts once
const fragment = document.createDocumentFragment();
for (const item of items) {
  const li = document.createElement('li');
  li.textContent = item.name;
  fragment.appendChild(li);
}
list.appendChild(fragment);
// 1000 items = 1 insertion and at most 1 reflow
innerHTML vs DOM API
// For large batch inserts, a single innerHTML assignment can beat many
// individual DOM API calls because the browser parses the string in one pass
const html = items.map(item =>
  `<li class="item">
    <span class="name">${escapeHTML(item.name)}</span>
    <span class="price">$${item.price}</span>
  </li>`
).join('');
list.innerHTML = html; // Single parse + single reflow

// IMPORTANT: always sanitize user-generated content
function escapeHTML(str) {
  const div = document.createElement('div');
  div.appendChild(document.createTextNode(str));
  return div.innerHTML;
}
7 Event Delegation Patterns
Instead of attaching event listeners to every element, attach one listener to a parent and use event bubbling. This reduces memory usage and setup time, especially for dynamic lists.
// Before: one listener per button
document.querySelectorAll('.delete-btn').forEach(btn => {
  btn.addEventListener('click', () => deleteItem(btn.dataset.id));
});
// Problem: new items added dynamically
// need manual listener attachment

// After: one listener on the parent
list.addEventListener('click', (e) => {
  const btn = e.target.closest('.delete-btn');
  if (!btn) return;
  deleteItem(btn.dataset.id);
});
// Works automatically for dynamically added items
8 Debouncing and Throttling
High-frequency events like scroll, resize, input, and mousemove can fire hundreds of times per second. Debouncing and throttling limit how often your handlers run.
Debounce: Wait Until the User Stops
// Production-grade debounce with cancel and flush
function debounce(fn, delay, { leading = false } = {}) {
  let timer = null;
  let lastArgs = null;
  function debounced(...args) {
    lastArgs = args;
    const callNow = leading && !timer;
    clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      if (!leading) fn.apply(this, lastArgs);
    }, delay);
    if (callNow) fn.apply(this, args);
  }
  debounced.cancel = () => {
    clearTimeout(timer);
    timer = null;
  };
  debounced.flush = () => {
    if (timer) {
      clearTimeout(timer);
      timer = null;
      fn.apply(this, lastArgs);
    }
  };
  return debounced;
}

// Usage: search input — only fire after user stops typing for 300ms
const search = debounce(async (query) => {
  const results = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
  displayResults(await results.json());
}, 300);
input.addEventListener('input', (e) => search(e.target.value));
Throttle: Limit Execution Rate
// Throttle: execute at most once per interval
function throttle(fn, interval) {
  let lastTime = 0;
  let timer = null;
  return function(...args) {
    const now = Date.now();
    const remaining = interval - (now - lastTime);
    if (remaining <= 0) {
      clearTimeout(timer);
      timer = null;
      lastTime = now;
      fn.apply(this, args);
    } else if (!timer) {
      timer = setTimeout(() => {
        lastTime = Date.now();
        timer = null;
        fn.apply(this, args);
      }, remaining);
    }
  };
}

// Usage: scroll position tracking — max 60fps
const onScroll = throttle(() => {
  updateScrollIndicator(window.scrollY);
}, 16); // ~60fps
window.addEventListener('scroll', onScroll, { passive: true });
9 Efficient Data Structures
Choosing the right data structure can turn an O(n) operation into O(1). JavaScript provides Map, Set, and typed arrays that outperform plain objects and arrays for specific use cases.
// Before: searching an array is O(n)
const users = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
  // ... 10,000 more
];
// Every lookup scans the entire array
function findUser(id) {
  return users.find(u => u.id === id);
}

// After: index with Map for O(1) lookups
const userMap = new Map(users.map(u => [u.id, u]));
// Instant lookup regardless of collection size
function findUser(id) {
  return userMap.get(id);
}

// Set for fast membership tests
const activeIds = new Set(activeUsers.map(u => u.id));
activeIds.has(42); // O(1)
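The intro to this section also mentions typed arrays. For large numeric datasets they store values in one contiguous buffer, which avoids per-element object overhead; a minimal sketch with hypothetical sensor readings:
// 1,000,000 readings, 8 bytes each, stored contiguously with no boxing
const readings = new Float64Array(1_000_000);
readings[0] = 21.5;

let sum = 0;
for (let i = 0; i < readings.length; i++) {
  sum += readings[i];
}
const mean = sum / readings.length;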
10 Virtual Scrolling for Large Lists
Rendering 10,000 DOM nodes will make any browser crawl. Virtual scrolling renders only the visible items plus a small buffer, swapping DOM nodes as the user scrolls. This keeps the DOM small regardless of list size.
// Minimal virtual scroll implementation
class VirtualList {
  constructor(container, items, { itemHeight = 50, buffer = 5 } = {}) {
    this.container = container;
    this.items = items;
    this.itemHeight = itemHeight;
    this.buffer = buffer;
    // Create the scrollable viewport
    this.viewport = document.createElement('div');
    this.viewport.style.cssText =
      `overflow-y:auto;height:100%;position:relative;`;
    // Spacer maintains the full scrollable height
    this.spacer = document.createElement('div');
    this.spacer.style.height = `${items.length * itemHeight}px`;
    this.content = document.createElement('div');
    this.content.style.cssText = `position:absolute;left:0;right:0;`;
    this.viewport.appendChild(this.spacer);
    this.viewport.appendChild(this.content);
    container.appendChild(this.viewport);
    this.viewport.addEventListener('scroll',
      throttle(() => this.render(), 16), { passive: true });
    this.render();
  }

  render() {
    const scrollTop = this.viewport.scrollTop;
    const viewportHeight = this.viewport.clientHeight;
    const startIndex = Math.max(0,
      Math.floor(scrollTop / this.itemHeight) - this.buffer);
    const endIndex = Math.min(this.items.length,
      Math.ceil((scrollTop + viewportHeight) / this.itemHeight) + this.buffer);
    this.content.style.top = `${startIndex * this.itemHeight}px`;
    this.content.innerHTML = this.items
      .slice(startIndex, endIndex)
      .map((item) =>
        `<div style="height:${this.itemHeight}px">${item}</div>`
      ).join('');
  }
}
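A usage sketch (the container id and row strings are placeholders). Note that the class above relies on the throttle helper from technique 8:
const container = document.getElementById('log-container'); // placeholder id
const rows = Array.from({ length: 50_000 }, (_, i) => `Row #${i}`);
new VirtualList(container, rows, { itemHeight: 32, buffer: 10 });
// The DOM only ever holds the visible rows plus the buffer, never all 50,000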
11 Image and Asset Loading Strategies
Images often account for roughly half of a web page's total weight. Smart loading strategies can dramatically improve perceived and actual performance.
// Intersection Observer for lazy loading images
const imageObserver = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;
      if (img.dataset.srcset) img.srcset = img.dataset.srcset;
      img.classList.add('loaded');
      imageObserver.unobserve(img);
    }
  });
}, {
  rootMargin: '200px', // Start loading 200px before visible
  threshold: 0.01
});

// Observe all lazy images
document.querySelectorAll('img[data-src]').forEach(img => {
  imageObserver.observe(img);
});
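As an alternative to the loop above, browsers with native lazy loading support can handle the deferral themselves; a feature-detection sketch, assuming the same data-src and data-srcset markup:
const lazyImages = document.querySelectorAll('img[data-src]');
if ('loading' in HTMLImageElement.prototype) {
  // Native support: hand the URL to the browser and let it defer the fetch
  lazyImages.forEach(img => {
    img.loading = 'lazy';
    img.src = img.dataset.src;
    if (img.dataset.srcset) img.srcset = img.dataset.srcset;
  });
} else {
  lazyImages.forEach(img => imageObserver.observe(img));
}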
12 Caching with Service Workers
Service Workers enable sophisticated caching strategies that make repeat visits near-instantaneous and enable offline functionality.
// sw.js — Stale-while-revalidate strategy
const CACHE_NAME = 'app-v1';
const PRECACHE_URLS = ['/', '/index.html', '/app.js', '/styles.css'];
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(cache => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(cached => {
      // Update the cache in the background
      const fetchPromise = fetch(event.request)
        .then(response => {
          const clone = response.clone();
          caches.open(CACHE_NAME)
            .then(cache => cache.put(event.request, clone));
          return response;
        });
      // Return the cached version immediately, falling back to the network
      return cached || fetchPromise;
    })
  );
});
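The worker above still has to be registered from the page; a minimal sketch that waits for the load event so registration does not compete with startup work:
// main.js
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .catch(err => console.error('Service Worker registration failed:', err));
  });
}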
13 Avoiding Layout Thrashing
Layout thrashing occurs when you read a layout property, then write to the DOM, then read again, forcing the browser to recalculate layout multiple times in a single frame. This is one of the most common performance killers.
// Before: each iteration goes read → write → read → write,
// forcing synchronous layout
elements.forEach(el => {
  const h = el.offsetHeight;
  el.style.height = (h * 2) + 'px';
  const w = el.offsetWidth;
  el.style.width = (w * 2) + 'px';
});
// N elements = N forced layout recalculations

// After: Phase 1 reads all values
const measurements = elements.map(el => ({
  h: el.offsetHeight,
  w: el.offsetWidth,
}));
// Phase 2 writes all values
elements.forEach((el, i) => {
  el.style.height = (measurements[i].h * 2) + 'px';
  el.style.width = (measurements[i].w * 2) + 'px';
});
// 1 layout recalculation regardless of N
Reading these properties forces a synchronous layout: offsetWidth,
offsetHeight, offsetTop, scrollTop,
clientWidth, clientHeight, getComputedStyle(),
getBoundingClientRect(). Always batch reads before writes.
14 String and Array Method Optimization
When processing large datasets, the choice of array method matters. Chaining multiple array
methods creates intermediate arrays. A single reduce or a for loop avoids this.
// Before: 100K items, creates 3 temporary arrays
const result = data
  .filter(x => x.active)
  .map(x => x.value)
  .filter(v => v > 100)
  .reduce((sum, v) => sum + v, 0);
// Memory: 3 extra arrays
// Iterations: multiple passes over the data

// After: single pass, no intermediate arrays
let result = 0;
for (const item of data) {
  if (item.active && item.value > 100) {
    result += item.value;
  }
}
// Memory: zero extra allocations
// Iterations: 1 pass
For small arrays (under 1000 items), chained methods are fine and more readable. Optimize only when processing large datasets or in hot loops. Readability matters more than saving microseconds.
15 Measuring and Profiling
You cannot optimize what you do not measure. Before applying any optimization, establish a baseline measurement. After applying it, measure again. If it did not help, revert it.
Performance API
// Measure a function's execution time
performance.mark('process-start');
processData(largeDataset);
performance.mark('process-end');
performance.measure('Data Processing', 'process-start', 'process-end');
const [measure] = performance.getEntriesByName('Data Processing');
console.log(`Processing took: ${measure.duration.toFixed(2)}ms`);
// Monitor Core Web Vitals: Largest Contentful Paint
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  // The last entry is the latest LCP candidate; its timing is startTime
  const lcp = entries[entries.length - 1];
  console.log(`LCP: ${lcp.startTime.toFixed(0)}ms`);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Layout shift entries (CLS aggregates these values)
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      console.log(`Layout shift: ${entry.value}`);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
Chrome DevTools Performance Panel
The Performance panel is your most powerful profiling tool. Key workflow:
- Record — Click the record button, perform the action you want to profile, then stop.
- Analyze the flame chart — Long yellow bars are JavaScript. Long purple bars are layout. Long green bars are painting.
- Look for long tasks — Any task over 50ms is a "long task" that blocks the main thread (these can also be captured in code, as shown after this list).
- Check the summary tab — Shows time breakdown: Scripting, Rendering, Painting, Idle.
- Use the Bottom-Up tab — Shows which functions consumed the most time.
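Long tasks can also be captured programmatically, which is useful for real user monitoring; a sketch using the Long Tasks API (the 'longtask' entry type is currently Chromium-only):
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each entry is a main-thread task that ran longer than 50ms
    console.warn(`Long task: ${entry.duration.toFixed(0)}ms`, entry.attribution);
  }
}).observe({ type: 'longtask' });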
The Performance Optimization Workflow
- Measure first. Use Lighthouse, WebPageTest, or the Performance panel to establish a baseline.
- Identify the bottleneck. Is it network (too much JavaScript shipped)? Is it main thread (too much computation)? Is it rendering (too many DOM operations)?
- Apply the right technique. Network bottleneck? Use techniques 1-2. Main thread? Use 3-4, 8. Rendering? Use 6, 13.
- Measure again. Verify the improvement. If it did not help, revert and try something else.
- Ship and monitor. Use real user monitoring (RUM) to track performance in production.