150+ JavaScript Interview Questions
With Answers & Code Examples

The most comprehensive collection of JavaScript interview questions for frontend developers in 2025. Covers everything from basic JS concepts to advanced topics like the event loop, closures, promises, prototypes, and modern ES2023+ features.

✅ 91 questions  ·  ✅ 21 categories  ·  ✅ Code examples in every answer  ·  ✅ Updated 2025

Practice All Questions Free →  ·  Browse Questions ↓

📋 Table of Contents

  1. 'this' Binding (0)
  2. 'this' Keyword (2)
  3. Arrays (4)
  4. Async Bugs (0)
  5. Async JS (6)
  6. Browser APIs (6)
  7. Closure Traps (0)
  8. Closures & Scope (0)
  9. Core JS (17)
  10. DOM & Events (7)
  11. Error Handling (3)
  12. Event Loop & Promises (0)
  13. Event Loop Traps (0)
  14. Fix the Code (0)
  15. Functions (18)
  16. Hoisting (0)
  17. Modern JS (15)
  18. Objects (9)
  19. Performance (4)
  20. Type Coercion (0)
  21. What's Wrong? (0)

How to Use This Guide

This guide covers the most frequently asked JavaScript interview questions across all experience levels — from junior developers with 0–1 years of experience to senior engineers. Each answer includes a clear explanation, working code examples, and common gotchas that interviewers specifically test for.

The questions are organized by topic so you can focus on your weak areas first. We recommend using the interactive practice platform to test yourself rather than just reading — active recall improves retention by up to 50%.

For each concept, we also have output prediction challenges (predict what a code snippet logs) and a debug lab where you fix real buggy code — the closest thing to an actual interview.

'this' Binding Interview Questions

Practice 'this' Binding questions →

'this' Keyword Interview Questions

Practice 'this' Keyword questions →

How does 'this' work in different contexts?

💡 Hint: Determined by how a function is called, not where it is defined

this is determined by how a function is called:

  • Global → window / undefined (strict)
  • Method call → the object before the dot
  • new → the newly created object
  • call/apply/bind → whatever you pass
  • Arrow function → outer lexical scope
  • Event listener → the element the listener is attached to (event.currentTarget)
const obj = { val: 42, getVal() { return this.val; } };
const fn = obj.getVal;
fn();          // undefined in sloppy mode (TypeError in strict) — lost context!
fn.call(obj);  // 42 — restored
Practice this question →

Explain the four rules of this binding: default, implicit, explicit, and new.

💡 Hint: Priority: new > explicit (bind/call/apply) > implicit (method) > default (global/undefined)

Four rules determine this, in descending priority: new > explicit > implicit > default

// 1. Default binding — standalone function call
function fn() { console.log(this); }
fn(); // global object (window) in sloppy mode, undefined in strict

// 2. Implicit binding — method call (object before the dot)
const obj = { name: 'Alice', fn() { return this.name; } };
obj.fn(); // 'Alice' — this = obj

// ⚠️ Implicit binding LOST on assignment
const fn2 = obj.fn;
fn2(); // undefined — no object before dot

// 3. Explicit binding — call, apply, bind
function greet(greeting) { return `${greeting}, ${this.name}`; }
greet.call({ name: 'Bob' }, 'Hello');    // 'Hello, Bob'
greet.apply({ name: 'Carol' }, ['Hi']); // 'Hi, Carol'
const bound = greet.bind({ name: 'Dave' });
bound('Hey'); // 'Hey, Dave' — permanently bound

// 4. new binding — constructor call
function Person(name) { this.name = name; }
const p = new Person('Eve');
p.name; // 'Eve' — this = freshly created object

// new: 1) creates {} 2) links prototype 3) binds this 4) returns it

// Arrow functions: LEXICAL this — none of the 4 rules apply
const obj2 = {
  name: 'Zara',
  fn: () => this.name, // this = outer scope (NOT obj2)
  method() { return () => this.name; } // nested arrow captures method's this
};
obj2.fn();            // undefined — arrow ignores implicit binding
obj2.method()();      // 'Zara' — arrow captured method's this
💡 Interview rule: ask "How was the function called?" Default → standalone call. Implicit → object.method(). Explicit → call/apply/bind. new → constructor. Arrow → look where it was DEFINED.
Practice this question →  ·  More 'this' Keyword questions →

Arrays Interview Questions

Practice Arrays questions →

When would you use map vs forEach vs reduce vs filter?

💡 Hint: map=transform, filter=subset, reduce=accumulate, forEach=side effects

  • map — transform each element, returns new array of same length
  • filter — keep matching elements, returns smaller array
  • reduce — accumulate into single value (any type)
  • forEach — side effects only, returns undefined, not chainable
const nums = [1,2,3,4,5];
nums.map(n => n * 2);                // [2,4,6,8,10]
nums.filter(n => n % 2 === 0);       // [2,4]
nums.reduce((sum, n) => sum + n, 0); // 15

// Chaining
nums.filter(n => n > 2).map(n => n ** 2); // [9,16,25]
Practice this question →

How do Array.find(), findIndex(), some(), and every() work?

💡 Hint: find=first match value, findIndex=first match index, some=any passes, every=all pass

  • find() — returns the first element where callback returns true, or undefined
  • findIndex() — returns the index of first match, or -1
  • some() — returns true if at least one element passes (short-circuits)
  • every() — returns true only if ALL elements pass (short-circuits)
const users = [
  { id: 1, name: 'Alice', active: true  },
  { id: 2, name: 'Bob',   active: false },
  { id: 3, name: 'Carol', active: true  },
];

users.find(u => u.id === 2);       // { id: 2, name: 'Bob', active: false }
users.findIndex(u => u.id === 2);  // 1
users.find(u => u.id === 99);      // undefined (not found)

users.some(u => u.active);  // true  (stops at Alice)
users.every(u => u.active); // false (stops at Bob)

// Short-circuit saves work
users.some(u => {
  console.log('checking', u.name);
  return u.name === 'Alice'; // only checks Alice — stops immediately
});
💡 some() is like logical OR across the array; every() is like AND. Use find() when you need the value, findIndex() when you need the position.
Practice this question →

How do Array.flat() and Array.flatMap() work?

💡 Hint: flat() flattens nested arrays; flatMap() maps then flattens one level — more efficient

const nested = [1, [2, [3, [4]]]];

nested.flat();         // [1, 2, [3, [4]]] — default depth 1
nested.flat(2);        // [1, 2, 3, [4]]
nested.flat(Infinity); // [1, 2, 3, 4] — fully flat

// flatMap = map + flat(1) — more efficient than separate calls
const sentences = ['Hello World', 'Foo Bar'];
sentences.flatMap(s => s.split(' ')); // ['Hello', 'World', 'Foo', 'Bar']

// vs two-step (less efficient)
sentences.map(s => s.split(' ')).flat(); // same result

// flatMap can filter + transform in one pass
const nums = [1, 2, 3, 4, 5];
nums.flatMap(n => n % 2 === 0 ? [n, n * 10] : []);
// [2, 20, 4, 40] — odds removed, evens doubled
// Return [] to skip, [val] to keep, [a, b] to expand one item into two
💡 flatMap is more efficient than map+flat because it only iterates once. Use it as a combined filter+map by returning [] to skip items.
Practice this question →

What are immutable array operations and the new ES2023 methods?

💡 Hint: toSorted, toReversed, toSpliced, with — immutable versions of mutating methods

Many array methods mutate the original. Always be aware of which do and which don't:

Mutating (change original): sort, reverse, splice, push, pop, shift, unshift, fill

Non-mutating (return new): map, filter, slice, concat, flat, flatMap, reduce

const arr = [3, 1, 2];

// Old pattern — copy first to avoid mutation
const sorted = [...arr].sort((a, b) => a - b); // arr unchanged

// ES2023 — built-in immutable versions
arr.toSorted((a, b) => a - b);  // [1, 2, 3] — arr still [3, 1, 2]
arr.toReversed();               // [2, 1, 3] — arr still [3, 1, 2]
arr.toSpliced(1, 1, 9);         // [3, 9, 2] — arr still [3, 1, 2]
arr.with(1, 99);                // [3, 99, 2] — replace index 1

// All 4 return NEW arrays without touching the original
console.log(arr); // [3, 1, 2] ✓
💡 Immutable operations are essential in React/Redux where you must not mutate state. Prefer [...arr].sort() or arr.toSorted() over arr.sort().
Practice this question →  ·  More Arrays questions →

Async Bugs Interview Questions

Practice Async Bugs questions →

Async JS Interview Questions

Practice Async JS questions →

Explain Promises — states, chaining, and error handling.

💡 Hint: pending → fulfilled/rejected; .then chains; .catch handles errors

A Promise is an object representing the eventual result of an asynchronous operation. States: pending → fulfilled / rejected. Once settled, its state and value are immutable.

fetch('/api/data')
  .then(res => res.json())
  .then(data => console.log(data))
  .catch(err => console.error(err))
  .finally(() => setLoading(false));

Combinators: Promise.all (all must resolve), Promise.allSettled (wait all), Promise.race (first settles), Promise.any (first resolves).
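Settled immutability can be demonstrated directly — once a promise resolves, later calls to resolve or reject inside the executor are silently ignored:

```javascript
const p = new Promise((resolve, reject) => {
  resolve('first');
  resolve('second');          // ignored — already settled
  reject(new Error('nope'));  // also ignored
});

p.then(value => console.log(value)); // 'first'
```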

Practice this question →

How does async/await work under the hood?

💡 Hint: Syntactic sugar over Promises; await pauses the function, not the thread

async/await is syntactic sugar over Promises. An async function always returns a Promise. await pauses only that function's execution.

async function fetchUser(id) {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) throw new Error('Not found');
    return await res.json();
  } catch (err) {
    console.error(err);
  }
}
💡 Parallel fetching: use Promise.all([fetch(a), fetch(b)]) instead of sequential awaits — ~2x faster.
Practice this question →

What is callback hell and how do you avoid it?

💡 Hint: Pyramid of doom — nested callbacks; solve with Promises/async-await

Deeply nested callbacks make code hard to read, debug, and maintain.

// ❌ Callback hell
getUser(id, (user) => {
  getPosts(user, (posts) => {
    getComments(posts[0], (comments) => { ... });
  });
});

// ✅ Async/await
const user = await getUser(id);
const posts = await getPosts(user);
const comments = await getComments(posts[0]);
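The same flattening works for any Node-style callback API by wrapping it in a Promise yourself. A minimal sketch — getUserCb is a hypothetical stand-in for such an API:

```javascript
// Hypothetical callback-style API (err-first convention)
function getUserCb(id, callback) {
  setTimeout(() => callback(null, { id, name: 'Alice' }), 10);
}

// Manual promisification — resolve on success, reject on error
function getUser(id) {
  return new Promise((resolve, reject) => {
    getUserCb(id, (err, user) => (err ? reject(err) : resolve(user)));
  });
}

getUser(1).then(user => console.log(user.name)); // 'Alice'
```

Node.js ships util.promisify to do exactly this conversion for err-first callback functions.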
Practice this question →

What are Promise combinators and when do you use each?

💡 Hint: all=all resolve, allSettled=wait all, race=first settles, any=first resolves

  • Promise.all() — waits for ALL to resolve. Rejects immediately if ANY rejects. Use when all must succeed.
  • Promise.allSettled() — waits for ALL to settle (resolve OR reject). Never rejects itself. Use when you need all results regardless of failure.
  • Promise.race() — settles with the FIRST settled promise (resolve or reject). Use for timeout patterns.
  • Promise.any() — resolves with the FIRST resolved promise. Rejects only if ALL reject. Use for fallback/redundancy.
const fast = fetch('/fast');   // 100ms
const slow = fetch('/slow');   // 500ms
const bad  = fetch('/broken'); // 200ms, rejects

await Promise.all([fast, slow]);    // ✅ 500ms (waits for both)
await Promise.all([fast, bad]);     // ❌ rejects at 200ms

const results = await Promise.allSettled([fast, slow, bad]);
// [{status:'fulfilled', value:...}, ..., {status:'rejected', reason:...}]

await Promise.race([fast, slow]);   // resolves at 100ms with fast result

await Promise.any([bad, fast]);     // resolves at 100ms (ignores bad)

// Timeout pattern with race:
const withTimeout = (p, ms) => Promise.race([
  p,
  new Promise((_, r) => setTimeout(() => r(new Error('Timeout')), ms))
]);
💡 allSettled is your safety net — it always resolves, making it great for "fire multiple requests, report all results" patterns.
Practice this question →

What is queueMicrotask() and when should you use it?

💡 Hint: Schedule a function in the microtask queue — runs after current sync, before next macrotask

queueMicrotask(fn) adds a callback directly to the microtask queue — the same queue that Promise callbacks use.

// Functionally equivalent:
Promise.resolve().then(() => console.log('A'));
queueMicrotask(() => console.log('B'));
// A, B — FIFO within the microtask queue

console.log('sync');
queueMicrotask(() => console.log('microtask'));
setTimeout(() => console.log('macrotask'), 0);
console.log('sync end');
// Order: sync → sync end → microtask → macrotask

Advantages over Promise.resolve().then():

  • No Promise overhead — slightly more performant
  • More explicit — clearly states "schedule as microtask"
  • Doesn't create a Promise chain
// Real use case: batch state updates
let pending = false;
function scheduleRender() {
  if (pending) return;
  pending = true;
  queueMicrotask(() => {
    pending = false;
    renderDOM(); // runs once after all sync mutations
  });
}
💡 Use queueMicrotask when you want something to run "ASAP but async" — after current sync code finishes, before any I/O or timers fire.
Practice this question →

How do you handle unhandled Promise rejections?

💡 Hint: They crash Node.js 15+ — always attach .catch() or try/catch; use global handlers as last resort

An unhandled rejection occurs when a Promise rejects with no .catch() or try/catch handler.

// ❌ Unhandled
const p = Promise.reject(new Error('oops'));
// Browser: warning in console; Node.js 15+: crashes process

// ✅ Always handle
async function fetchData() {
  try {
    const res = await fetch('/api');
    // raw fetch() rejections carry no .status — attach it ourselves
    if (!res.ok) throw Object.assign(new Error(`HTTP ${res.status}`), { status: res.status });
    return res;
  } catch (err) {
    if (err.status === 404) return null; // handle known
    throw err;                           // re-throw unknown
  }
}

// ✅ At call site
fetchData().catch(err => console.error(err));

// Global handlers (last resort / monitoring)
// Browser
window.addEventListener('unhandledrejection', (event) => {
  console.error('Unhandled:', event.reason);
  event.preventDefault(); // suppress browser logging
});

// Node.js
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled Rejection:', reason);
  process.exit(1); // recommended in production
});
💡 Never rely on global handlers for correctness — they're for logging/monitoring. Fix the root cause by ensuring every async operation is wrapped in try/catch or has .catch().
Practice this question →  ·  More Async JS questions →

Browser APIs Interview Questions

Practice Browser APIs questions →

What is the Fetch API and how do you handle errors correctly?

💡 Hint: fetch() only rejects on network failure — HTTP 4xx/5xx must be checked via response.ok

The key gotcha: fetch() only rejects on network failure (no connection, DNS fail). HTTP errors like 404 or 500 are "successful" responses!

// ❌ Wrong — HTTP errors silently "succeed"
const data = await fetch('/api').then(r => r.json());
// 404 or 500 still "resolves" — you get error HTML as data

// ✅ Correct — check response.ok
async function apiFetch(url, options = {}) {
  const res = await fetch(url, options);
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}: ${res.statusText}`);
  }
  return res.json();
}

// POST with JSON
await apiFetch('/api/users', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Alice' }),
});

// With timeout (AbortController)
async function fetchWithTimeout(url, ms = 5000) {
  const controller = new AbortController();
  const id = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { signal: controller.signal });
  } catch (err) {
    if (err.name === 'AbortError') throw new Error('Request timed out');
    throw err;
  } finally {
    clearTimeout(id);
  }
}
💡 Always check response.ok. Create a reusable fetch wrapper that throws on non-2xx responses so every call site doesn't have to remember.
Practice this question →

What is the difference between localStorage, sessionStorage, and cookies?

💡 Hint: Differ in persistence, scope, size, and whether auto-sent to server

| Feature | localStorage | sessionStorage | Cookie |
| --- | --- | --- | --- |
| Lifetime | Persistent (until cleared) | Tab session only | Expiry date / session |
| Scope | Same origin, all tabs | Same tab only | Domain + path |
| Size limit | ~5–10 MB | ~5–10 MB | ~4 KB |
| Sent to server | No | No | Yes (every request) |
| JS access | Yes | Yes | Yes (unless HttpOnly) |
// localStorage / sessionStorage — same API
localStorage.setItem('theme', 'dark');
localStorage.getItem('theme');    // 'dark'
localStorage.removeItem('theme');
localStorage.clear();

// Must stringify objects
localStorage.setItem('user', JSON.stringify({ name: 'Alice', id: 1 }));
const user = JSON.parse(localStorage.getItem('user'));
💡 Never store auth tokens or sensitive data in localStorage/sessionStorage — XSS attacks can steal it. Use HttpOnly cookies for auth tokens — they're inaccessible to JavaScript.
Practice this question →

What are Web Workers and when should you use them?

💡 Hint: Run JS in a background thread — no DOM access; use postMessage to communicate

Web Workers run JavaScript in a separate background thread — preventing heavy computation from blocking the main thread (UI freezing).

// main.js
const worker = new Worker('worker.js');

// Send data to worker (structured clone — copies data)
worker.postMessage({ array: bigArray, threshold: 50 });

// Receive results
worker.onmessage = (e) => {
  console.log('Result:', e.data.result);
  worker.terminate(); // clean up
};

worker.onerror = (e) => console.error('Worker error:', e.message);

// worker.js — completely separate context
self.onmessage = (e) => {
  const { array, threshold } = e.data;
  // Heavy computation — won't block UI
  const result = array.filter(x => x > threshold).reduce((a,b) => a+b, 0);
  self.postMessage({ result });
};

Limitations: No access to DOM, window, document. Communication only via postMessage. Data is copied (structured clone), not shared (except SharedArrayBuffer).

Use cases: Image/video processing, data parsing, crypto, ML inference, large sort/filter operations.

💡 Use the Comlink library to make Worker APIs feel like regular async function calls — wraps postMessage/onmessage into an async function interface.
Practice this question →

What are Service Workers and what problems do they solve?

💡 Hint: Background JS proxy between app and network — enables offline support, push notifications, caching

A Service Worker is a script that runs in the background, separate from the page — acting as a network proxy. Enables progressive web app features.

// Registration (main thread)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('SW registered:', reg.scope))
    .catch(err => console.error('SW failed:', err));
}

// sw.js — the service worker itself
const CACHE = 'v1';
const ASSETS = ['/index.html', '/styles.css', '/app.js'];

// Install — pre-cache core assets
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE).then(cache => cache.addAll(ASSETS))
  );
});

// Activate — clean old caches
self.addEventListener('activate', event => {
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(keys.filter(k => k !== CACHE).map(k => caches.delete(k)))
    )
  );
});

// Fetch — intercept ALL network requests
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => {
      return cached || fetch(event.request); // cache-first strategy
    })
  );
});

Capabilities: Offline caching, background sync, push notifications, intercepting requests, URL routing.

💡 Service Workers only run on HTTPS (or localhost). They have no DOM access. They're the foundation of Progressive Web Apps (PWAs). Use Workbox library to simplify service worker code.
Practice this question →

What is the Same-Origin Policy and how does CORS work?

💡 Hint: Browser blocks cross-origin requests by default; server opts in via CORS headers

The Same-Origin Policy (SOP) blocks web pages from reading resources from a different origin (scheme + host + port).

// Same origin — all identical: protocol, domain, port
// https://app.com/page can read from https://app.com/api ✅
// https://app.com/page CANNOT read from https://api.other.com ❌

// CORS (Cross-Origin Resource Sharing):
// Server opts in by sending response headers

// Server response headers to allow access:
Access-Control-Allow-Origin: https://app.com  // or * for all
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Credentials: true // if sending cookies

// Simple requests (GET, POST with basic headers) — no preflight
fetch('https://api.other.com/data');

// Preflighted requests — browser sends OPTIONS first
fetch('https://api.other.com/data', {
  method: 'DELETE',           // non-simple method
  headers: { 'X-Custom': '1' } // non-simple header
});
// Browser sends: OPTIONS https://api.other.com/data
// → checks server allows it before actual request

// CORS does NOT apply to:
// - Server-to-server (Node.js fetch, cURL)
// - <img>, <script>, <link> tags (but limited access)
// - Same origin
💡 CORS is enforced by the BROWSER only — it's not a server-side security measure. Server-to-server calls bypass it entirely. The browser is protecting users from malicious scripts, not the server from requests.
Practice this question →

What is Shadow DOM and when do you use it?

💡 Hint: Encapsulated DOM subtree — styles and JS don't leak in or out; foundation of Web Components

// Attach a shadow root to any element
const host = document.getElementById('my-widget');
const shadow = host.attachShadow({ mode: 'open' }); // or 'closed'

// Add content — fully encapsulated
shadow.innerHTML = `
  <style>
    /* This CSS is SCOPED to shadow DOM only */
    p { color: red; font-size: 1.5rem; }
    :host { display: block; border: 1px solid blue; }
  </style>
  <p>I'm in shadow DOM</p>
  <slot></slot>  <!-- slot: renders host element's children -->
`;

// 'open' mode: accessible via element.shadowRoot
host.shadowRoot.querySelector('p'); // works
// 'closed' mode: host.shadowRoot = null (truly private)

// <slot> — project host children into shadow DOM
// <my-card><h2>Title</h2></my-card>
// The <h2> is rendered where <slot> is placed

// Web Component using Shadow DOM
class MyButton extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>button { background: purple; color: white; }</style>
      <button><slot>Click me</slot></button>
    `;
  }
}
customElements.define('my-button', MyButton);
💡 Shadow DOM = CSS encapsulation + DOM encapsulation. Styles from the outside can't penetrate in (except CSS custom properties / variables), and internal styles can't leak out. This is how browser built-ins like <video> and <input type="date"> hide their internal structure.
Practice this question →  ·  More Browser APIs questions →

Closure Traps Interview Questions

Practice Closure Traps questions →

Closures & Scope Interview Questions

Practice Closures & Scope questions →

Core JS Interview Questions

Practice Core JS questions →

What is the difference between var, let, and const?

💡 Hint: Think about scope, hoisting, and reassignment

var is function-scoped and hoisted (initialized as undefined). let and const are block-scoped and in a Temporal Dead Zone before declaration.

var x = 1;
let y = 2;    // block-scoped, reassignable
const z = 3;  // block-scoped, binding locked

if (true) {
  var a = 'leaks out';
  let b = 'stays in';
}
console.log(a); // 'leaks out'
console.log(b); // ReferenceError

const doesn't make values immutable — objects/arrays can still be mutated. The binding is locked, not the value.
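The binding-vs-value distinction in one snippet:

```javascript
const user = { name: 'Alice' };
user.name = 'Bob';      // ✅ allowed — mutating the value the binding points to
console.log(user.name); // 'Bob'
// user = {};           // ❌ TypeError: Assignment to constant variable
```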

💡 Default to const, use let only when reassignment is needed. Avoid var.
Practice this question →

Explain closures with a practical example.

💡 Hint: A function that remembers its outer scope after the outer function returns

A closure is a function that retains access to its lexical scope even after the outer function has returned.

function makeCounter() {
  let count = 0;
  return {
    increment: () => ++count,
    decrement: () => --count,
    value:     () => count
  };
}
const counter = makeCounter();
counter.increment(); // 1
counter.increment(); // 2

Real-world uses: data encapsulation, factory functions, event handlers with state, memoization, partial application.

💡 Classic gotcha: var in a for-loop closure — all callbacks share the same variable. Fix with let or IIFE.
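The loop gotcha in full — with var every callback closes over the same variable, while let creates a fresh binding per iteration:

```javascript
// ❌ var — all three callbacks share ONE i
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i)); // 3, 3, 3
}

// ✅ let — a fresh j for each iteration
for (let j = 0; j < 3; j++) {
  setTimeout(() => console.log(j)); // 0, 1, 2
}
```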
Practice this question →

What is hoisting in JavaScript?

💡 Hint: Declarations are moved to top of scope before execution

Hoisting moves declarations to the top of their scope during compilation — before execution. Initializations are NOT hoisted.

console.log(a); // undefined (not ReferenceError)
var a = 5;

greet(); // Works! Function declarations fully hoisted
function greet() { console.log('hello'); }

sayHi(); // TypeError
var sayHi = () => console.log('hi');

let/const are hoisted but live in a Temporal Dead Zone — accessing them before declaration throws a ReferenceError.

Practice this question →

Explain the event loop, call stack, and microtask queue.

💡 Hint: Synchronous → Microtasks (all) → Next Macrotask

JS is single-threaded. The call stack runs sync code. Async callbacks go into task queues. The event loop picks tasks when the stack is empty.

Microtasks (Promises, queueMicrotask) drain completely after each task before the next macrotask runs.

console.log('1');
setTimeout(() => console.log('2'), 0); // macrotask
Promise.resolve().then(() => console.log('3')); // microtask
console.log('4');
// Output: 1 → 4 → 3 → 2
💡 Order: Sync → All Microtasks → Next Macrotask → All Microtasks → ...
Practice this question →

What is the difference between == and ===?

💡 Hint: Type coercion vs strict type + value check

== performs type coercion. === checks value AND type — no coercion.

0 == ''            // true  ('' coerces to the number 0)
0 == '0'           // true  ('0' coerces to 0)
0 === '0'          // false (different types)
null == undefined  // true  (special case)
null === undefined // false
NaN == NaN         // false (NaN never equals itself)

Always use ===. Use Number.isNaN() to check for NaN.
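The NaN checks recommended above, side by side with the coercing global isNaN:

```javascript
isNaN('hello');        // true  — global isNaN coerces first: 'hello' → NaN
isNaN('42');           // false — '42' → 42
Number.isNaN('hello'); // false — no coercion; a string is not NaN
Number.isNaN(NaN);     // true
Object.is(NaN, NaN);   // true  — another coercion-free NaN check
```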

Practice this question →

What are the different types of scope in JavaScript?

💡 Hint: Global, function (local), and block scope — each has different variable rules

JavaScript has three types of scope:

  • Global scope — Variables declared outside any function or block. Accessible everywhere. In browsers, becomes a property of window.
  • Function scope — Variables declared with var inside a function are only accessible inside that function.
  • Block scope — Variables declared with let or const inside {} are scoped to that block.
var globalVar = 'everywhere';

function fn() {
  var funcVar = 'function only';
  if (true) {
    let blockVar = 'block only';
    var leaky = 'leaks to fn scope'; // var ignores blocks!
  }
  console.log(leaky);    // ✓ 'leaks to fn scope'
  console.log(blockVar); // ✗ ReferenceError
}

console.log(funcVar); // ✗ ReferenceError

The scope chain: When a variable isn't found in the current scope, JS looks up to the outer scope — all the way to global. Inner scopes access outer variables; outer scopes cannot access inner.

💡 Prefer const/let over var — they're block-scoped and avoid the leaky behavior of var.
Practice this question →

What is lexical scope and how does it affect closures?

💡 Hint: Scope is determined where code is written, not where it is called

Lexical scope (static scope) means a function's scope is determined by where it is written in the code — at author time — not by where it is called at runtime.

const x = 'global';

function outer() {
  const x = 'outer-fn';
  function inner() {
    console.log(x); // 'outer-fn' — closes over WHERE inner is DEFINED
  }
  return inner;
}

const fn = outer();
fn(); // logs 'outer-fn', NOT 'global'
// Even though fn() is called at global level, it remembers outer-fn's scope

This is what makes closures possible — a function carries its scope from where it was created, not from where it's invoked.

💡 Contrast with dynamic scope (Bash, Perl): if JS were dynamic, fn() would print 'global' because it's called there. Lexical scope is more predictable and is the reason closures work.
Practice this question →

What is an Execution Context and what are its two phases?

💡 Hint: Creation phase (hoisting, this binding) then execution phase — pushed onto the call stack

An Execution Context (EC) is the environment in which JavaScript code evaluates and executes. Every function call creates a new EC pushed onto the call stack.

Phase 1 — Creation Phase:

  • Scans for var declarations → hoisted and set to undefined
  • Scans for let/const → hoisted but placed in Temporal Dead Zone
  • Scans for function declarations → fully hoisted (name + body)
  • Determines the value of this
  • Sets up the scope chain (reference to outer environment)

Phase 2 — Execution Phase: runs code line by line, assigns actual values.

function example() {
  // Creation phase saw: var a → undefined, fn fully hoisted
  console.log(a);   // undefined (var hoisted)
  console.log(fn()); // 'works!' (function declaration hoisted)
  var a = 1;
  function fn() { return 'works!'; }
  console.log(a);   // 1 (now assigned)
}

Types of EC: Global EC (one per program), Function EC (one per call), Eval EC (avoid).

💡 The Global EC creates the global object (window/globalThis) and binds this = global object in its creation phase.
Practice this question →

What is the Temporal Dead Zone (TDZ)?

💡 Hint: let/const are hoisted but not initialized — accessing them before declaration throws

The Temporal Dead Zone is the period between when a let/const variable is hoisted (block start) and when it is initialized (the declaration line). Accessing it in this window throws a ReferenceError.

{
  // ← TDZ for 'a' starts here (block start)
  console.log(a); // ❌ ReferenceError: Cannot access 'a' before initialization
  let a = 5;      // ← TDZ ends, a is initialized to 5
  console.log(a); // 5
}

// typeof does NOT protect you in TDZ
console.log(typeof undeclared); // "undefined" — safe (never declared)
console.log(typeof a);          // ReferenceError if 'a' is in TDZ!

Why TDZ exists: Intentional design to catch bugs where you accidentally use a variable before its meaningful initialization. var silently returns undefined, masking errors.

💡 TDZ applies to let, const, and class declarations. Function declarations are fully hoisted with no TDZ.
Practice this question →

What does "use strict" do and why should you use it?

💡 Hint: Opt-in to stricter parsing — catches silent errors, changes some behaviors

Strict mode activates a restricted variant of JavaScript that converts silent errors into thrown errors and disables some confusing/dangerous features.

What it prevents:

  • Implicit globals — x = 5 throws ReferenceError (not window.x)
  • Duplicate parameter names — function fn(a, a) {} throws
  • Writing to read-only properties — throws instead of silently failing
  • this in standalone functions is undefined (not window)
  • delete on variables/functions — throws
  • Octal literals (0777) — throws
'use strict';

x = 5;             // ReferenceError — no more accidental globals
function fn(a, a) {} // SyntaxError

function test() {
  console.log(this); // undefined (not window)
}
test();

Good news: ES6 modules and classes are always in strict mode automatically. In modern code you rarely need to write 'use strict' explicitly.

💡 Enable strict mode per file or per function with the string 'use strict' at the top. Helps catch bugs early and enables engine optimizations.
Practice this question →

How does garbage collection work in JavaScript?

💡 Hint: Mark-and-sweep — unreachable objects are collected; common leak sources

JavaScript uses automatic garbage collection — memory is freed when objects become unreachable from the "roots" (globals + active call stack).

Mark-and-Sweep algorithm:

  1. Start from roots (global scope + call stack)
  2. Mark every reachable object
  3. Sweep — free everything NOT marked
let user = { name: 'Alice' }; // reachable via 'user'
user = null;                  // reference dropped → unreachable → GC'd

// Circular reference — NOT a problem for modern mark-and-sweep
let a = {}; let b = {};
a.ref = b; b.ref = a; // circular — but if a and b lose all external refs, both are GC'd

Common memory leak sources:

  • Forgotten setInterval holding references to DOM elements
  • Detached DOM nodes still referenced in JS variables
  • Event listeners never removed (removeEventListener)
  • Closures unintentionally capturing large objects
  • Unbounded caches / global arrays that grow forever
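The first leak source can be made concrete. A minimal sketch of an interval registry with explicit teardown (the `startPolling`/`stopPolling` names are illustrative, not from any library):

```javascript
// Leak: a forgotten setInterval keeps its callback — and everything the
// callback closes over (DOM nodes, large objects) — alive forever.
const handles = new Set();

function startPolling(fn, ms) {
  const id = setInterval(fn, ms);
  handles.add(id);      // remember the handle so it CAN be cleared later
  return id;
}

function stopPolling(id) {
  clearInterval(id);    // without this, fn and its captures are never GC'd
  handles.delete(id);
}

const id = startPolling(() => {/* poll server */}, 1000);
stopPolling(id); // teardown — e.g. when the component unmounts
```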
💡 Use WeakMap/WeakRef to associate data with objects without preventing GC. DevTools Memory tab → Heap Snapshot to hunt leaks.
Practice this question →

What is the difference between primitive and reference types?

💡 Hint: Primitives copy by value; objects copy by reference (the memory address)

Primitive types (string, number, boolean, null, undefined, symbol, bigint) are stored and copied by value.

Reference types (objects, arrays, functions) are stored as pointers. Variables hold a reference — assigning copies the reference, not the object.

// Primitives — copy by value
let a = 5;
let b = a;
b = 10;
console.log(a); // 5 — completely independent

// Objects — copy by reference
let obj1 = { x: 1 };
let obj2 = obj1;    // both point to THE SAME object in memory
obj2.x = 99;
console.log(obj1.x); // 99 — obj1 was mutated!

// Reference equality
console.log([] === []);   // false — different objects
console.log({} === {});   // false — different objects

// Functions receive references
function mutate(arr) { arr.push(1); }
const myArr = [];
mutate(myArr);
console.log(myArr); // [1] — original was mutated
💡 "Pass by value" in JS — always. But for objects, the VALUE being passed is the reference (memory address). So you can mutate the object, but you can't make the caller's variable point to a new object.
Practice this question →

What is Automatic Semicolon Insertion (ASI) and what are its gotchas?

💡 Hint: JS inserts ; in specific places — return on its own line is the classic trap

JavaScript automatically inserts semicolons in certain places during parsing to handle missing semicolons.

ASI rules (simplified): JS inserts ; when the next token would make the code invalid, and at the end of file.

// Classic trap: return on its own line
function getObj() {
  return    // ← ASI inserts ; HERE
  {
    data: 1  // unreachable!
  }
}
getObj(); // undefined! NOT the object

// Fix: opening brace on SAME line as return
function getObj() {
  return {
    data: 1
  };
}

// Trap 2: lines starting with (, [, /, +, -
const a = 1
const b = 2
[a, b].forEach(x => console.log(x)) // PARSED AS: 2[a,b].forEach(...)
// TypeError: Cannot read properties of undefined

// Fix A: add a semicolon to the previous line
const b = 2;
[a, b].forEach(x => console.log(x))

// Fix B: defensive leading semicolon (common in no-semicolon styles)
;[a, b].forEach(x => console.log(x))
💡 Safest rule: always put return values on the same line as return. If not using semicolons, use defensive semicolons before lines starting with [, (, or /.
Practice this question →

What is BigInt and when do you need it?

💡 Hint: Arbitrary-precision integers — for values beyond Number.MAX_SAFE_INTEGER (2^53 - 1)

JavaScript's Number type (64-bit float) can only safely represent integers up to 2^53 - 1 = 9,007,199,254,740,991. BigInt handles arbitrarily large integers.

// Problem: precision loss with large integers
9007199254740993 === 9007199254740992; // true! — lost a bit

// BigInt — suffix with n
9007199254740993n === 9007199254740992n; // false ✓

const big = 99999999999999999999999999n;
const sum = big + 1n; // works perfectly

// Can't mix BigInt and Number directly
1n + 1; // TypeError: Cannot mix BigInt and other types
Number(1n) + 1; // 2 — explicit conversion
BigInt(5) + 3n; // 8n — explicit conversion

// Comparison with Number (ok with ==, not ===)
1n == 1;  // true (loose equality)
1n === 1; // false (strict, different types)

// No decimal support
10n / 3n; // 3n — truncates toward zero

// Math methods don't support BigInt
Math.max(1n, 2n); // TypeError
💡 Use BigInt for: large database IDs (64-bit integers from other languages), financial amounts where precision matters, cryptography, and any integer computation beyond 2^53.
Practice this question →

What is the rendering phase of the browser event loop?

💡 Hint: Between macrotasks: style → layout → paint → compositing — sync code blocks it

The browser event loop interleaves JS execution with rendering. Understanding this explains why sync code freezes the UI.

// Browser event loop order:
// 1. Pick one macrotask from the queue (e.g., setTimeout callback)
// 2. Execute it to completion
// 3. Drain ALL microtasks (Promises, queueMicrotask)
// 4. ← RENDER PHASE (if needed):
//      a. requestAnimationFrame callbacks
//      b. Style recalculation
//      c. Layout (reflow)
//      d. Paint
//      e. Composite
// 5. Repeat

// Why sync code freezes UI:
button.onclick = () => {
  // Render phase is BLOCKED until this finishes
  for (let i = 0; i < 1_000_000_000; i++) {} // 1 second of CPU
  updateDOM(); // user sees nothing during the loop
};

// requestAnimationFrame runs IN the render phase — perfect for animation
function animate() {
  element.style.left = (x++) + 'px'; // guaranteed to paint every frame
  requestAnimationFrame(animate);    // schedule for NEXT render phase
}
requestAnimationFrame(animate);

// setTimeout(0) yields to render, rAF aligns WITH render
button.onclick = () => {
  status.textContent = 'Loading...';
  setTimeout(() => heavyWork(), 0); // allows repaint of 'Loading...' first
};
💡 Use requestAnimationFrame for visual updates — it runs just before the browser paints, ensuring 60fps synchronization. setTimeout(0) lets the browser render but has no frame alignment.
Practice this question →

What are Environment Records and how do they underpin scope?

💡 Hint: The data structure that stores variable bindings in each scope — the "real" scope implementation

An Environment Record is the actual data structure that stores identifier (variable) bindings for a scope. When the spec says "scope," it means Environment Records under the hood.

// Each scope = one Environment Record created at runtime
// Environment Record stores { variable: value } mappings

function outer() {
  let x = 1;     // x stored in outer's Environment Record
  function inner() {
    let y = 2;   // y stored in inner's Environment Record
    console.log(x); // looks up outer's Environment Record via [[OuterEnv]]
  }
  inner();
}
// Call stack at inner():
// inner's ER: { y: 2, [[OuterEnv]] → outer's ER }
// outer's ER: { x: 1, inner: fn, [[OuterEnv]] → global ER }
// global ER:  { outer: fn, ... }

// Types of Environment Records:
// - Declarative ER: let, const, function declarations, parameters
// - Object ER: var declarations and global code (backed by an object)
// - Global ER: combination of both (global scope)
// - Module ER: ES module scope
// - Function ER: function body scope

// Closures = an inner function holding a reference to
// the outer function's Environment Record AFTER it has returned
// If outer RETURNED inner instead of calling it:
// const fn = outer(); // outer's ER stays alive because fn (inner) references it
💡 Environment Records replaced the older "Activation Object" spec term. Understanding them demystifies closures completely: a closure is just a function holding a reference to an outer Environment Record.
Practice this question →

What is the new Function() constructor and when is it used?

💡 Hint: Create functions from strings at runtime — dynamic but slower, eval-like risks, no closure

// Syntax: new Function([...params], functionBody)
const add = new Function('a', 'b', 'return a + b');
add(2, 3); // 5

const greet = new Function('name', 'return "Hello, " + name');
greet('Alice'); // 'Hello, Alice'

// Key characteristic: no closure — runs in GLOBAL scope
const x = 10;
function test() {
  const x = 20; // local x
  const fn = new Function('return x'); // does NOT close over local x
  return fn();
}
test(); // 10 (global x) — NOT 20!

// Use cases (rare):
// 1. Dynamic code from server (CMS, user-defined formulas)
const formula = new Function('a', 'b', serverSideFormula);

// 2. Template engines (pre-compile to JS)

// 3. Evaluating expressions against supplied data
//    (NOT a true sandbox — globals are still reachable from the body)
const fn = new Function('data', 'with (data) { return x + y }');
fn({ x: 1, y: 2 }); // 3

Risks and drawbacks: cannot access local scope (it never forms a closure over surrounding variables). Security risk if the body comes from user input. Blocks engine optimizations, so it is slower than regular functions.

💡 Think of new Function() as eval() for function bodies. Only use for dynamic code execution from trusted sources. In CSP-hardened apps, new Function() is blocked along with eval.
Practice this question →
More Core JS questions →

DOM & Events Interview Questions

Practice DOM & Events questions →

Explain event delegation and why it is useful.

💡 Hint: One listener on parent; use event.target to identify the child

Attach ONE listener to a parent instead of many listeners on children. Works because events bubble up the DOM.

// ❌ Inefficient
document.querySelectorAll('li').forEach(li =>
  li.addEventListener('click', handleClick)
);

// ✅ Event delegation
document.querySelector('ul').addEventListener('click', (e) => {
  if (e.target.matches('li')) handleClick(e.target);
});

Benefits: fewer listeners = less memory, works for dynamically added elements, cleaner teardown.
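One gotcha with the snippet above: `e.target.matches('li')` misses clicks that land on a child of the `<li>` (say, a nested `<span>`). `closest()` walks up from the click target and handles nesting. A sketch, guarded so it is a no-op outside the browser (the `delegate` helper name is illustrative):

```javascript
function delegate(selector, handler) {
  return (e) => {
    // closest() walks UP the tree from the click target, so clicks on
    // nested elements still resolve to the matching ancestor
    const item = e.target.closest(selector);
    if (item) handler(item);
  };
}

if (typeof document !== 'undefined') {
  document.querySelector('ul')
    .addEventListener('click', delegate('li', li => li.classList.toggle('done')));
}
```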

Practice this question →

What is the difference between event.stopPropagation() and event.preventDefault()?

💡 Hint: stopPropagation=stop bubbling; preventDefault=cancel browser default action

  • stopPropagation() — stops event from bubbling to parent elements
  • preventDefault() — cancels browser default (link navigation, form submit) but still bubbles
  • stopImmediatePropagation() — stops bubbling + prevents other listeners on same element
link.addEventListener('click', (e) => {
  e.preventDefault();    // don't navigate
  e.stopPropagation();   // don't bubble up
  doSomething();
});
Practice this question →

What is the difference between event bubbling and event capturing?

💡 Hint: 3 phases: capture (down), target, bubble (up) — addEventListener default is bubble

When an event fires, it passes through 3 phases in the DOM tree:

  1. Capture phase — event travels DOWN: window → document → ... → target
  2. Target phase — event is at the element that was clicked/triggered
  3. Bubble phase — event travels UP: target → ... → document → window
// Capture phase listener (3rd arg = true, or { capture: true })
document.body.addEventListener('click', () => console.log('body capture'), true);

// Bubble phase listener (default)
document.body.addEventListener('click', () => console.log('body bubble'));
button.addEventListener('click', () => console.log('button'));

// Click the button:
// body capture (capturing, going down)
// button       (at target)
// body bubble  (bubbling, going up)

// Stop bubbling
button.addEventListener('click', (e) => {
  e.stopPropagation(); // stops bubble — body bubble won't fire
  // e.stopImmediatePropagation() — also blocks same-element listeners
});

Events that don't bubble: focus, blur, scroll, mouseenter, mouseleave — use focusin/focusout for delegation instead.

💡 Event delegation relies on bubbling — one listener on the parent handles events from all children. This is more efficient than attaching listeners to each child.
Practice this question →

How do you create and dispatch Custom Events?

💡 Hint: new CustomEvent(name, { detail, bubbles }) — dispatch with element.dispatchEvent()

Custom Events let you create your own event types for loosely-coupled component communication.

// Create
const loginEvent = new CustomEvent('user:login', {
  detail: { userId: 42, name: 'Alice' }, // payload — any data
  bubbles: true,     // will bubble up the DOM
  cancelable: true   // can be preventDefault'd
});

// Dispatch
document.dispatchEvent(loginEvent);
// or: specificElement.dispatchEvent(loginEvent);

// Listen
document.addEventListener('user:login', (e) => {
  console.log(e.detail.name); // 'Alice'
  console.log(e.type);        // 'user:login'
});

// Real-world: decoupled component communication
class Cart {
  addItem(item) {
    this.items.push(item);
    window.dispatchEvent(new CustomEvent('cart:updated', {
      detail: { items: this.items, count: this.items.length }
    }));
  }
}

// Navbar listens independently
window.addEventListener('cart:updated', ({ detail }) => {
  cartBadge.textContent = detail.count;
});
💡 Namespace your events with colons (user:login, cart:updated) to avoid collisions with native events. This is a lightweight pub/sub without a library.
Practice this question →

What is MutationObserver and when do you use it?

💡 Hint: Watch for DOM changes asynchronously — replaces deprecated Mutation Events

MutationObserver watches for changes to the DOM tree and fires a callback when changes occur — batched and asynchronous.

const observer = new MutationObserver((mutations, obs) => {
  mutations.forEach(mutation => {
    if (mutation.type === 'childList') {
      console.log('Added:', mutation.addedNodes);
      console.log('Removed:', mutation.removedNodes);
    }
    if (mutation.type === 'attributes') {
      console.log('Attr changed:', mutation.attributeName, 'on', mutation.target);
    }
    if (mutation.type === 'characterData') {
      console.log('Text changed');
    }
  });
});

observer.observe(document.getElementById('app'), {
  childList: true,     // watch add/remove of child nodes
  attributes: true,    // watch attribute changes
  characterData: true, // watch text content changes
  subtree: true,       // observe all descendants too
});

observer.disconnect(); // stop observing

Use cases: Lazy loading, detecting when third-party code modifies the DOM, building virtual DOM diffing, accessibility announcements.

💡 MutationObserver replaced deprecated synchronous Mutation Events (DOMNodeInserted etc.) which were slow and could cause infinite loops. MutationObserver batches changes for performance.
Practice this question →

What is IntersectionObserver and how do you use it for lazy loading?

💡 Hint: Detect when elements enter the viewport — no scroll listener, no layout thrashing

// Lazy loading images
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // load real image
      obs.unobserve(img);        // stop watching this one
    }
  });
}, {
  root: null,          // null = viewport
  rootMargin: '200px', // trigger 200px BEFORE element is visible
  threshold: 0.1       // fire when 10% of element is visible
});

// Watch all lazy images
document.querySelectorAll('img[data-src]').forEach(img => {
  observer.observe(img);
});

// Infinite scroll
const sentinel = document.querySelector('.load-more');
new IntersectionObserver(([entry]) => {
  if (entry.isIntersecting) loadNextPage();
}).observe(sentinel);

// entry properties:
// entry.isIntersecting — true/false
// entry.intersectionRatio — 0 to 1
// entry.boundingClientRect — element position
// entry.target — the observed element
💡 IntersectionObserver is async and non-blocking — no layout thrashing from getBoundingClientRect() in scroll handlers. Far more performant than scroll events.
Practice this question →

What is the difference between async and defer for script loading?

💡 Hint: Both avoid blocking HTML parsing — async executes ASAP, defer after full parse

By default, <script> blocks HTML parsing while downloading and executing.

| Attribute      | Download    | Executes when           | Order             |
|----------------|-------------|-------------------------|-------------------|
| None (default) | Blocks HTML | Immediately, blocks     | In order          |
| async          | Parallel    | As soon as downloaded   | NOT guaranteed    |
| defer          | Parallel    | After HTML fully parsed | In document order |
<!-- Default: blocks parsing ❌ (especially costly in <head>) -->
<script src="app.js"></script>

<!-- Parallel download, executes ASAP when downloaded -->
<!-- ORDER NOT GUARANTEED — analytics.js might run before vendor.js -->
<script async src="analytics.js"></script>

<!-- Parallel download, runs after HTML parsed, IN ORDER ✅ -->
<script defer src="vendor.js"></script>
<script defer src="app.js"></script>
<!-- app.js always runs after vendor.js -->

When to use:

  • defer — most app scripts (DOM-dependent, order-dependent)
  • async — completely independent scripts (analytics, ads)
💡 type="module" scripts are deferred by default. In modern apps using bundlers, you usually put one deferred script tag pointing at the bundle.
Practice this question →
More DOM & Events questions →

Error Handling Interview Questions

Practice Error Handling questions →

How do you handle errors in async/await properly?

💡 Hint: try/catch, re-throw unexpected errors, never swallow silently

// Option 1: try/catch (most common)
async function fetchData() {
  try {
    const data = await api.get('/users');
    return data;
  } catch (err) {
    if (err.status === 404) handleNotFound();
    else throw err; // re-throw unexpected errors
  }
}

// Option 2: .catch() at call site
const data = await fetchData().catch(err => null);

// Option 3: Go-style 'to' helper — resolves to an [error, data] tuple
const to = p => p.then(d => [null, d]).catch(e => [e, null]);
const [err, data2] = await to(fetchData());
💡 Always handle promise rejections — unhandled ones crash Node.js.
Practice this question →

How do you create custom Error types in JavaScript?

💡 Hint: Extend Error class — set this.name, call super(message); enables instanceof checks

class ValidationError extends Error {
  constructor(message, field) {
    super(message);                    // sets .message and .stack
    this.name = 'ValidationError';     // override — default is 'Error'
    this.field = field;                // custom property
  }
}

class NetworkError extends Error {
  constructor(message, statusCode) {
    super(message);
    this.name = 'NetworkError';
    this.statusCode = statusCode;
  }
}

// Usage — instanceof lets you catch specific types
function validate(user) {
  if (!user.name)  throw new ValidationError('Name required',  'name');
  if (!user.email) throw new ValidationError('Email required', 'email');
}

try {
  validate({ name: '' });
} catch (err) {
  if (err instanceof ValidationError) {
    console.log(`Field "${err.field}": ${err.message}`);
  } else if (err instanceof NetworkError) {
    console.log(`HTTP ${err.statusCode}: ${err.message}`);
  } else {
    throw err; // re-throw unknown errors — don't swallow them
  }
}
💡 Always call super(message) — this sets .message and .stack correctly. Always set this.name — otherwise err.name shows 'Error' not 'ValidationError'.
Practice this question →

What is error propagation and when should you re-throw an error?

💡 Hint: Catch what you can handle; re-throw everything else; never silently swallow errors

Error propagation means letting errors bubble up the call stack until something can meaningfully handle them.

// ❌ Anti-pattern: silently swallowing errors
try {
  await doSomething();
} catch (err) {} // hides all bugs — never do this!

// ❌ Anti-pattern: catching all errors at every level
async function fetchUser(id) {
  try { return await fetch(...); }
  catch (err) {
    console.log('error!'); // useless — caller doesn't know what happened
  }
}

// ✅ Correct pattern: handle what you can, re-throw the rest
async function fetchUser(id) {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) throw new NetworkError('Not found', res.status);
    return await res.json();
  } catch (err) {
    if (err instanceof NetworkError && err.statusCode === 404) {
      return null; // 404 is expected — handle it
    }
    throw err; // unexpected error — propagate up
  }
}

// ✅ Top-level handler
async function main() {
  try {
    const user = await fetchUser(1);
    render(user);
  } catch (err) {
    logToErrorService(err); // catch everything remaining
    showErrorMessage();
  }
}
💡 Rule: only catch what you can meaningfully recover from. If you can't handle it, re-throw. Handle everything else at the top of your app boundary.
Practice this question →
More Error Handling questions →

Event Loop & Promises Interview Questions

Practice Event Loop & Promises questions →

Event Loop Traps Interview Questions

Practice Event Loop Traps questions →

Fix the Code Interview Questions

Practice Fix the Code questions →

Functions Interview Questions

Practice Functions questions →

What is the difference between call, apply, and bind?

💡 Hint: All set "this" — call=comma, apply=array, bind=returns new fn

All three explicitly set this:

  • call(thisArg, arg1, arg2) — invoke immediately, args individually
  • apply(thisArg, [args]) — invoke immediately, args as array
  • bind(thisArg, arg1) — returns new permanently-bound function
function greet(greeting, punct) {
  return `${greeting}, ${this.name}${punct}`;
}
const user = { name: 'Priya' };
greet.call(user, 'Hello', '!');     // "Hello, Priya!"
greet.apply(user, ['Hi', '.']);     // "Hi, Priya."
const fn = greet.bind(user, 'Hey');
fn('?');                            // "Hey, Priya?"
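In modern code, spread syntax often replaces apply when you only need to expand an array into arguments (no this involved):

```javascript
const nums = [3, 1, 5];

// ES5: apply spreads an array into individual arguments
console.log(Math.max.apply(null, nums)); // 5

// ES6+: spread does the same, more readably — apply/call remain
// necessary only when you also need to set 'this'
console.log(Math.max(...nums));          // 5
```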
💡 Call=Comma, Apply=Array, Bind=returns Bound fn
Practice this question →

How do arrow functions differ from regular functions?

💡 Hint: No own this, no arguments object, no new, no prototype

Arrow functions are not just shorter syntax — key behavioral differences:

  • No own this — inherits lexical this from outer scope
  • No arguments object — use rest params (...args)
  • Cannot be constructors — new throws TypeError
  • No prototype property
const obj = {
  name: 'Dev',
  regular() { console.log(this.name); },  // 'Dev'
  arrow: () => console.log(this.name),    // undefined — 'this' is the enclosing scope, not obj
};
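The difference matters most in callbacks inside methods — a sketch (the `team` object is illustrative):

```javascript
const team = {
  name: 'core',
  members: ['ana', 'bo'],

  // Arrow callback: inherits the method's this → team ✓
  tagged() {
    return this.members.map(m => `${this.name}:${m}`);
  },

  // Regular function callback: gets its OWN this (undefined/global) ✗
  taggedBroken() {
    return this.members.map(function (m) {
      return `${this && this.name}:${m}`; // 'this' is NOT team here
    });
  },
};

console.log(team.tagged()); // ['core:ana', 'core:bo']
```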
💡 Use arrow fns for callbacks (inherit this). Use regular fns for methods and constructors.
Practice this question →

What is a pure function and why does it matter?

💡 Hint: Same input → same output, no side effects

A pure function: (1) always returns the same output for same inputs, (2) has zero side effects.

// Pure ✅
const add = (a, b) => a + b;

// Impure ❌ — modifies external state
let total = 0;
const addToTotal = (n) => { total += n; return total; };

Pure functions are predictable, testable, and cacheable. React expects components and reducers to be pure.
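A subtler impurity is mutating an argument — sorting is the classic trap, since `sort()` works in place:

```javascript
// Impure ❌ — sort() mutates the caller's array (a hidden side effect)
const sortImpure = (arr) => arr.sort((a, b) => a - b);

// Pure ✅ — copy first; the caller's array is untouched
const sortPure = (arr) => [...arr].sort((a, b) => a - b);

const data = [3, 1, 2];
sortPure(data);
console.log(data); // [3, 1, 2] — unchanged

sortImpure(data);
console.log(data); // [1, 2, 3] — mutated!
```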

Practice this question →

What are Higher-Order Functions (HOF)?

💡 Hint: Functions that take other functions as arguments, or return functions as results

A Higher-Order Function is a function that either:

  • Accepts a function as an argument, OR
  • Returns a function as its result (or both)
// Takes a function as argument
[1, 2, 3].map(x => x * 2);        // map is HOF — takes callback
[1, 2, 3].filter(x => x > 1);     // filter is HOF
setTimeout(() => console.log('hi'), 1000); // HOF

// Returns a function
function multiplier(factor) {
  return (n) => n * factor; // returns a new function
}
const double = multiplier(2);
const triple = multiplier(3);
double(5); // 10
triple(5); // 15

// Does both (debounce)
function debounce(fn, delay) {
  let timer;
  return (...args) => {          // returns function
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay); // takes fn
  };
}

HOFs are foundational to functional programming, enabling code reuse, composition, and abstractions without mutation.

💡 map, filter, reduce, forEach, addEventListener, setTimeout — all HOFs you use every day without realizing it.
Practice this question →

What is an IIFE (Immediately Invoked Function Expression) and when do you use it?

💡 Hint: Defined and called immediately — creates a private scope

An IIFE is a function that is both defined and invoked immediately. It creates an isolated scope.

// Classic IIFE syntax
(function() {
  const secret = 'inaccessible outside'; // ('private' itself is a reserved word in strict mode)
  console.log(secret);
})();

// Arrow IIFE
(() => {
  // isolated scope
})();

// IIFE with parameters
(function(global) {
  global.myLib = {};
})(window);

// IIFE returning a value
const result = (() => {
  const x = computeExpensiveThing();
  return x * 2;
})();

Use cases:

  • Avoid polluting global scope (classic library pattern)
  • Create truly private variables (module pattern)
  • Capture loop variables (pre-let closure fix)
  • One-time initialization logic
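The loop-variable use case in action — the pre-ES6 fix for `var` sharing one binding across iterations:

```javascript
// With var, every callback shares ONE 'i' — after the loop, i === 3
const fns = [];
for (var i = 0; i < 3; i++) {
  // The IIFE copies the CURRENT value of i into its own scope as j
  fns.push((function (j) {
    return () => j;
  })(i));
}
console.log(fns.map(f => f())); // [0, 1, 2] — without the IIFE: [3, 3, 3]
```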
💡 In modern JS, ES modules and block-scoped let/const make IIFEs less necessary. But they're heavily used in legacy code and still valid for specific patterns.
Practice this question →

What does it mean that functions are "first-class citizens" in JavaScript?

💡 Hint: Functions are values — assignable, passable, returnable, storable

Functions are first-class citizens — they're treated as values just like strings or numbers. This means:

  • Assign to variables
  • Pass as arguments
  • Return from functions
  • Store in arrays/objects
  • Have properties and methods attached
// Assigned to variable
const greet = (name) => 'Hello, ' + name;

// Passed as argument (callback)
[1, 2, 3].forEach(function(n) { console.log(n); });

// Returned from function
function makeAdder(x) {
  return (y) => x + y; // ← function as return value
}
const add5 = makeAdder(5);
add5(3); // 8

// Stored in object
const math = {
  add: (a, b) => a + b,
  sub: (a, b) => a - b,
};

// Has properties
function fn() {}
fn.version = '1.0';
console.log(fn.name);   // 'fn'
console.log(fn.length); // 0 (param count)
💡 First-class functions are what enable HOFs, callbacks, closures, and all functional programming patterns in JS.
Practice this question →

What is currying and how do you implement a generic curry function?

💡 Hint: Transform f(a,b,c) into f(a)(b)(c) — each call returns a new function waiting for more args

Currying transforms a multi-argument function into a chain of unary functions, each waiting for one argument at a time.

// Manual curried function
const add = a => b => c => a + b + c;
add(1)(2)(3); // 6
const add1 = add(1);     // partially applied — waits for b and c
const add1and2 = add1(2); // waits for c
add1and2(3);              // 6

// Generic curry utility
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) { // enough args collected?
      return fn(...args);
    }
    return (...moreArgs) => curried(...args, ...moreArgs);
  };
}

const sum = (a, b, c) => a + b + c;
const curriedSum = curry(sum);
curriedSum(1)(2)(3); // 6
curriedSum(1, 2)(3); // 6 — also works (partial application hybrid)

Currying enables: partial application, point-free style, composable specialized functions.

💡 Currying vs Partial Application: currying always breaks a function into unary steps. Partial application pre-fills SOME arguments and returns a function waiting for the rest.
Practice this question →

What is memoization and how do you implement it?

💡 Hint: Cache results keyed by arguments — avoid recomputing for the same inputs

Memoization is an optimization where a function caches its results. Calling with the same inputs returns the cached result instead of recomputing.

function memoize(fn) {
  const cache = new Map();
  return function(...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key); // cache hit
    const result = fn.apply(this, args);
    cache.set(key, result);
    return result;
  };
}

// Fibonacci without memoization: O(2^n)
// With memoization: O(n)
const fib = memoize(function(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2); // self-referencing
});

fib(40); // instant — 40 cache entries
fib(40); // instant again — cache hit

When to use: Pure functions with expensive computation and repeated same-argument calls. React's useMemo and useCallback implement this concept.

💡 Only memoize PURE functions — same input must always give same output. Don't memoize time-dependent or side-effectful functions.
Practice this question →

What is function composition and how do compose() and pipe() differ?

💡 Hint: Chain functions: output of one becomes input of next — compose=right-to-left, pipe=left-to-right

Function composition combines multiple functions where the output of one becomes the input of the next, building complex operations from simple pieces.

// compose — right to left (mathematical convention)
const compose = (...fns) => x => fns.reduceRight((v, f) => f(v), x);

// pipe — left to right (more readable)
const pipe = (...fns) => x => fns.reduce((v, f) => f(v), x);

const trim      = str => str.trim();
const lowercase = str => str.toLowerCase();
const addBang   = str => str + '!';

// compose: addBang(lowercase(trim(x)))
const processC = compose(addBang, lowercase, trim);

// pipe: trim → lowercase → addBang
const processP = pipe(trim, lowercase, addBang);

processP('  Hello World  '); // 'hello world!'
processC('  Hello World  '); // 'hello world!'

// Without composition (harder to read as chain grows)
const manual = str => addBang(lowercase(trim(str)));
💡 compose() mirrors mathematical f∘g notation. pipe() reads like a Unix pipeline — more natural for most developers. Both are equivalent, just different argument order.
Practice this question →

What are rest parameters and how do they differ from the arguments object?

💡 Hint: Rest (...args) is a real Array; arguments is array-like, no arrow support, no Array methods

Rest parameters (...args) collect remaining function arguments into a real Array.

// Rest parameters — modern
function sum(first, ...rest) {
  console.log(first);   // 1
  console.log(rest);    // [2, 3, 4] — real Array!
  rest.map(x => x * 2); // ✅ has Array methods
  return rest.reduce((a, b) => a + b, first);
}
sum(1, 2, 3, 4); // 10

// arguments — legacy
function old() {
  console.log(arguments);        // { 0:1, 1:2, ... } — array-LIKE
  console.log(arguments.map);    // undefined — NOT a real Array
  const arr = Array.from(arguments); // convert needed
}

// Arrow functions have NO arguments object
const arrow = () => {
  console.log(arguments); // ReferenceError!
  // Use rest: (...args) => { console.log(args) }
};

Key differences:

  • Rest is a real Array → has all array methods
  • arguments is array-like → no map/filter/etc
  • Arrow functions don't have arguments
  • Rest collects only the remaining args after named params
💡 Always use rest parameters in modern code. arguments is legacy and has quirks that trip people up.
Practice this question →

What is recursion and what causes a stack overflow?

💡 Hint: Function calling itself — needs a base case; too many calls = call stack exhausted

Recursion is when a function calls itself. Every recursive function needs:

  1. A base case — a stopping condition
  2. A recursive case — that moves toward the base case
// Factorial
function factorial(n) {
  if (n <= 1) return 1;         // base case
  return n * factorial(n - 1); // recursive case
}
factorial(5); // 120

// Flatten nested array
function flatten(arr) {
  return arr.reduce((acc, item) =>
    Array.isArray(item)
      ? acc.concat(flatten(item)) // recurse
      : acc.concat(item),
  []);
}
flatten([1, [2, [3]]]); // [1, 2, 3]

Stack overflow: Each recursive call adds a stack frame. Without a base case (or with very deep recursion), the call stack fills up:

function infinite(n) {
  return infinite(n + 1); // no base case!
}
infinite(1); // RangeError: Maximum call stack size exceeded
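For very deep recursion, a trampoline keeps the stack flat: the recursive function returns a thunk instead of calling itself, and a driver loop unwinds the thunks iteratively. A sketch (the `trampoline`/`sumTo` names are illustrative):

```javascript
// trampoline: repeatedly invoke returned thunks instead of recursing
function trampoline(fn) {
  return (...args) => {
    let result = fn(...args);
    while (typeof result === 'function') {
      result = result(); // one "bounce" per iteration — constant stack depth
    }
    return result;
  };
}

const sumTo = trampoline(function rec(n, acc = 0) {
  return n === 0 ? acc : () => rec(n - 1, acc + n); // return a thunk, don't call
});

console.log(sumTo(1_000_000)); // 500000500000 — no stack overflow
```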
💡 Tail Call Optimization (TCO) can theoretically prevent stack overflow for tail-recursive calls, but TCO is only reliably supported in Safari. For deep recursion, use iteration or trampolining.
Practice this question →

What is a Named Function Expression (NFE)?

💡 Hint: A function expression with an internal name — visible inside the body only

A Named Function Expression has a name after function, but unlike a declaration, the name is only visible inside the function body — not outside.

// Anonymous function expression — self-reference is fragile
let factorial = function(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1); // relies on outer var 'factorial'
};
// If factorial is reassigned, self-reference breaks!

// Named function expression — safe self-reference
let factorial = function fact(n) {
  if (n <= 1) return 1;
  return n * fact(n - 1); // 'fact' is always THIS function
};

console.log(factorial.name); // 'fact'
console.log(typeof fact);    // 'undefined' — not accessible outside!

// If outer var is reassigned, NFE still works
const f = factorial;
factorial = null;
f(5); // 120 — 'fact' still refers to itself correctly
💡 Use NFEs for recursive function expressions. Better stack traces (shows 'fact' not 'anonymous') and reliable self-reference even if the outer variable changes.
Practice this question →

What are the Module Pattern and the Revealing Module Pattern?

💡 Hint: IIFE + closure = private scope; expose only public API; revealing = explicitly name what's public

Pre-ES-modules patterns for creating encapsulated, private state in JavaScript.

// Module Pattern
const counter = (function() {
  let _count = 0; // private — inaccessible from outside

  return {
    increment() { _count++; },
    decrement() { _count--; },
    getCount()  { return _count; }
  };
})();

counter.increment();
counter.getCount(); // 1
counter._count;     // undefined — truly private ✓

// Revealing Module Pattern
// Define everything privately, then reveal selectively
const bankAccount = (function() {
  let _balance = 1000;
  let _transactions = [];

  function _log(type, amount) {
    _transactions.push({ type, amount, date: Date.now() });
  }

  function deposit(amount) {
    if (amount > 0) { _balance += amount; _log('deposit', amount); }
  }

  function withdraw(amount) {
    if (amount > 0 && amount <= _balance) {
      _balance -= amount; _log('withdrawal', amount);
    }
  }

  function getBalance() { return _balance; }
  function getHistory() { return [..._transactions]; }

  // Explicitly reveal the public interface
  return { deposit, withdraw, getBalance, getHistory };
})();
💡 Before ES modules, this was THE pattern for encapsulation. jQuery, Lodash, and most pre-2015 JS libraries used this. Today, use ES modules instead.
Practice this question →

What is partial application and how does it differ from currying?

💡 Hint: Partial application = pre-fill some args, return function waiting for the rest; currying = always one arg at a time

Both techniques create specialized functions from general ones — but differ in how arguments are collected.

// Partial Application — pre-fill SOME args
function partial(fn, ...presetArgs) {
  return function(...laterArgs) {
    return fn(...presetArgs, ...laterArgs);
  };
}

const add = (a, b, c) => a + b + c;
const add10 = partial(add, 10);         // pre-fill first arg
const add10and20 = partial(add, 10, 20); // pre-fill two args

add10(5, 3);     // 18 — takes remaining 2 args AT ONCE
add10and20(7);   // 37 — takes remaining 1 arg

// Currying — always ONE arg at a time
const curriedAdd = a => b => c => a + b + c;
curriedAdd(1)(2)(3); // 6 — strictly one at a time

// Practical partial application with bind()
function greet(greeting, punct, name) {
  return `${greeting}, ${name}${punct}`;
}
const hello = greet.bind(null, 'Hello', '!'); // partial via bind
hello('Alice'); // 'Hello, Alice!'
hello('Bob');   // 'Hello, Bob!'

Summary:

  • Currying: f(a, b, c) → f(a)(b)(c) — each call takes exactly ONE argument
  • Partial application: f(a, b, c) → f(a, b)(c) — pre-fill any number of args
💡 In practice, curried functions support partial application too (you can call with multiple args and they accumulate). The distinction is mostly theoretical.
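The tip above — that curried functions accumulate arguments — can be sketched with a minimal auto-curry helper (`curry` here is a hand-rolled utility, not a standard API):

```javascript
// Minimal auto-curry: collect args until the function's arity is met
function curry(fn) {
  return function curried(...args) {
    return args.length >= fn.length
      ? fn(...args)                             // enough args — invoke
      : (...more) => curried(...args, ...more); // otherwise keep collecting
  };
}

const add = (a, b, c) => a + b + c;
const curriedAdd = curry(add);
curriedAdd(1)(2)(3); // 6 — strict one-at-a-time
curriedAdd(1, 2)(3); // 6 — partial application also works
```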
Practice this question →

What is the difference between function declarations and function expressions?

💡 Hint: Declarations are hoisted fully; expressions are not — and expression form gives more control

Both create functions but behave differently with hoisting and syntax.

// Function Declaration — hoisted completely (name + body)
greet(); // ✅ works BEFORE the declaration
function greet() { return 'hello'; }

// Function Expression — NOT hoisted as a function
sayHi(); // ❌ TypeError: sayHi is not a function (var hoisted as undefined)
var sayHi = function() { return 'hi'; };

// Arrow function expression
const add = (a, b) => a + b;

// Named function expression (NFE) — name only visible inside
const fact = function factorial(n) {
  return n <= 1 ? 1 : n * factorial(n - 1); // factorial = self
};
console.log(typeof factorial); // undefined — not accessible outside

When to prefer each:

  • Declaration — top-level utility functions, when you want full hoisting
  • Expression — callbacks, conditional function creation, storing in variables, passing as args
💡 Most style guides prefer expressions (especially arrow) for callbacks and class methods. Declarations are fine for named utility functions at module scope.
Practice this question →

What is the spread operator (...) and what are its use cases?

💡 Hint: Expand iterable into individual elements — arrays, function args, object spreading

// 1. Spread in function calls — expand array as arguments
const nums = [1, 5, 3, 2, 4];
Math.max(...nums); // 5 — same as Math.max(1,5,3,2,4)

// 2. Copy and combine arrays (immutable operations)
const a = [1, 2, 3];
const b = [4, 5, 6];
const copy    = [...a];          // [1,2,3] — shallow copy
const merged  = [...a, ...b];    // [1,2,3,4,5,6]
const prepend = [0, ...a];       // [0,1,2,3]

// 3. Spread in object literals (ES2018)
const base = { a: 1, b: 2 };
const extended = { ...base, c: 3 };        // { a:1, b:2, c:3 }
const override = { ...base, b: 99 };       // { a:1, b:99 } — later wins

// 4. Convert iterable to array
const set = new Set([1,2,3]);
[...set]; // [1,2,3]
[...'hello']; // ['h','e','l','l','o']
[...document.querySelectorAll('p')]; // NodeList → Array

// 5. Clone + update (immutable pattern)
const state = { user: 'Alice', count: 0 };
const newState = { ...state, count: state.count + 1 };
💡 Spread creates SHALLOW copies — nested objects are still shared references. Use structuredClone() for deep copies. Spread in objects is order-sensitive: later properties win.
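The shallow-copy gotcha made concrete (structuredClone requires Node 17+ or a modern browser):

```javascript
const state = { user: { name: 'Alice' }, count: 0 };

const copy = { ...state };   // shallow — copy.user IS state.user
copy.user.name = 'Bob';
state.user.name;             // 'Bob' — original mutated through the shared reference

const deep = structuredClone(state); // deep — nested objects are duplicated
deep.user.name = 'Carol';
state.user.name;             // still 'Bob' — original untouched
```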
Practice this question →

What are default parameters and how do they work?

💡 Hint: Evaluated at call time, only when arg is undefined — can reference earlier params and outer scope

// Basic default parameters
function greet(name = 'World', greeting = 'Hello') {
  return `${greeting}, ${name}!`;
}
greet();              // 'Hello, World!'
greet('Alice');       // 'Hello, Alice!'
greet('Bob', 'Hi');   // 'Hi, Bob!'
greet(undefined, 'Hey'); // 'Hey, World!' — undefined triggers default
greet(null, 'Hey');   // 'Hey, null!' — null does NOT trigger default

// Defaults can reference earlier parameters
function range(start = 0, end = start + 10) {
  return { start, end };
}
range();     // { start: 0, end: 10 }
range(5);    // { start: 5, end: 15 }

// Defaults can be expressions / function calls
let count = 0;
function makeId(id = ++count) { return id; } // evaluated each call
makeId(); // 1
makeId(); // 2
makeId(99); // 99 (provided, so default not evaluated)

// Defaults + destructuring (very common pattern)
function createUser({ name = 'Anonymous', role = 'user', active = true } = {}) {
  return { name, role, active };
}
createUser({ name: 'Alice' }); // { name:'Alice', role:'user', active:true }
createUser();                  // {} → uses = {} → all defaults apply
💡 Default params replaced the old pattern: name = name || 'World'. The old way was buggy (falsy values like 0 or '' triggered the default). New defaults only trigger for undefined.
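The old `||` bug side by side with a default parameter (`pad`/`padModern` are illustrative names):

```javascript
// Old pattern — any falsy argument (0, '', false) triggers the fallback
function pad(str, width) {
  width = width || 4;          // width 0 silently becomes 4 — bug!
  return str.padStart(width);
}

// Default parameter — only `undefined` triggers the fallback
function padModern(str, width = 4) {
  return str.padStart(width);
}

pad('x', 0).length;       // 4 — caller's 0 was ignored
padModern('x', 0).length; // 1 — 0 respected
```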
Practice this question →

What is tail call optimization (TCO) and how does it prevent stack overflow?

💡 Hint: A tail call is the last operation in a function — engine can reuse the stack frame instead of adding a new one

A tail call is when the last action of a function is calling another function. If the engine applies TCO, it reuses the current stack frame instead of pushing a new one — preventing stack overflow for deep recursion.

// Regular recursion — O(n) stack frames
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1); // NOT tail call — must multiply AFTER return
}
factorial(100000); // Stack overflow!

// Tail-recursive version — last op is the call
function factTail(n, accumulator = 1) {
  if (n <= 1) return accumulator;
  return factTail(n - 1, n * accumulator); // tail call — nothing after it
}
// With TCO, this would run in O(1) stack space

// Current reality:
// TCO is in the ES6 spec BUT only Safari implements it fully
// V8 (Node/Chrome) removed their TCO implementation
// So tail recursion is NOT safe in Node.js / Chrome!

// Practical alternatives:
// 1. Iteration (always safe)
function factIterative(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) result *= i;
  return result;
}

// 2. Trampolining — simulates TCO in user space
const trampoline = fn => (...args) => {
  let result = fn(...args);
  while (typeof result === 'function') result = result();
  return result;
};

const factTramp = trampoline(function fact(n, acc = 1) {
  return n <= 1 ? acc : () => fact(n - 1, n * acc); // return fn instead of calling
});
factTramp(100000); // works!
💡 Know the theory for interviews but use iteration in production. Trampolining is the practical way to handle very deep recursion in JS without relying on TCO.
Practice this question →More Functions questions →

Hoisting Interview Questions

Practice Hoisting questions →

Modern JS Interview Questions

Practice Modern JS questions →

What are generators and when would you use them?

💡 Hint: function* that can yield values lazily; returns an iterator

Generators (function*) can pause execution and yield values one at a time.

function* range(start, end) {
  for (let i = start; i <= end; i++) yield i;
}
for (const num of range(1, 5)) console.log(num); // 1,2,3,4,5

// Infinite lazy sequence
function* naturals() {
  let n = 0;
  while (true) yield n++;
}

Use cases: lazy/infinite sequences, custom iterators, and async flow control (Redux-Saga is built on generators).
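Laziness pays off when you consume only part of an infinite sequence — `take` below is a hand-rolled helper, not a built-in:

```javascript
function* naturals() {
  let n = 0;
  while (true) yield n++;
}

// Pull at most n values from any iterable, then stop
function* take(iterable, n) {
  let i = 0;
  for (const value of iterable) {
    if (i++ >= n) return; // stopping here also closes the source generator
    yield value;
  }
}

[...take(naturals(), 5)]; // [0, 1, 2, 3, 4] — only 5 values ever computed
```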

Practice this question →

What is optional chaining (?.) and nullish coalescing (??)?

💡 Hint: ?. short-circuits on null/undefined; ?? falls back only on null/undefined

// Optional chaining
const city = user?.address?.city;
const name = users?.[0]?.name;
const val = obj?.method?.();

// Nullish coalescing — fallback for null/undefined ONLY
const count = user.count ?? 10;
// if count is 0, result is 0 (correct!)
// vs OR operator:
const bad = user.count || 10;
// if count is 0, bad becomes 10 — 0 is falsy, so a valid value is lost
💡 Use ?? when 0, '', false are valid values. Use || for general falsy fallbacks.
Practice this question →

What are tagged template literals and what are they used for?

💡 Hint: A function that processes the template — receives string parts and interpolated values separately

A tagged template is a function placed before a template literal — it receives the string parts and interpolated values separately, allowing custom processing.

function highlight(strings, ...values) {
  // strings: ['User ', ' scored ', '%']
  // values:  ['Alice', 95]
  return strings.reduce((result, str, i) =>
    result + str + (values[i] !== undefined
      ? `${values[i]}`
      : ''), '');
}

const user = 'Alice', score = 95;
highlight`User ${user} scored ${score}%`;
// 'User Alice scored 95%'

// Real-world uses:
// 1. styled-components
const Button = styled.div`
  background: ${props => props.primary ? 'blue' : 'white'};
`;

// 2. SQL sanitization (prevents injection!)
const result = sql`SELECT * FROM users WHERE id = ${userId}`;
// tag function escapes userId before inserting

// 3. GraphQL queries
const query = gql`
  query GetUser { user(id: ${id}) { name } }
`;
💡 Tagged templates are how styled-components and sql template libraries work. The tag function is called before string interpolation, enabling sanitization and custom processing.
Practice this question →

Explain destructuring for objects and arrays — including defaults, renaming, rest, and nesting.

💡 Hint: Extract values into variables with concise syntax — works in params, assignments, loops

// ── Array destructuring (position-based) ─────
const [a, b, c] = [1, 2, 3];
const [first, , third] = [1, 2, 3];    // skip index 1
const [x, ...rest] = [1, 2, 3, 4];     // x=1, rest=[2,3,4]
const [p = 10, q = 20] = [1];          // p=1, q=20 (default)

// ── Object destructuring (name-based) ────────
const { name, age } = { name: 'Alice', age: 25 };
const { name: userName } = { name: 'Alice' };  // rename to userName
const { city = 'NYC' } = {};                    // default if undefined

// ── Nested ────────────────────────────────────
const { address: { city: town } } = { address: { city: 'Paris' } };
// town = 'Paris'

// ── In function parameters ────────────────────
function greet({ name, age = 18, role = 'user' }) {
  return `${name} (age:${age}, ${role})`;
}

// ── Swap variables ────────────────────────────
let m = 1, n = 2;
[m, n] = [n, m]; // m=2, n=1

// ── In loops ─────────────────────────────────
for (const [key, value] of Object.entries(obj)) {
  console.log(key, value);
}
💡 Destructuring doesn't mutate the original — it creates new bindings. Combine default params + destructuring for clean, self-documenting function signatures.
Practice this question →

What are Symbols and what are their main use cases?

💡 Hint: Unique, non-string property keys — used for collision-free metadata and well-known protocols

A Symbol is a primitive value guaranteed to be unique. Symbols are used mainly as object property keys to avoid name collisions.

const id = Symbol('id');
const id2 = Symbol('id');
id === id2; // false — always unique even with same description

const user = {};
user[id] = 42; // Symbol as property key

// Symbols are invisible to normal enumeration
Object.keys(user);                          // []
JSON.stringify(user);                       // '{}' — symbols excluded
Object.getOwnPropertySymbols(user);         // [Symbol(id)] — explicit access

// Well-known Symbols — customize built-in behavior
class MyIterable {
  [Symbol.iterator]() {      // makes instances work in for...of
    let n = 0;
    return { next: () => n < 3
      ? { value: n++, done: false }
      : { done: true } };
  }
}

for (const v of new MyIterable()) console.log(v); // 0, 1, 2

// Other well-known Symbols:
// Symbol.toPrimitive — control type coercion
// Symbol.hasInstance — customize instanceof
// Symbol.toStringTag — customize Object.prototype.toString output
💡 Use Symbols as property keys when extending objects you don't own — impossible to accidentally collide with existing or future string keys.
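A sketch of the collision-free extension the tip describes (`META` and `tag` are illustrative names):

```javascript
const META = Symbol('meta'); // private key — can never equal any string key

function tag(obj, info) {
  obj[META] = info; // attach metadata without touching the object's visible API
  return obj;
}

const user = tag({ name: 'Alice' }, { role: 'admin' });
Object.keys(user);    // ['name'] — the symbol-keyed entry is invisible
JSON.stringify(user); // '{"name":"Alice"}' — excluded from serialization too
user[META].role;      // 'admin' — readable only by code holding META
```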
Practice this question →

What are Map and Set and how do they compare to objects and arrays?

💡 Hint: Map=ordered key-value with any key type; Set=unique-value collection; both iterable

Map vs plain object: keys can be any type, maintains insertion order, has .size, is directly iterable, better performance for frequent add/delete.

Set vs array: values must be unique, has O(1) lookup with .has(), no index access.

// Map
const map = new Map();
map.set('string', 1);
map.set(42, 'number key');     // any type as key!
map.set({}, 'object key');
map.get('string');  // 1
map.has(42);        // true
map.size;           // 3
map.delete(42);

// Iterate
for (const [k, v] of map) console.log(k, v);
[...map.keys()]; [...map.values()]; [...map.entries()];

// Convert to/from object
const obj = Object.fromEntries(map);
new Map(Object.entries(obj));

// Set
const set = new Set([1, 2, 2, 3, 3]); // {1, 2, 3} — duplicates removed
set.add(4);
set.has(2);   // true — O(1)
set.size;     // 4

// Remove duplicates from array (classic use)
const unique = [...new Set([1,2,2,3,3,3])]; // [1, 2, 3]

// Set operations
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);
const union        = new Set([...a, ...b]);      // {1,2,3,4}
const intersection = new Set([...a].filter(x => b.has(x))); // {2,3}
💡 Use Map over objects when keys are non-strings, when insertion order matters, or when keys are frequently added/removed. Use Set for unique-value tracking.
Practice this question →

What are WeakMap and WeakSet and when do you use them?

💡 Hint: Keys are weakly held — objects can be GC'd; no iteration, no size — use for private metadata

WeakMap/WeakSet hold weak references to their keys/entries. The garbage collector can collect the referenced object — the entry is automatically removed. Keys must be objects.

// WeakMap — per-object cache that doesn't prevent GC
const cache = new WeakMap();

function process(user) {
  if (cache.has(user)) return cache.get(user); // cache hit
  const result = expensiveCompute(user);
  cache.set(user, result);
  return result;
}
// When user object is GC'd → cache entry vanishes automatically
// No manual cleanup needed!

// WeakSet — track objects without preventing GC
const processing = new WeakSet();
async function handleOnce(obj) {
  if (processing.has(obj)) return; // already running
  processing.add(obj);
  await doWork(obj);
  processing.delete(obj);
}

// WeakMap for private class fields (pre-#private syntax)
const _private = new WeakMap();
class Secure {
  constructor() { _private.set(this, { secret: 42 }); }
  getSecret() { return _private.get(this).secret; }
}

Key limitation: No .size, no iteration, no .keys()/.values(). You can't see what's in them — only access by key.

💡 WeakMap is perfect for: memoization caches keyed by object identity, private object metadata, DOM element data. The key insight: it doesn't extend the lifetime of the key object.
Practice this question →

What is Proxy and how does it enable metaprogramming?

💡 Hint: Intercept fundamental object operations (get, set, has, deleteProperty) with handler traps

A Proxy wraps an object and intercepts fundamental operations using "trap" handler methods.

const handler = {
  get(target, prop, receiver) {
    console.log(`Getting: ${prop}`);
    return Reflect.get(target, prop, receiver); // ← always use Reflect
  },
  set(target, prop, value, receiver) {
    if (typeof value !== 'number') throw new TypeError('Numbers only');
    return Reflect.set(target, prop, value, receiver);
  },
  has(target, prop) {
    return prop in target; // intercepts 'in' operator
  },
  deleteProperty(target, prop) {
    if (prop.startsWith('_')) throw new Error('Cannot delete private');
    return Reflect.deleteProperty(target, prop);
  }
};

const obj = new Proxy({}, handler);
obj.x = 42;
obj.x;         // logs "Getting: x" → 42
'x' in obj;    // calls has trap
obj.y = 'str'; // TypeError

// Real-world use cases:
// 1. Validation
// 2. Reactive state (Vue 3 uses Proxy for reactivity!)
// 3. Default property values
// 4. Logging / debugging
// 5. Negative array indexing
const arr = new Proxy([], {
  get: (t, p) => t[p < 0 ? t.length + +p : p]
});
arr[-1]; // last element
💡 Always use Reflect inside Proxy traps — it handles edge cases with prototype chains and getters/setters correctly. Reflect methods mirror Proxy trap signatures exactly.
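Use case 3 from the list above (default property values) fits in a few lines — `withDefault` is an illustrative helper:

```javascript
const withDefault = (obj, fallback) => new Proxy(obj, {
  get: (target, prop, receiver) =>
    prop in target
      ? Reflect.get(target, prop, receiver) // real property — normal lookup
      : fallback                            // missing — serve the default
});

const settings = withDefault({ theme: 'dark' }, 'unset');
settings.theme;    // 'dark'
settings.language; // 'unset' — no undefined checks needed at call sites
```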
Practice this question →

What are Iterators and Iterables in JavaScript?

💡 Hint: Iterator has next() → {value, done}. Iterable has [Symbol.iterator](). Used by for...of, spread, destructuring.

The iteration protocol standardizes how values are produced sequentially.

  • Iterator: an object with next() that returns { value, done }
  • Iterable: an object with [Symbol.iterator]() that returns an iterator
// Custom iterable range object
const range = {
  from: 1, to: 5,
  [Symbol.iterator]() {
    let current = this.from;
    const last = this.to;
    return {
      next() {
        return current <= last
          ? { value: current++, done: false }
          : { value: undefined, done: true };
      }
    };
  }
};

for (const n of range) console.log(n); // 1, 2, 3, 4, 5
[...range];   // [1, 2, 3, 4, 5]
const [a, b] = range; // destructuring works too

// Built-in iterables: Array, String, Map, Set, NodeList, arguments
for (const char of 'hello') console.log(char); // h, e, l, l, o

// Manual iteration
const iter = [1, 2][Symbol.iterator]();
iter.next(); // { value: 1, done: false }
iter.next(); // { value: 2, done: false }
iter.next(); // { value: undefined, done: true }
💡 Anything that works with for...of, spread (...), or destructuring must be iterable. Implement [Symbol.iterator] to make your custom classes work with all these.
Practice this question →

What are ES Modules and how do they differ from CommonJS?

💡 Hint: ESM: static import/export, live bindings, always strict; CJS: require(), dynamic, copied values

// ── ES Modules (ESM) ─────────────────────────────
// math.js
export const PI = 3.14;
export function add(a, b) { return a + b; }
export default class App {}

// main.js
import { PI, add } from './math.js';   // named imports
import App from './App.js';             // default import
import * as Math from './math.js';      // namespace

// ── CommonJS (CJS) ─────────────────────────────
module.exports = { PI, add };
const { PI, add } = require('./math');

// ── Key Differences ────────────────────────────
Feature          | ESM                   | CommonJS
Analysis         | Static (parse time)   | Dynamic (runtime)
Binding          | Live (tracks changes) | Copied (snapshot)
Strict mode      | Always                | Opt-in
Top-level await  | ✅                    | ❌
Browser          | type="module"         | Bundler needed
Tree shaking     | ✅ (static)           | ❌ (dynamic)
💡 Tree-shaking only works with ESM because imports are static — bundlers can analyze what's used. CJS require() is dynamic so bundlers can't eliminate dead code.
Practice this question →

What are dynamic imports and why are they useful?

💡 Hint: import() returns a Promise — enables code splitting, conditional loading, lazy loading

Dynamic imports (import()) load modules on demand, returning a Promise. Enables code splitting.

// Static — always loads at startup
import { parse } from 'csv-parser';

// Dynamic — loads only when needed
async function handleUpload(file) {
  if (file.type === 'text/csv') {
    const { parse } = await import('csv-parser'); // loaded now
    return parse(file);
  }
}

// React code splitting
const Chart = React.lazy(() => import('./HeavyChart'));

// Conditional loading
const lang = navigator.language;
const { messages } = await import(`./i18n/${lang}.js`); // dynamic path

// User-triggered loading
button.onclick = async () => {
  const { default: Editor } = await import('./editor.js');
  new Editor(container);
};

// Access default + named exports
const mod = await import('./math.js');
mod.default; // default export
mod.add;     // named export

Benefits: Smaller initial bundle, faster page load, load features only when needed.

💡 Webpack and Vite automatically create separate JS chunks for each dynamic import. Use loading states while the chunk loads.
Practice this question →

What are WeakRef and FinalizationRegistry?

💡 Hint: WeakRef: hold object without preventing GC; FinalizationRegistry: callback when object is collected

Advanced memory management APIs — use sparingly and only for performance optimizations.

// WeakRef — holds a weak reference (doesn't prevent GC)
let bigObject = { data: new Array(1_000_000).fill('*') };
const ref = new WeakRef(bigObject);

bigObject = null; // release strong reference → GC can now collect it

// Access via .deref() — returns undefined if already collected
const obj = ref.deref();
if (obj) {
  console.log('Still alive:', obj.data.length);
} else {
  console.log('Was garbage collected');
}

// FinalizationRegistry — callback when a registered object is collected
const registry = new FinalizationRegistry((heldValue) => {
  console.log('Collected! Clean up:', heldValue);
  cleanupResources(heldValue);
});

let target = { name: 'Alice' };
registry.register(target, 'alice-cleanup-token');
// target can now be GC'd — callback fires sometime after

Important: GC timing is non-deterministic. Don't use these for program correctness — only for optional caching or cleanup of non-critical resources.

💡 WeakRef + FinalizationRegistry are for library authors building caches that should automatically clean up. App code rarely needs these — use WeakMap instead.
Practice this question →

What are Async Generators and Async Iterators?

💡 Hint: function* + async = yield Promises lazily; consumed with for await...of

Async generators combine generator syntax with async/await — they yield values asynchronously and are consumed with for await...of.

// Async generator — paginated API fetcher
async function* fetchPages(url) {
  let page = 1;
  while (true) {
    const res = await fetch(`${url}?page=${page++}`);
    const { items, hasMore } = await res.json();
    yield items; // pause, give back items, resume on next iteration
    if (!hasMore) return; // done
  }
}

// Consume with for await...of
async function loadAll() {
  const allItems = [];
  for await (const items of fetchPages('/api/data')) {
    allItems.push(...items);
    if (allItems.length >= 100) break; // can stop early!
  }
  return allItems;
}

// Streaming data (Fetch Streams API)
async function* streamLines(url) {
  const reader = (await fetch(url)).body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) { if (buffer) yield buffer; return; }
    buffer += decoder.decode(value, { stream: true }); // stream:true handles multi-byte chars split across chunks
    const lines = buffer.split('\n');
    buffer = lines.pop();
    for (const line of lines) yield line;
  }
}
💡 Async generators are perfect for paginated APIs, event streams, or any sequence of values that arrive asynchronously over time.
Practice this question →

What is Reflect and how does it relate to Proxy?

💡 Hint: Reflect mirrors Proxy trap methods — use Reflect inside traps for correct default behavior

Reflect is a built-in object with static methods mirroring Proxy traps — same names, same signatures.

// Reflect mirrors operations but with better API design:
Reflect.get(target, prop, receiver);    // target[prop]
Reflect.set(target, prop, value, recv); // target[prop] = value
Reflect.has(target, prop);              // prop in target
Reflect.deleteProperty(target, prop);  // delete target[prop]
Reflect.ownKeys(target);               // all own keys (strings + symbols)
Reflect.apply(fn, thisArg, args);      // fn.apply(thisArg, args)
Reflect.construct(Cls, args);          // new Cls(...args)

// Why Reflect inside Proxy traps?
const proxy = new Proxy(obj, {
  get(target, prop, receiver) {
    log(prop);
    // Use Reflect.get (not target[prop]) to:
    // 1. Correctly pass receiver (preserves 'this' for getters)
    // 2. Return consistent boolean values
    return Reflect.get(target, prop, receiver); // ✅
    // return target[prop]; // ❌ breaks for inherited getters
  }
});

// Reflect.set returns true/false instead of throwing
const success = Reflect.set(obj, 'x', 5);
if (!success) console.log('Could not set');
💡 Rule: always use Reflect inside Proxy traps. It handles prototype chain, getters/setters, and non-writable properties correctly. Never do target[prop] directly in get trap.
Practice this question →

What are the basics of regular expressions in JavaScript?

💡 Hint: Pattern matching: literal chars, character classes, quantifiers, groups, flags (g, i, m)

// Creating regex
const r1 = /hello/;           // literal
const r2 = new RegExp('hello'); // dynamic pattern

// Test & match
/hello/.test('say hello');  // true
'hello world'.match(/\w+/g); // ['hello', 'world']
'hello world'.match(/(\w+)\s(\w+)/); // with groups

// Character classes
/[aeiou]/     // any vowel
/[^aeiou]/    // NOT a vowel
/[a-z]/       // lowercase letter
/\d/          // digit [0-9]
/\w/          // word char [a-zA-Z0-9_]
/\s/          // whitespace
/./           // any char except newline

// Quantifiers
/a+/          // one or more
/a*/          // zero or more
/a?/          // zero or one
/a{3}/        // exactly 3
/a{2,5}/      // 2 to 5

// Anchors
/^hello/      // starts with
/world$/      // ends with

// Groups
/(\d{4})-(\d{2})-(\d{2})/.exec('2024-01-15');
// ['2024-01-15', '2024', '01', '15'] — full match first, then capture groups
/(?<year>\d{4})-(?<month>\d{2})/.exec('2024-01');
// Named: match.groups.year, match.groups.month

// Lookahead / lookbehind
/\d+(?= dollars)/   // digits followed by " dollars"
/(?<=\$)\d+/        // digits preceded by $
/\d+(?! dollars)/   // digits NOT followed by " dollars"

// Flags
/pattern/g   // global — find all matches
/pattern/i   // case insensitive
/pattern/m   // multiline — ^ and $ match line starts/ends
/pattern/s   // dotAll — . matches newline too

// Common operations
'a1b2'.replace(/\d/g, '#');    // 'a#b#'
'a,b,,c'.split(/,+/);          // ['a','b','c']
'hello'.search(/e/);            // 1 (index)
💡 Greedy vs lazy: /a.*b/ greedily matches as MUCH as possible; /a.*?b/ lazily matches as LITTLE as possible. Add ? after quantifiers for lazy: +?, *?, {n,m}?
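The greedy vs lazy difference from the tip, run against real input:

```javascript
const html = '<b>bold</b> and <i>italic</i>';

html.match(/<.+>/)[0];  // greedy — runs to the LAST '>': the entire string matches
html.match(/<.+?>/)[0]; // '<b>' — lazy stops at the FIRST '>'
html.match(/<[^>]+>/g); // ['<b>', '</b>', '<i>', '</i>'] — negated class, a common idiom
```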
Practice this question →More Modern JS questions →

Objects Interview Questions

Practice Objects questions →

How does prototypal inheritance work in JavaScript?

💡 Hint: Every object has a [[Prototype]] link — property lookup walks the chain

Every JS object has an internal [[Prototype]] link. Property lookup walks up the chain until found or null.

const animal = { speak() { return '...'; } };
const dog = Object.create(animal); // dog's proto = animal
dog.name = 'Rex';
dog.speak(); // Found on chain → '...'

// ES6 class syntax is sugar over the same mechanism
class Animal { speak() { return '...'; } }
class Dog extends Animal {
  speak() { return 'Woof!'; }
}
💡 Use hasOwnProperty() to check if a property is directly on the object vs inherited.
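Distinguishing own vs inherited properties, continuing the animal/dog example:

```javascript
const animal = { speak() { return '...'; } };
const dog = Object.create(animal); // dog's [[Prototype]] is animal
dog.name = 'Rex';

dog.hasOwnProperty('name');            // true  — directly on dog
dog.hasOwnProperty('speak');           // false — inherited from animal
'speak' in dog;                        // true  — `in` walks the whole chain
Object.getPrototypeOf(dog) === animal; // true  — the [[Prototype]] link
```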
Practice this question →

What is the difference between shallow copy and deep copy?

💡 Hint: Shallow copies references; deep copies recursively

Shallow copy: top-level properties only — nested objects are still shared references. Deep copy: recursively copies everything.

const orig = { a: 1, nested: { b: 2 } };

// Shallow — spread, Object.assign
const shallow = { ...orig };
shallow.nested.b = 99;
console.log(orig.nested.b); // 99 — orig mutated!

// Deep copy
const deep = structuredClone(orig); // ✅ modern recommended
// JSON.parse(JSON.stringify(obj)) — simple but lossy (drops undefined/functions, turns Dates into strings)
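What "lossy" means in practice for the JSON round-trip approach:

```javascript
const orig = {
  when: new Date(0),  // Date object
  fn() { return 1; }, // function
  missing: undefined,
  n: 1
};

const copy = JSON.parse(JSON.stringify(orig));
typeof copy.when;  // 'string' — Date collapsed to its ISO string
'fn' in copy;      // false — functions silently dropped
'missing' in copy; // false — undefined values silently dropped
copy.n;            // 1 — plain data survives
```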
Practice this question →

What are property descriptors and property flags (writable, enumerable, configurable)?

💡 Hint: Every property has a descriptor controlling whether it can be changed, seen, or deleted

Every object property has a descriptor with 3 flags:

  • writable — can the value be changed?
  • enumerable — does it show up in for...in / Object.keys()?
  • configurable — can the descriptor be changed? Can the property be deleted?
const obj = {};

Object.defineProperty(obj, 'ID', {
  value: 42,
  writable: false,    // read-only
  enumerable: false,  // hidden from loops
  configurable: false // can't redefine or delete
});

obj.ID = 99;              // silently fails (TypeError in strict)
console.log(obj.ID);      // 42

Object.keys(obj);         // [] — ID is non-enumerable
delete obj.ID;            // false — non-configurable

// View a property's descriptor
Object.getOwnPropertyDescriptor(obj, 'ID');
// { value: 42, writable: false, enumerable: false, configurable: false }

// Regular property defaults:
// { value: ..., writable: true, enumerable: true, configurable: true }
💡 Object.freeze() sets writable + configurable to false for all props. Object.seal() sets configurable to false but keeps writable. Both use property descriptors under the hood.
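You can verify the tip directly by inspecting descriptors after freeze() and seal():

```javascript
const frozen = Object.freeze({ x: 1 });
const fDesc = Object.getOwnPropertyDescriptor(frozen, 'x');
// { value: 1, writable: false, enumerable: true, configurable: false }

const sealed = Object.seal({ y: 2 });
const sDesc = Object.getOwnPropertyDescriptor(sealed, 'y');
// writable stays true (values may change); configurable is false (no delete/redefine)
```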
Practice this question →

What are getters and setters in JavaScript?

💡 Hint: Accessor properties — run a function on read (get) or write (set)

Getters and setters are special methods that execute code when a property is read or written — they look like properties but behave like functions.

const user = {
  firstName: 'John',
  lastName: 'Doe',

  get fullName() {
    return `${this.firstName} ${this.lastName}`; // computed on read
  },

  set fullName(val) {
    [this.firstName, this.lastName] = val.split(' '); // validated on write
  }
};

console.log(user.fullName); // 'John Doe' — runs getter
user.fullName = 'Jane Smith'; // runs setter
console.log(user.firstName); // 'Jane'

// In a class
class Temperature {
  constructor(celsius) { this._c = celsius; }

  get fahrenheit() { return this._c * 9/5 + 32; }
  set fahrenheit(f) { this._c = (f - 32) * 5/9; }
}

const t = new Temperature(0);
console.log(t.fahrenheit); // 32
t.fahrenheit = 212;
console.log(t._c);         // 100
💡 Use getters for derived/computed values. Use setters for validation. Avoid getter/setter pairs that call each other — infinite loops!
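The infinite-loop trap from the tip, made concrete — a sketch with illustrative property names:

```javascript
const bad = {
  set name(v) { this.name = v; } // ❌ assignment re-triggers this same setter
};
// bad.name = 'x'; // RangeError: Maximum call stack size exceeded

// Conventional fix: back the accessor with a differently named field
const good = {
  get name()  { return this._name; },
  set name(v) { this._name = v.trim(); }
};
good.name = '  Alice  ';
console.log(good.name); // 'Alice'
```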
Practice this question →

What is the difference between Object.freeze() and Object.seal()?

💡 Hint: freeze = truly immutable; seal = no add/delete but values can change

Both prevent structural changes but differ in degree:

  • Object.freeze() — no add, no delete, no value change. Completely locked.
  • Object.seal() — no add, no delete, but existing values CAN be changed.
// freeze
const config = Object.freeze({ host: 'localhost', port: 3000 });
config.port = 9999;     // silently fails (TypeError in strict)
config.debug = true;    // silently fails
delete config.host;     // false
console.log(config.port); // 3000 — unchanged

// seal
const sealed = Object.seal({ x: 1, y: 2 });
sealed.x = 99;          // ✅ allowed — value change is OK
sealed.z = 3;           // ❌ silently fails — no new props
delete sealed.x;        // ❌ fails — no delete

// Critical gotcha: BOTH are SHALLOW
const frozen = Object.freeze({ nested: { a: 1 } });
frozen.nested.a = 99;   // ⚠️ WORKS — nested object is not frozen!

// Deep freeze (recursive)
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach(name => {
    if (typeof obj[name] === 'object' && obj[name] !== null) {
      deepFreeze(obj[name]);
    }
  });
  return Object.freeze(obj);
}
💡 Use freeze() for config constants and action type objects in Redux. Remember it's shallow — deep freeze recursively if needed.
Practice this question →

What are the different ways to enumerate object properties?

💡 Hint: for...in, Object.keys, Object.values, Object.entries — differ in own vs inherited, enumerable vs all

Each enumeration method has different behavior around own properties, inherited properties, and enumerability:

const parent = { inherited: true };
const obj = Object.create(parent); // obj's prototype is parent
obj.a = 1;
obj.b = 2;
Object.defineProperty(obj, 'hidden', { value: 3, enumerable: false });

// for...in — own + inherited, enumerable only
for (const k in obj) console.log(k); // 'a', 'b', 'inherited'

// Object.keys — own properties, enumerable only ← most common
Object.keys(obj);    // ['a', 'b']

// Object.values — own, enumerable, values
Object.values(obj);  // [1, 2]

// Object.entries — own, enumerable, [key, value] pairs
Object.entries(obj); // [['a', 1], ['b', 2]]

// Object.getOwnPropertyNames — own, ALL (including non-enumerable)
Object.getOwnPropertyNames(obj); // ['a', 'b', 'hidden']

// Check if own property
obj.hasOwnProperty('a');         // true
obj.hasOwnProperty('inherited'); // false
Object.hasOwn(obj, 'a');         // ES2022 — preferred over hasOwnProperty
💡 Use Object.keys/values/entries in modern code — they only return own enumerable properties. Use for...in only if you explicitly need inherited properties (rare).
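One wrinkle worth adding: none of the methods above return Symbol keys. To see those you need Object.getOwnPropertySymbols, or Reflect.ownKeys for string and symbol keys together:

```javascript
const id = Symbol('id');
const item = { name: 'widget', [id]: 123 };

Object.keys(item);                  // ['name'] — symbol keys are skipped
Object.getOwnPropertyNames(item);   // ['name'] — skipped here too
Object.getOwnPropertySymbols(item); // [Symbol(id)]
Reflect.ownKeys(item);              // ['name', Symbol(id)] — all own keys
```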
Practice this question →

How does instanceof work and what are its limitations?

💡 Hint: Walks the prototype chain looking for constructor.prototype — can be fooled

instanceof checks if a constructor's prototype appears anywhere in an object's prototype chain.

class Animal {}
class Dog extends Animal {}

const dog = new Dog();
dog instanceof Dog;    // true — Dog.prototype in chain
dog instanceof Animal; // true — Animal.prototype in chain too
dog instanceof Object; // true — everything inherits from Object

// How it works internally — walk the prototype chain:
// dog.__proto__ === Dog.prototype → true, so instanceof returns true

// Limitation 1: Can be fooled — any object given Dog.prototype passes
const fake = Object.create(Dog.prototype); // or Object.setPrototypeOf
fake instanceof Dog; // true — but the Dog constructor never ran!

// Limitation 2: Cross-realm failure
// Arrays from iframes: arr instanceof Array → false!
// Use Array.isArray() — realm-safe

// Limitation 3: Primitives always fail
'hello' instanceof String;     // false (primitive, not object)
new String('hello') instanceof String; // true (wrapped object)

// Better type checking alternatives:
Array.isArray([]);                          // ✅ realm-safe
typeof 'hello';                             // 'string'
Object.prototype.toString.call([]);         // '[object Array]'
💡 instanceof tests the prototype chain, not the constructor. For safe type checks use Array.isArray(), typeof, or Object.prototype.toString.call().
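A related detail: the prototype-chain check itself is customizable. A class can define a static Symbol.hasInstance method to override what instanceof means for it:

```javascript
class Even {
  static [Symbol.hasInstance](value) {
    return typeof value === 'number' && value % 2 === 0;
  }
}

console.log(4 instanceof Even); // true — no prototype chain involved
console.log(3 instanceof Even); // false
```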
Practice this question →

What are mixins in JavaScript and why are they used?

💡 Hint: Copy methods from multiple sources into a class — avoids single-inheritance limits

A mixin is a pattern to copy methods from one object into another class or prototype, enabling code reuse without inheritance chains.

// Mixin objects (plain objects with methods)
const Serializable = {
  serialize() { return JSON.stringify(this); },
};

const Validatable = {
  validate() {
    return Object.values(this).every(v => v !== null && v !== undefined);
  }
};

class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }
}

// Apply mixins — copy methods onto the class prototype
Object.assign(User.prototype, Serializable, Validatable);

const u = new User('Alice', 'a@b.com');
u.serialize(); // '{"name":"Alice","email":"a@b.com"}'
u.validate();  // true

// More modern approach: mixin factory functions
const withLogging = (Base) => class extends Base {
  log(msg) { console.log(`[${this.constructor.name}] ${msg}`); }
};

class LoggableUser extends withLogging(User) {}

Mixins work around JavaScript's single-inheritance limit — they let you compose behavior from multiple independent sources without building deep class hierarchies.

💡 React class component mixins were deprecated in favour of Hooks. Hooks are the modern mixin equivalent — reusable stateful logic without inheritance.
Practice this question →

What are native prototypes and how can you safely extend them?

💡 Hint: Array.prototype, String.prototype etc — extending them affects ALL instances; almost always a bad idea

All built-in types (Array, String, Object, Function…) have prototypes with their methods. Every array shares Array.prototype.

// How it works — built-in prototype chain
const arr = [1, 2, 3];
// arr.__proto__ === Array.prototype ✓
// Array.prototype.__proto__ === Object.prototype ✓

// All arrays share Array.prototype methods
arr.map === Array.prototype.map;   // true — same reference

// Checking native prototype
Array.prototype.includes;   // function — built-in
String.prototype.padStart;  // function — built-in

// ❌ Bad: extending native prototypes (prototype pollution risk)
Array.prototype.last = function() { return this[this.length - 1]; };
// Now EVERY array in ALL your code + libraries has .last — collisions!

// ❌ Famous historical mistake: Prototype.js library
// It extended Array.prototype and broke all for...in loops on arrays

// ✅ If you must extend (only in polyfills — check first)
if (!Array.prototype.myMethod) { // always guard with existence check
  Array.prototype.myMethod = function () { /* polyfill body */ };
}

// ✅ Better: use utility functions or subclassing
class SuperArray extends Array {
  last() { return this[this.length - 1]; }
}
const sa = new SuperArray(1, 2, 3);
sa.last(); // 3 — only SuperArray instances affected
💡 Rule: never extend native prototypes in application code or libraries. Exception: polyfills (always check for existence first). It's the JS equivalent of monkey-patching — dangerous at scale.
Practice this question → · More Objects questions →

Performance Interview Questions

Practice Performance questions →

What are debounce and throttle? When do you use each?

💡 Hint: Debounce=wait for pause; throttle=limit rate

  • Debounce — fires AFTER user stops triggering. Best for: search input, resize, form validation
  • Throttle — fires at most once per interval. Best for: scroll, mouse move
function debounce(fn, delay) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
const onSearch = debounce((q) => fetchResults(q), 300);
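A matching throttle sketch to pair with the debounce above — this leading-edge variant (using Date.now) runs immediately, then ignores calls until the interval has elapsed; names are illustrative:

```javascript
function throttle(fn, interval) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= interval) { // enough time passed since last run?
      last = now;
      fn(...args);
    }
  };
}

// Usage sketch: at most one position update per 200ms while scrolling
// window.addEventListener('scroll', throttle(updatePosition, 200));
```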
💡 Debounce = waits for storm to pass. Throttle = steady drip.
Practice this question →

What causes memory leaks in JavaScript and how do you detect them?

💡 Hint: Unintentional references prevent GC: forgotten timers, closures, detached DOM nodes

Memory leaks occur when objects are no longer needed but are still referenced, preventing garbage collection.

Common causes:

// 1. Forgotten intervals
const el = document.getElementById('status');
const id = setInterval(() => { el.innerHTML = Date.now(); }, 1000);
// If el is removed from DOM but interval isn't cleared → leak
// Fix: clearInterval(id) when done

// 2. Detached DOM nodes
let detached;
function create() {
  detached = document.createElement('div'); // global reference
  document.body.appendChild(detached);
  document.body.removeChild(detached); // removed from DOM...
  // but 'detached' variable still holds it → leak
}

// 3. Closures capturing large objects
function leaky() {
  const bigData = new Array(1_000_000).fill('*');
  return () => bigData[0]; // ALL of bigData kept alive!
}

// 4. Event listeners not removed
window.addEventListener('resize', heavyHandler);
// Fix: window.removeEventListener('resize', heavyHandler) on cleanup

// 5. Growing caches without eviction
const cache = {};
function store(key, val) { cache[key] = val; } // never cleared!

Detection: Chrome DevTools → Memory → Heap Snapshot → look for "Detached" DOM nodes, or compare snapshots over time for growing retained size.

💡 Use WeakMap for object-keyed caches — entries are automatically released when the key object is GC'd. Perfect for per-element data storage.
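The WeakMap pattern from the tip, sketched out — `tag`/`getTag` are hypothetical helper names:

```javascript
// Metadata keyed by an object is released automatically once nothing
// else references the key — no manual cache eviction needed.
const meta = new WeakMap();

function tag(obj, info) { meta.set(obj, info); }
function getTag(obj)    { return meta.get(obj); }

let node = { id: 1 };
tag(node, { clicks: 3 });
console.log(getTag(node).clicks); // 3

node = null; // last strong reference dropped — entry becomes collectable
```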
Practice this question →

What is the difference between reflow and repaint, and how do you avoid layout thrashing?

💡 Hint: Reflow=recalculate layout (expensive cascade); repaint=visual update only; batch DOM reads/writes

Repaint — visual property change (color, background, visibility) without affecting layout. Less expensive.

Reflow (Layout) — geometry change (width, height, position, padding, margin). Expensive: cascades through the document.

// Layout thrashing — alternating reads and writes force reflow each iteration
for (let i = 0; i < 100; i++) {
  const h = el.offsetHeight;       // READ — forces layout (reflow)
  el.style.height = h + 1 + 'px'; // WRITE — invalidates layout
}
// Browser must reflow 100 times! ❌

// Fix: batch all reads, then all writes
const h = el.offsetHeight; // single read
for (let i = 0; i < 100; i++) {
  el.style.height = (h + i) + 'px'; // writes only
}

// Best fix: CSS transforms — composited on GPU, NO reflow
el.style.transform = 'translateY(10px)'; // skip layout entirely!

// Properties that DON'T trigger reflow (GPU composited):
// transform, opacity, filter (will-change is a hint asking the
// browser to promote the element to its own layer ahead of time)

What triggers reflow: offsetWidth/Height, getBoundingClientRect(), scrollTop, getComputedStyle(), adding/removing DOM nodes, font changes.

💡 Use requestAnimationFrame to batch DOM reads and writes in the correct phase. Libraries like FastDOM enforce this pattern.
Practice this question →

What is lazy loading and code splitting?

💡 Hint: Lazy loading: load resources only when needed; code splitting: divide bundle into smaller chunks

Lazy loading defers loading of non-critical resources until they're actually needed.

Code splitting breaks your JS bundle into smaller chunks loaded on demand.

// ─── Native Lazy Loading (images, iframes) ─────────────────
<img src="hero.jpg" loading="eager" />   // load immediately
<img src="below-fold.jpg" loading="lazy" /> // load when near viewport

// ─── JS Code Splitting (Webpack/Vite) ─────────────────────
// Static import — included in main bundle
import { utils } from './utils.js';

// Dynamic import — separate chunk, loaded on demand
async function loadChart() {
  const { Chart } = await import('./HeavyChart.js'); // separate bundle
  new Chart(element, data);
}

// Route-based splitting (React Router)
const Dashboard = React.lazy(() => import('./Dashboard'));
const Settings  = React.lazy(() => import('./Settings'));

function App() {
  return (
    <Suspense fallback={<Spinner />}>
      <Routes>
        <Route path="/dashboard" element={<Dashboard />} />
        <Route path="/settings"  element={<Settings />} />
      </Routes>
    </Suspense>
  );
}

// ─── IntersectionObserver lazy loading ────────────────────
const observer = new IntersectionObserver(([entry]) => {
  if (entry.isIntersecting) {
    entry.target.src = entry.target.dataset.src;
    observer.unobserve(entry.target);
  }
});
document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
💡 Impact: code splitting cuts initial bundle by 30–70% for most apps. Rule of thumb: everything not needed for the landing page view should be lazily loaded.
Practice this question → · More Performance questions →

Type Coercion Interview Questions

Practice Type Coercion questions →

What's Wrong? Interview Questions

Practice What's Wrong? questions →

Don't Just Read — Practice

Reading answers is passive. JSPrep Pro makes you actively recall answers, predict code output, fix real bugs, and get evaluated by AI — just like an actual interview.

Practice All Questions Free →

Related Resources