
JavaScript Memoization Interview Questions

Memoization caches function return values to avoid recomputation. Learn the cache key problem, WeakMap pattern, LRU cache implementation, recursive memoization, and how useMemo and useCallback apply the same concept in React.

The Mental Model

Picture a brilliant but lazy mathematician. Every time you ask them to calculate something, they write the answer on a sticky note and slap it on their desk. The next time you ask the same question, they do not recalculate — they glance at the note and read the answer back instantly. That is memoization. A memoized function solves a problem once, stores the result, and serves it instantly from a cache when the same problem appears again. The calculation never runs twice for the same input.

The key insight: memoization is a trade. You exchange memory for speed. You pay with storage space and get back time.

This trade only makes sense under three specific conditions — the function must always return the same output for the same input, the calculation must be expensive enough to justify caching overhead, and the same inputs must appear frequently enough to generate cache hits. When all three are true, memoization can turn an unusable function into an instant one. When any one is false, memoization either returns wrong results, wastes memory, or slows things down.

The Explanation

What memoization actually is

Memoization is an optimization technique that caches the return value of a function based on its input arguments. The word comes from "memorandum" — a note to remember something by. On the first call with a given set of arguments, the function runs normally and stores the result in a cache. On every subsequent call with the same arguments, the cached result is returned without executing the function body at all.

Memoization is only valid for pure functions — functions that always return the same output for the same input and produce no side effects. A function that reads from a database, generates random numbers, or depends on the current timestamp cannot be safely memoized because the same input could produce different results at different times.
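To make the purity requirement concrete, here is a minimal sketch (the memoize helper and both example functions are illustrative, not from a library): memoizing a pure function is safe, while memoizing a function with hidden state silently freezes its output at the first cached value.

```javascript
function memoize(fn) {
  const cache = new Map()
  return function(...args) {
    const key = JSON.stringify(args)
    if (cache.has(key)) return cache.get(key)
    const result = fn(...args)
    cache.set(key, result)
    return result
  }
}

// Pure: same input, same output, no side effects. Safe to memoize.
const pureArea = memoize(r => Math.PI * r * r)
pureArea(2)   // computed
pureArea(2)   // served from cache, still correct

// Impure: output depends on hidden mutable state. Unsafe to memoize.
let counter = 0
const impureNext = memoize(() => ++counter)
impureNext()   // 1, and this value is now cached
impureNext()   // still 1, even though the intent was 2
```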

The simplest memoization implementation

A basic memoize function wraps another function and uses a closure to maintain a cache object between calls. The cache lives inside the closure — it persists across invocations without polluting the outer scope.

function memoize(fn) {
  const cache = {}

  return function(...args) {
    const key = JSON.stringify(args)

    if (key in cache) {
      return cache[key]       // cache hit — skip computation
    }

    const result = fn(...args)
    cache[key] = result       // cache miss — compute and store
    return result
  }
}

function slowSquare(n) {
  // Simulate expensive computation
  for (let i = 0; i < 1e8; i++) {}
  return n * n
}

const fastSquare = memoize(slowSquare)

fastSquare(12)   // slow — runs computation, caches result
fastSquare(12)   // instant — returns cached 144
fastSquare(12)   // instant
fastSquare(7)    // slow — new input, runs computation, caches 49
fastSquare(7)    // instant

The closure keeps the cache alive between calls. Each unique combination of arguments gets one cache entry, and the cache persists for the entire lifetime of the memoized function reference.
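A related detail worth noting: each call to memoize builds a fresh closure, so two memoized wrappers around the same function keep independent caches. A sketch (the helper is repeated here so the snippet runs standalone):

```javascript
function memoize(fn) {
  const cache = {}
  return function(...args) {
    const key = JSON.stringify(args)
    if (key in cache) return cache[key]
    const result = fn(...args)
    cache[key] = result
    return result
  }
}

let runs = 0
function square(n) { runs++; return n * n }

const memoA = memoize(square)
const memoB = memoize(square)

memoA(4)   // computes (runs = 1)
memoA(4)   // cache hit inside memoA (runs still 1)
memoB(4)   // memoB has its own cache, so it computes again (runs = 2)
```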

The cache key problem — three ways JSON.stringify fails

The cache key must uniquely and consistently identify a set of arguments. JSON.stringify(args) covers the common cases but has three real limitations that interviewers specifically probe.

Problem 1 — Object property order is significant to JSON.stringify:

JSON.stringify({ a: 1, b: 2 })   // '{"a":1,"b":2}'
JSON.stringify({ b: 2, a: 1 })   // '{"b":2,"a":1}'  ← different key!

// These produce different cache keys despite being semantically identical
memoized({ a: 1, b: 2 })   // cache miss — computes and stores
memoized({ b: 2, a: 1 })   // cache miss — computes again (wrong)

Problem 2 — Functions and Symbols are silently dropped:

JSON.stringify({ fn: () => {} })   // '{}'  — function omitted
JSON.stringify({ id: Symbol() })   // '{}'  — Symbol omitted

// Two completely different function arguments produce the same key '{}'
// Cache returns the first result for all subsequent calls — incorrect

Problem 3 — Circular references throw immediately:

const obj = {}
obj.self = obj
JSON.stringify(obj)   // TypeError: Converting circular structure to JSON
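Problem 1 in particular can be mitigated with a key function that serializes object keys in sorted order. A sketch (stableStringify is an illustrative helper, not a built-in; it still drops functions and Symbols and still throws on circular references):

```javascript
function stableStringify(value) {
  if (value === null || typeof value !== 'object') return JSON.stringify(value)
  if (Array.isArray(value)) {
    return '[' + value.map(stableStringify).join(',') + ']'
  }
  // Sort keys so property order no longer affects the serialized form
  const keys = Object.keys(value).sort()
  return '{' + keys
    .map(k => JSON.stringify(k) + ':' + stableStringify(value[k]))
    .join(',') + '}'
}

stableStringify({ a: 1, b: 2 })   // '{"a":1,"b":2}'
stableStringify({ b: 2, a: 1 })   // '{"a":1,"b":2}', identical key now
```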

The fix is to make the key strategy configurable and fall back to a custom resolver for non-serializable arguments:

function memoize(fn, getKey = (...args) => JSON.stringify(args)) {
  const cache = new Map()
  return function(...args) {
    const key = getKey(...args)
    if (cache.has(key)) return cache.get(key)
    const result = fn(...args)
    cache.set(key, result)
    return result
  }
}

// Custom key for user objects: use a stable unique identifier
const getUser = memoize(fetchUser, (id, options) => `${id}:${options.role}`)

// Custom key for DOM elements: use WeakMap (see next section)

WeakMap pattern — memoizing object arguments without memory leaks

When a function receives an object and the right cache key is the object's identity (same reference = same result), a WeakMap is the correct backing store. Unlike a plain object or Map, WeakMap does not prevent its keys from being garbage collected. When the object is no longer referenced elsewhere in the application, its cache entry disappears automatically.

function memoizeWithObject(fn) {
  const cache = new WeakMap()

  return function(obj) {
    if (cache.has(obj)) {
      return cache.get(obj)     // cache hit
    }
    const result = fn(obj)
    cache.set(obj, result)      // weak reference — no GC prevention
    return result
  }
}

const analyzeLayout = memoizeWithObject((el) => {
  // Expensive DOM measurement
  return el.getBoundingClientRect()
})

const btn = document.getElementById('submit')
analyzeLayout(btn)   // computes and caches
analyzeLayout(btn)   // instant — same reference

// If btn is later removed from DOM and the reference dropped,
// the WeakMap entry is garbage collected automatically — no leak
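One caveat worth knowing: a WeakMap key must be an object, and one WeakMap covers one argument. For identity-keyed memoization over two object arguments, a common extension (sketched here with illustrative names) nests WeakMaps:

```javascript
function memoizeTwoObjects(fn) {
  const outer = new WeakMap()   // first argument maps to an inner WeakMap
  return function(a, b) {
    let inner = outer.get(a)
    if (!inner) {
      inner = new WeakMap()
      outer.set(a, inner)
    }
    if (inner.has(b)) return inner.get(b)
    const result = fn(a, b)
    inner.set(b, result)
    return result
  }
}

let computed = 0
const merge = memoizeTwoObjects((a, b) => { computed++; return { ...a, ...b } })

const x = { n: 1 }
const y = { m: 2 }
merge(x, y)   // computes (computed = 1)
merge(x, y)   // cached, same pair of references (computed still 1)
```

Both levels stay weakly referenced, so dropping either object eventually frees its entries.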

Memoizing recursive functions — the fibonacci pattern

Memoization has its most dramatic impact on recursive functions with overlapping subproblems. The naive recursive Fibonacci recomputes the same values exponentially — fib(40) already makes over 300 million calls, and fib(50) makes tens of billions. Memoization reduces fib(50) to 51 unique computations.

// Naive recursive — O(2^n) time
function naiveFib(n) {
  if (n <= 1) return n
  return naiveFib(n - 1) + naiveFib(n - 2)
}

naiveFib(40)   // ~330 million calls — takes seconds
naiveFib(50)   // ~40 billion calls — effectively hangs

// Memoized — O(n) time
function memoize(fn) {
  const cache = new Map()
  return function(n) {
    if (cache.has(n)) return cache.get(n)
    const result = fn(n)
    cache.set(n, result)
    return result
  }
}

// CRITICAL: reassign fib so the recursive calls use the memoized version
const fib = memoize(function(n) {
  if (n <= 1) return n
  return fib(n - 1) + fib(n - 2)   // calls memoized fib — cache hit every time
})

fib(50)    // instant — 51 unique subproblems, all others from cache
fib(100)   // instant

The most common mistake: the inner recursive call must reference the memoized version. If the function references the original pre-memoization name, inner calls bypass the cache entirely and only the outermost call is memoized.
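For contrast, here is the broken shape as a sketch (slowFib and memoizedOnce are illustrative names): the cache wraps only the outer call, while every recursive call inside bypasses it.

```javascript
const cache = new Map()

function slowFib(n) {            // original, never consults the cache
  if (n <= 1) return n
  return slowFib(n - 1) + slowFib(n - 2)   // recursive calls bypass memoization
}

function memoizedOnce(n) {       // only this outer wrapper is cached
  if (cache.has(n)) return cache.get(n)
  const result = slowFib(n)      // still O(2^n) on every cache miss
  cache.set(n, result)
  return result
}

memoizedOnce(20)   // first call: full exponential recursion
memoizedOnce(20)   // cached, but memoizedOnce(21) would recompute everything
```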

LRU cache — bounded memoization for production

An unbounded cache is a memory leak — every unique argument set adds a permanent entry that is never evicted. In production, always bound the cache size. A Least Recently Used (LRU) cache discards the entry accessed least recently when the cache reaches its capacity.

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity
    this.cache    = new Map()   // Map preserves insertion order — the key to the LRU behavior
  }

  get(key) {
    if (!this.cache.has(key)) return undefined
    // Refresh access order: delete then re-insert moves key to end (most recent)
    const value = this.cache.get(key)
    this.cache.delete(key)
    this.cache.set(key, value)
    return value
  }

  set(key, value) {
    if (this.cache.has(key)) {
      this.cache.delete(key)     // remove to refresh position
    } else if (this.cache.size >= this.capacity) {
      // First entry = least recently used — delete it
      this.cache.delete(this.cache.keys().next().value)
    }
    this.cache.set(key, value)
  }
}

function memoizeWithLRU(fn, capacity = 100) {
  const lru = new LRUCache(capacity)
  return function(...args) {
    const key    = JSON.stringify(args)
    const cached = lru.get(key)
    // Caveat: treats an undefined result as a miss — fine when fn never returns undefined
    if (cached !== undefined) return cached
    const result = fn(...args)
    lru.set(key, result)
    return result
  }
}

Map's guaranteed insertion order is what makes this implementation clean — the first entry is always the least recently used. This pattern is one of the most common standalone coding interview questions at senior level.
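A quick eviction walkthrough (the LRUCache class is repeated here so the snippet runs standalone): with capacity 2, reading a key refreshes it, so the untouched key is the one evicted.

```javascript
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity
    this.cache = new Map()
  }
  get(key) {
    if (!this.cache.has(key)) return undefined
    const value = this.cache.get(key)
    this.cache.delete(key)          // delete then re-insert moves key to the end
    this.cache.set(key, value)
    return value
  }
  set(key, value) {
    if (this.cache.has(key)) {
      this.cache.delete(key)
    } else if (this.cache.size >= this.capacity) {
      this.cache.delete(this.cache.keys().next().value)   // evict oldest
    }
    this.cache.set(key, value)
  }
}

const lru = new LRUCache(2)
lru.set('a', 1)
lru.set('b', 2)
lru.get('a')       // touch 'a', so 'b' is now the least recently used
lru.set('c', 3)    // capacity hit: evicts 'b', not 'a'

lru.get('b')       // undefined, evicted
lru.get('a')       // 1, survived because it was touched
```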

Memoization with TTL — for data that expires

Some functions are pure over a short window but not indefinitely — API responses, configuration reads, computed positions. A TTL (time-to-live) cache serves results within the valid window and recomputes after expiry.

function memoizeWithTTL(fn, ttlMs = 60_000) {
  const cache = new Map()   // key → { value, expiresAt }

  return function(...args) {
    const key   = JSON.stringify(args)
    const entry = cache.get(key)

    if (entry && Date.now() < entry.expiresAt) {
      return entry.value    // still valid
    }

    const value = fn(...args)
    cache.set(key, { value, expiresAt: Date.now() + ttlMs })
    return value
  }
}

// Caches for 5 minutes, then recomputes on the next call
const getConfig = memoizeWithTTL(fetchRemoteConfig, 5 * 60 * 1000)
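To see expiry without actually waiting, here is a sketch that stubs Date.now (memoizeWithTTL is repeated so the snippet runs standalone; stubbing the clock is for demonstration only):

```javascript
function memoizeWithTTL(fn, ttlMs = 60_000) {
  const cache = new Map()   // key maps to { value, expiresAt }
  return function(...args) {
    const key   = JSON.stringify(args)
    const entry = cache.get(key)
    if (entry && Date.now() < entry.expiresAt) return entry.value
    const value = fn(...args)
    cache.set(key, { value, expiresAt: Date.now() + ttlMs })
    return value
  }
}

let computations = 0
const doubled = memoizeWithTTL(n => { computations++; return n * 2 }, 1000)

const realNow = Date.now
let fakeTime = realNow()
Date.now = () => fakeTime   // stub the clock for the demo

doubled(5)          // computes (computations = 1)
doubled(5)          // within TTL, cached (computations still 1)
fakeTime += 1500    // jump past the 1000 ms TTL
doubled(5)          // expired, so it recomputes (computations = 2)

Date.now = realNow  // restore the real clock
```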

useMemo and useCallback in React

React's memoization hooks apply the same principle inside the component lifecycle. They cache values between renders and recompute only when specified dependencies change.

import { useMemo, useCallback, memo } from 'react'

function ProductList({ products, filters, onSelect }) {

  // useMemo — caches the computed value
  // Only recomputes when products or filters change
  const filtered = useMemo(() => {
    return products
      .filter(p => p.category === filters.category)
      .filter(p => p.price <= filters.maxPrice)
      .sort((a, b) => a.price - b.price)
  }, [products, filters])

  // useCallback — caches the function reference
  // Without this, a new function object is created each render,
  // causing MemoizedCard below to re-render even when nothing changed
  const handleSelect = useCallback((id) => {
    onSelect(id)
  }, [onSelect])

  return filtered.map(p =>
    <MemoizedCard key={p.id} product={p} onSelect={handleSelect} />
  )
}

// memo wraps a component so it only re-renders when props change
// Only effective when all props are stable references — hence useCallback above
const MemoizedCard = memo(function Card({ product, onSelect }) {
  return <div onClick={() => onSelect(product.id)}>{product.name}</div>
})

The costly mistake: memoizing cheap computations or values whose reference stability does not matter. useMemo and useCallback run on every render to compare dependencies. For a simple addition or string concatenation, this check costs more than just recomputing. Apply them selectively — only when the computation is genuinely expensive, or when a stable reference prevents a memoized child from re-rendering.

When NOT to memoize

Memoization is a performance tool, not a default. The overhead of cache lookup and key generation exceeds the savings in these four cases:

// 1. Cheap functions — cache overhead exceeds computation cost
const add = memoize((a, b) => a + b)    // slower than a + b directly

// 2. Impure functions — returns incorrect cached results
const getTime = memoize(() => Date.now())   // always returns first cached timestamp

// 3. Unique inputs — cache grows forever, never hits
const processId = memoize(id => transform(id))   // if every id is unique, 0% cache hits

// 4. Large object arguments with JSON.stringify
// Serializing a 5,000-key object on every call is slower than recomputing

Common Misconceptions

⚠️

Many devs think memoization and caching are the same thing — but actually memoization is a specific type of caching that operates on function return values keyed by input arguments. General caching can store anything by any strategy. Memoization is function-level automatic result caching driven entirely by the arguments passed to that function.

⚠️

Many devs think you can memoize any function — but actually memoization is only correct for pure functions. A function that reads from a database, depends on the current time, generates random values, or has side effects will return stale or incorrect cached results. The function must always return the same output for the same input with no external dependencies.

⚠️

Many devs think JSON.stringify is always a safe cache key — but actually it silently drops function and Symbol properties (an object containing only them serializes to '{}'), treats object property order as significant so the same logical object can produce different keys, and throws a TypeError on circular references. Custom key strategies or WeakMap are needed for functions receiving complex objects.

⚠️

Many devs think a larger cache is always better — but actually an unbounded cache is a memory leak. Every unique set of arguments permanently occupies a cache entry that is never evicted. In long-running applications with varied inputs, the cache silently grows to consume all available memory. Bounded caches with LRU eviction or TTL expiry are required in production.

⚠️

Many devs think useMemo in React is always a performance improvement — but actually useMemo adds overhead on every render to compare dependencies by reference and retrieve the cached value. For inexpensive computations, this overhead exceeds the savings. useMemo only pays off for expensive computations or when a stable reference is passed to a memoized child component to prevent its re-render.

⚠️

Many devs think memoizing a recursive function automatically accelerates all its calls — but actually the recursive calls inside the function body must reference the memoized version, not the original unwrapped function. If the inner calls reference the original, memoization applies only to the outermost call and sub-problems are recomputed at full cost every time.

⚠️

Many devs think useCallback memoizes the return value of the function — but actually useCallback caches the function reference itself, returning the same function object between renders. The function body still executes fresh on every call. Its value is preventing memoized child components from receiving a new function reference prop and re-rendering unnecessarily.

Where You'll See This in Real Code

Lodash's _.memoize ships with a resolver option that lets developers provide a custom cache key function — this directly addresses the object key problem that JSON.stringify cannot handle for non-serializable arguments, and is one of the most used utility functions in production JavaScript codebases.

React's Reselect library is built entirely on memoization — a selector computes derived state from the Redux store and caches the result, rerunning only when the relevant input slices change by reference. This prevents connected components from re-rendering when unrelated parts of the store update, which is the primary performance optimization in large Redux applications.

Search-as-you-type implementations in large applications memoize the filter and sort computation over a product list — typing the same query string twice (backspace then retype) hits the cache and returns instantly without rescanning thousands of items, making the search feel noticeably faster than a naive reimplementation would.

GraphQL clients like Apollo Client use normalized response memoization — identical queries with identical variables return the same JavaScript object reference from the cache, enabling React's reconciliation to determine via reference equality that nothing changed and skip re-rendering the consuming component entirely.

Compiler toolchains like TypeScript and Babel use memoization extensively during incremental builds — parsing results for unchanged source files are cached so subsequent builds only reparse files that actually changed. This is what reduces TypeScript incremental build times from tens of seconds to under one second on large codebases.

LRU cache implementation is one of the most common "implement a data structure" interview questions at senior frontend roles — companies including Google, Meta, and Atlassian ask candidates to implement it from scratch, and understanding that Map's guaranteed insertion order makes delete-then-reinsert an elegant O(1) refresh operation is the differentiating insight.

Interview Cheat Sheet

  • Memoization caches function return values by input arguments — same input always returns cached output without recomputing
  • Only valid for pure functions — same input must always produce the same output with no side effects
  • JSON.stringify key failures: drops functions/Symbols silently, treats property order as significant, throws on circular references
  • WeakMap for object arguments — keys are garbage collected when objects are no longer referenced elsewhere, preventing memory leaks
  • Unbounded cache = silent memory leak — use LRU (capacity limit) or TTL (time expiry) in production
  • Recursive memoization: inner calls must reference the memoized function, not the original, to benefit sub-problems
  • LRU cache: Map preserves insertion order — delete then re-insert to refresh access order in O(1)
  • useMemo caches a computed value; useCallback caches a function reference — both compare dependencies by reference equality
  • useMemo and useCallback add per-render overhead — only apply when computation is expensive or stable reference prevents downstream re-renders
  • Memoization trades memory for speed — only beneficial when computation cost exceeds cache overhead and cache hit rate is high
💡

How to Answer in an Interview

  • 1. Before implementing, ask "is the function pure?" — this demonstrates you understand the precondition, not just the technique
  • 2. Build the basic version first (closure + plain object), then proactively enumerate its limitations before the interviewer asks — it signals you know the full picture
  • 3. The Fibonacci demonstration is the canonical example — implement it and explicitly point out that the inner call must reference the memoized version to achieve O(n) instead of O(2^n)
  • 4. Connect useMemo and useCallback to the same underlying concept when React performance comes up — it shows you understand the mechanism, not just the API
  • 5. LRU cache is the natural follow-up question at senior level — have the Map-based implementation ready and explain why insertion order makes delete-then-reinsert the right approach
  • 6. Distinguish memoization from debounce and throttle when asked about performance — memoization eliminates recomputation for repeated inputs; debounce and throttle control call frequency regardless of what the inputs are
