
Fix JavaScript heap out of memory: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed in Node.js

This error occurs when Node.js runs out of heap memory, usually due to a memory leak from growing arrays, unclosed event listeners, or large data loaded without streaming. Fix it by identifying the leak source with heap snapshots, using streams for large data, and cleaning up event listeners properly.

Reading the Stack Trace

<--- Last few GCs --->

[18932:0x130008000]    42531 ms: Scavenge 2047.1 (2049.5) -> 2046.9 (2049.5) MB, 3.2 / 0.0 ms
[18932:0x130008000]    42782 ms: Mark-sweep 2047.2 (2049.5) -> 2046.8 (2049.5) MB, 250.1 / 0.0 ms

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: 0x100398a45 node::Abort() [/usr/local/bin/node]
 2: 0x100398c25 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/usr/local/bin/node]
 3: 0x1004f9cb9 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/local/bin/node]
 4: 0x1006b4b85 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 5: 0x1006b8321 v8::internal::Heap::AllocateRawOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]
 6: 0x10068b601 v8::internal::Factory::NewFixedArray(int, v8::internal::AllocationType) [/usr/local/bin/node]
 7: 0x10072f8c5 v8::internal::OrderedHashTable<v8::internal::OrderedHashMap, 2>::Rehash(v8::internal::Isolate*, v8::internal::Handle<v8::internal::OrderedHashMap>, int) [/usr/local/bin/node]

Here's what each part means. The "Last few GCs" section shows the garbage collector's final attempts: a Scavenge pass and then a full Mark-sweep pass each reclaimed almost nothing while the heap sat at roughly 2047 MB, right at the heap limit in effect for this process. The FATAL ERROR line means V8 retried the allocation after a full garbage collection and still could not find space. The numbered frames are V8 internals rather than your code: here the process died while allocating a FixedArray to rehash an OrderedHashMap, which typically points to a Map or similar collection that kept growing.

Common Causes

1. Unbounded array growth in a cache

An in-memory cache or array grows without bounds as the application processes requests, eventually consuming all available heap memory.

const cache = [];

app.get('/api/data', async (req, res) => {
  const data = await db.query('SELECT * FROM large_table');
  cache.push(...data); // Cache grows on every request
  res.json(data);
});

2. Event listeners not removed

Event listeners are added on every request but never removed, causing a growing number of closures that hold references to large objects.

app.get('/api/stream', (req, res) => {
  const handler = (data) => res.write(data);
  eventEmitter.on('data', handler);
  // handler is never removed when request ends
});

3. Loading entire file into memory

A large file is read entirely into memory with readFileSync or readFile instead of being streamed, causing spikes that exhaust the heap.

const fs = require('fs');

app.get('/download', (req, res) => {
  const file = fs.readFileSync('/data/huge-export.csv');
  res.send(file);
});

The Fix

Replace the unbounded array cache with an LRU (Least Recently Used) cache that has a maximum size and TTL. This ensures memory usage stays bounded while still providing caching benefits.

Before (broken)
const cache = [];

app.get('/api/data', async (req, res) => {
  const data = await db.query('SELECT * FROM large_table');
  cache.push(...data);
  res.json(data);
});
After (fixed)
const { LRUCache } = require('lru-cache');

const cache = new LRUCache({ max: 500, ttl: 1000 * 60 * 5 });

app.get('/api/data', async (req, res) => {
  const cacheKey = 'large_table_data';
  let data = cache.get(cacheKey);

  if (!data) {
    data = await db.query('SELECT * FROM large_table');
    cache.set(cacheKey, data);
  }

  res.json(data);
});

Testing the Fix

const { LRUCache } = require('lru-cache');

describe('LRU Cache', () => {
  it('evicts old entries when max size is reached', () => {
    const cache = new LRUCache({ max: 2 });
    cache.set('a', 1);
    cache.set('b', 2);
    cache.set('c', 3);
    expect(cache.get('a')).toBeUndefined();
    expect(cache.get('b')).toBe(2);
    expect(cache.get('c')).toBe(3);
  });

  it('returns cached data on subsequent calls', () => {
    const cache = new LRUCache({ max: 100, ttl: 1000 * 60 });
    cache.set('key', [{ id: 1 }]);
    expect(cache.get('key')).toEqual([{ id: 1 }]);
  });

  it('does not exceed max size', () => {
    const cache = new LRUCache({ max: 5 });
    for (let i = 0; i < 100; i++) {
      cache.set(`key-${i}`, i);
    }
    expect(cache.size).toBeLessThanOrEqual(5);
  });
});

Run your tests:

npm test

Pushing Through CI/CD

git checkout -b fix/memory-leak-cache
git add src/routes/data.js package.json
git commit -m "fix: replace unbounded cache with LRU to prevent memory leaks"
git push origin fix/memory-leak-cache

Your CI config should look something like this:

name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test
      - run: npm run lint

The Full Manual Process: 18 Steps

Here's every step you just went through to fix this one bug:

  1. Notice the error alert or see it in your monitoring tool
  2. Open the error dashboard and read the stack trace
  3. Identify the file and line number from the stack trace
  4. Open your IDE and navigate to the file
  5. Read the surrounding code to understand context
  6. Reproduce the error locally
  7. Identify the root cause
  8. Write the fix
  9. Run the test suite locally
  10. Fix any failing tests
  11. Write new tests covering the edge case
  12. Run the full test suite again
  13. Create a new git branch
  14. Commit and push your changes
  15. Open a pull request
  16. Wait for code review
  17. Merge and deploy to production
  18. Monitor production to confirm the error is resolved

Total time: 30-60 minutes. For one bug.

Or Let bugstack Fix It in Under 2 Minutes

Every step above? bugstack does it automatically.

Step 1: Install the SDK

npm install bugstack-sdk

Step 2: Initialize

const { initBugStack } = require('bugstack-sdk');

initBugStack({ apiKey: process.env.BUGSTACK_API_KEY });

Step 3: There is no step 3.

bugstack handles everything from here:

  1. Captures the stack trace and request context
  2. Pulls the relevant source files from your GitHub repo
  3. Analyzes the error and understands the code context
  4. Generates a minimal, verified fix
  5. Runs your existing test suite
  6. Pushes through your CI/CD pipeline
  7. Deploys to production (or opens a PR for review)

Time from error to fix deployed: Under 2 minutes.

Human involvement: zero.

Try bugstack Free →

No credit card. 5-minute setup. Cancel anytime.

Deploying the Fix (Manual Path)

  1. Install the lru-cache package with npm install lru-cache.
  2. Run the test suite to confirm cache behavior is correct.
  3. Open a pull request with the bounded cache implementation.
  4. Have a teammate review the max size and TTL settings.
  5. Merge to main and monitor memory usage in production dashboards.

Frequently Asked Questions

How does bugstack verify that a memory fix actually works?

BugStack analyzes heap allocation patterns, verifies the fix introduces bounded memory usage, and runs load tests to confirm memory stays stable under sustained traffic.

Will fixes deploy to production without human review?

All fixes are delivered as pull requests with CI validation. Your team reviews memory-related changes before they reach production.

Can I just increase the Node.js heap limit instead?

Increasing the heap limit is a temporary workaround: the leak will still grow until it hits the new limit. Always fix the root cause instead of raising limits.
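If you do need to buy time while the real fix lands, the limit is raised with Node's --max-old-space-size flag (value in megabytes; server.js stands in for your actual entry point):

```shell
# Raise the old-space heap limit to 4 GB. With a leak present,
# this only delays the crash shown in the trace above.
node --max-old-space-size=4096 server.js
```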

How do I take heap snapshots to find a leak?

Use the Node.js --inspect flag to connect Chrome DevTools, take heap snapshots before and after traffic, and compare retained object counts to find growing allocations.