Fix JavaScript heap out of memory: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed in Node.js
This error occurs when Node.js exhausts its heap memory, usually because of a memory leak: unbounded array or cache growth, event listeners that are never removed, or large data loaded entirely into memory instead of streamed. Fix it by identifying the leak source with heap snapshots, using streams for large data, and removing event listeners when they are no longer needed.
Reading the Stack Trace
Here's what each line means:
- FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory: V8 failed to allocate memory after retrying garbage collection, meaning the heap is completely exhausted.
- Mark-sweep 2047.2 (2049.5) -> 2046.8 (2049.5) MB, 250.1 / 0.0 ms: The garbage collector could only free 0.4 MB from a 2 GB heap, indicating most objects are still reachable and cannot be freed.
- v8::internal::Heap::FatalProcessOutOfMemory(char const*): V8 triggered a fatal OOM handler, which terminates the process immediately.
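Before the crash, heap growth is usually visible for a while. A dependency-free way to watch it is to log `process.memoryUsage()` periodically (the interval below is illustrative; tune it for your app):

```javascript
// Log heap usage; a heapUsed value that climbs steadily across
// garbage-collection cycles is the classic leak signature.
const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);

function logHeap() {
  const { heapUsed, heapTotal } = process.memoryUsage();
  console.log(`heapUsed: ${mb(heapUsed)} MB / heapTotal: ${mb(heapTotal)} MB`);
}

logHeap();
// In a long-running server you would call this on an interval, e.g.:
// setInterval(logHeap, 60_000).unref();
```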
Common Causes
1. Unbounded array growth in a cache
An in-memory cache or array grows without bounds as the application processes requests, eventually consuming all available heap memory.
```javascript
const cache = [];

app.get('/api/data', async (req, res) => {
  const data = await db.query('SELECT * FROM large_table');
  cache.push(...data); // Cache grows on every request
  res.json(data);
});
```
2. Event listeners not removed
Event listeners are added on every request but never removed, causing a growing number of closures that hold references to large objects.
```javascript
app.get('/api/stream', (req, res) => {
  const handler = (data) => res.write(data);
  eventEmitter.on('data', handler);
  // handler is never removed when the request ends
});
```
3. Loading entire file into memory
A large file is read entirely into memory with readFileSync or readFile instead of being streamed, causing spikes that exhaust the heap.
```javascript
const fs = require('fs');

app.get('/download', (req, res) => {
  const file = fs.readFileSync('/data/huge-export.csv');
  res.send(file);
});
```
The Fix
Replace the unbounded array cache with an LRU (Least Recently Used) cache that has a maximum size and TTL. This ensures memory usage stays bounded while still providing caching benefits.
Before, with the unbounded array:

```javascript
const cache = [];

app.get('/api/data', async (req, res) => {
  const data = await db.query('SELECT * FROM large_table');
  cache.push(...data);
  res.json(data);
});
```

After, with a bounded LRU cache:

```javascript
const { LRUCache } = require('lru-cache'); // named export in recent versions

const cache = new LRUCache({ max: 500, ttl: 1000 * 60 * 5 });

app.get('/api/data', async (req, res) => {
  const cacheKey = 'large_table_data';
  let data = cache.get(cacheKey);
  if (!data) {
    data = await db.query('SELECT * FROM large_table');
    cache.set(cacheKey, data);
  }
  res.json(data);
});
```
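If pulling in a dependency is not an option, the same bounding behavior can be sketched by hand on top of `Map`'s insertion order. This is a toy illustration of the eviction principle, not a replacement for the battle-tested library:

```javascript
// Minimal LRU sketch: Map iterates in insertion order, so the first
// key is always the least recently used entry.
class TinyLRU {
  constructor(max) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // refresh recency by re-inserting
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      this.map.delete(this.map.keys().next().value); // evict oldest
    }
  }
}

const cache = new TinyLRU(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // touch 'a' so 'b' becomes least recently used
cache.set('c', 3); // evicts 'b'

console.log(cache.get('a')); // 1
console.log(cache.get('b')); // undefined
```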
Testing the Fix
```javascript
const { LRUCache } = require('lru-cache');

describe('LRU Cache', () => {
  it('evicts old entries when max size is reached', () => {
    const cache = new LRUCache({ max: 2 });
    cache.set('a', 1);
    cache.set('b', 2);
    cache.set('c', 3);
    expect(cache.get('a')).toBeUndefined();
    expect(cache.get('b')).toBe(2);
    expect(cache.get('c')).toBe(3);
  });

  it('returns cached data on subsequent calls', () => {
    const cache = new LRUCache({ max: 100, ttl: 1000 * 60 });
    cache.set('key', [{ id: 1 }]);
    expect(cache.get('key')).toEqual([{ id: 1 }]);
  });

  it('does not exceed max size', () => {
    const cache = new LRUCache({ max: 5 });
    for (let i = 0; i < 100; i++) {
      cache.set(`key-${i}`, i);
    }
    expect(cache.size).toBeLessThanOrEqual(5);
  });
});
```
Run your tests:
```bash
npm test
```
Pushing Through CI/CD
```bash
git checkout -b fix/memory-leak-cache
git add src/routes/data.js package.json
git commit -m "fix: replace unbounded cache with LRU to prevent memory leaks"
git push origin fix/memory-leak-cache
```
Your CI config should look something like this:
```yaml
name: CI

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test
      - run: npm run lint
```
The Full Manual Process: 18 Steps
Here's every step you just went through to fix this one bug:
- Notice the error alert or see it in your monitoring tool
- Open the error dashboard and read the stack trace
- Identify the file and line number from the stack trace
- Open your IDE and navigate to the file
- Read the surrounding code to understand context
- Reproduce the error locally
- Identify the root cause
- Write the fix
- Run the test suite locally
- Fix any failing tests
- Write new tests covering the edge case
- Run the full test suite again
- Create a new git branch
- Commit and push your changes
- Open a pull request
- Wait for code review
- Merge and deploy to production
- Monitor production to confirm the error is resolved
Total time: 30-60 minutes. For one bug.
Or Let bugstack Fix It in Under 2 Minutes
Every step above? bugstack does it automatically.
Step 1: Install the SDK
```bash
npm install bugstack-sdk
```
Step 2: Initialize
```javascript
const { initBugStack } = require('bugstack-sdk');

initBugStack({ apiKey: process.env.BUGSTACK_API_KEY });
```
Step 3: There is no step 3.
bugstack handles everything from here:
- Captures the stack trace and request context
- Pulls the relevant source files from your GitHub repo
- Analyzes the error and understands the code context
- Generates a minimal, verified fix
- Runs your existing test suite
- Pushes through your CI/CD pipeline
- Deploys to production (or opens a PR for review)
Time from error to fix deployed: Under 2 minutes.
Human involvement: zero.
Try bugstack Free → No credit card. 5-minute setup. Cancel anytime.
Deploying the Fix (Manual Path)
- Install the lru-cache package with npm install lru-cache.
- Run the test suite to confirm cache behavior is correct.
- Open a pull request with the bounded cache implementation.
- Have a teammate review the max size and TTL settings.
- Merge to main and monitor memory usage in production dashboards.
Frequently Asked Questions
How does bugstack verify that a memory fix actually works?
BugStack analyzes heap allocation patterns, verifies the fix introduces bounded memory usage, and runs load tests to confirm memory stays stable under sustained traffic.
Will memory fixes reach production without review?
All fixes are delivered as pull requests with CI validation. Your team reviews memory-related changes before they reach production.
Can I just raise the heap limit instead of fixing the leak?
Increasing the heap limit is a temporary workaround. The leak will still grow until it hits the new limit. Always fix the root cause instead of raising limits.
How do I find the source of a memory leak manually?
Use the Node.js --inspect flag to connect Chrome DevTools, take heap snapshots before and after traffic, and compare retained object counts to find growing allocations.