Express · JavaScript

Fix Error: socket hang up in Express

This error occurs when the TCP connection is closed before the HTTP response is fully sent, typically because the client disconnected, a reverse proxy timed out, or an upstream service dropped the connection. Fix it by increasing timeout values, handling the close event on the request, and adding retry logic for upstream calls.

Reading the Stack Trace

Error: socket hang up
    at connResetException (node:internal/errors:720:14)
    at TLSSocket.socketOnEnd (node:_http_client:518:23)
    at TLSSocket.emit (node:events:518:28)
    at endReadableNT (node:internal/streams/readable:1696:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  code: 'ECONNRESET'
}
    at handleUpstreamRequest (/app/src/services/apiClient.js:34:11)
    at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)
    at next (/app/node_modules/express/lib/router/route.js:144:13)
    at Route.dispatch (/app/node_modules/express/lib/router/route.js:114:3)

Here's what each line means:

  1. Error: socket hang up with code 'ECONNRESET': the remote end closed the TCP connection before a complete HTTP response arrived.
  2. connResetException (node:internal/errors): the internal Node helper that constructs the ECONNRESET error.
  3. TLSSocket.socketOnEnd (node:_http_client): the outgoing HTTPS request's socket ended mid-response, so the error originated on an upstream call your server made, not on the inbound request.
  4. handleUpstreamRequest (/app/src/services/apiClient.js:34:11): the first application frame and the place to start looking; it points at the upstream call that failed.
  5. The express/lib/router frames: Express routing internals that invoked the handler; they rarely matter for diagnosis.

Common Causes

1. Upstream service closing connection early

An upstream API or database closes the TCP connection before the response completes, often due to its own timeout or crash.

// No timeout and no abort handling: if internal-api stalls or drops the
// connection, this throws "socket hang up" and the request crashes.
async function fetchFromUpstream(req, res) {
  const response = await fetch('http://internal-api:3001/data');
  const data = await response.json();
  res.json(data);
}

app.get('/api/data', fetchFromUpstream);

2. Server timeout too short

The Express server or reverse proxy has a timeout shorter than the time needed to complete the request, closing the socket prematurely.

const server = app.listen(3000);
// In Node 18+, server.timeout is disabled by default, requestTimeout
// defaults to 300s, and keepAliveTimeout to only 5s. Nothing here is
// tuned for long-running queries or the reverse proxy in front.
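
If long-running requests are legitimate, raise the relevant timeouts explicitly instead of relying on the defaults. A minimal sketch, assuming Node 18+ behind a reverse proxy with a 60-second idle timeout (all values are placeholders to tune for your stack):

const server = app.listen(3000);

// Keep Node's idle keep-alive window longer than the proxy's 60s idle
// timeout, so the proxy closes idle sockets before Node does.
server.keepAliveTimeout = 65000;
server.headersTimeout = 66000;  // must exceed keepAliveTimeout
server.requestTimeout = 120000; // hard cap on total request time

The same alignment matters on the proxy side: Nginx's proxy_read_timeout, for example, defaults to 60 seconds and must exceed your slowest legitimate response.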

3. Keep-alive connection reuse issue

An idle keep-alive connection is reused after the server has already closed it, causing the next request on that socket to fail.

const http = require('http');
const fetch = require('node-fetch'); // Node's built-in fetch ignores an agent option; node-fetch supports it

const agent = new http.Agent({ keepAlive: true });

// The agent pools idle sockets for reuse, but the server closes idle
// connections after its 5s keep-alive timeout. A request written to a
// socket the server has already closed fails with ECONNRESET.
fetch('http://internal-api:3001/data', { agent });
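
The fix is to make the server the more patient side: keep idle connections open on the server longer than any client agent holds them, so the server never closes a socket a client may still reuse. A short sketch, assuming you control the upstream service (the port and timeout values are illustrative):

const http = require('http');

// Keep idle sockets open longer than any client agent will hold one.
// 65s is illustrative; Node's default keepAliveTimeout is only 5s.
const upstream = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
});
upstream.keepAliveTimeout = 65000;
upstream.headersTimeout = 66000; // must exceed keepAliveTimeout
upstream.listen(3001);

If you don't control the upstream, the practical alternative is to retry the single failed request: an ECONNRESET on a reused keep-alive socket is safe to retry for idempotent calls (see the FAQ at the end of this guide).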

The Fix

Add an AbortController with a timeout to prevent indefinite waiting on upstream requests. Listen for the client close event to abort the upstream call if the client disconnects. Return appropriate 504 or 502 status codes instead of crashing.

Before (broken)
async function fetchFromUpstream(req, res) {
  const response = await fetch('http://internal-api:3001/data');
  const data = await response.json();
  res.json(data);
}

app.get('/api/data', fetchFromUpstream);
After (fixed)
async function fetchFromUpstream(req, res) {
  const controller = new AbortController();
  // Abort the upstream call after 10s, or as soon as the client disconnects.
  const timeout = setTimeout(() => controller.abort(), 10000);

  req.on('close', () => controller.abort());

  try {
    const response = await fetch('http://internal-api:3001/data', {
      signal: controller.signal
    });
    clearTimeout(timeout);
    const data = await response.json();
    res.json(data);
  } catch (err) {
    clearTimeout(timeout);
    if (err.name === 'AbortError') {
      if (!res.headersSent) {
        return res.status(504).json({ error: 'Upstream request timed out' });
      }
      return;
    }
    if (!res.headersSent) {
      res.status(502).json({ error: 'Upstream service unavailable' });
    }
  }
}

app.get('/api/data', fetchFromUpstream);

Testing the Fix

// Requires Node 18+ for the built-in fetch used in createApp below.
const request = require('supertest');
const express = require('express');
const http = require('http');

function createUpstreamServer(delay) {
  return http.createServer((req, res) => {
    setTimeout(() => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true }));
    }, delay);
  });
}

function createApp(upstreamUrl) {
  const app = express();
  app.get('/api/data', async (req, res) => {
    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), 200);
    try {
      const response = await fetch(upstreamUrl, { signal: controller.signal });
      clearTimeout(timeout);
      const data = await response.json();
      res.json(data);
    } catch (err) {
      clearTimeout(timeout);
      if (!res.headersSent) {
        res.status(504).json({ error: 'Upstream request timed out' });
      }
    }
  });
  return app;
}

describe('GET /api/data', () => {
  let upstream;
  afterEach((done) => { if (upstream) upstream.close(done); else done(); });

  it('returns 200 when upstream responds quickly', async () => {
    upstream = createUpstreamServer(10);
    await new Promise(resolve => upstream.listen(0, resolve));
    const port = upstream.address().port;
    const res = await request(createApp(`http://localhost:${port}`))
      .get('/api/data');
    expect(res.status).toBe(200);
    expect(res.body.ok).toBe(true);
  });

  it('returns 504 when upstream times out', async () => {
    upstream = createUpstreamServer(5000);
    await new Promise(resolve => upstream.listen(0, resolve));
    const port = upstream.address().port;
    const res = await request(createApp(`http://localhost:${port}`))
      .get('/api/data');
    expect(res.status).toBe(504);
  });
});

Run your tests:

npx jest --testPathPattern=socketHangup

Pushing Through CI/CD

git checkout -b fix/express-socket-hangup
git add src/services/apiClient.js src/__tests__/socketHangup.test.js
git commit -m "fix: add timeout and abort handling for upstream requests"
git push origin fix/express-socket-hangup

Your CI config should look something like this:

name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npx jest --coverage
      - run: npm run lint

The Full Manual Process: 18 Steps

Here's every step you just went through to fix this one bug:

  1. Notice the error alert or see it in your monitoring tool
  2. Open the error dashboard and read the stack trace
  3. Identify the file and line number from the stack trace
  4. Open your IDE and navigate to the file
  5. Read the surrounding code to understand context
  6. Reproduce the error locally
  7. Identify the root cause
  8. Write the fix
  9. Run the test suite locally
  10. Fix any failing tests
  11. Write new tests covering the edge case
  12. Run the full test suite again
  13. Create a new git branch
  14. Commit and push your changes
  15. Open a pull request
  16. Wait for code review
  17. Merge and deploy to production
  18. Monitor production to confirm the error is resolved

Total time: 30-60 minutes. For one bug.

Or Let bugstack Fix It in Under 2 Minutes

Every step above? bugstack does it automatically.

Step 1: Install the SDK

npm install bugstack-sdk

Step 2: Initialize

const { initBugStack } = require('bugstack-sdk');

initBugStack({ apiKey: process.env.BUGSTACK_API_KEY });

Step 3: There is no step 3.

bugstack handles everything from here:

  1. Captures the stack trace and request context
  2. Pulls the relevant source files from your GitHub repo
  3. Analyzes the error and understands the code context
  4. Generates a minimal, verified fix
  5. Runs your existing test suite
  6. Pushes through your CI/CD pipeline
  7. Deploys to production (or opens a PR for review)

Time from error to fix deployed: Under 2 minutes.

Human involvement: zero.

Try bugstack Free →

No credit card. 5-minute setup. Cancel anytime.

Deploying the Fix (Manual Path)

  1. Run the test suite locally to confirm socket hangup scenarios are handled gracefully.
  2. Open a pull request with the timeout and abort handling changes.
  3. Wait for CI checks to pass on the PR.
  4. Have a teammate review and approve the PR.
  5. Merge to main and monitor error rates in staging before promoting to production.

Frequently Asked Questions

How does bugstack verify a fix before deploying it?

bugstack simulates upstream timeouts and client disconnects, verifies correct 504/502 responses, and confirms no socket leaks before marking it safe to deploy.

Does bugstack deploy fixes without review?

Every fix is delivered as a pull request with full CI validation. Your team reviews and approves before anything reaches production.

What causes socket hang up errors in the first place?

Socket hang ups are often caused by network instability, upstream service load spikes, or keep-alive connection reuse after the server has silently closed the idle socket.

Should I retry requests that fail with socket hang up?

Yes, for idempotent GET requests: use exponential backoff with a maximum of 2-3 retries, as in the sketch below. Do not retry non-idempotent requests like POST without careful consideration.
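
A minimal retry helper along those lines, assuming Node 18+ for the built-in fetch (the function name, delays, and retry count are illustrative):

// Retry an idempotent GET with exponential backoff: 200ms, 400ms, 800ms.
// fetch only rejects on network-level failures (such as ECONNRESET), so
// HTTP 4xx/5xx responses are returned as-is and never retried here.
async function fetchWithRetry(url, retries = 3, baseDelayMs = 200) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      if (attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}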