
Fix GoroutineLeak: goroutine count: 10247 — suspected goroutine leak in Go

This error occurs when goroutines are spawned but never terminate, typically because they block on a channel, timer, or network call that never completes. The goroutine count grows unboundedly, consuming memory and eventually crashing the process. Fix it by using context cancellation to signal goroutines to stop and always providing an exit path from blocking operations.

Reading the Stack Trace

goroutine 10247 [chan receive, 4320 minutes]:
main.processJob(0x14000196040)
	/app/workers/job.go:34 +0x94
created by main.startWorkers in goroutine 1
	/app/workers/job.go:18 +0x48

goroutine 10248 [chan receive, 4319 minutes]:
main.processJob(0x14000196050)
	/app/workers/job.go:34 +0x94
created by main.startWorkers in goroutine 1
	/app/workers/job.go:18 +0x48

goroutine 10249 [select, 4318 minutes]:
main.watchChanges(0x14000196060)
	/app/workers/watcher.go:22 +0x148
created by main.startWatchers in goroutine 1
	/app/workers/watcher.go:10 +0x48

Here's what each line means:

  1. goroutine 10247 [chan receive, 4320 minutes]: the goroutine's ID, its current state (blocked on a channel receive), and how long it has been blocked. 4320 minutes is three days.
  2. main.processJob(0x14000196040): the function the goroutine is executing, with its argument.
  3. /app/workers/job.go:34 +0x94: the source file, line number, and instruction offset where it is blocked.
  4. created by main.startWorkers in goroutine 1: the code that spawned it, and from which goroutine.

Many goroutines parked at the same line for thousands of minutes is the signature of a leak.

Common Causes

1. Goroutine blocked on unbuffered channel forever

A goroutine reads from a channel that will never receive a value because the sender exited or was never started.

func startWorkers(jobs chan Job) {
	for i := 0; i < 100; i++ {
		go func() {
			for job := range jobs {
				process(job)
			}
		}()
	}
	// jobs channel is never closed — goroutines block forever
}
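The simplest escape hatch for this cause is to close the channel when no more jobs are coming: a for range loop over a channel exits once the channel is closed and drained. A self-contained sketch (the Job type, worker count, and returned WaitGroup are illustrative assumptions):

```go
package main

import (
	"fmt"
	"sync"
)

type Job struct{ ID int }

func process(j Job) { fmt.Println("processed", j.ID) }

// startWorkers launches workers and returns a WaitGroup so the caller
// can confirm every goroutine has actually exited.
func startWorkers(jobs chan Job) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for job := range jobs { // exits when jobs is closed
				process(job)
			}
		}()
	}
	return &wg
}

func main() {
	jobs := make(chan Job)
	wg := startWorkers(jobs)
	jobs <- Job{ID: 1}
	close(jobs) // signals every range loop to terminate
	wg.Wait()   // returns only when all workers have exited
	fmt.Println("all workers exited")
}
```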

2. No context cancellation for background goroutines

Background goroutines have no way to be signaled to stop when the parent operation completes.

func handleRequest(w http.ResponseWriter, r *http.Request) {
	go func() {
		for {
			pollExternalService() // runs forever
			time.Sleep(time.Second)
		}
	}()
	w.Write([]byte("ok"))
}

3. Timer or ticker never stopped

A time.NewTicker is created but never stopped, and the loop ranging over its channel has no exit path, so the goroutine running it never returns. Note that calling Stop alone is not enough: Stop never closes ticker.C, so the loop still needs an explicit way out.

func monitor() {
	ticker := time.NewTicker(time.Second)
	// defer ticker.Stop() is missing, and even with it this loop never exits
	for range ticker.C {
		checkHealth()
	}
}

The Fix

Accept a context.Context and use a select statement to listen for both new jobs and cancellation. Use sync.WaitGroup to track goroutine completion. This gives callers the ability to shut down workers cleanly.

Before (broken)
func startWorkers(jobs chan Job) {
	for i := 0; i < 100; i++ {
		go func() {
			for job := range jobs {
				process(job)
			}
		}()
	}
}
After (fixed)
func startWorkers(ctx context.Context, jobs <-chan Job) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					return
				case job, ok := <-jobs:
					if !ok {
						return
					}
					process(job)
				}
			}
		}()
	}
	// Return the WaitGroup so the caller can cancel ctx (or close jobs)
	// and then wg.Wait() for a clean shutdown.
	return &wg
}

Testing the Fix

package workers_test

import (
	"context"
	"runtime"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestStartWorkers_NoLeak(t *testing.T) {
	initial := runtime.NumGoroutine()

	ctx, cancel := context.WithCancel(context.Background())
	jobs := make(chan Job, 10)

	startWorkers(ctx, jobs)
	jobs <- Job{ID: 1}
	jobs <- Job{ID: 2}

	cancel()
	time.Sleep(100 * time.Millisecond)

	final := runtime.NumGoroutine()
	assert.InDelta(t, initial, final, 5, "goroutine count should return to baseline")
}

func TestStartWorkers_ProcessesJobs(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan int, 10)
	jobs := make(chan Job, 10)

	startWorkersWithResults(ctx, jobs, results)
	jobs <- Job{ID: 42}

	select {
	case id := <-results:
		assert.Equal(t, 42, id)
	case <-time.After(time.Second):
		t.Fatal("timed out waiting for job result")
	}
}

Run your tests:

go test ./workers/... -v -count=1

Pushing Through CI/CD

git checkout -b fix/go-goroutine-leak
git add workers/job.go workers/job_test.go
git commit -m "fix: add context cancellation to prevent goroutine leaks in workers"
git push origin fix/go-goroutine-leak

Your CI config should look something like this:

name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: go mod download
      - run: go vet ./...
      - run: go test ./... -race -coverprofile=coverage.out -count=1
      - run: go build ./...

The Full Manual Process: 18 Steps

Here's every step you just went through to fix this one bug:

  1. Notice the error alert or see it in your monitoring tool
  2. Open the error dashboard and read the stack trace
  3. Identify the file and line number from the stack trace
  4. Open your IDE and navigate to the file
  5. Read the surrounding code to understand context
  6. Reproduce the error locally
  7. Identify the root cause
  8. Write the fix
  9. Run the test suite locally
  10. Fix any failing tests
  11. Write new tests covering the edge case
  12. Run the full test suite again
  13. Create a new git branch
  14. Commit and push your changes
  15. Open a pull request
  16. Wait for code review
  17. Merge and deploy to production
  18. Monitor production to confirm the error is resolved

Total time: 30-60 minutes. For one bug.

Or Let bugstack Fix It in Under 2 Minutes

Every step above? bugstack does it automatically.

Step 1: Install the SDK

go get github.com/bugstack/sdk

Step 2: Initialize

import (
	"os"

	"github.com/bugstack/sdk"
)

func init() {
	bugstack.Init(os.Getenv("BUGSTACK_API_KEY"))
}

Step 3: There is no step 3.

bugstack handles everything from here:

  1. Captures the stack trace and request context
  2. Pulls the relevant source files from your GitHub repo
  3. Analyzes the error and understands the code context
  4. Generates a minimal, verified fix
  5. Runs your existing test suite
  6. Pushes through your CI/CD pipeline
  7. Deploys to production (or opens a PR for review)

Time from error to fix deployed: Under 2 minutes.

Human involvement: zero.

Try bugstack Free →

No credit card. 5-minute setup. Cancel anytime.

Deploying the Fix (Manual Path)

  1. Run go test ./... -count=1 locally to confirm no goroutine leaks.
  2. Open a pull request with the context cancellation changes.
  3. Wait for CI checks to pass on the PR.
  4. Have a teammate review and approve the PR.
  5. Merge to main and monitor goroutine count in staging.

Frequently Asked Questions

How does BugStack verify a goroutine-leak fix before deploying it?

BugStack runs goroutine leak detection tests, validates that all spawned goroutines terminate on context cancellation, and monitors goroutine counts before and after test runs before marking it safe to deploy.

Does BugStack push fixes straight to production?

BugStack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.

How can I catch goroutine leaks in my own tests?

Use the go.uber.org/goleak package. Add goleak.VerifyNone(t) to your test functions to fail if any goroutines are still running when the test completes.

How many goroutines is too many?

It depends on workload, but thousands of goroutines is fine. Watch for the count growing over time without bound, which indicates a leak.
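If you would rather stay dependency-free than pull in go.uber.org/goleak, a rough sketch of the same idea using runtime.NumGoroutine follows. The leakCheck helper and its retry budget are assumptions; goleak is more precise because it also reports which stacks leaked:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// leakCheck runs fn and reports whether the goroutine count returned
// to its starting value afterwards. It retries briefly because exiting
// goroutines are not reaped instantaneously.
func leakCheck(fn func()) bool {
	before := runtime.NumGoroutine()
	fn()
	for i := 0; i < 50; i++ {
		if runtime.NumGoroutine() <= before {
			return true
		}
		time.Sleep(10 * time.Millisecond)
	}
	return false
}

func main() {
	clean := leakCheck(func() {
		done := make(chan struct{})
		go func() { close(done) }()
		<-done
	})
	fmt.Println("clean function leaked:", !clean)
}
```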