Echo · Go

Fix RateLimitError: code=429, message=rate limit exceeded in Echo

This error occurs when Echo's rate limiting middleware blocks a request that exceeds the configured rate, and the default setup often lacks per-client identification, store expiry, or informative response headers. Fix it by using the built-in token-bucket limiter with per-IP identification, setting an expiry on the memory store so stale entries are evicted, and returning a Retry-After header from a custom deny handler.

Reading the Stack Trace

2024/03/15 19:00:10 echo: GET /api/search -> echo/middleware.RateLimiterWithConfig.func1 | 429 | 0.104ms | 127.0.0.1

goroutine 87 [running]:
runtime/debug.Stack()
	/usr/local/go/src/runtime/debug/stack.go:24 +0x5e
github.com/labstack/echo/v4/middleware.RateLimiterWithConfig.func1.1({0x1029e4f80, 0x14000226000})
	/go/pkg/mod/github.com/labstack/echo/v4@v4.11.4/middleware/rate_limiter.go:87 +0x1c4
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0x14000128680, {0x1029e4f80, 0x140001c40e0}, 0x140002b4000)
	/go/pkg/mod/github.com/labstack/echo/v4@v4.11.4/echo.go:669 +0x1a0

Here's what each line means:

  1. The first line is Echo's request log: the method and path (GET /api/search), the middleware that handled the request, the 429 status, the latency, and the client IP.
  2. goroutine 87 [running] identifies the goroutine that captured the trace.
  3. The middleware.RateLimiterWithConfig.func1.1 frame at rate_limiter.go:87 is where the limiter denied the request.
  4. (*Echo).ServeHTTP at echo.go:669 is the entry point that dispatched the request into the middleware chain.

Common Causes

1. In-memory store leaks on high cardinality IPs

The rate limiter stores per-IP counters in memory without cleanup, consuming unbounded memory over time.

e.Use(middleware.RateLimiter(middleware.NewRateLimiterMemoryStore(10)))
// No TTL or cleanup — memory grows with every unique IP

2. No Retry-After header in 429 response

The rate limiter returns 429 but does not tell the client when it can retry.

e.Use(middleware.RateLimiterWithConfig(middleware.RateLimiterConfig{
	Store: middleware.NewRateLimiterMemoryStore(10),
	// No DenyHandler — default response has no Retry-After
}))

3. Global rate limit instead of per-client

The rate limiter applies a single global counter, causing one heavy client to block all other clients.

// Single global limiter for all clients
var globalLimiter = rate.NewLimiter(10, 10)

func RateLimitMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		if !globalLimiter.Allow() {
			return c.JSON(429, "too many requests")
		}
		return next(c)
	}
}

The Fix

Configure the rate limiter with a memory store that includes expiration to prevent memory leaks. Use IdentifierExtractor for per-IP limiting and a custom DenyHandler that returns a structured 429 response with a Retry-After header.

Before (broken)
e.Use(middleware.RateLimiter(middleware.NewRateLimiterMemoryStore(10)))
After (fixed)
e.Use(middleware.RateLimiterWithConfig(middleware.RateLimiterConfig{
	Store: middleware.NewRateLimiterMemoryStoreWithConfig(
		middleware.RateLimiterMemoryStoreConfig{
			Rate:      10,
			Burst:     20,
			ExpiresIn: 3 * time.Minute,
		},
	),
	IdentifierExtractor: func(c echo.Context) (string, error) {
		return c.RealIP(), nil
	},
	DenyHandler: func(c echo.Context, identifier string, err error) error {
		c.Response().Header().Set("Retry-After", "60")
		return c.JSON(http.StatusTooManyRequests, map[string]string{
			"error":   "rate_limit_exceeded",
			"message": "Too many requests. Please retry after 60 seconds.",
		})
	},
}))

Testing the Fix

package main_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/labstack/echo/v4"
	"github.com/stretchr/testify/assert"
)

func TestRateLimit_AllowsNormalTraffic(t *testing.T) {
	e := setupEchoWithRateLimit()

	req := httptest.NewRequest(http.MethodGet, "/api/search", nil)
	rec := httptest.NewRecorder()
	e.ServeHTTP(rec, req)

	assert.Equal(t, http.StatusOK, rec.Code)
}

func TestRateLimit_BlocksExcessTraffic(t *testing.T) {
	e := setupEchoWithRateLimit() // configured with rate=1, burst=1

	for i := 0; i < 5; i++ {
		req := httptest.NewRequest(http.MethodGet, "/api/search", nil)
		rec := httptest.NewRecorder()
		e.ServeHTTP(rec, req)
		if rec.Code == http.StatusTooManyRequests {
			assert.NotEmpty(t, rec.Header().Get("Retry-After"))
			assert.Contains(t, rec.Body.String(), "rate_limit_exceeded")
			return
		}
	}
	t.Fatal("expected rate limit to trigger")
}

Run your tests:

go test ./... -v

Pushing Through CI/CD

git checkout -b fix/echo-rate-limit-error
git add main.go
git commit -m "fix: configure per-IP rate limiting with Retry-After header and store cleanup"
git push origin fix/echo-rate-limit-error

Your CI config should look something like this:

name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: go mod download
      - run: go vet ./...
      - run: go test ./... -race -coverprofile=coverage.out
      - run: go build ./...

The Full Manual Process: 18 Steps

Here's every step you just went through to fix this one bug:

  1. Notice the error alert or see it in your monitoring tool
  2. Open the error dashboard and read the stack trace
  3. Identify the file and line number from the stack trace
  4. Open your IDE and navigate to the file
  5. Read the surrounding code to understand context
  6. Reproduce the error locally
  7. Identify the root cause
  8. Write the fix
  9. Run the test suite locally
  10. Fix any failing tests
  11. Write new tests covering the edge case
  12. Run the full test suite again
  13. Create a new git branch
  14. Commit and push your changes
  15. Open a pull request
  16. Wait for code review
  17. Merge and deploy to production
  18. Monitor production to confirm the error is resolved

Total time: 30-60 minutes. For one bug.

Or Let bugstack Fix It in Under 2 Minutes

Every step above? bugstack does it automatically.

Step 1: Install the SDK

go get github.com/bugstack/sdk

Step 2: Initialize

import (
	"os"

	"github.com/bugstack/sdk"
)

func init() {
	bugstack.Init(os.Getenv("BUGSTACK_API_KEY"))
}

Step 3: There is no step 3.

bugstack handles everything from here:

  1. Captures the stack trace and request context
  2. Pulls the relevant source files from your GitHub repo
  3. Analyzes the error and understands the code context
  4. Generates a minimal, verified fix
  5. Runs your existing test suite
  6. Pushes through your CI/CD pipeline
  7. Deploys to production (or opens a PR for review)

Time from error to fix deployed: Under 2 minutes.

Human involvement: zero.

Try bugstack Free →

No credit card. 5-minute setup. Cancel anytime.

Deploying the Fix (Manual Path)

  1. Run go test ./... locally to confirm rate limiting works.
  2. Open a pull request with the rate limiter config changes.
  3. Wait for CI checks to pass on the PR.
  4. Have a teammate review and approve the PR.
  5. Merge to main and verify in staging.

Frequently Asked Questions

How does BugStack verify that a rate-limiting fix is safe?

BugStack runs concurrent load tests, verifies the Retry-After header is present, and checks that per-IP isolation works correctly before marking it safe to deploy.

Will BugStack deploy changes without my approval?

BugStack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.

Do I need a shared store like Redis for rate limiting?

Yes, if you run multiple server instances. An in-memory store does not share state across instances, so each server enforces its own limit independently.

What rate limit should I choose?

It depends on your API's capacity. Start with 60 requests/minute for public endpoints and 600/minute for authenticated users. Monitor and adjust based on usage patterns.