Fix RateLimitError: code=429, message=rate limit exceeded in Echo
This error occurs when Echo's rate limiting middleware blocks a request that exceeds the configured rate. It usually points to an implementation that lacks proper client identification, store cleanup, or informative response headers. Fix it by using a token-bucket limiter with per-IP tracking, adding a Retry-After header, and configuring store expiration so stale entries are evicted instead of leaking memory.
Reading the Stack Trace
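Assembled from the frames discussed below, the log output looks roughly like this (pointer values and ordering will differ in your environment):

```
echo: GET /api/search -> echo/middleware.RateLimiterWithConfig.func1 | 429
github.com/labstack/echo/v4/middleware.RateLimiterWithConfig.func1.1({0x1029e4f80, 0x14000226000})
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0x14000128680, {0x1029e4f80, 0x140001c40e0}, 0x140002b4000)
```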
Here's what each line means:
- github.com/labstack/echo/v4/middleware.RateLimiterWithConfig.func1.1({0x1029e4f80, 0x14000226000}): Echo's rate limiter middleware at line 87 determines the client has exceeded the configured rate and rejects the request.
- echo: GET /api/search -> echo/middleware.RateLimiterWithConfig.func1 | 429: The middleware returns HTTP 429 Too Many Requests to the client.
- github.com/labstack/echo/v4.(*Echo).ServeHTTP(0x14000128680, {0x1029e4f80, 0x140001c40e0}, 0x140002b4000): Echo processes the request through the middleware chain where the rate limiter intercepts it.
Common Causes
1. In-memory store leaks on high cardinality IPs
The rate limiter stores per-IP counters in memory without cleanup, consuming unbounded memory over time.
e.Use(middleware.RateLimiter(middleware.NewRateLimiterMemoryStore(10)))
// No TTL or cleanup — memory grows with every unique IP
2. No Retry-After header in 429 response
The rate limiter returns 429 but does not tell the client when it can retry.
e.Use(middleware.RateLimiterWithConfig(middleware.RateLimiterConfig{
	Store: middleware.NewRateLimiterMemoryStore(10),
	// No DenyHandler — default response has no Retry-After
}))
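For comparison, a well-behaved 429 response carries a Retry-After header (seconds or an HTTP-date, per RFC 9110) so clients know when to back off. A sketch of the desired response, matching the DenyHandler configured in the fix below:

```
HTTP/1.1 429 Too Many Requests
Retry-After: 60
Content-Type: application/json

{"error":"rate_limit_exceeded","message":"Too many requests. Please retry after 60 seconds."}
```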
3. Global rate limit instead of per-client
The rate limiter applies a single global counter, causing one heavy client to block all other clients.
// Single global limiter for all clients
var globalLimiter = rate.NewLimiter(10, 10)

func RateLimitMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		if !globalLimiter.Allow() {
			return c.JSON(429, "too many requests")
		}
		return next(c)
	}
}
The Fix
Configure the rate limiter with a memory store that includes expiration to prevent memory leaks. Use IdentifierExtractor for per-IP limiting and a custom DenyHandler that returns a structured 429 response with a Retry-After header.
e.Use(middleware.RateLimiterWithConfig(middleware.RateLimiterConfig{
	Store: middleware.NewRateLimiterMemoryStoreWithConfig(
		middleware.RateLimiterMemoryStoreConfig{
			Rate:      10,
			Burst:     20,
			ExpiresIn: 3 * time.Minute,
		},
	),
	IdentifierExtractor: func(c echo.Context) (string, error) {
		return c.RealIP(), nil
	},
	DenyHandler: func(c echo.Context, identifier string, err error) error {
		c.Response().Header().Set("Retry-After", "60")
		return c.JSON(http.StatusTooManyRequests, map[string]string{
			"error":   "rate_limit_exceeded",
			"message": "Too many requests. Please retry after 60 seconds.",
		})
	},
}))
Testing the Fix
package main_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/labstack/echo/v4"
	"github.com/stretchr/testify/assert"
)
func TestRateLimit_AllowsNormalTraffic(t *testing.T) {
	e := setupEchoWithRateLimit()
	req := httptest.NewRequest(http.MethodGet, "/api/search", nil)
	rec := httptest.NewRecorder()
	e.ServeHTTP(rec, req)
	assert.Equal(t, http.StatusOK, rec.Code)
}

func TestRateLimit_BlocksExcessTraffic(t *testing.T) {
	e := setupEchoWithRateLimit() // configured with rate=1, burst=1
	for i := 0; i < 5; i++ {
		req := httptest.NewRequest(http.MethodGet, "/api/search", nil)
		rec := httptest.NewRecorder()
		e.ServeHTTP(rec, req)
		if rec.Code == http.StatusTooManyRequests {
			assert.NotEmpty(t, rec.Header().Get("Retry-After"))
			assert.Contains(t, rec.Body.String(), "rate_limit_exceeded")
			return
		}
	}
	t.Fatal("expected rate limit to trigger")
}
Run your tests:
go test ./... -v
Pushing Through CI/CD
git checkout -b fix/echo-rate-limit-error
git add main.go
git commit -m "fix: configure per-IP rate limiting with Retry-After header and store cleanup"
git push origin fix/echo-rate-limit-error
Your CI config should look something like this:
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: go mod download
      - run: go vet ./...
      - run: go test ./... -race -coverprofile=coverage.out
      - run: go build ./...
The Full Manual Process: 18 Steps
Here's every step you just went through to fix this one bug:
- Notice the error alert or see it in your monitoring tool
- Open the error dashboard and read the stack trace
- Identify the file and line number from the stack trace
- Open your IDE and navigate to the file
- Read the surrounding code to understand context
- Reproduce the error locally
- Identify the root cause
- Write the fix
- Run the test suite locally
- Fix any failing tests
- Write new tests covering the edge case
- Run the full test suite again
- Create a new git branch
- Commit and push your changes
- Open a pull request
- Wait for code review
- Merge and deploy to production
- Monitor production to confirm the error is resolved
Total time: 30-60 minutes. For one bug.
Or Let bugstack Fix It in Under 2 Minutes
Every step above? bugstack does it automatically.
Step 1: Install the SDK
go get github.com/bugstack/sdk
Step 2: Initialize
import (
	"os"

	"github.com/bugstack/sdk"
)

func init() {
	bugstack.Init(os.Getenv("BUGSTACK_API_KEY"))
}
Step 3: There is no step 3.
bugstack handles everything from here:
- Captures the stack trace and request context
- Pulls the relevant source files from your GitHub repo
- Analyzes the error and understands the code context
- Generates a minimal, verified fix
- Runs your existing test suite
- Pushes through your CI/CD pipeline
- Deploys to production (or opens a PR for review)
Time from error to fix deployed: Under 2 minutes.
Human involvement: zero.
Try bugstack Free → No credit card. 5-minute setup. Cancel anytime.
Deploying the Fix (Manual Path)
- Run go test ./... locally to confirm rate limiting works.
- Open a pull request with the rate limiter config changes.
- Wait for CI checks to pass on the PR.
- Have a teammate review and approve the PR.
- Merge to main and verify in staging.
Frequently Asked Questions

How does BugStack verify a rate limiting fix before deploying it?
BugStack runs concurrent load tests, verifies the Retry-After header is present, and checks that per-IP isolation works correctly before marking it safe to deploy.

Does BugStack push fixes straight to production?
BugStack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.

Do I need a distributed store for rate limiting?
Yes, if you run multiple server instances. An in-memory store does not share state across instances, so each server enforces its own limit independently.

What rate limit should I set?
It depends on your API's capacity. Start with 60 requests/minute for public endpoints and 600/minute for authenticated users. Monitor and adjust based on usage patterns.