Fix RateLimitExceeded: rate limit exceeded: 100 requests per minute in Gin
This error occurs when your rate limiting middleware blocks requests that exceed the configured threshold, but the implementation either panics on concurrent map access or returns unhelpful error messages. Fix it by using a thread-safe rate limiter such as golang.org/x/time/rate and returning a proper 429 status code with a Retry-After header.
Reading the Stack Trace
Here's what each line means:
- runtime.mapaccess1_faststr(0x1028a1e20, 0x14000196040, {0x14000116420, 0xc}): A read of an unsynchronized map while another goroutine writes it triggers the runtime's "concurrent map read and map write" panic. Go maps are not safe for concurrent access.
- main.RateLimitMiddleware(0x14000226000) /app/middleware/ratelimit.go:22 +0x148: The rate limit middleware at line 22 accesses a shared map without locking, causing a race condition under load.
- github.com/gin-gonic/gin.RecoveryWithWriter.func1(0x14000226000): Gin's recovery middleware catches the map access panic and returns a 500 instead of crashing the process.
Common Causes
1. Unsynchronized map for tracking client rates
Using a plain map to track request counts per IP without a mutex causes concurrent map read/write panics.
var clients = map[string]int{}

func RateLimitMiddleware(c *gin.Context) {
    ip := c.ClientIP()
    clients[ip]++
    if clients[ip] > 100 {
        c.AbortWithStatus(500)
        return
    }
    c.Next()
}
2. No rate limit window reset
The counter increments forever without resetting, eventually blocking every client permanently.
var mu sync.Mutex
var clients = map[string]int{}

func RateLimitMiddleware(c *gin.Context) {
    mu.Lock()
    clients[c.ClientIP()]++
    count := clients[c.ClientIP()]
    mu.Unlock()
    if count > 100 {
        c.AbortWithStatus(429)
        return
    }
    c.Next()
}
3. Wrong HTTP status code for rate limiting
Returning 500 instead of 429 confuses clients and monitoring systems, and lacks Retry-After information.
if rateLimitExceeded {
    c.AbortWithStatus(http.StatusInternalServerError) // should be 429
    return
}
The Fix
Use golang.org/x/time/rate for token-bucket rate limiting with a sync.Mutex-protected map for per-client limiters. Return 429 with a Retry-After header so clients know when to retry.
type RateLimiter struct {
    mu      sync.Mutex
    clients map[string]*rate.Limiter
    rps     rate.Limit
    burst   int
}

func NewRateLimiter(rps rate.Limit, burst int) *RateLimiter {
    return &RateLimiter{
        clients: make(map[string]*rate.Limiter),
        rps:     rps,
        burst:   burst,
    }
}

// getLimiter returns the per-client limiter, creating one on first sight.
// The mutex guards only the map; rate.Limiter itself is safe for concurrent use.
func (rl *RateLimiter) getLimiter(ip string) *rate.Limiter {
    rl.mu.Lock()
    defer rl.mu.Unlock()
    limiter, exists := rl.clients[ip]
    if !exists {
        limiter = rate.NewLimiter(rl.rps, rl.burst)
        rl.clients[ip] = limiter
    }
    return limiter
}

func (rl *RateLimiter) Middleware() gin.HandlerFunc {
    return func(c *gin.Context) {
        limiter := rl.getLimiter(c.ClientIP())
        if !limiter.Allow() {
            c.Header("Retry-After", "60") // static hint, in seconds
            c.AbortWithStatusJSON(http.StatusTooManyRequests, gin.H{"error": "rate limit exceeded"})
            return
        }
        c.Next()
    }
}
Testing the Fix
// Internal test package so NewRateLimiter is visible without importing the module path.
package middleware

import (
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/gin-gonic/gin"
    "github.com/stretchr/testify/assert"
    "golang.org/x/time/rate"
)

func TestRateLimiter_AllowsNormalTraffic(t *testing.T) {
    gin.SetMode(gin.TestMode)
    r := gin.New()
    rl := NewRateLimiter(rate.Limit(10), 10)
    r.Use(rl.Middleware())
    r.GET("/test", func(c *gin.Context) { c.Status(200) })

    req := httptest.NewRequest(http.MethodGet, "/test", nil)
    w := httptest.NewRecorder()
    r.ServeHTTP(w, req)

    assert.Equal(t, http.StatusOK, w.Code)
}

func TestRateLimiter_BlocksExcessTraffic(t *testing.T) {
    gin.SetMode(gin.TestMode)
    r := gin.New()
    rl := NewRateLimiter(rate.Limit(1), 1)
    r.Use(rl.Middleware())
    r.GET("/test", func(c *gin.Context) { c.Status(200) })

    for i := 0; i < 5; i++ {
        req := httptest.NewRequest(http.MethodGet, "/test", nil)
        w := httptest.NewRecorder()
        r.ServeHTTP(w, req)
        if w.Code == http.StatusTooManyRequests {
            assert.NotEmpty(t, w.Header().Get("Retry-After"))
            return
        }
    }
    t.Fatal("expected rate limit to be triggered")
}
Run your tests:
go test ./middleware/... -v -race
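The -race flag only catches the original panic if the tests actually hit the limiter from multiple goroutines. Here is a stdlib-only sketch of that access pattern (hammer is a hypothetical helper, not part of the middleware); swap in the unlocked map from Common Causes and the race detector flags it immediately:

```go
package main

import (
    "fmt"
    "sync"
)

// hammer increments a shared, mutex-protected counter map from n goroutines,
// which is the concurrent access pattern your -race tests should exercise.
func hammer(n int) int {
    var mu sync.Mutex
    counts := make(map[string]int)
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock()
            counts["10.0.0.1"]++
            mu.Unlock()
        }()
    }
    wg.Wait()
    return counts["10.0.0.1"]
}

func main() {
    fmt.Println(hammer(100)) // 100
}
```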
Pushing Through CI/CD
git checkout -b fix/gin-rate-limit-error
git add middleware/ratelimit.go middleware/ratelimit_test.go
git commit -m "fix: use token-bucket rate limiter with mutex-protected client map"
git push origin fix/gin-rate-limit-error
Your CI config should look something like this:
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: go mod download
      - run: go vet ./...
      - run: go test ./... -race -coverprofile=coverage.out
      - run: go build ./...
The Full Manual Process: 18 Steps
Here's every step you just went through to fix this one bug:
- Notice the error alert or see it in your monitoring tool
- Open the error dashboard and read the stack trace
- Identify the file and line number from the stack trace
- Open your IDE and navigate to the file
- Read the surrounding code to understand context
- Reproduce the error locally
- Identify the root cause
- Write the fix
- Run the test suite locally
- Fix any failing tests
- Write new tests covering the edge case
- Run the full test suite again
- Create a new git branch
- Commit and push your changes
- Open a pull request
- Wait for code review
- Merge and deploy to production
- Monitor production to confirm the error is resolved
Total time: 30-60 minutes. For one bug.
Or Let bugstack Fix It in Under 2 Minutes
Every step above? bugstack does it automatically.
Step 1: Install the SDK
go get github.com/bugstack/sdk
Step 2: Initialize
import (
    "os"

    "github.com/bugstack/sdk"
)

func init() {
    bugstack.Init(os.Getenv("BUGSTACK_API_KEY"))
}
Step 3: There is no step 3.
bugstack handles everything from here:
- Captures the stack trace and request context
- Pulls the relevant source files from your GitHub repo
- Analyzes the error and understands the code context
- Generates a minimal, verified fix
- Runs your existing test suite
- Pushes through your CI/CD pipeline
- Deploys to production (or opens a PR for review)
Time from error to fix deployed: Under 2 minutes.
Human involvement: zero.
Try bugstack Free → No credit card. 5-minute setup. Cancel anytime.
Deploying the Fix (Manual Path)
- Run go test ./... -race locally to confirm no data races.
- Open a pull request with the rate limiter changes.
- Wait for CI checks to pass on the PR.
- Have a teammate review and approve the PR.
- Merge to main and verify the deployment in staging before promoting to production.
Frequently Asked Questions
How does bugstack verify that a concurrency fix is actually safe?
bugstack runs the fix with Go's race detector, generates concurrent load tests, and validates that the rate limiter behaves correctly under parallel requests before marking it safe to deploy.
Does bugstack push fixes straight to production?
bugstack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.
How do I stop the per-client limiter map from growing forever?
Run a background goroutine with a ticker that periodically removes entries not accessed in the last few minutes to prevent memory leaks from abandoned client IPs.
Should I rate limit per IP or per user?
Per-IP is a good default. For authenticated APIs, per-user is more accurate, since multiple users may share an IP behind NAT or a proxy.