Fix Race Condition: WARNING: DATA RACE — Read at 0x00c0000b4010 by goroutine 8, Previous write at 0x00c0000b4010 by goroutine 7 in Go
This error is reported by Go's race detector when two goroutines access the same memory location concurrently without synchronization, and at least one access is a write. Fix it by using sync.Mutex, sync.RWMutex, atomic operations, or channels to synchronize access to shared state.
Reading the Stack Trace
Here's what each line means:
- Read at 0x00c0000b4010 by goroutine 8: main.(*Counter).Get() /app/counter.go:15 +0x3c: Goroutine 8 reads the counter value at counter.go:15 without holding any lock.
- Previous write at 0x00c0000b4010 by goroutine 7: main.(*Counter).Increment() /app/counter.go:10 +0x48: Goroutine 7 previously wrote to the same memory location at counter.go:10 without synchronization.
- net/http.(*Server).Serve() /usr/local/go/src/net/http/server.go:3285 +0x3a0: Both goroutines were created by the HTTP server to handle concurrent requests, causing the data race.
Common Causes
1. Shared struct field accessed without mutex
Multiple goroutines read and write to a struct field concurrently without any synchronization.
type Counter struct {
    count int
}

func (c *Counter) Increment() {
    c.count++ // unsynchronized write
}

func (c *Counter) Get() int {
    return c.count // unsynchronized read
}
2. Concurrent map access
Multiple goroutines read and write to a map without locking. Beyond the data race, Go's runtime detects concurrent map writes and crashes the program with "fatal error: concurrent map writes", which cannot be recovered.
var cache = map[string]string{}
func Set(k, v string) { cache[k] = v } // concurrent write
func Get(k string) string { return cache[k] } // concurrent read
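One standard-library alternative to wrapping this map in a mutex is sync.Map, which handles its own synchronization; a sketch (the function names mirror the snippet above):

```go
package main

import (
	"fmt"
	"sync"
)

// cache is safe for concurrent use without explicit locking.
var cache sync.Map

// Set stores a value; sync.Map serializes access internally.
func Set(k, v string) { cache.Store(k, v) }

// Get returns the value and whether the key was present.
func Get(k string) (string, bool) {
	v, ok := cache.Load(k)
	if !ok {
		return "", false
	}
	return v.(string), true
}

func main() {
	Set("greeting", "hello")
	if v, ok := Get("greeting"); ok {
		fmt.Println(v) // hello
	}
}
```

Note the trade-off: sync.Map stores values as `any`, so you give up static typing. The Go documentation recommends a plain map with a mutex for most workloads; sync.Map shines when keys are written once and read many times.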
3. Global variable modified by handlers
A package-level variable is modified by HTTP handlers that run concurrently.
var requestCount int

func handler(w http.ResponseWriter, r *http.Request) {
    requestCount++ // race condition
    fmt.Fprintf(w, "request #%d", requestCount)
}
The Fix
Use sync/atomic for simple counters and sync.RWMutex for complex shared state like maps. Atomic operations are lock-free and faster for simple types. RWMutex allows concurrent reads while ensuring exclusive write access.
Before, the racy version:

type Counter struct {
    count int
}

func (c *Counter) Increment() {
    c.count++
}

func (c *Counter) Get() int {
    return c.count
}

After, using atomic.Int64 (add "sync/atomic" to your imports):

type Counter struct {
    count atomic.Int64
}

func (c *Counter) Increment() {
    c.count.Add(1)
}

func (c *Counter) Get() int64 {
    return c.count.Load()
}
For more complex shared state, use sync.RWMutex:

type SafeCache struct {
    mu    sync.RWMutex
    items map[string]string
}

func (c *SafeCache) Set(k, v string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[k] = v
}

func (c *SafeCache) Get(k string) (string, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    v, ok := c.items[k]
    return v, ok
}
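One detail worth guarding against: the zero value of SafeCache has a nil items map, and writing to a nil map panics. A small constructor (NewSafeCache is a name introduced here, not from the original) makes initialization impossible to forget:

```go
package main

import (
	"fmt"
	"sync"
)

type SafeCache struct {
	mu    sync.RWMutex
	items map[string]string
}

// NewSafeCache initializes the map so callers can't forget to,
// avoiding a panic on the first Set against a nil map.
func NewSafeCache() *SafeCache {
	return &SafeCache{items: make(map[string]string)}
}

func (c *SafeCache) Set(k, v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[k] = v
}

func (c *SafeCache) Get(k string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.items[k]
	return v, ok
}

func main() {
	c := NewSafeCache()
	c.Set("k", "v")
	v, ok := c.Get("k")
	fmt.Println(v, ok) // v true
}
```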
Testing the Fix
package main

import (
    "fmt"
    "sync"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestCounter_ConcurrentAccess(t *testing.T) {
    c := &Counter{}
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.Increment()
        }()
    }
    wg.Wait()
    assert.Equal(t, int64(1000), c.Get())
}

func TestSafeCache_ConcurrentAccess(t *testing.T) {
    cache := &SafeCache{items: make(map[string]string)}
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(2)
        go func(n int) {
            defer wg.Done()
            cache.Set(fmt.Sprintf("key%d", n), "value")
        }(i)
        go func(n int) {
            defer wg.Done()
            cache.Get(fmt.Sprintf("key%d", n))
        }(i)
    }
    wg.Wait()
}
Run your tests:
go test ./... -v -race
Pushing Through CI/CD
git checkout -b fix/go-race-condition
git add counter.go cache.go
git commit -m "fix: use atomic and RWMutex to eliminate data races"
git push origin fix/go-race-condition
Your CI config should look something like this:
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: go mod download
      - run: go vet ./...
      - run: go test ./... -race -coverprofile=coverage.out
      - run: go build -race ./...
The Full Manual Process: 18 Steps
Here's every step you just went through to fix this one bug:
- Notice the error alert or see it in your monitoring tool
- Open the error dashboard and read the stack trace
- Identify the file and line number from the stack trace
- Open your IDE and navigate to the file
- Read the surrounding code to understand context
- Reproduce the error locally
- Identify the root cause
- Write the fix
- Run the test suite locally
- Fix any failing tests
- Write new tests covering the edge case
- Run the full test suite again
- Create a new git branch
- Commit and push your changes
- Open a pull request
- Wait for code review
- Merge and deploy to production
- Monitor production to confirm the error is resolved
Total time: 30-60 minutes. For one bug.
Or Let bugstack Fix It in Under 2 Minutes
Every step above? bugstack does it automatically.
Step 1: Install the SDK
go get github.com/bugstack/sdk
Step 2: Initialize
import (
    "os"

    bugstack "github.com/bugstack/sdk"
)

func init() {
    bugstack.Init(os.Getenv("BUGSTACK_API_KEY"))
}
Step 3: There is no step 3.
bugstack handles everything from here:
- Captures the stack trace and request context
- Pulls the relevant source files from your GitHub repo
- Analyzes the error and understands the code context
- Generates a minimal, verified fix
- Runs your existing test suite
- Pushes through your CI/CD pipeline
- Deploys to production (or opens a PR for review)
Time from error to fix deployed: Under 2 minutes.
Human involvement: zero.
Try bugstack Free → No credit card. 5-minute setup. Cancel anytime.
Deploying the Fix (Manual Path)
- Run go test ./... -race locally to confirm no data races.
- Open a pull request with the synchronization changes.
- Wait for CI checks to pass on the PR.
- Have a teammate review and approve the PR.
- Merge to main and verify in staging.
Frequently Asked Questions
How does bugstack verify that a race-condition fix is safe?
BugStack runs all tests with Go's race detector enabled, generates concurrent access tests, and validates that no data races are reported before marking it safe to deploy.

Does bugstack deploy fixes straight to production?
BugStack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.

Should I run the race detector in CI?
Yes. Add -race to your go test command in CI. It adds 2-10x overhead but catches races that are extremely hard to find otherwise.

When should I use channels instead of mutexes?
Use channels when transferring ownership of data between goroutines. Use mutexes when protecting shared state that multiple goroutines read and write.
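To make the channel side of that advice concrete, here is a sketch of the counter rewritten so a single goroutine owns the count and everyone else communicates over channels (ChanCounter and its channel names are illustrative):

```go
package main

import "fmt"

// ChanCounter serializes all access through one owner goroutine,
// so the count needs no lock: only the owner ever touches it.
type ChanCounter struct {
	incr chan struct{}
	read chan int
}

func NewChanCounter() *ChanCounter {
	c := &ChanCounter{
		incr: make(chan struct{}),
		read: make(chan int),
	}
	go func() {
		count := 0 // owned exclusively by this goroutine
		for {
			select {
			case <-c.incr:
				count++
			case c.read <- count:
				// a reader received the current value
			}
		}
	}()
	return c
}

func (c *ChanCounter) Increment() { c.incr <- struct{}{} }
func (c *ChanCounter) Get() int   { return <-c.read }

func main() {
	c := NewChanCounter()
	for i := 0; i < 10; i++ {
		c.Increment()
	}
	fmt.Println(c.Get()) // 10
}
```

For a plain counter this is more machinery than atomic.Int64 or a mutex; the pattern earns its keep when the owner goroutine manages richer state or the "data" being passed is a request or job whose ownership genuinely transfers.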