Fix ThreadError: deadlock; recursive locking (ThreadError) in Rails
This error occurs when a thread tries to acquire a lock it already holds, creating a deadlock. In Rails, this often happens with class-level mutable state accessed from multiple threads in a Puma multi-threaded server. Use Mutex correctly, avoid recursive locking, and prefer thread-local variables or Concurrent::Map for shared state.
Reading the Stack Trace
Here's what each line means:
- app/services/rate_limiter.rb:18:in `synchronize': The Mutex.synchronize call deadlocks because the mutex is already locked by the current thread.
- app/services/rate_limiter.rb:22:in `check': The check method is called recursively within the synchronized block, causing recursive locking.
- app/controllers/api_controller.rb:6:in `check_rate_limit': The before_action calls the rate limiter which triggers the deadlock.
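The failure mode described in the stack trace can be reproduced in a few lines: Ruby's `Mutex` is non-reentrant, so re-entering `synchronize` on the same mutex from the same thread raises immediately rather than hanging.

```ruby
mutex = Mutex.new

begin
  mutex.synchronize do
    # Re-entering the same (non-reentrant) Mutex from the same thread
    mutex.synchronize { }
  end
rescue ThreadError => e
  puts e.message  # => "deadlock; recursive locking"
end
```

Ruby raises rather than blocking forever here because it can detect that the current thread already owns the lock.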
Common Causes
1. Recursive mutex locking
A synchronized method calls another synchronized method that uses the same mutex.
```ruby
class RateLimiter
  MUTEX = Mutex.new
  @@counts = {}

  def self.throttle(key)
    MUTEX.synchronize do
      check(key) # check also tries to synchronize on MUTEX
    end
  end

  def self.check(key)
    MUTEX.synchronize do
      @@counts[key] ||= 0
      @@counts[key] += 1
    end
  end
end
```
2. Class variable mutation without synchronization
Multiple threads modify a class variable without any locking, causing race conditions.
```ruby
class Counter
  @@total = 0

  def self.increment
    @@total += 1 # Not thread-safe
  end
end
```
3. Shared mutable state in initializer
A constant Hash is mutated at runtime from multiple threads.
```ruby
SETTINGS = {}

class SettingsLoader
  def self.get(key)
    SETTINGS[key] ||= load_from_db(key) # Race condition
  end
end
```
The Fix
Replace the Mutex and class variable with Concurrent::Map from the concurrent-ruby gem. Concurrent::Map provides thread-safe operations without explicit locking, eliminating the deadlock risk while remaining performant.
Before (deadlocks):

```ruby
class RateLimiter
  MUTEX = Mutex.new
  @@counts = {}

  def self.throttle(key)
    MUTEX.synchronize do
      check(key)
    end
  end

  def self.check(key)
    MUTEX.synchronize do # second lock on a Mutex this thread holds: ThreadError
      @@counts[key] ||= 0
      @@counts[key] += 1
    end
  end
end
```
After:

```ruby
require 'concurrent'

class RateLimiter
  COUNTS = Concurrent::Map.new

  def self.throttle(key)
    # compute atomically reads, updates, and stores the count for key
    count = COUNTS.compute(key) do |val|
      (val || 0) + 1
    end
    count <= rate_limit_threshold
  end

  def self.rate_limit_threshold
    100
  end
end
```
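Wiring the fixed limiter into the controller from the stack trace could look like the sketch below. The controller name matches the trace, but the remote-IP key and the 429 response are illustrative choices, not from the original code:

```ruby
class ApiController < ApplicationController
  before_action :check_rate_limit

  private

  def check_rate_limit
    # key by client IP (hypothetical; use whatever identifies your caller)
    key = "user:#{request.remote_ip}"
    head :too_many_requests unless RateLimiter.throttle(key)
  end
end
```

Because `throttle` no longer takes a lock, the `before_action` can run concurrently across all Puma threads.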
Testing the Fix
```ruby
require 'rails_helper'
require 'concurrent'

RSpec.describe RateLimiter do
  before { RateLimiter::COUNTS.clear }

  it 'increments count for a key' do
    5.times { RateLimiter.throttle('user:1') }
    expect(RateLimiter::COUNTS['user:1']).to eq(5)
  end

  it 'is thread-safe' do
    threads = 10.times.map do
      Thread.new { 100.times { RateLimiter.throttle('user:2') } }
    end
    threads.each(&:join)
    expect(RateLimiter::COUNTS['user:2']).to eq(1000)
  end
end
```
Run your tests:
```shell
bundle exec rspec spec/services/rate_limiter_spec.rb
```
Pushing Through CI/CD
```shell
git checkout -b fix/rails-thread-safety
git add app/services/rate_limiter.rb Gemfile
git commit -m "fix: use Concurrent::Map for thread-safe rate limiting"
git push origin fix/rails-thread-safety
```
Your CI config should look something like this:
```yaml
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports: ['5432:5432']
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.3'
          bundler-cache: true
      - run: bin/rails db:setup
      - run: bundle exec rspec
```
The Full Manual Process: 18 Steps
Here's every step you just went through to fix this one bug:
- Notice the error alert or see it in your monitoring tool
- Open the error dashboard and read the stack trace
- Identify the file and line number from the stack trace
- Open your IDE and navigate to the file
- Read the surrounding code to understand context
- Reproduce the error locally
- Identify the root cause
- Write the fix
- Run the test suite locally
- Fix any failing tests
- Write new tests covering the edge case
- Run the full test suite again
- Create a new git branch
- Commit and push your changes
- Open a pull request
- Wait for code review
- Merge and deploy to production
- Monitor production to confirm the error is resolved
Total time: 30-60 minutes. For one bug.
Or Let bugstack Fix It in Under 2 Minutes
Every step above? bugstack does it automatically.
Step 1: Install the SDK
```shell
gem install bugstack
```
Step 2: Initialize
```ruby
require 'bugstack'
Bugstack.init(api_key: ENV['BUGSTACK_API_KEY'])
```
Step 3: There is no step 3.
bugstack handles everything from here:
- Captures the stack trace and request context
- Pulls the relevant source files from your GitHub repo
- Analyzes the error and understands the code context
- Generates a minimal, verified fix
- Runs your existing test suite
- Pushes through your CI/CD pipeline
- Deploys to production (or opens a PR for review)
Time from error to fix deployed: Under 2 minutes.
Human involvement: zero.
Try bugstack Free → No credit card. 5-minute setup. Cancel anytime.
Deploying the Fix (Manual Path)
- Replace Mutex-based code with Concurrent::Map.
- Add thread-safety specs with multi-threaded assertions.
- Run specs under multi-threaded conditions.
- Open a pull request.
- Merge and monitor for thread-related errors in production.
Frequently Asked Questions

How does BugStack verify that a fix is safe?
BugStack runs the fix through your existing test suite, generates additional edge-case tests, and validates that no other components are affected before marking it safe to deploy.

Does BugStack deploy fixes without review?
BugStack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.

Is Rails itself thread-safe?
Rails itself is thread-safe when config.eager_load is true. However, your application code must also be thread-safe, especially class-level mutable state.

Should I use Puma workers or threads?
Use both. Workers provide process isolation for memory safety. Threads within each worker improve concurrency. A common setup is 2-4 workers with 5 threads each.
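A config/puma.rb for the hybrid setup described above might look like the sketch below; the worker and thread counts are illustrative, and should be tuned to your CPU count and workload:

```ruby
# config/puma.rb — hybrid processes-plus-threads setup (values illustrative)
workers 2        # separate processes: memory and crash isolation
threads 5, 5     # min/max threads per worker: in-process concurrency
preload_app!     # load the app once, then fork workers from it

on_worker_boot do
  # per-process resources (like DB connections) must be re-established after fork
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```

With `preload_app!`, any class-level state initialized at boot is shared copy-on-write across workers, which is exactly why that state must be thread-safe within each worker.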