Rails · Ruby

Fix "deadlock; recursive locking (ThreadError)" in Rails

This error occurs when a thread tries to acquire a lock it already holds, creating a deadlock. In Rails, this often happens with class-level mutable state accessed from multiple threads in a Puma multi-threaded server. Use Mutex correctly, avoid recursive locking, and prefer thread-local variables or Concurrent::Map for shared state.
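As a quick illustration of the thread-local option: each thread gets its own slot in `Thread.current`, so no lock is needed and recursive locking is impossible. A minimal sketch (the `request_id` helper is hypothetical, not from the code below):

```ruby
require 'securerandom'

# Thread-local state: each thread reads and writes its own slot,
# so no mutex is required and there is nothing to lock recursively.
def request_id
  Thread.current[:request_id] ||= SecureRandom.uuid
end

a = Thread.new { request_id }.value
b = Thread.new { request_id }.value
# a and b differ: each thread generated and cached its own id
```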

Reading the Stack Trace

ThreadError (deadlock; recursive locking):
app/services/rate_limiter.rb:18:in `lock'
app/services/rate_limiter.rb:18:in `synchronize'
app/services/rate_limiter.rb:22:in `check'
app/services/rate_limiter.rb:10:in `throttle'
app/controllers/api_controller.rb:6:in `check_rate_limit'
activesupport (7.1.3) lib/active_support/callbacks.rb:403:in `block in make_lambda'

Here's what each line means:

  1. app/services/rate_limiter.rb:18:in `lock' / `synchronize' — the thread tries to acquire the mutex a second time; Ruby detects the recursive lock and raises instead of hanging forever.
  2. app/services/rate_limiter.rb:22:in `check' — check wraps its body in MUTEX.synchronize.
  3. app/services/rate_limiter.rb:10:in `throttle' — throttle called check from inside its own MUTEX.synchronize block, so the lock was already held by this thread.
  4. app/controllers/api_controller.rb:6:in `check_rate_limit' — the controller filter that triggered the call.
  5. activesupport (7.1.3) lib/active_support/callbacks.rb:403 — ActiveSupport's callback machinery invoking the before_action.

Common Causes

1. Recursive mutex locking

A synchronized method calls another synchronized method that uses the same mutex.

class RateLimiter
  MUTEX = Mutex.new
  @@counts = {}

  def self.throttle(key)
    MUTEX.synchronize do
      check(key)  # check also tries to synchronize on MUTEX
    end
  end

  def self.check(key)
    MUTEX.synchronize do
      @@counts[key] ||= 0
      @@counts[key] += 1
    end
  end
end
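If you want to keep a lock-based design, Ruby's stdlib Monitor is reentrant: the same thread can acquire it again without deadlocking. A sketch of the class above rewritten with Monitor (using a class-level instance variable instead of @@):

```ruby
require 'monitor'

class RateLimiter
  LOCK = Monitor.new  # reentrant: the owning thread may re-acquire it
  @counts = {}

  def self.throttle(key)
    LOCK.synchronize { check(key) }  # no deadlock even though check locks again
  end

  def self.check(key)
    LOCK.synchronize do
      @counts[key] ||= 0
      @counts[key] += 1
    end
  end
end
```

Monitor trades a little performance for safety against exactly this class of bug; the Concurrent::Map approach shown later avoids explicit locking entirely.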

2. Class variable mutation without synchronization

Multiple threads modify a class variable without any locking, causing race conditions.

class Counter
  @@total = 0

  def self.increment
    @@total += 1  # Not thread-safe
  end
end
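A minimal stdlib fix is to guard every access with a Mutex (Concurrent::AtomicFixnum from concurrent-ruby is a lock-free alternative). Sketch:

```ruby
class Counter
  MUTEX = Mutex.new
  @total = 0  # class-level instance variable; avoids @@ inheritance pitfalls

  def self.increment
    MUTEX.synchronize { @total += 1 }
  end

  def self.total
    MUTEX.synchronize { @total }
  end
end
```

Because each method takes the lock once and never calls another synchronized method, there is no recursive-locking risk.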

3. Shared mutable state in initializer

A constant Hash is mutated at runtime from multiple threads.

SETTINGS = {}

class SettingsLoader
  def self.get(key)
    SETTINGS[key] ||= load_from_db(key)  # Race condition
  end
end
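A thread-safe version of this loader can memoize under a Mutex, with a lock-free fast path for keys that are already cached (sketch; load_from_db is the same class method assumed in the snippet above):

```ruby
class SettingsLoader
  MUTEX = Mutex.new
  SETTINGS = {}

  def self.get(key)
    # Fast path: return an already-cached value without taking the lock.
    SETTINGS.fetch(key) do
      # Slow path: load under the lock; ||= guards against a racing thread
      # that populated the key between the fetch and the synchronize.
      MUTEX.synchronize { SETTINGS[key] ||= load_from_db(key) }
    end
  end
end
```

With concurrent-ruby available, Concurrent::Map#compute_if_absent expresses the same pattern in one call.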

The Fix

Replace the Mutex and class variable with Concurrent::Map from the concurrent-ruby gem. Concurrent::Map provides thread-safe operations without explicit locking, eliminating the deadlock risk while remaining performant.

Before (broken)
class RateLimiter
  MUTEX = Mutex.new
  @@counts = {}

  def self.throttle(key)
    MUTEX.synchronize do
      check(key)
    end
  end

  def self.check(key)
    MUTEX.synchronize do
      @@counts[key] ||= 0
      @@counts[key] += 1
    end
  end
end
After (fixed)
require 'concurrent'

class RateLimiter
  COUNTS = Concurrent::Map.new

  def self.throttle(key)
    count = COUNTS.compute(key) do |val|
      (val || 0) + 1
    end
    count <= rate_limit_threshold
  end

  def self.rate_limit_threshold
    100
  end
end

Testing the Fix

require 'rails_helper'
require 'concurrent'

RSpec.describe RateLimiter do
  before { RateLimiter::COUNTS.clear }

  it 'increments count for a key' do
    5.times { RateLimiter.throttle('user:1') }
    expect(RateLimiter::COUNTS['user:1']).to eq(5)
  end

  it 'is thread-safe' do
    threads = 10.times.map do
      Thread.new { 100.times { RateLimiter.throttle('user:2') } }
    end
    threads.each(&:join)
    expect(RateLimiter::COUNTS['user:2']).to eq(1000)
  end
end

Run your tests:

bundle exec rspec spec/services/rate_limiter_spec.rb

Pushing Through CI/CD

git checkout -b fix/rails-thread-safety
git add app/services/rate_limiter.rb Gemfile
git commit -m "fix: use Concurrent::Map for thread-safe rate limiting"
git push origin fix/rails-thread-safety

Your CI config should look something like this:

name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports: ['5432:5432']
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.3'
          bundler-cache: true
      - run: bin/rails db:setup
      - run: bundle exec rspec

The Full Manual Process: 18 Steps

Here's every step you just went through to fix this one bug:

  1. Notice the error alert or see it in your monitoring tool
  2. Open the error dashboard and read the stack trace
  3. Identify the file and line number from the stack trace
  4. Open your IDE and navigate to the file
  5. Read the surrounding code to understand context
  6. Reproduce the error locally
  7. Identify the root cause
  8. Write the fix
  9. Run the test suite locally
  10. Fix any failing tests
  11. Write new tests covering the edge case
  12. Run the full test suite again
  13. Create a new git branch
  14. Commit and push your changes
  15. Open a pull request
  16. Wait for code review
  17. Merge and deploy to production
  18. Monitor production to confirm the error is resolved

Total time: 30-60 minutes. For one bug.

Or Let bugstack Fix It in Under 2 Minutes

Every step above? bugstack does it automatically.

Step 1: Install the SDK

gem install bugstack

Step 2: Initialize

require 'bugstack'

Bugstack.init(api_key: ENV['BUGSTACK_API_KEY'])

Step 3: There is no step 3.

bugstack handles everything from here:

  1. Captures the stack trace and request context
  2. Pulls the relevant source files from your GitHub repo
  3. Analyzes the error and understands the code context
  4. Generates a minimal, verified fix
  5. Runs your existing test suite
  6. Pushes through your CI/CD pipeline
  7. Deploys to production (or opens a PR for review)

Time from error to fix deployed: Under 2 minutes.

Human involvement: zero.

Try bugstack Free →

No credit card. 5-minute setup. Cancel anytime.

Deploying the Fix (Manual Path)

  1. Replace Mutex-based code with Concurrent::Map.
  2. Add thread-safety specs with multi-threaded assertions.
  3. Run specs under multi-threaded conditions.
  4. Open a pull request.
  5. Merge and monitor for thread-related errors in production.

Frequently Asked Questions

How does BugStack verify a fix is safe?

BugStack runs the fix through your existing test suite, generates additional edge-case tests, and validates that no other components are affected before marking it safe to deploy.

Does BugStack deploy straight to production?

BugStack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.

Is Rails itself thread-safe?

Rails itself is thread-safe, and production's default of config.eager_load = true ensures all code is loaded before requests are served. However, your application code must also be thread-safe, especially class-level mutable state.

Should I use Puma workers or threads?

Use both. Workers provide process isolation for memory safety. Threads within each worker improve concurrency. A common setup is 2-4 workers with 5 threads each.
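That 2-4 workers × 5 threads setup maps to a config/puma.rb fragment like this (illustrative defaults; tune WEB_CONCURRENCY and RAILS_MAX_THREADS for your host):

```ruby
# config/puma.rb (fragment)
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads max_threads, max_threads

# Load the app before forking so workers share memory via copy-on-write.
preload_app!
```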