Rails · Ruby

Fix NoMemoryError: failed to allocate memory (NoMemoryError) in Rails

This error occurs when your Ruby process exhausts available memory. Common causes include loading too many records into memory at once, large file processing without streaming, and memory leaks from global caches or retained references. Use find_each for batch processing, stream large files, and monitor memory with tools like derailed_benchmarks.

Reading the Stack Trace

NoMemoryError (failed to allocate memory):
app/services/export_service.rb:12:in `generate_csv'
app/controllers/exports_controller.rb:8:in `create'
activerecord (7.1.3) lib/active_record/relation.rb:288:in `to_a'
activerecord (7.1.3) lib/active_record/relation.rb:860:in `exec_queries'
activesupport (7.1.3) lib/active_support/dependencies.rb:332:in `block in require'

Here's what each line means:

  1. app/services/export_service.rb:12:in `generate_csv' — the line where the allocation failed. This is where to start looking.
  2. app/controllers/exports_controller.rb:8:in `create' — the controller action that invoked the service.
  3. The activerecord frames (to_a, exec_queries) show that Active Record was materializing an entire query result into an array when memory ran out.

Common Causes

1. Loading all records into memory

Using Model.all.to_a loads every record into a Ruby array, consuming memory proportional to table size.

class ExportService
  def generate_csv
    orders = Order.all.to_a  # Loads millions of rows into memory
    CSV.generate do |csv|
      orders.each { |o| csv << [o.id, o.total, o.created_at] }
    end
  end
end

2. String concatenation in loop

Building a large string through concatenation creates many intermediate string objects.

def build_report
  result = ''
  Order.all.each do |order|
    result += order.to_json + "\n"  # Creates new string on each iteration
  end
  result
end
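A sketch of the fix: append to a single mutable buffer with `<<` (which modifies the string in place) instead of `+=` (which allocates a new string every iteration), and batch records with `find_each`. The plain-Ruby version below uses a hypothetical `orders` array of hashes in place of Active Record records so it runs standalone:

```ruby
require 'json'

# Appending with << mutates the buffer in place; += copies the whole
# string each time, so N iterations do O(N^2) copying. In a Rails app
# you would iterate with Order.find_each rather than a plain array.
def build_report(orders)
  result = +''                       # unary + yields a mutable (unfrozen) string
  orders.each do |order|
    result << order.to_json << "\n"  # in-place append, no intermediate strings
  end
  result
end

orders = [{ id: 1, total: 10 }, { id: 2, total: 20 }]  # stand-in data
report = build_report(orders)
```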

3. Unbounded cache growth

A cache without size limits grows indefinitely and consumes all available memory.

CACHE = {}
def fetch_data(key)
  CACHE[key] ||= expensive_computation(key)  # Never evicts entries
end
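One way to bound the cache is a small LRU wrapper. This is a minimal sketch, not a production cache (a Rails app would more likely use `Rails.cache` with an expiry); it relies on the fact that Ruby hashes preserve insertion order, so re-inserting a key on access moves it to the back and the front key is always the least recently used:

```ruby
# A size-bounded cache: once the cap is reached, the least-recently-used
# entry is evicted before a new one is stored.
class BoundedCache
  def initialize(max_size: 1000)
    @max_size = max_size
    @store = {}
  end

  def fetch(key)
    if @store.key?(key)
      return @store[key] = @store.delete(key)  # re-insert to refresh recency
    end
    @store.shift if @store.size >= @max_size   # evict the oldest entry
    @store[key] = yield
  end
end

cache = BoundedCache.new(max_size: 2)
cache.fetch(:a) { 1 }
cache.fetch(:b) { 2 }
```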

The Fix

Use find_each with batch_size to load records in batches of 1000 instead of all at once. Stream the CSV output to an IO object instead of building a giant string in memory.

Before (broken)
class ExportService
  def generate_csv
    orders = Order.all.to_a
    CSV.generate do |csv|
      orders.each { |o| csv << [o.id, o.total, o.created_at] }
    end
  end
end
After (fixed)
class ExportService
  def generate_csv(io = StringIO.new)
    csv = CSV.new(io)
    csv << ['ID', 'Total', 'Created At']
    Order.find_each(batch_size: 1000) do |order|
      csv << [order.id, order.total, order.created_at]
    end
    io.rewind
    io
  end
end
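Because the fixed generate_csv accepts any IO object, callers can stream straight to a file (or an HTTP response body) instead of buffering everything in a StringIO. A minimal plain-Ruby sketch of the same pattern, with an array of rows standing in for Order records:

```ruby
require 'csv'
require 'tempfile'

# Hypothetical rows standing in for Order records (id, total).
rows = [[1, 10.0], [2, 20.0]]

contents = nil
Tempfile.create('orders') do |file|
  csv = CSV.new(file)
  csv << ['ID', 'Total']
  rows.each { |row| csv << row }  # each row hits disk as it is written
  file.rewind
  contents = file.read            # read back only to verify the output
end
```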

Testing the Fix

require 'rails_helper'

RSpec.describe ExportService do
  describe '#generate_csv' do
    it 'generates CSV without loading all records' do
      create_list(:order, 5)
      service = ExportService.new
      result = service.generate_csv
      csv = CSV.parse(result.string, headers: true)
      expect(csv.size).to eq(5)
    end

    it 'includes headers' do
      service = ExportService.new
      result = service.generate_csv
      headers = CSV.parse(result.string).first
      expect(headers).to eq(['ID', 'Total', 'Created At'])
    end
  end
end

Run your tests:

bundle exec rspec spec/services/export_service_spec.rb

Pushing Through CI/CD

git checkout -b fix/rails-memory-bloat
git add app/services/export_service.rb
git commit -m "fix: use find_each and streaming CSV to prevent memory bloat"
git push origin fix/rails-memory-bloat

Your CI config should look something like this:

name: CI
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports: ['5432:5432']
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.3'
          bundler-cache: true
      - run: bin/rails db:setup
      - run: bundle exec rspec

The Full Manual Process: 18 Steps

Here's every step you just went through to fix this one bug:

  1. Notice the error alert or see it in your monitoring tool
  2. Open the error dashboard and read the stack trace
  3. Identify the file and line number from the stack trace
  4. Open your IDE and navigate to the file
  5. Read the surrounding code to understand context
  6. Reproduce the error locally
  7. Identify the root cause
  8. Write the fix
  9. Run the test suite locally
  10. Fix any failing tests
  11. Write new tests covering the edge case
  12. Run the full test suite again
  13. Create a new git branch
  14. Commit and push your changes
  15. Open a pull request
  16. Wait for code review
  17. Merge and deploy to production
  18. Monitor production to confirm the error is resolved

Total time: 30-60 minutes. For one bug.

Or Let bugstack Fix It in Under 2 Minutes

Every step above? bugstack does it automatically.

Step 1: Install the SDK

gem install bugstack

Step 2: Initialize

require 'bugstack'

Bugstack.init(api_key: ENV['BUGSTACK_API_KEY'])

Step 3: There is no step 3.

bugstack handles everything from here:

  1. Captures the stack trace and request context
  2. Pulls the relevant source files from your GitHub repo
  3. Analyzes the error and understands the code context
  4. Generates a minimal, verified fix
  5. Runs your existing test suite
  6. Pushes through your CI/CD pipeline
  7. Deploys to production (or opens a PR for review)

Time from error to fix deployed: Under 2 minutes.

Human involvement: zero.

Try bugstack Free →

No credit card. 5-minute setup. Cancel anytime.

Deploying the Fix (Manual Path)

  1. Replace .all.to_a with find_each batch processing.
  2. Add streaming for large file generation.
  3. Profile memory usage with derailed_benchmarks.
  4. Open a pull request.
  5. Merge and monitor memory usage in production.

Frequently Asked Questions

How does BugStack verify that a fix is safe?

BugStack runs the fix through your existing test suite, generates additional edge-case tests, and validates that no other components are affected before marking it safe to deploy.

Does BugStack push fixes directly to production?

BugStack never pushes directly to production. Every fix goes through a pull request with full CI checks, so your team can review it before merging.

How do I profile memory usage in a Rails app?

Use the memory_profiler gem for detailed allocation reports, or derailed_benchmarks to measure memory per request. ObjectSpace can track object counts.
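A stdlib-only sketch of the ObjectSpace approach: count live objects of a class before and after a suspect block to spot retained allocations. (memory_profiler and derailed_benchmarks give far richer reports; this only shows the raw mechanism. The `leaky` array here is a hypothetical stand-in for code that retains references.)

```ruby
require 'objspace'

GC.disable                               # keep counts deterministic for the demo
before = ObjectSpace.each_object(String).count
leaky  = Array.new(1_000) { 'x' * 100 }  # simulate 1,000 retained strings
after  = ObjectSpace.each_object(String).count
GC.enable

puts after - before                      # at least 1_000 new live Strings
```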

How much memory should a Rails process use?

A typical Rails process uses 200-500 MB. Set a memory limit around 512 MB per worker and use a tool like puma_worker_killer to restart workers that exceed it.