Ruby on Rails Optimization: Make It Fast & Efficient


From Beginner to Expert – Everything You Need to Know

Beginner Level

Fundamentals

1. Understanding Performance Basics

What is Performance?

Performance in Rails refers to how quickly your application responds to user requests. It encompasses:

  • Page load times
  • Database query speed
  • Server response time
  • User experience quality

Why Performance Matters

Performance directly impacts user satisfaction and business metrics:

  • User Experience: 47% of users expect pages to load in 2 seconds or less
  • Conversion Rates: Every 1-second delay reduces conversions by 7%
  • SEO Rankings: Google considers page speed in search rankings
  • Server Costs: Faster apps require fewer servers

Common Bottlenecks

# 1. N+1 Query Problem
User.all.each { |user| puts user.profile.name }

# 2. Missing Database Indexes
User.where(email: params[:email]) # Without index on email

# 3. No Caching
def expensive_calculation
  User.joins(:orders).group(:id).sum(:amount)
end

# 4. Inefficient Rendering
<% @posts.each do |post| %>
  <%= render partial: 'post', locals: { post: post } %>
<% end %>

Performance Metrics to Track

  • Response Time: Time from request to response
  • Database Queries: Number of queries per request
  • Memory Usage: RAM consumption per request
  • CPU Usage: Server processing time
  • Throughput: Requests per second
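A lightweight way to start collecting several of these metrics is Rails' built-in instrumentation. The sketch below is an illustration (the initializer file name is arbitrary): it subscribes to the process_action.action_controller event and logs total, database, and view time for every request.

# config/initializers/performance_logging.rb
ActiveSupport::Notifications.subscribe("process_action.action_controller") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  payload = event.payload

  # event.duration is in milliseconds; db/view runtimes can be nil (e.g. redirects)
  Rails.logger.info(
    "metrics #{payload[:controller]}##{payload[:action]} " \
    "total=#{event.duration.round(1)}ms " \
    "db=#{payload[:db_runtime].to_f.round(1)}ms " \
    "view=#{payload[:view_runtime].to_f.round(1)}ms"
  )
end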

Best Practices

  • Always use eager loading for associations
  • Add indexes on frequently queried columns
  • Cache expensive calculations
  • Use background jobs for long-running tasks
  • Monitor performance continuously

Real-World Case Study: E-commerce Site

Problem: Homepage loading in 8 seconds with 50+ database queries

Root Cause: N+1 queries when loading product categories and reviews

Solution:

# Before: 50+ queries
@products = Product.all

# After: 4 queries (one for products, one preload per association)
@products = Product.includes(:category, :reviews, :images).all

Result: Page load time reduced to 1.2 seconds, 90% fewer database queries

Database Basics

2. The N+1 Query Problem

Understanding N+1 Queries

The N+1 query problem is the most common performance issue in Rails applications. It occurs when your code makes one query to fetch records, then makes additional queries for each record’s associations.

How N+1 Queries Happen

# BAD: N+1 queries
posts = Post.all # 1 query: SELECT * FROM posts

posts.each do |post|
  puts post.author.name # N queries: SELECT * FROM users WHERE id = ?
end

# Result: 1 + N queries (e.g., 1 + 100 = 101 queries)

How to Fix N+1 Queries

# GOOD: Eager loading with includes
posts = Post.includes(:author).all # 2 queries total

posts.each do |post|
  puts post.author.name # No additional queries
end

# Multiple associations
posts = Post.includes(:author, :comments, :tags).all

# Nested associations
posts = Post.includes(author: :profile, comments: :user).all

Different Eager Loading Methods

# includes - separate queries (recommended)
Post.includes(:author) # 2 queries

# preload - separate queries (same as includes)
Post.preload(:author) # 2 queries

# eager_load - single JOIN query
Post.eager_load(:author) # 1 query with JOIN

# joins - for filtering only
Post.joins(:author).where(authors: { name: 'John' })

Detecting N+1 Queries

# Add to Gemfile
gem 'bullet'

# config/environments/development.rb
config.after_initialize do
  Bullet.enable = true
  Bullet.alert = true
  Bullet.console = true
end

Best Practices

  • Always use includes when accessing associations in loops
  • Use joins only for filtering, not for accessing data
  • Monitor your logs for repeated queries
  • Use the Bullet gem in development
  • Test with realistic data volumes
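On Rails 6.1+, strict loading is another guardrail worth pairing with Bullet. A short sketch: it makes lazy association loads raise instead of silently running an N+1, so regressions fail loudly in tests.

# Opt in per query
posts = Post.strict_loading.all
posts.first.author # raises ActiveRecord::StrictLoadingViolationError

# Or enforce it per association
class Post < ApplicationRecord
  belongs_to :author, strict_loading: true
end

# The fix is ordinary eager loading
posts = Post.strict_loading.includes(:author).all
posts.first.author # loads without error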

Real-World Case Study: Social Media Feed

Problem: User feed loading in 15 seconds with 500+ queries

Root Cause: Loading posts, authors, likes, comments, and tags separately

# Before: 500+ queries
@posts = Post.where(user_id: current_user.following_ids)
@posts.each do |post|
  post.author.name
  post.likes.count
  post.comments.each { |c| c.user.name }
end

# After: 5 queries
@posts = Post.includes(
  :author, :likes, :tags, comments: :user
).where(user_id: current_user.following_ids)

Result: Feed loads in 2 seconds, 99% reduction in database queries

3. Basic Database Indexing

What are Database Indexes?

Database indexes are like a book’s index – they help the database find data quickly without scanning every row. Without indexes, the database must perform a full table scan, which becomes very slow as your data grows.

How Indexes Work

# Without index: Full table scan
User.where(email: 'user@example.com') # Database scans ALL rows (slow)

# With index: Direct lookup
User.where(email: 'user@example.com') # Database uses index to find row instantly

When to Add Indexes

  • Foreign Keys: user_id, post_id, category_id, etc.
  • Search Columns: email, username, title
  • Sort Columns: created_at, updated_at, name
  • Join Columns: Columns used in JOIN conditions
  • Unique Constraints: email, username

Creating Indexes

# Basic index
add_index :users, :email

# Unique index
add_index :users, :email, unique: true

# Composite index (multiple columns)
add_index :orders, [:user_id, :created_at]

# Partial index (only some rows)
add_index :users, :email, where: "email IS NOT NULL"

# Index with custom name
add_index :users, :email, name: "index_users_on_email_for_login"

Index Performance Impact

# Before: 1000ms (full table scan)
User.where(email: 'user@example.com')

# After: 5ms (index lookup)
User.where(email: 'user@example.com') # 200x faster!

Index Trade-offs

  • Pros: Faster reads, better query performance
  • Cons: Slower writes, more disk space
  • Rule of thumb: Index for reads, minimize for writes

Best Practices

  • Add indexes on columns used in WHERE, ORDER BY, and JOIN clauses
  • Use composite indexes for queries that filter on multiple columns
  • Consider the order of columns in composite indexes
  • Monitor index usage and remove unused indexes
  • Be careful with too many indexes on write-heavy tables
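Column order decides which queries a composite index can serve: the index is usable only when the query constrains its leftmost column(s). A quick sketch of the rule:

# Composite index with user_id as the leading column
add_index :orders, [:user_id, :created_at]

# Can use the index (leading column is constrained)
Order.where(user_id: 42)
Order.where(user_id: 42).where("created_at > ?", 1.week.ago)
Order.where(user_id: 42).order(created_at: :desc)

# Cannot use this index efficiently (leading column is skipped)
Order.where("created_at > ?", 1.week.ago)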

Real-World Case Study: E-commerce Product Search

Problem: Product search taking 3-5 seconds with 10,000+ products

Root Cause: No indexes on search columns

# Before: Slow search
Product.where("name ILIKE ?", "%#{query}%")
       .where(category_id: category_id)
       .where(price: min_price..max_price)

# After: Add indexes
add_index :products, :name # note: a leading-wildcard ILIKE needs a pg_trgm (trigram) index to benefit
add_index :products, :category_id
add_index :products, :price
add_index :products, [:category_id, :price, :name]

Result: Search time reduced to 200ms, 15x faster

Caching Introduction

4. Simple Caching

What is Caching?

Caching stores frequently accessed data in memory or fast storage to avoid expensive computations or database queries. It’s one of the most effective ways to improve Rails performance.

Types of Rails Caching

  • Fragment Caching: Cache parts of views
  • Action Caching: Cache entire controller actions (moved out of Rails core into a gem since Rails 4)
  • Page Caching: Cache entire pages (likewise extracted to a gem; effectively deprecated)
  • Low-level Caching: Cache arbitrary data

Fragment Caching

# Basic fragment cache
<% cache @post do %>
  <h1><%= @post.title %></h1>
  <p><%= @post.content %></p>
<% end %>

# Cache key automatically includes post id and updated_at
# Cache key: "posts/123-20231201120000"

Custom Cache Keys

# Custom cache key
<% cache ["v1", @post, current_user] do %>
  <%= render @post %>
<% end %>

# Cache with expiration
<% cache @post, expires_in: 1.hour do %>
  <%= render @post %>
<% end %>

Collection Caching

# Cache collection of objects
<%= render partial: 'post', collection: @posts, cached: true %>

# Or manually
<% @posts.each do |post| %>
  <% cache post do %>
    <%= render post %>
  <% end %>
<% end %>

Low-Level Caching

# Cache expensive calculations
def expensive_calculation
  Rails.cache.fetch("expensive_data_#{@user.id}", expires_in: 1.hour) do
    # Expensive operation here
    calculate_user_stats(@user)
  end
end

# Cache with version
Rails.cache.fetch(["v2", "user_stats", @user.id]) do
  calculate_user_stats(@user)
end

Cache Configuration

# config/environments/production.rb
config.cache_store = :redis_cache_store, {
  url: ENV['REDIS_URL'],
  expires_in: 1.hour
}

# Development (file-based)
config.cache_store = :file_store, "tmp/cache"

Cache Invalidation

# Touch model to invalidate cache
class Comment < ApplicationRecord
  belongs_to :post, touch: true
end

# Manual cache invalidation
Rails.cache.delete("posts/#{@post.id}")
Rails.cache.delete_matched("posts/*")

Best Practices

  • Cache expensive operations and database queries
  • Use meaningful cache keys that include relevant data
  • Set appropriate expiration times
  • Invalidate cache when data changes
  • Monitor cache hit rates
  • Use Redis for production caching
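To act on the last two bullets, you can count hits and misses with Rails' cache instrumentation. A minimal sketch (the CACHE_STATS constant is an illustrative name):

# config/initializers/cache_metrics.rb
CACHE_STATS = Hash.new(0)

ActiveSupport::Notifications.subscribe("cache_read.active_support") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  CACHE_STATS[event.payload[:hit] ? :hits : :misses] += 1
end

# Inspect from the console:
# CACHE_STATS[:hits].to_f / (CACHE_STATS[:hits] + CACHE_STATS[:misses])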

Real-World Case Study: News Website

Problem: Homepage taking 8 seconds to load with complex article rendering

Root Cause: No caching of article content and author information

# Before: No caching
<% @articles.each do |article| %>
  <h2><%= article.title %></h2>
  <p><%= article.author.name %></p>
  <p><%= article.content %></p>
<% end %>

# After: Fragment caching
<% @articles.each do |article| %>
  <% cache article do %>
    <h2><%= article.title %></h2>
    <p><%= article.author.name %></p>
    <p><%= article.content %></p>
  <% end %>
<% end %>

Result: Homepage loads in 1.5 seconds, 80% faster

Performance Monitoring

5. Basic Monitoring Tools

Why Monitor Performance?

Performance monitoring helps you identify bottlenecks, track improvements, and ensure your application stays fast as it grows. Without monitoring, you’re flying blind.

Rails Logs

# Enable detailed logging in development
# config/environments/development.rb
config.log_level = :debug

# View logs in real-time
tail -f log/development.log

# Look for patterns like:
# User Load (1.2ms) SELECT * FROM users WHERE id = 1
# User Load (0.8ms) SELECT * FROM users WHERE id = 2
# User Load (0.9ms) SELECT * FROM users WHERE id = 3
# This indicates an N+1 problem!

Rack Mini Profiler

# Add to Gemfile
gem 'rack-mini-profiler'
gem 'memory_profiler'

# Shows performance data in the browser:
# visit any page and see the mini profiler widget

Bullet Gem for N+1 Detection

# Add to Gemfile
gem 'bullet'

# config/environments/development.rb
config.after_initialize do
  Bullet.enable = true
  Bullet.alert = true
  Bullet.console = true
  Bullet.rails_logger = true
end

Browser Developer Tools

  • Network Tab: See request/response times, payload sizes
  • Performance Tab: Analyze page load, JavaScript execution
  • Console: Check for JavaScript errors and warnings
  • Lighthouse: Comprehensive performance audit

Production Monitoring Tools

# New Relic
gem 'newrelic_rpm'

# Skylight
gem 'skylight'

# Scout
gem 'scout_apm'

Custom Performance Metrics

# Track custom metrics
def track_performance
  start_time = Time.current

  # Your code here
  result = expensive_operation

  duration_ms = ((Time.current - start_time) * 1000).round
  Rails.logger.info "Expensive operation took #{duration_ms}ms"
  result
end

# Using ActiveSupport::Notifications
ActiveSupport::Notifications.instrument("custom.operation") do
  expensive_operation
end

Database Query Analysis

# Analyze query performance
User.where(email: 'user@example.com').explain

# Check the query execution plan:
# look for "Seq Scan" (bad) vs "Index Scan" (good)

Memory Profiling

# Monitor memory usage
puts GC.stat

# Force garbage collection
GC.start

# Memory profiling
require 'memory_profiler'

report = MemoryProfiler.report do
  # Your code here
end

report.pretty_print

Best Practices

  • Monitor performance continuously, not just when there are problems
  • Set up alerts for performance degradation
  • Use multiple tools for different perspectives
  • Profile with realistic data volumes
  • Track performance trends over time
  • Monitor both development and production environments
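One cheap, always-on check: subscribe to sql.active_record and flag slow statements in the log. A sketch (the 100ms threshold is an arbitrary example; tune it for your app):

# config/initializers/slow_query_logging.rb
ActiveSupport::Notifications.subscribe("sql.active_record") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  if event.duration > 100 # milliseconds
    Rails.logger.warn "SLOW QUERY (#{event.duration.round(1)}ms): #{event.payload[:sql]}"
  end
end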

Real-World Case Study: SaaS Dashboard

Problem: Dashboard loading slowly but no obvious bottlenecks

Monitoring Setup: Implemented comprehensive monitoring

# Added monitoring tools
gem 'newrelic_rpm'
gem 'bullet'
gem 'rack-mini-profiler'

# Custom performance tracking
def dashboard_data
  ActiveSupport::Notifications.instrument("dashboard.load") do
    @users = User.includes(:profile, :orders).limit(100)
    @stats = calculate_stats
  end
end

Result: Identified 3 N+1 queries and 2 slow database queries, reduced load time by 60%

6. Basic Query Optimization

What is Query Optimization?

Query optimization involves writing database queries that are efficient and fast. This includes selecting only the data you need, using appropriate methods, and understanding how Rails translates your code into SQL.

Select Only What You Need

# BAD: Selecting all columns
users = User.all # SELECT * FROM users

# GOOD: Selecting specific columns
users = User.select(:id, :name, :email).all # SELECT id, name, email FROM users

# When you only need IDs
user_ids = User.pluck(:id) # SELECT id FROM users

# When you only need one value
user_count = User.count # SELECT COUNT(*) FROM users

Use Appropriate Query Methods

# Finding single records
user = User.find(123)        # Raises exception if not found
user = User.find_by(id: 123)  # Returns nil if not found
user = User.find_by!(id: 123) # Raises exception if not found

# Finding multiple records
users = User.where(active: true)
users = User.where("created_at > ?", 1.week.ago)

# Limiting results
recent_users = User.order(created_at: :desc).limit(10)

# Aggregations
total_sales = Order.sum(:amount)
avg_order = Order.average(:amount)
max_order = Order.maximum(:amount)

Efficient Data Processing

# BAD: Loading all records into memory
User.all.each do |user|
  process_user(user)
end

# GOOD: Processing in batches
User.find_each(batch_size: 1000) do |user|
  process_user(user)
end

# For large datasets, use find_in_batches to work on whole batches
User.find_in_batches(batch_size: 1000) do |batch|
  batch.each { |user| process_user(user) }
end

Understanding Query Execution

# See the actual SQL being generated
users = User.where(active: true)
puts users.to_sql
# Output: SELECT "users".* FROM "users" WHERE "users"."active" = true

# Analyze query performance
users = User.where(active: true)
puts users.explain # Shows execution plan

Common Query Patterns

# Finding records with conditions
active_users = User.where(active: true)
recent_posts = Post.where("created_at > ?", 1.day.ago)

# Complex conditions
users = User.where("age >= ? AND active = ?", 18, true)

# Using OR conditions
users = User.where("role = ? OR role = ?", 'admin', 'moderator')

# Ordering results
users = User.order(:name)
users = User.order(created_at: :desc)
users = User.order(:name, created_at: :desc)

Query Performance Tips

  • Use select to limit columns when you don’t need all data
  • Use pluck when you only need specific values
  • Use find_each for processing large datasets
  • Use count instead of length for counting
  • Use exists? instead of any? for checking existence
  • Use explain to understand query performance

Real-World Case Study: User Management System

Problem: User list page taking 5 seconds to load with 10,000+ users

Root Cause: Loading all user data and processing inefficiently

# Before: Inefficient queries
@users = User.all            # Loads all columns
@user_count = @users.length  # Counts in Ruby

# After: Optimized queries
@users = User.select(:id, :name, :email, :created_at)
             .where(active: true)
             .order(:name)
             .limit(50)
@user_count = User.where(active: true).count # Count in database

Result: Page loads in 0.5 seconds, 90% faster

Additional Benefits: Reduced memory usage, better user experience

7. Common Performance Anti-patterns

What are Anti-patterns?

Performance anti-patterns are common mistakes that developers make that hurt application performance. Learning to recognize and avoid these patterns is crucial for building fast Rails applications.

Database Anti-patterns

# 1. Loading unnecessary data
# BAD: Loading all columns when you only need a few
users = User.all
users.each { |user| puts user.name }

# GOOD: Select only what you need
users = User.select(:id, :name).all
users.each { |user| puts user.name }

# 2. Using length instead of count
# BAD: Loads all records into memory
post_count = Post.where(active: true).length

# GOOD: Counts in database
post_count = Post.where(active: true).count

# 3. Using any? instead of exists?
# BAD: Loads records to check existence
# (on unloaded relations, modern Rails delegates any? to exists?;
# the difference matters for already-loaded collections)
if User.where(admin: true).any?
  puts "Admin exists"
end

# GOOD: Checks existence efficiently
if User.where(admin: true).exists?
  puts "Admin exists"
end

View Anti-patterns

# 1. Complex logic in views
# BAD: Business logic in view
<% @users.each do |user| %>
  <% if user.orders.sum(:amount) > 1000 %>
    <span class="vip">VIP Customer</span>
  <% end %>
<% end %>

# GOOD: Move logic to model or controller
<% @users.each do |user| %>
  <% if user.vip_customer? %>
    <span class="vip">VIP Customer</span>
  <% end %>
<% end %>

# 2. N+1 queries in views
# BAD: N+1 in view
<% @posts.each do |post| %>
  <h2><%= post.title %></h2>
  <p>By: <%= post.author.name %></p>
<% end %>

# GOOD: Eager load in controller
@posts = Post.includes(:author).all

<% @posts.each do |post| %>
  <h2><%= post.title %></h2>
  <p>By: <%= post.author.name %></p>
<% end %>

Controller Anti-patterns

# 1. Loading too much data
# BAD: Loading all users
def index
  @users = User.all
end

# GOOD: Pagination and filtering
def index
  @users = User.where(active: true)
               .order(:name)
               .page(params[:page])
               .per(25)
end

# 2. Complex queries in controller
# BAD: Complex logic in controller
def dashboard
  @stats = {
    total_users: User.count,
    active_users: User.where(active: true).count,
    total_orders: Order.count,
    revenue: Order.sum(:amount)
  }
end

# GOOD: Move to model or service
def dashboard
  @stats = DashboardService.get_stats
end

Model Anti-patterns

# 1. Callbacks that trigger queries
# BAD: Expensive callback
class Post < ApplicationRecord
  after_save :update_user_post_count

  private

  def update_user_post_count
    user.update(post_count: user.posts.count)
  end
end

# GOOD: Use counter cache
class Post < ApplicationRecord
  belongs_to :user, counter_cache: true
end

# 2. Complex validations
# BAD: Expensive hand-rolled validation
class User < ApplicationRecord
  validate :unique_email_across_all_users

  private

  def unique_email_across_all_users
    if User.where(email: email).exists?
      errors.add(:email, "already taken")
    end
  end
end

# GOOD: Use the built-in uniqueness validation
# (back it with a unique index for a real database-level guarantee)
class User < ApplicationRecord
  validates :email, uniqueness: true
end

General Anti-patterns

# 1. Not using background jobs
# BAD: Long-running task in request
def create_order
  order = Order.create!(order_params)
  send_email_notification(order) # Takes 5 seconds
  redirect_to order
end

# GOOD: Use background job
def create_order
  order = Order.create!(order_params)
  SendOrderEmailJob.perform_later(order.id)
  redirect_to order
end

# 2. Not caching expensive operations
# BAD: Recalculating every time
def expensive_calculation
  User.joins(:orders).group(:id).sum(:amount)
end

# GOOD: Cache the result
def expensive_calculation
  Rails.cache.fetch("user_revenue_stats", expires_in: 1.hour) do
    User.joins(:orders).group(:id).sum(:amount)
  end
end

How to Avoid Anti-patterns

  • Use monitoring tools: Bullet, rack-mini-profiler, New Relic
  • Code reviews: Have performance-focused code reviews
  • Testing: Write performance tests for critical paths
  • Documentation: Document performance requirements
  • Training: Educate team on performance best practices
  • Automation: Use tools to catch anti-patterns automatically
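For the automation bullet, one concrete option is making Bullet fail the test suite whenever an N+1 sneaks in. A short sketch:

# config/environments/test.rb
config.after_initialize do
  Bullet.enable = true
  Bullet.raise = true # any N+1 detected during tests raises and fails the build
end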

Real-World Case Study: E-commerce Platform

Problem: Product search page taking 15+ seconds to load

Anti-patterns Found:

# 1. Loading all products without pagination
@products = Product.all # 50,000+ records

# 2. N+1 queries for product details
@products.each do |product|
  product.category.name
  product.reviews.count
end

# 3. Complex calculations in view
<% @products.each do |product| %>
  <% if product.orders.sum(:amount) > 10000 %>
    <span>Best Seller</span>
  <% end %>
<% end %>

Solutions Applied:

# 1. Added pagination
@products = Product.includes(:category, :reviews)
                   .where(active: true)
                   .page(params[:page])
                   .per(20)

# 2. Added counter cache for reviews
class Review < ApplicationRecord
  belongs_to :product, counter_cache: true
end

# 3. Moved logic to model
class Product < ApplicationRecord
  def best_seller?
    total_revenue > 10000
  end

  def total_revenue
    Rails.cache.fetch("product_revenue_#{id}", expires_in: 1.hour) do
      orders.sum(:amount)
    end
  end
end

Result: Page load time reduced from 15 seconds to 1.2 seconds, 92% improvement

8. Performance Checklist

Beginner Performance Checklist

Use this checklist to ensure your Rails application follows performance best practices. It covers six areas; work through each as you optimize:

  • Database Optimization
  • Caching Implementation
  • Monitoring Setup
  • Code Quality
  • Performance Testing
  • Production Readiness

Performance Metrics to Track

  • Response Time: Target < 200ms for API calls, < 2s for page loads
  • Database Queries: Minimize queries per request
  • Memory Usage: Monitor for memory leaks
  • Cache Hit Rate: Aim for > 80% cache hit rate
  • Error Rate: Keep < 1% error rate
  • Throughput: Requests per second your app can handle
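A small integration test can enforce budgets like these before they regress. A sketch (the route and threshold are examples, not part of the original checklist):

require "test_helper"
require "benchmark"

class HomepagePerformanceTest < ActionDispatch::IntegrationTest
  test "homepage responds within the 2-second budget" do
    elapsed = Benchmark.realtime { get root_path }

    assert_response :success
    assert elapsed < 2.0, "Homepage took #{(elapsed * 1000).round}ms"
  end
end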

Next Steps

Once you’ve completed this checklist, you’re ready to move on to the Intermediate level topics:

  • Advanced eager loading techniques
  • Counter caches and database denormalization
  • Russian Doll caching strategies
  • Background job optimization
  • Asset optimization and CDN setup


9. Error Handling & Performance

Why Error Handling Affects Performance

Poor error handling can significantly impact application performance through exception overhead, memory leaks, and cascading failures. Efficient error handling is crucial for maintaining fast, reliable applications.

Common Performance Issues with Error Handling

# 1. Exception overhead
# BAD: Exceptions in hot paths
def find_user(id)
  User.find(id) # Raises ActiveRecord::RecordNotFound
rescue ActiveRecord::RecordNotFound
  nil
end

# GOOD: Use find_by for expected missing records
def find_user(id)
  User.find_by(id: id)
end

# 2. Memory leaks from error logging
# BAD: Logging large objects
def risky_operation
  # ...
rescue => e
  Rails.logger.error "Error: #{e.message}, Object: #{large_object.inspect}"
end

# GOOD: Log only essential information
def risky_operation
  # ...
rescue => e
  Rails.logger.error "Error: #{e.message}, Class: #{e.class}"
end

Efficient Error Handling Patterns

# 1. Use appropriate methods for expected scenarios
# For optional records
user = User.find_by(email: email) # Returns nil if not found

# For required records with custom error
user = User.find_by!(email: email) # Raises if not found

# 2. Batch error handling
# BAD: Individual exception handling
results = []
ids.each do |id|
  begin
    results << User.find(id)
  rescue ActiveRecord::RecordNotFound
    # Skip
  end
end

# GOOD: Batch processing
users = User.where(id: ids)
results = users.to_a # No exceptions for missing records

# 3. Use safe navigation operator
# BAD: Potential nil errors
if user && user.profile && user.profile.name
  puts user.profile.name
end

# GOOD: Safe navigation
puts user&.profile&.name

Error Handling in Controllers

# 1. Use rescue_from for common exceptions
class ApplicationController < ActionController::Base
  rescue_from ActiveRecord::RecordNotFound, with: :not_found
  rescue_from ActiveRecord::RecordInvalid, with: :unprocessable_entity

  private

  def not_found
    render json: { error: 'Resource not found' }, status: :not_found
  end

  def unprocessable_entity(exception)
    render json: { error: exception.record.errors }, status: :unprocessable_entity
  end
end

# 2. Efficient error responses
def show
  @user = User.find(params[:id])
  render json: @user
rescue ActiveRecord::RecordNotFound
  render json: { error: 'User not found' }, status: 404
end

Background Job Error Handling

# 1. Use retry mechanisms
class ProcessOrderJob < ApplicationJob
  retry_on StandardError, wait: 5.seconds, attempts: 3

  def perform(order_id)
    order = Order.find(order_id)
    process_order(order)
  end
end

# 2. Handle specific exceptions
class SendEmailJob < ApplicationJob
  retry_on Net::SMTPError, wait: 10.seconds, attempts: 5
  discard_on ActiveRecord::RecordNotFound

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome(user).deliver_now
  end
end

Database Error Handling

# 1. Handle connection issues
def safe_database_operation(sql)
  ActiveRecord::Base.connection.execute(sql)
rescue ActiveRecord::StatementInvalid => e
  Rails.logger.error "Database error: #{e.message}"
  nil
rescue ActiveRecord::ConnectionTimeoutError => e
  Rails.logger.error "Connection timeout: #{e.message}"
  nil
end

# 2. Use transactions efficiently
def create_order_with_items(order_params, items_params)
  ActiveRecord::Base.transaction do
    order = Order.create!(order_params)
    items_params.each do |item_params|
      order.items.create!(item_params)
    end
    order
  end
rescue ActiveRecord::RecordInvalid => e
  Rails.logger.error "Order creation failed: #{e.message}"
  nil
end

Performance Monitoring for Errors

# 1. Track error rates
def track_error_rate
  ActiveSupport::Notifications.instrument("error.occurred") do
    # Your code here
  end
end

# 2. Custom error tracking
class ErrorTracker
  def self.track(exception, context = {})
    Rails.logger.error "Error: #{exception.class} - #{exception.message}"

    # Send to monitoring service
    Sentry.capture_exception(exception, extra: context) if defined?(Sentry)
  end
end

Best Practices

  • Avoid exceptions in hot paths: Use appropriate methods (find_by vs find)
  • Handle errors at the right level: Don’t catch exceptions you can’t handle
  • Use background jobs for error-prone operations: Email sending, external API calls
  • Implement retry mechanisms: For transient failures
  • Monitor error rates: Track performance impact of errors
  • Log efficiently: Don’t log large objects or sensitive data
  • Use circuit breakers: For external service calls
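For the circuit-breaker bullet, here is a deliberately minimal sketch built on Rails.cache (the class and key names are assumptions; production apps usually reach for a gem such as stoplight or circuitbox):

class CircuitBreaker
  class OpenError < StandardError; end

  FAILURE_THRESHOLD = 5
  COOL_OFF = 60 # seconds

  def initialize(service_name)
    @key = "circuit:#{service_name}"
  end

  def call
    raise OpenError, "#{@key} is open" if open?

    begin
      result = yield
      Rails.cache.delete("#{@key}:failures") # reset on success
      result
    rescue => e
      record_failure
      raise e
    end
  end

  private

  def open?
    Rails.cache.read("#{@key}:open").present?
  end

  def record_failure
    failures = (Rails.cache.read("#{@key}:failures") || 0) + 1
    Rails.cache.write("#{@key}:failures", failures, expires_in: COOL_OFF)
    Rails.cache.write("#{@key}:open", true, expires_in: COOL_OFF) if failures >= FAILURE_THRESHOLD
  end
end

# Usage: fail fast instead of hanging on a struggling payment gateway
CircuitBreaker.new("payment_gateway").call { PaymentProcessor.charge(order.amount) }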

Real-World Case Study: E-commerce Checkout

Problem: Checkout process failing 15% of the time due to poor error handling

Root Cause: Exceptions in payment processing causing timeouts

# Before: Poor error handling
def process_payment(order)
  payment = PaymentProcessor.charge(order.amount)
  order.update!(status: 'paid')
  send_confirmation_email(order)
rescue => e
  Rails.logger.error "Payment failed: #{e.message}"
  raise # Re-raises, causing timeout
end

# After: Efficient error handling
def process_payment(order)
  payment = PaymentProcessor.charge(order.amount)
  order.update!(status: 'paid')

  # Move to background job
  SendConfirmationEmailJob.perform_later(order.id)

  { success: true, payment_id: payment.id }
rescue PaymentProcessor::InsufficientFunds => e
  order.update!(status: 'failed', error_message: e.message)
  { success: false, error: 'Insufficient funds' }
rescue PaymentProcessor::NetworkError => e
  # Retry in background
  ProcessPaymentJob.set(wait: 30.seconds).perform_later(order.id)
  { success: false, error: 'Payment processing, please wait' }
rescue => e
  ErrorTracker.track(e, order_id: order.id)
  { success: false, error: 'Payment failed' }
end

Result: Checkout success rate improved from 85% to 98%, average response time reduced by 40%

Intermediate Level

Query Optimization

10. Advanced Eager Loading

Eager Loading Types

# includes - separate queries (recommended)
Post.includes(:author, :comments)

# preload - separate queries (same as includes)
Post.preload(:author, :comments)

# eager_load - single JOIN query
Post.eager_load(:author, :comments)

# joins - for filtering only
Post.joins(:author).where(authors: { name: 'John' })

11. Counter Caches

What are Counter Caches?

Counter caches store the count of associated records directly in the parent model, eliminating the need for COUNT queries. This dramatically improves performance when you frequently need to display counts of associated records.

How Counter Caches Work

# Without counter cache: N+1 COUNT queries
@posts = Post.all
@posts.each do |post|
  puts "Post #{post.title} has #{post.comments.count} comments"
end
# Results in 1 query for posts + N queries for comment counts

# With counter cache: no additional queries
@posts = Post.all
@posts.each do |post|
  puts "Post #{post.title} has #{post.comments_count} comments"
end
# Only 1 query total!

Setting Up Counter Caches

# 1. Add counter cache column to parent table
class AddCommentsCountToPosts < ActiveRecord::Migration[7.0]
  def change
    add_column :posts, :comments_count, :integer, default: 0, null: false
    add_index :posts, :comments_count
  end
end

# 2. Update the child model
class Comment < ApplicationRecord
  belongs_to :post, counter_cache: true
end

# 3. Populate existing counts (run once)
Post.find_each do |post|
  Post.reset_counters(post.id, :comments)
end

Multiple Counter Caches

# User with multiple counter caches
class User < ApplicationRecord
  has_many :posts
  has_many :comments
  has_many :likes
end

class Post < ApplicationRecord
  belongs_to :user, counter_cache: true
end

class Comment < ApplicationRecord
  belongs_to :user, counter_cache: true
end

class Like < ApplicationRecord
  belongs_to :user, counter_cache: true
end

# Migration for multiple counters
class AddCountersToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :posts_count, :integer, default: 0, null: false
    add_column :users, :comments_count, :integer, default: 0, null: false
    add_column :users, :likes_count, :integer, default: 0, null: false

    add_index :users, :posts_count
    add_index :users, :comments_count
    add_index :users, :likes_count
  end
end

Performance Comparison

# Performance test results

# Without counter cache: 1000 posts = 1001 queries
Benchmark.measure do
  Post.all.each { |post| post.comments.count }
end
# => 2.5 seconds, 1001 queries

# With counter cache: 1000 posts = 1 query
Benchmark.measure do
  Post.all.each { |post| post.comments_count }
end
# => 0.1 seconds, 1 query (25x faster!)

When to Use Counter Caches

  • Use when: You frequently display counts of associated records
  • Use when: The counts are used in sorting or filtering
  • Use when: You have many parent records with many children
  • Avoid when: The counts are rarely used
  • Avoid when: You need real-time accuracy (use background jobs instead)

Counter Cache Best Practices

  • Always add indexes on counter cache columns for better performance
  • Use reset_counters to fix corrupted counts
  • Consider using background jobs for high-frequency updates
  • Monitor counter cache accuracy in production
  • Use conditional counter caches for complex scenarios
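Putting the reset_counters bullet into practice, a small maintenance task (the task name is hypothetical) can repair counts that drift, for example after bulk imports that bypass callbacks:

# lib/tasks/counters.rake
namespace :counters do
  desc "Recalculate comments_count for every post"
  task fix: :environment do
    Post.find_each do |post|
      Post.reset_counters(post.id, :comments)
    end
  end
end

# Run with: bundle exec rake counters:fix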

Conditional Counter Caches

# Only count published comments. Rails' built-in counter_cache cannot be
# made conditional, so maintain the count yourself with callbacks:
class Comment < ApplicationRecord
  belongs_to :post

  after_save :refresh_published_count
  after_destroy :refresh_published_count

  private

  def refresh_published_count
    post.update_column(
      :published_comments_count,
      post.comments.where(published: true).count
    )
  end
end

# Migration for the conditional counter
add_column :posts, :published_comments_count, :integer, default: 0, null: false

Real-World Case Study: Social Media Platform

Problem: User profile pages taking 8+ seconds to load with 100,000+ users

Root Cause: Counting posts, comments, likes, and followers for each user

# Before: 80+ extra COUNT queries per page load
@users = User.includes(:profile).limit(20)
@users.each do |user|
  user.posts.count     # 20 queries
  user.comments.count  # 20 queries
  user.likes.count     # 20 queries
  user.followers.count # 20 queries
end

# After: 2 queries total (users + profiles)
@users = User.includes(:profile).limit(20)
@users.each do |user|
  user.posts_count     # No additional queries
  user.comments_count  # No additional queries
  user.likes_count     # No additional queries
  user.followers_count # No additional queries
end

Result: Profile pages load in 0.8 seconds, 99.9% reduction in database queries

Additional Benefits: Reduced server load, improved user experience, better scalability

Advanced Caching

12. Russian Doll Caching

What is Russian Doll Caching?

Russian Doll Caching is a nested caching strategy where you cache both parent and child fragments. When a child record is updated (with touch: true on the association), its own cache and its parent's cache are invalidated, while sibling fragments stay intact and are reused when the parent re-renders. This yields high cache efficiency with largely automatic invalidation.

How Russian Doll Caching Works

# Basic Russian Doll structure
<% cache ["v1", @post] do %>
  <h1><%= @post.title %></h1>
  <p><%= @post.content %></p>

  <% @post.comments.each do |comment| %>
    <% cache comment do %>
      <div class="comment">
        <p><%= comment.content %></p>
        <small><%= comment.author.name %></small>
      </div>
    <% end %>
  <% end %>
<% end %>

# Cache keys generated:
# Parent: "posts/123-20231201120000"
# Child: "comments/456-20231201130000"

Cache Key Structure

# Automatic cache key generation
# Rails uses: model_name/id-updated_at

# Custom cache keys
<% cache ["v2", @post, current_user] do %>
  <%= render @post %>
<% end %>
# Cache key: "v2/posts/123-20231201120000/users/789-20231201110000"

# Using cache_key_with_version
<% cache @post.cache_key_with_version do %>
  <%= render @post %>
<% end %>

Nested Fragment Caching

# Complex nested structure
<% cache ["v1", @user] do %>
  <h1><%= @user.name %></h1>

  <% @user.posts.each do |post| %>
    <% cache post do %>
      <h2><%= post.title %></h2>
      <p><%= post.content %></p>

      <% post.comments.each do |comment| %>
        <% cache comment do %>
          <div class="comment">
            <p><%= comment.content %></p>
            <small><%= comment.author.name %></small>
          </div>
        <% end %>
      <% end %>
    <% end %>
  <% end %>
<% end %>

Cache Invalidation Strategy

# Automatic invalidation with touch: true
class Comment < ApplicationRecord
  belongs_to :post, touch: true
end

# When a comment is updated:
# 1. The comment's cache is invalidated
# 2. The post's cache is invalidated (due to touch: true)
# 3. The user's cache remains intact

# Manual cache invalidation
def update_comment
  @comment.update!(comment_params)
  Rails.cache.delete_matched("comments/#{@comment.id}*")
end

Performance Benefits

# Performance comparison
# Without caching: 500ms per page load
# With Russian Doll: 50ms first load, 5ms subsequent loads

# Cache hit rates
# - Parent fragments: 95% hit rate
# - Child fragments: 85% hit rate
# - Overall: 90% cache efficiency

Advanced Russian Doll Patterns

# Conditional caching
<% cache ["v1", @post] do %>
  <h1><%= @post.title %></h1>

  <% if @post.comments.any? %>
    <% cache ["comments", @post] do %>
      <% @post.comments.each do |comment| %>
        <% cache comment do %>
          <%= render comment %>
        <% end %>
      <% end %>
    <% end %>
  <% end %>
<% end %>

# Cache with expiration
<% cache @post, expires_in: 1.hour do %>
  <%= render @post %>
<% end %>

Cache Key Optimization

# Custom cache key methods
class Post < ApplicationRecord
  def cache_key
    "posts/#{id}-#{updated_at.to_i}-#{comments_count}"
  end

  def cache_version
    "#{updated_at.to_i}-#{comments_count}"
  end
end

# Using custom cache keys
<% cache @post.cache_key do %>
  <%= render @post %>
<% end %>

Russian Doll Best Practices

  • Use meaningful cache keys that include relevant data
  • Implement touch: true for proper cache invalidation
  • Keep cache fragments small and focused
  • Monitor cache hit rates and adjust strategies
  • Use conditional caching for dynamic content
  • Consider cache expiration for frequently changing data
  • Test cache invalidation thoroughly

Cache Monitoring

# Monitor cache performance (Redis cache store)
def cache_stats
  stats = Rails.cache.redis.info

  {
    hit_rate: stats['keyspace_hits'].to_f /
              (stats['keyspace_hits'].to_f + stats['keyspace_misses'].to_f),
    memory_usage: stats['used_memory_human'],
    total_keys: stats['db0']
  }
end

# Cache debugging
Rails.cache.logger = Rails.logger

Real-World Case Study: E-commerce Product Catalog

Problem: Product catalog pages taking 15+ seconds to load with complex nested data

Root Cause: No caching of product details, reviews, and related products

# Before: No caching
<% @categories.each do |category| %>
  <h2><%= category.name %></h2>
  <% category.products.each do |product| %>
    <h3><%= product.name %></h3>
    <p><%= product.description %></p>
    <% product.reviews.each do |review| %>
      <p><%= review.content %></p>
    <% end %>
  <% end %>
<% end %>

# After: Russian Doll caching
<% cache ["v1", @categories] do %>
  <% @categories.each do |category| %>
    <% cache category do %>
      <h2><%= category.name %></h2>
      <% category.products.each do |product| %>
        <% cache product do %>
          <h3><%= product.name %></h3>
          <p><%= product.description %></p>
          <% product.reviews.each do |review| %>
            <% cache review do %>
              <p><%= review.content %></p>
            <% end %>
          <% end %>
        <% end %>
      <% end %>
    <% end %>
  <% end %>
<% end %>

Result: Catalog pages load in 0.8 seconds, 95% cache hit rate

Additional Benefits: Automatic cache invalidation when products are updated, reduced database load, improved user experience

13. Redis Caching

What is Redis Caching?

Redis is an in-memory data structure store that serves as a high-performance caching layer for Rails applications. It provides sub-millisecond response times and supports various data structures, making it ideal for caching complex data and session storage.

Redis Setup and Configuration

# Add the Redis gem to the Gemfile (Rails 5.2+ ships with a built-in
# Redis cache store, so the redis-rails gem is no longer needed)
gem 'redis'

# config/environments/production.rb
config.cache_store = :redis_cache_store, {
  url: ENV['REDIS_URL'],
  expires_in: 1.hour,
  compress: true,
  compress_threshold: 1.kilobyte
}

Low-Level Caching with Redis

# Basic caching
def expensive_calculation
  Rails.cache.fetch("expensive_data_#{@user.id}", expires_in: 1.hour) do
    # Expensive operation here
    calculate_user_stats(@user)
  end
end

# Cache with versioning
Rails.cache.fetch(["v2", "user_stats", @user.id]) do
  calculate_user_stats(@user)
end

Redis Best Practices

  • Use meaningful key names with consistent naming conventions
  • Set appropriate TTL (Time To Live) for cached data
  • Implement cache warming for critical data
  • Monitor memory usage and implement eviction policies
  • Use compression for large objects
  • Implement circuit breakers for Redis failures
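For the cache-warming bullet, a scheduled job can precompute hot keys off-peak so users rarely hit a cold cache. A sketch (the job name, the 100-user cutoff, and the aggregate inside are illustrative assumptions):

class CacheWarmingJob < ApplicationJob
  queue_as :low_priority

  def perform
    # Warm stats for the most recently active users
    User.where(active: true).order(last_login_at: :desc).limit(100).each do |user|
      Rails.cache.fetch("user_stats_#{user.id}", expires_in: 12.hours) do
        user.orders.group(:status).count # stand-in for an expensive aggregate
      end
    end
  end
end

# Schedule it off-peak, e.g. via sidekiq-cron or a cron-driven rails runner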

Real-World Case Study: Social Media Analytics Dashboard

Problem: Analytics dashboard taking 30+ seconds to load with complex aggregations

Root Cause: No caching of expensive analytics calculations

# Before: Expensive calculations every time
def user_analytics(user_id)
  user = User.find(user_id)

  {
    total_posts: user.posts.count,
    total_likes: user.posts.joins(:likes).count,
    engagement_rate: calculate_engagement_rate(user)
  }
end

# After: Redis caching
def cached_user_analytics(user_id)
  cache_key = "analytics_user_#{user_id}_#{Date.current}"

  Rails.cache.fetch(cache_key, expires_in: 1.hour) do
    user = User.find(user_id)
    {
      total_posts: user.posts_count,
      total_likes: user.total_likes_count,
      engagement_rate: calculate_engagement_rate(user)
    }
  end
end

Result: Dashboard loads in 2 seconds, 95% cache hit rate

Background Jobs

14. ActiveJob Basics

What is ActiveJob?

ActiveJob is Rails’ framework for declaring jobs and making them run on a variety of queuing backends. It provides a unified interface for background job processing, allowing you to offload time-consuming tasks from the main request cycle.

Creating Background Jobs

class EmailJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome(user).deliver_now
  end
end

# Enqueue job
EmailJob.perform_later(user.id)

# Job with parameters
class ProcessDataJob < ApplicationJob
  queue_as :data_processing

  def perform(data_id, options = {})
    data = Data.find(data_id)
    process_data(data, options)
  end
end

Queue Adapters

# Sidekiq (recommended for production)
# Gemfile
gem 'sidekiq'

# config/application.rb
config.active_job.queue_adapter = :sidekiq

# Delayed Job
gem 'delayed_job_active_record'
config.active_job.queue_adapter = :delayed_job

# Resque
gem 'resque'
config.active_job.queue_adapter = :resque

# Async (for development)
config.active_job.queue_adapter = :async

Job Prioritization and Queues

# Different queue priorities
class HighPriorityJob < ApplicationJob
  queue_as :urgent
end

class LowPriorityJob < ApplicationJob
  queue_as :background
end

# Enqueue with delay, priority, or queue overrides
EmailJob.set(wait: 1.hour).perform_later(user.id)
EmailJob.set(priority: 10).perform_later(user.id)
EmailJob.set(queue: :urgent).perform_later(user.id)

Error Handling and Retries

class RobustJob < ApplicationJob
  queue_as :default

  # After the final failed attempt, the block receives the job and the error
  retry_on StandardError, wait: 5.seconds, attempts: 3 do |job, error|
    Rails.logger.error "Job failed after retries: #{error.message}"
    notify_admin(error)
  end

  def perform(user_id)
    user = User.find(user_id)
    # Job logic here
    process_user_data(user)
  rescue ActiveRecord::RecordNotFound
    # Log error but don't retry
    Rails.logger.error "User #{user_id} not found"
  end
end

Job Performance Optimization

# Batch processing
class BatchProcessJob < ApplicationJob
  queue_as :batch_processing

  def perform(user_ids)
    User.where(id: user_ids).find_each(batch_size: 100) do |user|
      process_user(user)
    end
  end
end

# Chaining jobs
class DataProcessingJob < ApplicationJob
  def perform(data_id)
    data = Data.find(data_id)
    processed_data = process_data(data)

    # Chain to next job
    NotificationJob.perform_later(processed_data.id)
  end
end

Job Monitoring and Metrics

# Custom job metrics
class MonitoredJob < ApplicationJob
  def perform(user_id)
    start_time = Time.current

    # Job logic
    process_user_data(user_id)

    duration_ms = ((Time.current - start_time) * 1000).round
    Rails.logger.info "Job completed in #{duration_ms}ms"

    # Send metrics to monitoring service
    StatsD.timing('job.duration', duration_ms)
  end
end

ActiveJob Best Practices

  • Keep jobs idempotent (safe to run multiple times)
  • Use appropriate queue priorities for different job types
  • Implement proper error handling and retry logic
  • Monitor job performance and queue lengths
  • Use batch processing for large datasets
  • Keep jobs focused on single responsibilities
  • Test jobs thoroughly in development
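The first bullet, idempotency, deserves a sketch of its own: guard the side effect with a marker so a retried or double-enqueued job stays harmless (the welcome_email_sent_at column is an assumed example):

class SendWelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    user = User.find_by(id: user_id)
    return if user.nil?                           # user deleted: nothing to do
    return if user.welcome_email_sent_at.present? # already sent: safe to re-run

    UserMailer.welcome(user).deliver_now
    user.update!(welcome_email_sent_at: Time.current)
  end
end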

Real-World Case Study: E-commerce Order Processing

Problem: Order processing taking 15+ seconds, causing timeouts and poor user experience

Root Cause: All order processing happening synchronously in the request cycle

# Before: Synchronous processing
def create_order
  @order = Order.create!(order_params)

  # Expensive operations in request cycle
  process_payment(@order)
  update_inventory(@order)
  send_confirmation_email(@order)
  update_analytics(@order)

  redirect_to @order
end

# After: Background job processing
def create_order
  @order = Order.create!(order_params)

  # Queue background jobs
  ProcessOrderJob.perform_later(@order.id)

  redirect_to @order
end

class ProcessOrderJob < ApplicationJob
  queue_as :orders

  def perform(order_id)
    order = Order.find(order_id)

    begin
      process_payment(order)
      update_inventory(order)
      send_confirmation_email(order)
      update_analytics(order)
    rescue => e
      order.update!(status: 'failed')
      raise e
    end
  end
end

Result: Order creation responds in 200ms, background processing completes in 5 seconds

Additional Benefits: Better user experience, improved system reliability, better error handling

Asset Optimization

15. Asset Pipeline

What is the Asset Pipeline?

The Asset Pipeline is Rails’ framework for organizing, processing, and serving static assets like JavaScript, CSS, and images. It provides features like concatenation, minification, and fingerprinting to optimize asset delivery and improve performance.

Asset Pipeline Configuration

# config/environments/production.rb
config.assets.compile = false
config.assets.js_compressor = :uglifier
config.assets.css_compressor = :sass
config.assets.digest = true
config.assets.version = '1.0'

# Precompile additional assets
config.assets.precompile += %w( admin.js admin.css )

# CDN configuration
config.action_controller.asset_host = "https://cdn.example.com"

JavaScript Optimization

# app/assets/javascripts/application.js (Sprockets directives)
//= require jquery
//= require bootstrap
//= require_tree .

# Custom JavaScript compression
# config/environments/production.rb
config.assets.js_compressor = :terser

# Terser options (via the terser gem)
config.assets.js_compressor = Terser.new(
  compress: { drop_console: true, drop_debugger: true }
)

CSS Optimization

# app/assets/stylesheets/application.scss
@import "bootstrap";
@import "custom";

# CSS compression settings
# config/environments/production.rb
config.assets.css_compressor = :sass

# Sass configuration
config.sass.style = :compressed
config.sass.line_comments = false

Image Optimization

# Using image_tag with optimization
<%= image_tag "logo.png", alt: "Logo", class: "logo" %>

# Responsive images
<%= image_tag "hero.jpg",
      srcset: "#{asset_path('hero-small.jpg')} 300w,
               #{asset_path('hero-medium.jpg')} 600w,
               #{asset_path('hero-large.jpg')} 900w",
      sizes: "(max-width: 600px) 300px, (max-width: 900px) 600px, 900px" %>

# Lazy loading
<%= image_tag "product.jpg", loading: "lazy" %>

CDN Integration

# CDN configuration
# config/environments/production.rb
config.action_controller.asset_host = ENV['CDN_URL']

# Multiple CDN hosts for load balancing
config.action_controller.asset_host = Proc.new { |source|
  if source.match?(/\.(css|js)$/)
    "https://cdn#{rand(3) + 1}.example.com"
  else
    "https://cdn.example.com"
  end
}

# CloudFront configuration
config.action_controller.asset_host = "https://d1234567890.cloudfront.net"

Asset Precompilation

# Precompile assets
RAILS_ENV=production bundle exec rake assets:precompile

# Clean old assets
RAILS_ENV=production bundle exec rake assets:clean

# Custom precompilation
# lib/tasks/assets.rake
Rake::Task["assets:precompile"].enhance do
  Rake::Task["assets:compress_images"].invoke
end

namespace :assets do
  task :compress_images do
    # Image compression logic
    system("find public/assets -name '*.jpg' -exec jpegoptim --strip-all {} \\;")
  end
end

Performance Monitoring

# Asset performance tracking
def track_asset_performance
  start_time = Time.current

  # Asset loading logic
  load_assets

  duration_ms = ((Time.current - start_time) * 1000).round
  Rails.logger.info "Assets loaded in #{duration_ms}ms"

  # Send to monitoring service
  StatsD.timing('assets.load_time', duration_ms)
end

Asset Pipeline Best Practices

  • Always precompile assets in production
  • Use CDN for static asset delivery
  • Enable gzip compression for assets
  • Optimize images before adding to assets
  • Use asset fingerprinting for cache busting
  • Monitor asset load times and sizes
  • Implement lazy loading for images
  • Use responsive images for different screen sizes

Real-World Case Study: E-commerce Site

Problem: Homepage taking 8+ seconds to load due to large, unoptimized assets

Root Cause: No asset optimization, missing CDN, large images

# Before: Unoptimized assets
# - 2MB total asset size
# - No CDN
# - Uncompressed images
# - No asset fingerprinting

# After: Optimized assets
# config/environments/production.rb
config.assets.compile = false
config.assets.js_compressor = :terser
config.assets.css_compressor = :sass
config.assets.digest = true
config.action_controller.asset_host = "https://cdn.example.com"

# Image optimization
# - Compressed images (WebP format)
# - Responsive images
# - Lazy loading

Result: Asset load time reduced from 6 seconds to 0.8 seconds

Additional Benefits: 70% reduction in asset size, improved Core Web Vitals, better SEO rankings

API Performance

16. REST API Optimization

Why API Performance Matters

API performance directly impacts user experience, mobile app performance, and third-party integrations. Slow APIs can cause cascading performance issues across your entire ecosystem.

Common API Performance Issues

# 1. Over-fetching data
# BAD: Returning all user data
def show
  @user = User.find(params[:id])
  render json: @user # Returns all columns
end

# GOOD: Return only needed fields
def show
  @user = User.select(:id, :name, :email).find(params[:id])
  render json: @user
end

# 2. N+1 queries in APIs
# BAD: N+1 when serializing
def index
  @posts = Post.all
  render json: @posts # Each post.author triggers a query
end

# GOOD: Eager load associations
def index
  @posts = Post.includes(:author, :comments).all
  render json: @posts
end

Efficient Serialization

# 1. Use Jbuilder for complex JSON
# app/views/api/v1/users/show.json.jbuilder
json.user do
  json.id @user.id
  json.name @user.name
  json.email @user.email

  json.profile do
    json.bio @user.profile.bio
    json.avatar @user.profile.avatar_url
  end

  json.posts @user.posts.limit(5) do |post|
    json.id post.id
    json.title post.title
    json.created_at post.created_at
  end
end

# 2. Use FastJsonapi for high-performance serialization
# app/serializers/user_serializer.rb
class UserSerializer
  include FastJsonapi::ObjectSerializer

  attributes :name, :email, :created_at
  has_many :posts, serializer: PostSerializer
  belongs_to :profile, serializer: ProfileSerializer
end

# 3. Custom serialization for performance
def show
  @user = User.includes(:profile, :posts).find(params[:id])

  render json: {
    id: @user.id,
    name: @user.name,
    email: @user.email,
    profile: {
      bio: @user.profile&.bio,
      avatar: @user.profile&.avatar_url
    },
    posts_count: @user.posts.size,
    recent_posts: @user.posts.limit(5).map { |post|
      { id: post.id, title: post.title, created_at: post.created_at }
    }
  }
end

API Caching Strategies

# 1. HTTP caching headers (304 Not Modified when the client copy is fresh)
def show
  @user = User.find(params[:id])

  if stale?(@user)
    render json: @user
  end
end

# 2. Fragment caching for complex responses
def index
  @users = User.includes(:profile).all

  payload = Rails.cache.fetch("users_list_#{@users.maximum(:updated_at)}") do
    @users.map { |user| { id: user.id, name: user.name, profile: user.profile&.bio } }
  end

  render json: payload
end

# 3. Redis caching for API responses
def analytics
  cache_key = "analytics_#{Date.current}"

  data = Rails.cache.fetch(cache_key, expires_in: 1.hour) do
    {
      total_users: User.count,
      active_users: User.where("last_login_at > ?", 1.week.ago).count,
      total_orders: Order.count,
      revenue: Order.sum(:amount)
    }
  end

  render json: data
end

Pagination and Filtering

# 1. Efficient pagination
def index
  @users = User.includes(:profile)
               .page(params[:page])
               .per(params[:per_page] || 25)

  render json: {
    users: @users,
    pagination: {
      current_page: @users.current_page,
      total_pages: @users.total_pages,
      total_count: @users.total_count
    }
  }
end

# 2. Cursor-based pagination (for large datasets)
def index
  cursor = params[:cursor]
  limit = (params[:limit] || 25).to_i

  scope = User.includes(:profile).order(:id) # stable order is required for cursors
  scope = scope.where("id > ?", cursor) if cursor

  users = scope.limit(limit + 1).to_a # fetch one extra row to detect more pages
  has_more = users.size > limit
  users = users.first(limit)

  render json: {
    users: users,
    pagination: {
      has_more: has_more,
      next_cursor: has_more ? users.last.id : nil
    }
  }
end

# 3. Efficient filtering
def index
  @users = User.includes(:profile)

  @users = @users.where("name ILIKE ?", "%#{params[:search]}%") if params[:search].present?
  @users = @users.where(status: params[:status]) if params[:status].present?
  @users = @users.where("created_at >= ?", params[:created_after]) if params[:created_after].present?

  render json: @users
end

API Rate Limiting

# 1. Using Rack::Attack for rate limiting
# config/initializers/rack_attack.rb
class Rack::Attack
  # Rate limit by IP
  throttle('req/ip', limit: 300, period: 5.minutes) do |req|
    req.ip
  end

  # Rate limit by user
  throttle('req/user', limit: 1000, period: 1.hour) do |req|
    req.env['warden'].user&.id
  end

  # Rate limit specific endpoints
  throttle('api/search', limit: 50, period: 1.minute) do |req|
    req.env['warden'].user&.id if req.path.start_with?('/api/search')
  end
end

# 2. Custom rate limiting
class ApiRateLimiter
  def self.check_limit(user_id, endpoint, limit: 100, period: 1.hour)
    key = "rate_limit:#{user_id}:#{endpoint}"
    current = Rails.cache.read(key) || 0

    if current >= limit
      false
    else
      Rails.cache.increment(key, 1, expires_in: period)
      true
    end
  end
end

API Response Optimization

# 1. Compress responses
# config/application.rb
config.middleware.use Rack::Deflater

# 2. Use conditional requests
def show
  @user = User.find(params[:id])

  if stale?(@user)
    render json: @user
  end
end

# 3. Batch operations
def batch_update
  user_ids = params[:user_ids]
  updates = params[:updates]
  results = []

  User.transaction do
    user_ids.each do |user_id|
      user = User.find(user_id)
      user.update!(updates)
      results << { id: user.id, success: true }
    rescue => e
      results << { id: user_id, success: false, error: e.message }
    end
  end

  render json: { results: results }
end

API Monitoring and Metrics

# 1. Track API performance
class ApiPerformanceTracker
  def self.track(endpoint, duration, status)
    Rails.cache.increment("api_calls:#{endpoint}", 1, expires_in: 1.hour)
    Rails.cache.increment("api_duration:#{endpoint}", duration, expires_in: 1.hour)

    if status >= 400
      Rails.cache.increment("api_errors:#{endpoint}", 1, expires_in: 1.hour)
    end
  end
end

# 2. API health checks
def health
  render json: {
    status: 'healthy',
    timestamp: Time.current,
    database: database_healthy?,
    cache: cache_healthy?,
    uptime: Process.clock_gettime(Process::CLOCK_MONOTONIC)
  }
end

Best Practices

  • Use appropriate HTTP status codes: 200, 201, 400, 401, 404, 422, 500
  • Implement proper error handling: Consistent error response format
  • Use pagination for large datasets: Offset-based or cursor-based
  • Implement rate limiting: Protect against abuse
  • Cache API responses: Use HTTP caching and application caching
  • Optimize serialization: Use efficient JSON serializers
  • Monitor API performance: Track response times and error rates
  • Use compression: Enable gzip compression

Real-World Case Study: Mobile App API

Problem: Mobile app API taking 5+ seconds to load user dashboard

Root Cause: Over-fetching data and N+1 queries in serialization

# Before: Inefficient API
def dashboard
  @user = User.find(params[:id])
  render json: @user # Returns all user data
end

# After: Optimized API
def dashboard
  @user = User.includes(:profile, :posts, :orders)
              .select(:id, :name, :email, :created_at)
              .find(params[:id])

  render json: {
    user: { id: @user.id, name: @user.name, email: @user.email },
    profile: {
      bio: @user.profile&.bio,
      avatar: @user.profile&.avatar_url
    },
    stats: {
      posts_count: @user.posts.size,
      orders_count: @user.orders.size,
      total_spent: @user.orders.sum(:amount)
    },
    recent_activity: @user.posts.limit(5).map { |post|
      { id: post.id, title: post.title, created_at: post.created_at }
    }
  }
end

Result: API response time reduced from 5 seconds to 800ms, 84% improvement

Additional Benefits: Reduced mobile app battery usage, better user experience, lower server costs

Advanced Level

Database Scaling

17. Read Replicas

What are Read Replicas?

Read replicas are synchronized copies of your primary database. They handle read operations, reducing load on the primary and improving read performance, which is essential for scaling read-heavy applications.

Setting Up Read Replicas

# config/database.yml
production:
  primary:
    adapter: postgresql
    host: primary-db.example.com
    database: myapp_production
    username: <%= ENV['DB_USERNAME'] %>
    password: <%= ENV['DB_PASSWORD'] %>
    pool: 20
  primary_replica:
    adapter: postgresql
    host: replica-db.example.com
    database: myapp_production
    username: <%= ENV['DB_USERNAME'] %>
    password: <%= ENV['DB_PASSWORD'] %>
    replica: true
    pool: 30

Application-Level Configuration

# config/application.rb
config.active_record.database_selector = { delay: 2.seconds }
config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver
config.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session

# Custom resolver for read replicas
class ReadReplicaResolver
  def self.call(context)
    if read_operation?(context)
      "primary_replica"
    else
      "primary"
    end
  end

  def self.read_operation?(context)
    context[:controller]&.action_name&.in?(%w[index show])
  end
end

Manual Read Replica Usage

# Using connected_to for specific operations
def get_user_posts(user_id)
  User.connected_to(role: :reading) do
    # .load forces the query to run while still on the replica connection
    User.find(user_id).posts.includes(:comments).load
  end
end

# Automatic read replica for specific models
class Post < ApplicationRecord
  connects_to database: { writing: :primary, reading: :primary_replica }
end

# Force primary for critical reads
def get_fresh_user_data(user_id)
  User.connected_to(role: :writing) do
    User.find(user_id)
  end
end

Load Balancing with Multiple Replicas

# Multiple read replicas configuration
production:
  primary:
    adapter: postgresql
    host: primary-db.example.com
  replica_1:
    adapter: postgresql
    host: replica-1.example.com
    replica: true
  replica_2:
    adapter: postgresql
    host: replica-2.example.com
    replica: true
  replica_3:
    adapter: postgresql
    host: replica-3.example.com
    replica: true

# Pick a replica at random (simple load spreading)
class ReplicaLoadBalancer
  def self.get_replica
    ['replica_1', 'replica_2', 'replica_3'].sample
  end
end

Replica Lag Monitoring

# Monitor replica lag
def check_replica_lag
  replica_lag = ActiveRecord::Base.connected_to(role: :reading) do
    ActiveRecord::Base.connection.execute(
      "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) AS lag_seconds"
    ).first['lag_seconds']
  end

  if replica_lag.to_f > 5.0
    Rails.logger.warn "Replica lag is #{replica_lag} seconds"
    # Switch to primary for critical reads (app-specific)
    use_primary_for_critical_reads
  end
end

# Health check for replicas
def replica_health_check
  ActiveRecord::Base.connected_to(role: :reading) do
    ActiveRecord::Base.connection.execute("SELECT 1")
  end
  true
rescue => e
  Rails.logger.error "Replica health check failed: #{e.message}"
  false
end

Performance Benefits

# Performance comparison
#
# Before: Single database
# - 1000 concurrent users = 1000 queries to primary
# - Average response time: 200ms
# - Database CPU: 80%
#
# After: Read replicas
# - 1000 concurrent users = 800 queries to replicas, 200 to primary
# - Average response time: 120ms
# - Primary database CPU: 40%
# - Replica database CPU: 60%

Read Replica Best Practices

  • Use read replicas for read-heavy operations (reports, analytics)
  • Keep write operations on the primary database
  • Monitor replica lag and implement fallback strategies (see the sketch after this list)
  • Use connection pooling for better resource utilization
  • Implement health checks for replica availability
  • Consider replica lag for time-sensitive data
  • Use multiple replicas for load balancing
  • Monitor replica performance and scale as needed
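
The lag-monitoring and fallback bullets can be combined into one small helper. Below is a minimal sketch under the PostgreSQL setup shown earlier; SafeReader and the 5-second threshold are illustrative, not a standard API:

class SafeReader
  LAG_THRESHOLD_SECONDS = 5.0

  # Run the block against the replica unless it is lagging or unreachable
  def self.read(&block)
    role = replica_fresh? ? :reading : :writing
    ActiveRecord::Base.connected_to(role: role, &block)
  end

  def self.replica_fresh?
    lag = ActiveRecord::Base.connected_to(role: :reading) do
      ActiveRecord::Base.connection.select_value(
        "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))"
      )
    end
    lag.to_f < LAG_THRESHOLD_SECONDS
  rescue StandardError
    false # if the replica is down, fall back to the primary
  end
end

# Usage:
#   SafeReader.read { Order.where(created_at: 1.day.ago..Time.current).to_a }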

Real-World Case Study: E-commerce Analytics Platform

Problem: Analytics dashboard taking 15+ seconds to load with 10,000+ concurrent users

Root Cause: All read operations hitting the primary database

# Before: Single database bottleneck
def get_analytics_data
  {
    total_sales: Order.sum(:amount),
    top_products: Product.joins(:orders).group(:id).sum(:quantity),
    user_activity: User.joins(:orders).group(:id).count,
    revenue_trends: Order.group_by_month(:created_at).sum(:amount)
  }
end

# After: Read replica optimization
def get_analytics_data
  ActiveRecord::Base.connected_to(role: :reading) do
    {
      total_sales: Order.sum(:amount),
      top_products: Product.joins(:orders).group(:id).sum(:quantity),
      user_activity: User.joins(:orders).group(:id).count,
      revenue_trends: Order.group_by_month(:created_at).sum(:amount)
    }
  end
end

Result: Analytics dashboard loads in 2 seconds, 60% reduction in primary database load

Additional Benefits: Better user experience, improved system reliability, cost savings on database resources

13. Database Sharding

What is Database Sharding?

Database sharding is a horizontal partitioning strategy that splits a large database into smaller, more manageable pieces called shards. Each shard contains a subset of the data, allowing for better performance and scalability by distributing the load across multiple database instances.

Sharding Strategies

# Hash-based sharding (most common)
class User < ApplicationRecord
  def self.connection_for_shard(user_id)
    shard = (user_id % 4) + 1
    "shard_#{shard}"
  end
end

# Range-based sharding
class Order < ApplicationRecord
  def self.connection_for_shard(created_at)
    case created_at.year
    when 2020..2021 then "shard_1"
    when 2022..2023 then "shard_2"
    else "shard_3"
    end
  end
end

# Geographic sharding
class User < ApplicationRecord
  def self.connection_for_shard(country_code)
    case country_code
    when 'US', 'CA' then "shard_us"
    when 'GB', 'DE', 'FR' then "shard_eu"
    else "shard_global"
    end
  end
end

Shard Configuration

# config/database.yml
production:
  primary:
    adapter: postgresql
    host: primary-db.example.com
  shard_1:
    adapter: postgresql
    host: shard-1.example.com
    database: myapp_shard_1
  shard_2:
    adapter: postgresql
    host: shard-2.example.com
    database: myapp_shard_2
  shard_3:
    adapter: postgresql
    host: shard-3.example.com
    database: myapp_shard_3
  shard_4:
    adapter: postgresql
    host: shard-4.example.com
    database: myapp_shard_4

# Shard connection management
class ShardManager
  def self.connect_to_shard(shard_name)
    ActiveRecord::Base.connected_to(database: shard_name.to_sym) do
      yield
    end
  end

  def self.get_user_shard(user_id)
    shard_number = (user_id % 4) + 1
    "shard_#{shard_number}"
  end
end

Cross-Shard Queries

# Aggregating data across shards
def get_global_user_stats
  total_users = 0
  total_orders = 0

  (1..4).each do |shard_num|
    ShardManager.connect_to_shard("shard_#{shard_num}") do
      total_users += User.count
      total_orders += Order.count
    end
  end

  {
    total_users: total_users,
    total_orders: total_orders,
    avg_orders_per_user: total_orders.to_f / total_users
  }
end

# Parallel cross-shard queries
def get_parallel_user_stats
  (1..4).map do |shard_num|
    Thread.new do
      ShardManager.connect_to_shard("shard_#{shard_num}") do
        { shard: shard_num, user_count: User.count, order_count: Order.count }
      end
    end
  end.map(&:value)
end

Shard Migration and Rebalancing

# Migrating data between shards
class ShardMigrationService
  def self.migrate_user_to_shard(user_id, target_shard)
    source_shard = ShardManager.get_user_shard(user_id)
    return if source_shard == target_shard

    # Declare outside the blocks so the copied data is visible below
    user_data = nil
    user_orders = nil

    # Copy user data from the source shard
    ShardManager.connect_to_shard(source_shard) do
      user_data = User.find(user_id).as_json
      user_orders = Order.where(user_id: user_id).as_json
    end

    # Write it to the target shard
    ShardManager.connect_to_shard(target_shard) do
      User.create!(user_data)
      user_orders.each { |order| Order.create!(order) }
    end

    # Update shard mapping
    update_user_shard_mapping(user_id, target_shard)
  end
end

# Shard rebalancing
def rebalance_shards
  shard_loads = get_shard_loads
  target_load = shard_loads.values.sum / shard_loads.length

  shard_loads.each do |shard, load|
    if load > target_load * 1.2
      migrate_users_from_shard(shard, target_load)
    end
  end
end

Shard Monitoring and Health Checks

# Monitor shard performance
def monitor_shard_performance
  shard_stats = {}

  (1..4).each do |shard_num|
    shard_name = "shard_#{shard_num}"
    ShardManager.connect_to_shard(shard_name) do
      start_time = Time.current
      User.count
      query_time = Time.current - start_time

      shard_stats[shard_name] = {
        query_time: query_time,
        user_count: User.count,
        order_count: Order.count,
        connection_pool_size: ActiveRecord::Base.connection_pool.size
      }
    end
  end

  shard_stats
end

# Shard health check
def shard_health_check(shard_name)
  ShardManager.connect_to_shard(shard_name) do
    ActiveRecord::Base.connection.execute("SELECT 1")
  end
  true
rescue => e
  Rails.logger.error "Shard #{shard_name} health check failed: #{e.message}"
  false
end

Sharding Best Practices

  • Choose the right sharding strategy based on your data access patterns
  • Keep related data in the same shard to avoid cross-shard joins
  • Implement proper shard routing logic (Rails 6.1+ ships native support; see the sketch after this list)
  • Monitor shard performance and balance load
  • Plan for shard migration and rebalancing
  • Use connection pooling for each shard
  • Implement proper error handling for shard failures
  • Consider the complexity of cross-shard queries
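
If you are on Rails 6.1 or newer, much of the routing above can sit on the framework's built-in horizontal sharding instead of hand-rolled connection management. A minimal sketch; the shard keys must match entries in your database.yml:

class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to shards: {
    default: { writing: :primary, reading: :primary },
    shard_one: { writing: :shard_1, reading: :shard_1 }
  }
end

# Route a block of work to a specific shard
ActiveRecord::Base.connected_to(shard: :shard_one, role: :writing) do
  User.create!(name: 'Alice')
end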

Real-World Case Study: Multi-Tenant SaaS Platform

Problem: Database performance degrading with 100,000+ tenants and 1TB+ of data

Root Cause: Single database handling all tenant data

# Before: Single database
# - All tenants in one database
# - 100,000+ tenants
# - 1TB+ data
# - Average query time: 500ms

# After: Sharded by tenant
class Tenant < ApplicationRecord
  def self.connection_for_shard(tenant_id)
    shard = (tenant_id % 8) + 1
    "shard_#{shard}"
  end
end

# Tenant-specific queries
def get_tenant_data(tenant_id)
  shard_name = Tenant.connection_for_shard(tenant_id)

  ShardManager.connect_to_shard(shard_name) do
    {
      users: User.count,
      orders: Order.count,
      revenue: Order.sum(:amount)
    }
  end
end

Result: Query time reduced from 500ms to 50ms, a 10x performance improvement

Additional Benefits: Better scalability, improved isolation, easier maintenance

Application Architecture

14. Service Objects

What are Service Objects?

Service objects are Ruby classes that encapsulate complex business logic and operations. They help keep controllers and models thin by moving complex operations into dedicated, reusable classes. This improves code organization, testability, and performance.

Basic Service Object Pattern

class UserRegistrationService
  def initialize(user_params)
    @user_params = user_params
  end

  def call
    user = User.create!(@user_params)
    WelcomeEmailJob.perform_later(user.id)
    user
  end
end

# Usage in controller
def create
  result = UserRegistrationService.new(user_params).call
  redirect_to result
end

Advanced Service Object with Error Handling

class OrderProcessingService
  def initialize(order_params, user)
    @order_params = order_params
    @user = user
    @errors = []
  end

  def call
    return failure_result unless valid?

    ActiveRecord::Base.transaction do
      order = create_order
      process_payment(order)
      update_inventory(order)
      send_notifications(order)
      success_result(order)
    end
  rescue => e
    failure_result(e.message)
  end

  private

  def valid?
    @errors.empty?
  end

  def create_order
    @user.orders.create!(@order_params)
  end

  def process_payment(order)
    PaymentProcessor.charge(order.total_amount, @user.payment_method)
  end

  def update_inventory(order)
    order.items.each do |item|
      item.product.decrement!(:stock_quantity, item.quantity)
    end
  end

  def send_notifications(order)
    OrderConfirmationJob.perform_later(order.id)
    InventoryAlertJob.perform_later if low_stock?
  end

  def success_result(order)
    OpenStruct.new(success?: true, order: order, errors: [])
  end

  def failure_result(message = 'Order processing failed')
    OpenStruct.new(success?: false, order: nil, errors: [message])
  end
end

Service Object with Performance Optimization

class UserAnalyticsService
  def initialize(user_id)
    @user_id = user_id
    @cache_key = "user_analytics_#{user_id}"
  end

  def call
    Rails.cache.fetch(@cache_key, expires_in: 1.hour) do
      calculate_analytics
    end
  end

  private

  def calculate_analytics
    user = User.includes(:orders, :posts, :comments).find(@user_id)

    {
      total_orders: user.orders.count,
      total_spent: user.orders.sum(:amount),
      avg_order_value: user.orders.average(:amount),
      posts_count: user.posts.count,
      comments_count: user.comments.count,
      engagement_score: calculate_engagement_score(user)
    }
  end

  def calculate_engagement_score(user)
    (user.posts.count * 2) + user.comments.count + (user.orders.count * 3)
  end
end

Service Object Composition

class ComplexOrderService
  def initialize(order_params, user)
    @order_params = order_params
    @user = user
  end

  def call
    return failure_result unless user_can_order?

    order_result = OrderProcessingService.new(@order_params, @user).call
    return order_result unless order_result.success?

    loyalty_result = LoyaltyPointsService.new(@user, order_result.order).call
    recommendation_result = RecommendationService.new(@user).call

    success_result(order_result.order, loyalty_result, recommendation_result)
  end

  private

  def user_can_order?
    UserValidationService.new(@user).call.success?
  end

  def success_result(order, loyalty_result, recommendation_result)
    OpenStruct.new(
      success?: true,
      order: order,
      loyalty_points: loyalty_result.points,
      recommendations: recommendation_result.items
    )
  end

  def failure_result
    OpenStruct.new(success?: false)
  end
end

Service Object Testing

# spec/services/user_registration_service_spec.rb
RSpec.describe UserRegistrationService do
  let(:user_params) { { name: 'John', email: 'john@example.com' } }
  let(:service) { described_class.new(user_params) }

  describe '#call' do
    it 'creates a user and enqueues welcome email' do
      expect { service.call }.to change(User, :count).by(1)
      expect(WelcomeEmailJob).to have_been_enqueued
    end

    it 'returns the created user' do
      result = service.call
      expect(result).to be_a(User)
      expect(result.name).to eq('John')
    end
  end
end

# Performance testing
RSpec.describe UserAnalyticsService do
  it 'caches results' do
    user = create(:user)
    service = described_class.new(user.id)

    expect(Rails.cache).to receive(:fetch).with("user_analytics_#{user.id}", expires_in: 1.hour)
    service.call
  end
end

Service Object Performance Benefits

# Performance comparison
#
# Before: Fat controller
# - Controller: 200 lines
# - Multiple database queries
# - No caching
# - Response time: 800ms
#
# After: Service objects
# - Controller: 10 lines
# - Optimized queries with includes
# - Caching implemented
# - Response time: 200ms

Service Object Best Practices

  • Keep service objects focused on a single responsibility
  • Use descriptive names that indicate the service’s purpose
  • Return consistent result objects (success/failure; see the sketch after this list)
  • Implement proper error handling and logging
  • Use dependency injection for better testability
  • Cache expensive operations within services
  • Compose services for complex operations
  • Test services thoroughly with unit tests
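
The examples above return OpenStruct results, which is convenient but allocates a method table per instance. A small result class is faster and gives a consistent success/failure contract. A minimal sketch; ServiceResult is an illustrative name, and the endless method definitions need Ruby 3.x:

class ServiceResult
  attr_reader :value, :errors

  def initialize(success:, value: nil, errors: [])
    @success = success
    @value = value
    @errors = errors
    freeze # results are immutable
  end

  def success? = @success

  def self.success(value) = new(success: true, value: value)
  def self.failure(*errors) = new(success: false, errors: errors)
end

# Inside a service:
#   ServiceResult.success(order)
#   ServiceResult.failure('Payment declined')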

Real-World Case Study: E-commerce Order Processing

Problem: Order processing logic scattered across controllers, taking 5+ seconds to complete

Root Cause: Complex business logic in controllers with no optimization

# Before: Fat controller
def create_order
  @order = Order.new(order_params)
  # 50+ lines of business logic
  if @order.save
    if process_payment(@order)
      update_inventory(@order)
      send_email(@order)
      update_analytics(@order)
      redirect_to @order
    else
      @order.destroy
      render :new
    end
  else
    render :new
  end
end

# After: Service object
def create_order
  result = OrderProcessingService.new(order_params, current_user).call

  if result.success?
    redirect_to result.order
  else
    @errors = result.errors
    render :new
  end
end

Result: Order processing reduced to 1.2 seconds, 75% improvement

Additional Benefits: Better code organization, improved testability, easier maintenance

Advanced Monitoring

15. APM Tools

What are APM Tools?

Application Performance Monitoring (APM) tools provide comprehensive monitoring and observability for Rails applications. They track response times, database queries, external service calls, and help identify performance bottlenecks in production environments.

New Relic Setup and Configuration

# Gemfile
gem 'newrelic_rpm'

# config/newrelic.yml
common: &default_settings
  license_key: <%= ENV['NEW_RELIC_LICENSE_KEY'] %>
  app_name: <%= ENV['NEW_RELIC_APP_NAME'] %>
  log_level: info
  monitor_mode: true
  developer_mode: false
  transaction_tracer:
    enabled: true
    record_sql: obfuscated
    stack_trace_threshold: 0.5
  error_collector:
    enabled: true
    capture_source: true
  browser_monitoring:
    auto_instrument: true

production:
  <<: *default_settings
  monitor_mode: true
  log_level: info

Custom Metrics and Instrumentation

# Custom performance metrics
def track_user_registration
  NewRelic::Agent.record_custom_event("UserRegistration", {
    user_type: "standard",
    source: "web"
  })
end

# Custom timing (Datastores.wrap is the block-based API)
def expensive_operation
  NewRelic::Agent::Datastores.wrap("Custom", "expensive_calculation") do
    # Your expensive operation here
    calculate_complex_data
  end
end

# Custom attributes
def show_user_profile
  NewRelic::Agent.add_custom_attributes(
    user_id: current_user.id,
    user_type: current_user.type,
    plan: current_user.subscription_plan
  )
  # Controller action logic
end

Alternative APM Tools

# Skylight
gem 'skylight'
# config/skylight.yml
production:
  authentication: <%= ENV['SKYLIGHT_AUTHENTICATION'] %>

# Scout
gem 'scout_apm'
# config/scout_apm.yml
production:
  key: <%= ENV['SCOUT_KEY'] %>
  name: <%= ENV['SCOUT_NAME'] %>

# DataDog
gem 'ddtrace'
# config/datadog.yml
production:
  api_key: <%= ENV['DD_API_KEY'] %>
  service: <%= ENV['DD_SERVICE'] %>

Performance Alerting

# Custom alerting logic
class PerformanceAlertService
  def self.check_response_time(controller, action, duration)
    if duration > 2.seconds
      NewRelic::Agent.notice_error(
        StandardError.new("Slow response time: #{duration}s"),
        custom_params: {
          controller: controller,
          action: action,
          duration: duration
        }
      )
    end
  end

  def self.check_database_performance
    slow_queries = ActiveRecord::Base.connection.execute(
      "SELECT query, mean_time FROM pg_stat_statements WHERE mean_time > 1000 ORDER BY mean_time DESC LIMIT 10"
    )

    notify_slow_queries(slow_queries) if slow_queries.any?
  end
end

APM Dashboard Configuration

# Custom dashboard metrics
class CustomMetricsCollector
  def self.collect_metrics
    redis_info = Rails.cache.redis.info
    hits = redis_info['keyspace_hits'].to_f
    misses = redis_info['keyspace_misses'].to_f

    {
      active_users: User.where('last_seen_at > ?', 1.hour.ago).count,
      total_orders: Order.where(created_at: 1.day.ago..Time.current).count,
      avg_order_value: Order.where(created_at: 1.day.ago..Time.current).average(:amount),
      cache_hit_rate: hits / (hits + misses)
    }
  end

  def self.send_to_apm
    NewRelic::Agent.record_custom_event("CustomMetrics", collect_metrics)
  end
end

APM Best Practices

  • Set up APM tools early in development
  • Configure custom metrics for business-critical operations (see the instrumentation sketch after this list)
  • Set up alerting for performance thresholds
  • Monitor database query performance
  • Track external service response times
  • Use custom attributes for better debugging
  • Monitor memory usage and garbage collection
  • Set up dashboards for key performance indicators
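
For custom metrics you do not need vendor-specific calls everywhere: Rails' own ActiveSupport::Notifications can instrument business operations once, and a single subscriber forwards timings to whichever APM you run. A minimal sketch; CheckoutService and the event name are illustrative:

# Instrument a business-critical operation
ActiveSupport::Notifications.instrument('checkout.complete', order_id: order.id) do
  CheckoutService.new(order).call
end

# Subscribe once, e.g. in an initializer, and forward timings
ActiveSupport::Notifications.subscribe('checkout.complete') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  # event.duration is in milliseconds
  NewRelic::Agent.record_metric('Custom/checkout_complete', event.duration) if defined?(NewRelic)
end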

Real-World Case Study: High-Traffic E-commerce Site

Problem: Site experiencing intermittent slowdowns with no visibility into root causes

Root Cause: No comprehensive monitoring or alerting system

# Before: No monitoring
# - No visibility into performance issues
# - Slow response times during peak hours
# - No alerting for performance degradation
# - Difficult to debug production issues

# After: Comprehensive APM setup
# config/newrelic.yml
common: &default_settings
  license_key: <%= ENV['NEW_RELIC_LICENSE_KEY'] %>
  app_name: "E-commerce App"
  transaction_tracer:
    enabled: true
    record_sql: obfuscated
    stack_trace_threshold: 0.5
  error_collector:
    enabled: true
  browser_monitoring:
    auto_instrument: true

# Custom performance tracking
def track_order_performance
  NewRelic::Agent.record_custom_event("OrderProcessing", {
    order_value: @order.total_amount,
    payment_method: @order.payment_method,
    user_type: @order.user.type
  })
end

Result: 90% reduction in time to detect and resolve performance issues

Additional Benefits: Proactive monitoring, better user experience, improved system reliability

Infrastructure

16. Load Balancing

What is Load Balancing?

Load balancing distributes incoming network traffic across multiple servers to ensure no single server becomes overwhelmed. This improves application availability, reliability, and performance by spreading the load and providing redundancy.

Nginx Load Balancer Configuration

# /etc/nginx/nginx.conf
upstream rails_app {
  # Round-robin (the default strategy)
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
  server 127.0.0.1:3002;

  # Alternative: weighted round-robin
  #   server 127.0.0.1:3000 weight=3;
  #   server 127.0.0.1:3001 weight=2;
  #   server 127.0.0.1:3002 weight=1;

  # Alternative: least connections
  #   least_conn;

  # Keep upstream connections alive
  keepalive 32;
}

server {
  listen 80;
  server_name example.com;

  location / {
    proxy_pass http://rails_app;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Timeouts
    proxy_connect_timeout 30s;
    proxy_send_timeout 30s;
    proxy_read_timeout 30s;
  }
}

Advanced Load Balancing Strategies

# IP Hash (session affinity)
upstream rails_app {
  ip_hash;
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
  server 127.0.0.1:3002;
}

# URL Hash
upstream rails_app {
  hash $request_uri consistent;
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
  server 127.0.0.1:3002;
}

# Geographic load balancing
upstream rails_app_us {
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
}

upstream rails_app_eu {
  server 127.0.0.1:3002;
  server 127.0.0.1:3003;
}

map $geoip_country_code $backend {
  default rails_app_us;
  "GB" rails_app_eu;
  "DE" rails_app_eu;
  "FR" rails_app_eu;
}

Health Checks and Failover

# Nginx health check configuration
upstream rails_app {
  server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
  server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
  server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
  server 127.0.0.1:3003 backup; # Backup server
}

# Custom health check endpoint
# app/controllers/health_controller.rb
class HealthController < ApplicationController
  def check
    if database_healthy? && redis_healthy?
      render json: { status: 'healthy' }, status: :ok
    else
      render json: { status: 'unhealthy' }, status: :service_unavailable
    end
  end

  private

  def database_healthy?
    ActiveRecord::Base.connection.execute("SELECT 1")
    true
  rescue
    false
  end

  def redis_healthy?
    Rails.cache.redis.ping == 'PONG'
  rescue
    false
  end
end

Session Management

# Redis-based session storage
# config/initializers/session_store.rb
Rails.application.config.session_store :redis_store, {
  servers: [ENV['REDIS_URL']],
  expire_after: 90.minutes,
  key: '_session_key'
}

# Database session storage
Rails.application.config.session_store :active_record_store, {
  key: '_session_key',
  expire_after: 90.minutes
}

# Cookie-based session with encryption
Rails.application.config.session_store :cookie_store, {
  key: '_session_key',
  expire_after: 90.minutes,
  secure: Rails.env.production?,
  same_site: :lax
}

Load Balancer Monitoring

# Nginx status monitoring
location /nginx_status {
  stub_status on;
  access_log off;
  allow 127.0.0.1;
  deny all;
}

# Custom monitoring script
def monitor_load_balancer
  stats = {
    active_connections: get_nginx_stats('Active connections'),
    reading_connections: get_nginx_stats('Reading'),
    writing_connections: get_nginx_stats('Writing'),
    waiting_connections: get_nginx_stats('Waiting')
  }

  alert_high_load(stats) if stats[:active_connections] > 1000

  stats
end

def get_nginx_stats(metric)
  # Parse the nginx status page
  status_page = `curl -s http://localhost/nginx_status`
  # Extract the metric value
  status_page.match(/#{metric}:\s*(\d+)/)&.[](1)&.to_i || 0
end

Load Balancing Best Practices

  • Use health checks to ensure only healthy servers receive traffic
  • Implement session affinity for stateful applications
  • Use Redis or database for session storage in multi-server setups
  • Monitor load balancer performance and server health
  • Implement proper failover mechanisms
  • Use SSL termination at the load balancer level
  • Configure appropriate timeouts and connection limits
  • Implement rate limiting and DDoS protection (see the Rack::Attack sketch after this list)
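
For the rate-limiting bullet, Rack::Attack is the usual middleware at the Rails layer (nginx can also throttle upstream). A minimal sketch; the limits are illustrative and should be tuned to your traffic:

# Gemfile: gem 'rack-attack'
# config/initializers/rack_attack.rb
class Rack::Attack
  # Allow up to 300 requests per 5 minutes per IP
  throttle('req/ip', limit: 300, period: 5.minutes) do |req|
    req.ip
  end

  # Tighter limit on login attempts to slow credential stuffing
  throttle('logins/ip', limit: 5, period: 20.seconds) do |req|
    req.ip if req.path == '/login' && req.post?
  end
end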

Real-World Case Study: High-Traffic Web Application

Problem: Single server unable to handle 10,000+ concurrent users, causing frequent downtime

Root Cause: No load balancing or horizontal scaling

# Before: Single server
# - One Rails server
# - 10,000+ concurrent users
# - Frequent timeouts and crashes
# - 99.5% uptime

# After: Load balanced setup
# nginx.conf
upstream rails_app {
  least_conn;
  server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
  server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
  server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
  server 127.0.0.1:3003 backup;
}

# Session storage
Rails.application.config.session_store :redis_store, {
  servers: [ENV['REDIS_URL']],
  expire_after: 90.minutes
}

Result: 99.99% uptime, 3x better performance, no more crashes

Additional Benefits: Better user experience, improved reliability, easier maintenance

Microservices Performance

17. Microservices Performance

Why Microservices Performance Matters

Microservices architecture introduces network latency, distributed system complexity, and new performance challenges. Optimizing microservices performance is crucial for maintaining fast, reliable distributed applications.

Common Microservices Performance Issues

# 1. Network latency
# BAD: Synchronous calls between services
def get_user_with_orders(user_id)
  user = UserService.get_user(user_id)            # 50ms
  orders = OrderService.get_orders(user_id)       # 100ms
  payments = PaymentService.get_payments(user_id) # 75ms

  { user: user, orders: orders, payments: payments } # Total: 225ms
end

# GOOD: Parallel requests (concurrent-ruby)
def get_user_with_orders(user_id)
  user_future = Concurrent::Future.execute { UserService.get_user(user_id) }
  orders_future = Concurrent::Future.execute { OrderService.get_orders(user_id) }
  payments_future = Concurrent::Future.execute { PaymentService.get_payments(user_id) }

  {
    user: user_future.value,
    orders: orders_future.value,
    payments: payments_future.value
  } # Total: ~100ms (longest request)
end

# 2. Circuit breaker pattern
class CircuitBreaker
  def initialize(failure_threshold: 5, timeout: 60)
    @failure_threshold = failure_threshold
    @timeout = timeout
    @failures = 0
    @last_failure_time = nil
    @state = :closed
  end

  def call
    if @state == :open
      if Time.current - @last_failure_time > @timeout
        @state = :half_open
      else
        raise "Circuit breaker is open"
      end
    end

    # Execute the call
    result = yield
    @failures = 0
    @state = :closed
    result
  rescue => e
    @failures += 1
    @last_failure_time = Time.current
    @state = :open if @failures >= @failure_threshold
    raise e
  end
end

Service Communication Optimization

# 1. HTTP/2 for better performance
# config/application.rb
config.force_ssl = true
config.ssl_options = { hsts: { subdomains: true, preload: true } }

# 2. Connection reuse
# Plain Net::HTTP opens a new connection per request; reuse connections
# with a persistent client (e.g. the persistent_httparty or
# net-http-persistent gems) to avoid repeated TLS handshake overhead.

# 3. Service client with caching
class UserServiceClient
  include HTTParty
  base_uri ENV['USER_SERVICE_URL']

  def self.get_user(user_id)
    cache_key = "user_service:#{user_id}"
    Rails.cache.fetch(cache_key, expires_in: 5.minutes) do
      response = get("/users/#{user_id}")
      response.success? ? response.parsed_response : nil
    end
  end
end

Message Queues and Async Processing

# 1. Using Sidekiq for async processing
class OrderProcessingJob < ApplicationJob
  queue_as :orders

  def perform(order_id)
    order = Order.find(order_id)

    # Process order asynchronously
    InventoryService.update_stock(order)
    NotificationService.send_confirmation(order)
    AnalyticsService.track_purchase(order)
  end
end

# 2. Event-driven architecture
class OrderCreatedEvent
  def self.publish(order)
    event = {
      event_type: 'order.created',
      data: {
        order_id: order.id,
        user_id: order.user_id,
        amount: order.amount,
        created_at: order.created_at
      }
    }

    # Publish over Redis pub/sub (Rails.cache has no publish API)
    Redis.new.publish('events', event.to_json)
  end
end

# 3. Event consumer
class OrderEventConsumer
  def self.handle_order_created(event_data)
    order_id = event_data['order_id']

    # Process in background
    OrderProcessingJob.perform_later(order_id)
  end
end

Database Per Service

# 1. Service-specific database configuration
# config/database.yml
production:
  user_service:
    adapter: postgresql
    host: user-db.example.com
    database: user_service_production
  order_service:
    adapter: postgresql
    host: order-db.example.com
    database: order_service_production
  payment_service:
    adapter: postgresql
    host: payment-db.example.com
    database: payment_service_production

# 2. Service-specific models
class User < ApplicationRecord
  connects_to database: { writing: :user_service, reading: :user_service }
end

class Order < ApplicationRecord
  connects_to database: { writing: :order_service, reading: :order_service }
end

# 3. Data synchronization
class UserDataSync
  def self.sync_user_data(user_id)
    user = UserService.get_user(user_id)

    # Sync to other services
    OrderService.update_user(user)
    PaymentService.update_user(user)
    NotificationService.update_user(user)
  end
end

API Gateway and Load Balancing

# 1. API Gateway configuration
# config/routes.rb
Rails.application.routes.draw do
  namespace :api do
    namespace :v1 do
      # Route to the appropriate service
      get '/users/:id', to: 'users#show'
      get '/orders/:id', to: 'orders#show'
      get '/payments/:id', to: 'payments#show'
    end
  end
end

# 2. Service discovery
class ServiceDiscovery
  def self.get_service_url(service_name)
    case service_name
    when 'user' then ENV['USER_SERVICE_URL']
    when 'order' then ENV['ORDER_SERVICE_URL']
    when 'payment' then ENV['PAYMENT_SERVICE_URL']
    else raise "Unknown service: #{service_name}"
    end
  end
end

# 3. Load balancing
class LoadBalancer
  def self.get_service_instance(service_name)
    instances = get_service_instances(service_name)
    instances[rand(instances.length)]
  end

  def self.get_service_instances(service_name)
    case service_name
    when 'user' then ['user-service-1:3001', 'user-service-2:3001']
    when 'order' then ['order-service-1:3002', 'order-service-2:3002']
    end
  end
end

Monitoring and Observability

# 1. Distributed tracing
class DistributedTracer
  def self.trace(operation_name, tags = {})
    trace_id = SecureRandom.uuid
    Rails.logger.info "Starting trace: #{operation_name}, ID: #{trace_id}"

    start_time = Time.current
    result = yield
    duration_ms = ((Time.current - start_time) * 1000).round

    Rails.logger.info "Completed trace: #{operation_name}, Duration: #{duration_ms}ms"
    result
  end
end

# 2. Service health checks
class ServiceHealthChecker
  def self.check_all_services
    services = ['user', 'order', 'payment']

    services.map do |service|
      {
        service: service,
        status: check_service_health(service),
        response_time: measure_response_time(service)
      }
    end
  end

  def self.check_service_health(service_name)
    url = ServiceDiscovery.get_service_url(service_name)
    response = HTTParty.get("#{url}/health")
    response.success?
  rescue => e
    false
  end
end

Performance Optimization Strategies

# 1. Caching at multiple levels
class MultiLevelCache
  def self.get_user(user_id)
    # L1: Application cache
    user = Rails.cache.fetch("user:#{user_id}", expires_in: 5.minutes) do
      UserService.get_user(user_id)
    end

    # L2: CDN cache for public data (CDN is an illustrative client)
    if user&.public_profile?
      CDN.cache("users/#{user_id}", user.public_data, expires_in: 1.hour)
    end

    user
  end
end

# 2. Bulk operations
class BulkUserProcessor
  def self.process_users(user_ids)
    # Process in batches
    user_ids.each_slice(100) do |batch|
      UserService.get_users_bulk(batch)
    end
  end
end

# 3. Read replicas per service
class UserService
  def self.get_user(user_id)
    # Use a read replica for read operations
    User.connected_to(role: :reading) do
      User.find(user_id)
    end
  end
end

Best Practices

  • Use asynchronous communication: Message queues for non-critical operations
  • Implement circuit breakers: Prevent cascading failures (see the usage sketch after this list)
  • Use connection pooling: Reuse HTTP connections
  • Cache at multiple levels: Application, CDN, and database caching
  • Monitor service health: Track response times and error rates
  • Use distributed tracing: Track requests across services
  • Implement bulk operations: Reduce network overhead
  • Use read replicas: Scale read operations per service
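
To show how these pieces compose, here is a hedged sketch that wraps the CircuitBreaker class defined earlier around the cached UserServiceClient call, degrading gracefully when the dependency is down; ResilientUserClient is an illustrative name:

class ResilientUserClient
  # One breaker per dependency; memoized so state persists across calls
  BREAKER = CircuitBreaker.new(failure_threshold: 5, timeout: 60)

  def self.get_user(user_id)
    Rails.cache.fetch("user:#{user_id}", expires_in: 5.minutes) do
      BREAKER.call { UserServiceClient.get_user(user_id) }
    end
  rescue => e
    Rails.logger.warn "user-service unavailable: #{e.message}"
    nil # degrade gracefully instead of cascading the failure
  end
end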

Real-World Case Study: E-commerce Platform Migration

Problem: Monolithic application taking 8+ seconds to process orders

Root Cause: All services running in single application, blocking operations

# Before: Monolithic architecture
def process_order(order_id)
  order = Order.find(order_id)

  # Sequential processing
  validate_inventory(order)  # 2s
  process_payment(order)     # 3s
  update_inventory(order)    # 1s
  send_notifications(order)  # 2s

  order.update!(status: 'processed')
end

# After: Microservices architecture
def process_order(order_id)
  order = Order.find(order_id)

  # Parallel processing
  futures = [
    Concurrent::Future.execute { InventoryService.validate(order) },
    Concurrent::Future.execute { PaymentService.process(order) },
    Concurrent::Future.execute { NotificationService.prepare(order) }
  ]

  # Wait for the critical operations
  inventory_result = futures[0].value
  payment_result = futures[1].value

  # Update inventory asynchronously
  InventoryService.update_async(order)

  # Send notifications asynchronously
  NotificationService.send_async(order)

  order.update!(status: 'processed')
end

Result: Order processing time reduced from 8 seconds to 1.5 seconds, 81% improvement

Additional Benefits: Better scalability, improved fault tolerance, easier maintenance

Microservices Performance Metrics

# Key performance indicators for microservices
class MicroservicesMetrics
  def self.track_service_metrics(service_name)
    {
      # ping is an illustrative health-check call being timed
      response_time: measure_response_time(service_name) { ping(service_name) },
      throughput: measure_throughput(service_name),
      error_rate: calculate_error_rate(service_name),
      availability: calculate_availability(service_name),
      resource_usage: measure_resource_usage(service_name)
    }
  end

  def self.measure_response_time(service_name)
    start_time = Time.current
    yield
    duration = Time.current - start_time
    Rails.cache.write("response_time:#{service_name}", duration)
    duration
  end

  def self.calculate_error_rate(service_name)
    total_requests = Rails.cache.read("total_requests:#{service_name}") || 0
    error_requests = Rails.cache.read("error_requests:#{service_name}") || 0

    total_requests > 0 ? (error_requests.to_f / total_requests * 100) : 0
  end
end

Service Mesh Implementation

# Service mesh configuration for Rails
# config/initializers/service_mesh.rb
class ServiceMesh
  def self.configure
    {
      retry_policy: {
        max_retries: 3,
        base_delay: 100, # milliseconds
        max_delay: 1000
      },
      circuit_breaker: {
        failure_threshold: 5,
        recovery_timeout: 60, # seconds
        half_open_max_calls: 2
      },
      load_balancing: {
        strategy: 'round_robin',
        health_check_interval: 30
      },
      timeout: {
        request_timeout: 5000, # milliseconds
        connection_timeout: 2000
      }
    }
  end
end

# Service mesh middleware
class ServiceMeshMiddleware
  def initialize(app)
    @app = app
    @circuit_breakers = {}
  end

  def call(env)
    service_name = extract_service_name(env)

    if circuit_breaker_open?(service_name)
      return [503, { 'Content-Type' => 'application/json' }, [{ error: 'Service unavailable' }.to_json]]
    end

    start_time = Time.current
    status, headers, response = @app.call(env)
    duration = Time.current - start_time

    track_metrics(service_name, status, duration)
    [status, headers, response]
  rescue => e
    record_failure(extract_service_name(env))
    raise e
  end
end

Distributed Caching Strategies

# Multi-level distributed caching
class DistributedCache
  def self.get_user_data(user_id)
    # L1: Local application cache
    user = Rails.cache.fetch("user:#{user_id}", expires_in: 5.minutes) do
      # L2: Distributed Redis cache
      RedisCache.get("user:#{user_id}") || fetch_from_database(user_id)
    end

    # L3: CDN cache for public data
    if user&.public_profile?
      CDNCache.set("users/#{user_id}", user.public_data, expires_in: 1.hour)
    end

    user
  end

  def self.invalidate_user_cache(user_id)
    Rails.cache.delete("user:#{user_id}")
    RedisCache.delete("user:#{user_id}")
    CDNCache.delete("users/#{user_id}")
  end
end

# Cache warming for microservices
class CacheWarmer
  def self.warm_user_cache(user_ids)
    user_ids.each_slice(100) do |batch|
      UserService.get_users_bulk(batch).each do |user|
        DistributedCache.get_user_data(user.id)
      end
    end
  end

  def self.warm_popular_content
    popular_user_ids = AnalyticsService.get_popular_users
    warm_user_cache(popular_user_ids)
  end
end

Advanced Load Balancing

# Intelligent load balancing with health checks
class IntelligentLoadBalancer
  def self.get_healthy_instance(service_name)
    instances = get_service_instances(service_name)
    healthy_instances = instances.select { |instance| healthy?(instance) }

    if healthy_instances.empty?
      raise "No healthy instances available for #{service_name}"
    end

    select_best_instance(healthy_instances)
  end

  def self.select_best_instance(instances)
    case load_balancing_strategy
    when 'round_robin'
      instances[rand(instances.length)]
    when 'least_connections'
      instances.min_by { |instance| get_connection_count(instance) }
    when 'response_time'
      instances.min_by { |instance| get_average_response_time(instance) }
    when 'weighted'
      select_weighted_instance(instances)
    end
  end

  def self.healthy?(instance)
    response = HTTParty.get("#{instance}/health", timeout: 5)
    response.success?
  rescue => e
    Rails.logger.error "Health check failed for #{instance}: #{e.message}"
    false
  end
end

# Auto-scaling based on metrics
class AutoScaler
  def self.scale_service(service_name)
    metrics = get_service_metrics(service_name)

    if should_scale_up?(metrics)
      scale_up_service(service_name)
    elsif should_scale_down?(metrics)
      scale_down_service(service_name)
    end
  end

  def self.should_scale_up?(metrics)
    metrics[:cpu_usage] > 80 || metrics[:response_time] > 1000
  end

  def self.should_scale_down?(metrics)
    metrics[:cpu_usage] < 30 && metrics[:response_time] < 200
  end
end

Event Sourcing and CQRS

# Event sourcing implementation
class EventStore
  def self.append_events(stream_id, events)
    events.each do |event|
      EventRecord.create!(
        stream_id: stream_id,
        event_type: event.class.name,
        event_data: event.to_json,
        timestamp: Time.current,
        version: get_next_version(stream_id)
      )
    end
  end

  def self.get_events(stream_id, from_version = 0)
    EventRecord.where(stream_id: stream_id)
               .where("version > ?", from_version)
               .order(:version)
  end
end

# CQRS implementation
class UserCommandHandler
  def self.handle_create_user(command)
    events = [
      UserCreatedEvent.new(
        user_id: command.user_id,
        name: command.name,
        email: command.email
      )
    ]

    EventStore.append_events(command.user_id, events)
    UserProjection.update(command.user_id, events)
  end
end

class UserProjection
  def self.update(user_id, events)
    events.each do |event|
      case event
      when UserCreatedEvent
        UserReadModel.create!(
          id: event.user_id,
          name: event.name,
          email: event.email,
          created_at: event.timestamp
        )
      when UserUpdatedEvent
        UserReadModel.find(event.user_id).update!(
          name: event.name,
          email: event.email,
          updated_at: event.timestamp
        )
      end
    end
  end
end

Saga Pattern for Distributed Transactions

# Saga pattern implementation
class OrderSaga
  def self.execute(order_id)
    saga_id = SecureRandom.uuid
    saga = Saga.create!(id: saga_id, status: 'started')

    begin
      # Step 1: Reserve inventory
      inventory_result = InventoryService.reserve_items(order_id)
      saga.add_step('inventory_reserved', inventory_result)

      # Step 2: Process payment
      payment_result = PaymentService.process_payment(order_id)
      saga.add_step('payment_processed', payment_result)

      # Step 3: Update inventory
      inventory_update = InventoryService.update_stock(order_id)
      saga.add_step('inventory_updated', inventory_update)

      # Step 4: Send notifications
      notification_result = NotificationService.send_order_confirmation(order_id)
      saga.add_step('notification_sent', notification_result)

      saga.complete!
      { success: true, saga_id: saga_id }
    rescue => e
      saga.fail!
      compensate(saga)
      { success: false, error: e.message, saga_id: saga_id }
    end
  end

  def self.compensate(saga)
    saga.steps.reverse.each do |step|
      case step.name
      when 'inventory_reserved'
        InventoryService.release_items(step.data['order_id'])
      when 'payment_processed'
        PaymentService.refund_payment(step.data['payment_id'])
      when 'inventory_updated'
        InventoryService.restore_stock(step.data['order_id'])
      end
    end
  end
end

API Gateway with Rate Limiting

# Advanced API Gateway implementation
class ApiGateway
  def self.route_request(request)
    service_name = determine_service(request.path)

    return rate_limit_response if rate_limit_exceeded?(request)

    if authentication_required?(request.path)
      return authentication_error_response unless authenticate_request(request)
    end

    forward_request(request, service_name)
  end

  def self.rate_limit_exceeded?(request)
    key = "rate_limit:#{request.ip}:#{request.path}"
    current_count = Rails.cache.read(key) || 0

    if current_count >= rate_limit_threshold(request.path)
      true
    else
      Rails.cache.increment(key, 1, expires_in: rate_limit_window(request.path))
      false
    end
  end

  def self.forward_request(request, service_name)
    instance = IntelligentLoadBalancer.get_healthy_instance(service_name)

    response = HTTParty.send(
      request.method.downcase,
      "#{instance}#{request.path}",
      headers: request.headers,
      body: request.body,
      timeout: 30
    )

    log_request(request, response, service_name)
    response
  end
end

Distributed Tracing with OpenTelemetry

# OpenTelemetry integration for Rails
# config/initializers/opentelemetry.rb
require 'opentelemetry/sdk'
require 'opentelemetry/exporter/jaeger'

OpenTelemetry::SDK.configure do |c|
  c.service_name = 'rails-app'
  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
      OpenTelemetry::Exporter::Jaeger::AgentExporter.new(
        endpoint: 'http://localhost:14268/api/traces'
      )
    )
  )
end

# Custom tracing for microservices
class DistributedTracer
  def self.trace_service_call(service_name, operation, tags = {})
    tracer = OpenTelemetry.tracer_provider.tracer('rails-app')

    tracer.in_span("#{service_name}.#{operation}", attributes: tags) do |span|
      start_time = Time.current
      result = yield
      duration = Time.current - start_time

      span.set_attribute('duration_ms', duration * 1000)
      span.set_attribute('service.name', service_name)
      span.set_attribute('operation', operation)

      result
    rescue => e
      span.record_exception(e)
      span.set_status(OpenTelemetry::Trace::Status.error(e.message))
      raise e
    end
  end
end

# Usage in service calls
def get_user_with_orders(user_id)
  DistributedTracer.trace_service_call('user-service', 'get_user', { user_id: user_id }) do
    UserService.get_user(user_id)
  end

  DistributedTracer.trace_service_call('order-service', 'get_orders', { user_id: user_id }) do
    OrderService.get_orders(user_id)
  end
end

Performance Monitoring and Alerting

# Comprehensive monitoring system
class MicroservicesMonitor
  SERVICES = ['user-service', 'order-service', 'payment-service', 'inventory-service']

  def self.monitor_all_services
    SERVICES.each do |service|
      metrics = collect_service_metrics(service)
      check_alerts(service, metrics)
      store_metrics(service, metrics)
    end
  end

  def self.collect_service_metrics(service_name)
    {
      response_time: measure_response_time(service_name),
      throughput: measure_throughput(service_name),
      error_rate: calculate_error_rate(service_name),
      cpu_usage: measure_cpu_usage(service_name),
      memory_usage: measure_memory_usage(service_name),
      active_connections: count_active_connections(service_name),
      queue_length: measure_queue_length(service_name)
    }
  end

  def self.check_alerts(service_name, metrics)
    alerts = []

    if metrics[:response_time] > 1000
      alerts << { type: 'high_response_time', service: service_name, value: metrics[:response_time] }
    end

    if metrics[:error_rate] > 5
      alerts << { type: 'high_error_rate', service: service_name, value: metrics[:error_rate] }
    end

    if metrics[:cpu_usage] > 90
      alerts << { type: 'high_cpu_usage', service: service_name, value: metrics[:cpu_usage] }
    end

    send_alerts(alerts) unless alerts.empty?
  end
end

# Performance dashboard data
class PerformanceDashboard
  def self.get_dashboard_data
    {
      services: get_all_services_status,
      overall_metrics: calculate_overall_metrics,
      recent_alerts: get_recent_alerts,
      performance_trends: get_performance_trends,
      resource_usage: get_resource_usage_summary
    }
  end

  def self.get_all_services_status
    MicroservicesMonitor::SERVICES.map do |service|
      {
        name: service,
        status: get_service_status(service),
        response_time: get_average_response_time(service),
        error_rate: get_error_rate(service),
        throughput: get_throughput(service)
      }
    end
  end
end

Advanced Performance Patterns

# Bulkhead pattern for fault isolation
class BulkheadPattern
  def self.with_bulkhead(service_name, max_concurrent_calls: 10)
    semaphore = get_semaphore(service_name)

    if semaphore.try_acquire
      begin
        yield
      ensure
        semaphore.release
      end
    else
      raise "Bulkhead full for #{service_name}"
    end
  end
end

# Retry with exponential backoff
class RetryWithBackoff
  def self.with_retry(max_attempts: 3, base_delay: 100)
    attempts = 0
    begin
      attempts += 1
      yield
    rescue => e
      if attempts < max_attempts
        delay = base_delay * (2 ** (attempts - 1))
        sleep(delay / 1000.0)
        retry
      else
        raise e
      end
    end
  end
end

# Timeout pattern
class TimeoutPattern
  def self.with_timeout(timeout_seconds: 5)
    result = nil
    error = nil

    thread = Thread.new do
      begin
        result = yield
      rescue => e
        error = e
      end
    end

    if thread.join(timeout_seconds)
      error ? raise(error) : result
    else
      thread.kill
      raise "Operation timed out after #{timeout_seconds} seconds"
    end
  end
end

Performance Testing for Microservices

# Load testing microservices
class MicroservicesLoadTester
  def self.load_test_service(service_name, concurrent_users: 100, duration: 300)
    results = {
      total_requests: 0,
      successful_requests: 0,
      failed_requests: 0
    }
    start_time = Time.current

    threads = concurrent_users.times.map do
      Thread.new do
        duration.times do
          begin
            make_service_request(service_name)
            results[:successful_requests] += 1
          rescue => e
            results[:failed_requests] += 1
          end

          results[:total_requests] += 1
          sleep(1) # 1 request per second per thread
        end
      end
    end

    threads.each(&:join)
    end_time = Time.current

    results[:throughput] = results[:total_requests] / (end_time - start_time)
    results
  end
end

# Chaos engineering for microservices
class ChaosEngineering
  def self.run_chaos_test(service_name)
    scenarios = [
      { name: 'network_latency', action: -> { simulate_network_latency(service_name) } },
      { name: 'service_failure', action: -> { simulate_service_failure(service_name) } },
      { name: 'high_load', action: -> { simulate_high_load(service_name) } },
      { name: 'memory_leak', action: -> { simulate_memory_leak(service_name) } }
    ]

    scenarios.each do |scenario|
      Rails.logger.info "Running chaos test: #{scenario[:name]} for #{service_name}"

      begin
        scenario[:action].call
        sleep(30) # Run scenario for 30 seconds
        verify_system_stability(service_name)
      ensure
        cleanup_chaos_test(service_name)
      end
    end
  end
end

Enhanced Real-World Case Study: E-commerce Platform Migration

Problem: Monolithic application taking 8+ seconds to process orders with 15% error rate

Root Cause: All services running in single application, blocking operations, no fault isolation

# Before: Monolithic architecture
def process_order(order_id)
  order = Order.find(order_id)

  # Sequential processing
  validate_inventory(order)  # 2s
  process_payment(order)     # 3s
  update_inventory(order)    # 1s
  send_notifications(order)  # 2s

  order.update!(status: 'processed')
end

# After: Advanced microservices architecture
def process_order(order_id)
  order = Order.find(order_id)

  # Use the saga pattern for the distributed transaction
  saga_result = OrderSaga.execute(order_id)

  if saga_result[:success]
    # Process additional operations asynchronously
    OrderProcessingJob.perform_later(order_id)
    AnalyticsJob.perform_later(order_id)
    RecommendationJob.perform_later(order_id)
  end

  saga_result
end

# Performance improvements achieved:
# - Order processing: 8s → 1.5s (81% improvement)
# - System throughput: 100 → 500 orders/second
# - Error rate: 15% → 2%
# - Availability: 95% → 99.9%
# - Scalability: Linear scaling with load
# - Fault tolerance: Circuit breakers prevent cascading failures
# - Monitoring: Real-time observability across all services

Result: Order processing time reduced from 8 seconds to 1.5 seconds, 81% improvement

Additional Benefits: Better scalability, improved fault tolerance, easier maintenance, 5x throughput increase, 99.9% availability

Expert Level

Memory Optimization

18. Ruby Memory Management

What is Ruby Memory Management?

Ruby memory management involves understanding how the Ruby interpreter allocates and deallocates memory, garbage collection mechanisms, and techniques to optimize memory usage for high-performance Rails applications. This is critical for applications handling large datasets or high concurrency.

Garbage Collection Tuning

# Enable GC profiling
GC::Profiler.enable

# Force garbage collection
GC.start

# Get detailed GC statistics
gc_stats = GC.stat
puts "Heap allocated pages: #{gc_stats[:heap_allocated_pages]}"
puts "Heap sorted length: #{gc_stats[:heap_sorted_length]}"
puts "Total allocated objects: #{gc_stats[:total_allocated_objects]}"
puts "Total freed objects: #{gc_stats[:total_freed_objects]}"
puts "GC count: #{gc_stats[:count]}"

# Custom GC tuning
# config/environments/production.rb
config.after_initialize do
  GC.auto_compact = true # Ruby 3.0+

  # Heap growth and malloc limits are tuned through RUBY_GC_*
  # environment variables set before boot, not runtime setters
  # (see the sketch at the end of this section).
end

# Memory monitoring service (uses the get_process_mem gem)
class MemoryMonitor
  def self.monitor
    memory_usage = GetProcessMem.new.mb
    gc_stats = GC.stat

    Rails.logger.info "Memory: #{memory_usage}MB, GC count: #{gc_stats[:count]}"

    if memory_usage > 1000
      GC.start
      Rails.logger.warn "High memory usage detected: #{memory_usage}MB"
    end
  end
end

Object Allocation Optimization

# Frozen string literals
# Freezing strings at runtime in an initializer does nothing useful;
# instead, add the magic comment at the top of each file so every
# string literal is allocated once:
#
#   # frozen_string_literal: true

# Object pooling for expensive objects
class ObjectPool
  def self.pool
    @pool ||= Queue.new
  end

  def self.get
    pool.pop(true) # non-blocking pop raises ThreadError when empty
  rescue ThreadError
    create_new_object
  end

  def self.release(object)
    pool.push(object) if pool.size < 10
  end

  def self.create_new_object
    # Create a new expensive object
    ExpensiveObject.new
  end
  private_class_method :create_new_object
end

# Memory-efficient data processing
def process_large_dataset
  # Use batched iteration to avoid loading everything into memory
  User.find_each(batch_size: 1000) do |user|
    process_user(user)
  end
end

Memory Leak Detection

# Memory leak detection service
require 'objspace' # for trace_object_allocations

class MemoryLeakDetector
  def self.detect_leaks
    initial_memory = GetProcessMem.new.mb
    initial_objects = ObjectSpace.count_objects

    # Run the suspected leaky operation
    yield

    final_memory = GetProcessMem.new.mb
    final_objects = ObjectSpace.count_objects

    memory_increase = final_memory - initial_memory
    object_increase = final_objects[:TOTAL] - initial_objects[:TOTAL]

    if memory_increase > 50 || object_increase > 1000
      Rails.logger.warn "Potential memory leak detected: #{memory_increase}MB, #{object_increase} objects"
    end
  end

  def self.track_object_growth
    ObjectSpace.trace_object_allocations do
      yield
    end
  end
end

# Usage
MemoryLeakDetector.detect_leaks do
  # Your potentially leaky code here
  process_large_dataset
end

Advanced Memory Techniques

# Memory-mapped files for large datasets (mmap gem)
require 'mmap'

class LargeDataProcessor
  def self.process_with_mmap(file_path)
    Mmap.new(file_path, 'r') do |mmap|
      mmap.each_line do |line|
        process_line(line)
      end
    end
  end
end

# Weak references for caching
require 'weakref'

class WeakCache
  def initialize
    @cache = {}
  end

  def get(key)
    ref = @cache[key]
    ref&.weakref_alive? ? ref.__getobj__ : nil
  end

  def set(key, value)
    @cache[key] = WeakRef.new(value)
  end
end

# Memory profiling with the memory_profiler gem
require 'memory_profiler'

report = MemoryProfiler.report do
  # Your code here
  process_large_dataset
end

report.pretty_print

Memory Optimization Best Practices

  • Use frozen strings for commonly used string literals
  • Implement object pooling for expensive objects
  • Use lazy enumeration for large datasets
  • Monitor memory usage and GC statistics
  • Implement memory leak detection
  • Use weak references for caching
  • Optimize string processing for large data
  • Set appropriate GC parameters for your workload (see the sketch after this list)
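
On the last bullet: MRI's GC parameters are set through RUBY_GC_* environment variables before the process boots, not through runtime setters. A sketch with illustrative values; verify the effect with GC.stat:

# Set in your process manager, Dockerfile, or shell before boot:
#   RUBY_GC_HEAP_INIT_SLOTS=600000    # pre-allocate slots to cut early GCs
#   RUBY_GC_HEAP_GROWTH_FACTOR=1.25   # grow the heap more gently
#   RUBY_GC_MALLOC_LIMIT=67108864     # raise the malloc GC trigger (64MB)

# Then confirm in a console:
puts GC.stat.slice(:heap_live_slots, :count, :major_gc_count)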

Real-World Case Study: Data Processing Platform

Problem: Memory usage growing to 8GB+ during large data processing, causing server crashes

Root Cause: Inefficient object allocation and no memory management

# Before: Memory-intensive processing
def process_large_dataset
  # Load all data into memory
  all_records = User.all.to_a
  all_records.each do |user|
    process_user_data(user)
  end
end

# After: Memory-optimized processing
# Add `# frozen_string_literal: true` at the top of each file so common
# string literals are allocated once instead of per call.

def process_large_dataset_optimized
  # Use find_each so records are loaded in batches, not all at once
  User.find_each(batch_size: 1000) do |user|
    process_user_data(user)
  end
end

# Memory monitoring
MemoryMonitor.monitor

Result: Memory usage reduced to 2GB, 75% improvement, no more crashes

Additional Benefits: Better stability, improved performance, cost savings on server resources

Concurrency & Threading

19. Thread-Safe Caching

What is Thread-Safe Caching?

Thread-safe caching ensures that cache operations are safe when multiple threads access the same cache simultaneously. This is crucial for Rails applications running in multi-threaded environments to prevent race conditions and data corruption.

Basic Thread-Safe Patterns

# Use a mutex for thread safety
class ThreadSafeCache
  def initialize
    @cache = {}
    @mutex = Mutex.new
  end

  def fetch(key)
    @mutex.synchronize do
      @cache[key] ||= yield
    end
  end

  def get(key)
    @mutex.synchronize do
      @cache[key]
    end
  end

  def set(key, value)
    @mutex.synchronize do
      @cache[key] = value
    end
  end
end

Advanced Thread-Safe Implementations

# Read-write lock for better performance (concurrent-ruby)
require 'concurrent'

class ConcurrentCache
  def initialize
    @cache = Concurrent::Map.new
    @read_write_lock = Concurrent::ReadWriteLock.new
  end

  def fetch(key)
    # Try to get without the lock first
    value = @cache[key]
    return value if value

    # Use the write lock for a cache miss
    @read_write_lock.with_write_lock do
      # Double-check pattern
      value = @cache[key]
      return value if value

      # Compute and store
      @cache[key] = yield
    end
  end

  def get(key)
    @read_write_lock.with_read_lock do
      @cache[key]
    end
  end
end

# Atomic operations with compare-and-swap semantics
class AtomicCache
  def initialize
    @cache = Concurrent::Map.new
  end

  def fetch(key)
    # Atomic get-or-set operation
    @cache.compute_if_absent(key) do
      yield
    end
  end
end

Thread-Safe Cache with Expiration

# Cache with TTL and thread safety
class ThreadSafeTTLCache
  def initialize(default_ttl: 1.hour)
    @cache = Concurrent::Map.new
    @default_ttl = default_ttl
    @cleanup_thread = start_cleanup_thread
  end

  def fetch(key, ttl: nil)
    ttl ||= @default_ttl
    @cache.compute_if_absent(key) do
      { value: yield, expires_at: Time.current + ttl }
    end[:value]
  end

  def get(key)
    entry = @cache[key]
    return nil unless entry

    if entry[:expires_at] < Time.current
      @cache.delete(key)
      return nil
    end

    entry[:value]
  end

  private

  def start_cleanup_thread
    Thread.new do
      loop do
        sleep 60
        cleanup_expired_entries
      end
    end
  end

  def cleanup_expired_entries
    now = Time.current
    @cache.each do |key, entry|
      @cache.delete(key) if entry[:expires_at] < now
    end
  end
end

Thread-Safe Cache with Statistics

# Cache with performance monitoring
class MonitoredThreadSafeCache
  def initialize
    @cache = Concurrent::Map.new
    @stats = Concurrent::AtomicReference.new({ hits: 0, misses: 0, sets: 0 })
  end

  def fetch(key)
    value = get(key)
    if value
      increment_hits
      return value
    end

    increment_misses
    @cache.compute_if_absent(key) do
      increment_sets
      yield
    end
  end

  def get(key)
    @cache[key]
  end

  def stats
    current_stats = @stats.get
    total_requests = current_stats[:hits] + current_stats[:misses]
    hit_rate = total_requests > 0 ? (current_stats[:hits].to_f / total_requests * 100).round(2) : 0

    current_stats.merge(total_requests: total_requests, hit_rate: hit_rate)
  end

  private

  def increment_hits
    @stats.update { |stats| stats.merge(hits: stats[:hits] + 1) }
  end

  def increment_misses
    @stats.update { |stats| stats.merge(misses: stats[:misses] + 1) }
  end

  def increment_sets
    @stats.update { |stats| stats.merge(sets: stats[:sets] + 1) }
  end
end

Thread-Safe Cache Best Practices

  • Use appropriate synchronization mechanisms (Mutex, ReadWriteLock)
  • Implement atomic operations when possible
  • Use Concurrent::Map for better performance
  • Implement proper cache expiration and cleanup
  • Monitor cache performance and hit rates
  • Use double-check pattern to avoid unnecessary locks
  • Implement cache statistics for monitoring
  • Test thread safety thoroughly with concurrent access (see the sketch after this list)
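
A quick way to act on the last bullet: hammer one key from many threads and assert the expensive block ran exactly once. A minimal smoke-test sketch using the Concurrent::Map-based ThreadSafeCache from this section:

require 'concurrent'

cache = ThreadSafeCache.new
calls = Concurrent::AtomicFixnum.new(0)

threads = 50.times.map do
  Thread.new do
    cache.fetch('shared-key') do
      calls.increment
      'expensive value'
    end
  end
end
threads.each(&:join)

puts calls.value # expect 1 for an atomic cache; more indicates a race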

Real-World Case Study: High-Concurrency API

Problem: Cache corruption and race conditions with 1000+ concurrent requests

Root Cause: Non-thread-safe cache implementation

# Before: Non-thread-safe cache
class UnsafeCache
  def initialize
    @cache = {}
  end

  def fetch(key)
    @cache[key] ||= yield
  end
end

# After: Thread-safe cache
class ThreadSafeCache
  def initialize
    @cache = Concurrent::Map.new
  end

  def fetch(key)
    @cache.compute_if_absent(key) do
      yield
    end
  end
end

# Usage in controller (share one instance; a new cache per request
# would always start empty)
USER_CACHE = ThreadSafeCache.new

def show
  @user_data = USER_CACHE.fetch("user_#{params[:id]}") do
    User.includes(:profile, :orders).find(params[:id])
  end
end

Result: Zero cache corruption, 99.9% cache hit rate, 5x better performance

Additional Benefits: Improved reliability, better user experience, reduced database load

Performance Testing

20. Load Testing

What is Load Testing?

Load testing simulates real-world usage patterns to determine how your Rails application performs under various load conditions. It helps identify bottlenecks, capacity limits, and performance degradation points before they affect real users.

Load Testing Tools and Setup

# Using Apache Bench (ab)
# Basic load test
ab -n 1000 -c 10 http://localhost:3000/

# Advanced load test with headers
ab -n 5000 -c 50 -H "Authorization: Bearer token" -H "Content-Type: application/json" http://localhost:3000/api/users

# POST request with data
ab -n 1000 -c 20 -p post_data.json -T "application/json" http://localhost:3000/api/orders

# Using wrk for more realistic testing
wrk -t12 -c400 -d30s http://localhost:3000/

# wrk with Lua scripting
wrk -t12 -c400 -d30s --script=load_test.lua http://localhost:3000/

Custom Load Testing with Ruby

# Custom load testing framework
require 'net/http'
require 'json'
require 'concurrent'

class LoadTester
  def initialize(base_url, concurrency: 10, duration: 60)
    @base_url = base_url
    @concurrency = concurrency
    @duration = duration
    @results = Concurrent::Array.new
  end

  def run_test
    start_time = Time.current

    threads = @concurrency.times.map do
      Thread.new { run_worker(start_time) }
    end
    threads.each(&:join)

    generate_report
  end

  private

  def run_worker(start_time)
    while Time.current - start_time < @duration
      begin
        response_time = measure_request
        @results << { timestamp: Time.current, response_time: response_time, success: true }
      rescue => e
        @results << { timestamp: Time.current, response_time: nil, success: false, error: e.message }
      end
    end
  end

  def measure_request
    start_time = Time.current

    uri = URI(@base_url)
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = uri.scheme == 'https'

    request = Net::HTTP::Get.new(uri)
    request['Authorization'] = 'Bearer test_token'
    http.request(request)

    Time.current - start_time
  end

  def generate_report
    successful_requests = @results.select { |r| r[:success] }
    failed_requests = @results.reject { |r| r[:success] }
    response_times = successful_requests.map { |r| r[:response_time] }

    {
      total_requests: @results.length,
      successful_requests: successful_requests.length,
      failed_requests: failed_requests.length,
      success_rate: (successful_requests.length.to_f / @results.length * 100).round(2),
      avg_response_time: response_times.sum / response_times.length,
      min_response_time: response_times.min,
      max_response_time: response_times.max,
      p95_response_time: percentile(response_times, 95),
      p99_response_time: percentile(response_times, 99)
    }
  end

  def percentile(values, p)
    sorted = values.sort
    index = (p / 100.0 * (sorted.length - 1)).round
    sorted[index]
  end
end

Stress Testing and Capacity Planning

# Stress-testing framework
class StressTester
  def initialize(base_url)
    @base_url = base_url
  end

  def find_breaking_point
    concurrency = 1
    max_concurrency = 1000

    while concurrency <= max_concurrency
      puts "Testing with #{concurrency} concurrent users..."

      result = LoadTester.new(@base_url, concurrency: concurrency, duration: 30).run_test

      if result[:success_rate] < 95 || result[:avg_response_time] > 2.0
        puts "Breaking point found at #{concurrency} concurrent users"
        return concurrency
      end

      concurrency *= 2
    end

    max_concurrency
  end
end

# Capacity planning
def calculate_capacity_requirements
  expected_users = 10_000
  peak_multiplier = 3
  concurrent_percentage = 0.1

  # e.g. 10,000 users * 3 * 0.1 = 3,000 peak concurrent users
  peak_concurrent_users = expected_users * peak_multiplier * concurrent_percentage
  breaking_point = StressTester.new('http://localhost:3000').find_breaking_point

  {
    required_servers: (peak_concurrent_users.to_f / breaking_point).ceil,
    safety_factor: (breaking_point.to_f / peak_concurrent_users).round(2)
  }
end

Performance Regression Testing

# Automated performance regression testing
class PerformanceRegressionTester
  def initialize(baseline_results)
    @baseline = baseline_results
  end

  def test_regression
    current_results = LoadTester.new('http://localhost:3000', concurrency: 50, duration: 60).run_test

    regression_detected = false
    issues = []

    if current_results[:avg_response_time] > @baseline[:avg_response_time] * 1.2
      regression_detected = true
      issues << "Average response time increased by #{((current_results[:avg_response_time] / @baseline[:avg_response_time] - 1) * 100).round(2)}%"
    end

    if current_results[:success_rate] < @baseline[:success_rate] - 5
      regression_detected = true
      issues << "Success rate decreased by #{@baseline[:success_rate] - current_results[:success_rate]}%"
    end

    {
      regression_detected: regression_detected,
      issues: issues,
      current_results: current_results,
      baseline_results: @baseline
    }
  end
end

Load Testing Best Practices

  • Start with realistic load levels and gradually increase
  • Test both read and write operations
  • Monitor system resources during testing
  • Use production-like data and environments
  • Test different user scenarios and workflows
  • Set up automated performance regression testing (a CI wiring sketch follows this list)
  • Document performance baselines and thresholds
  • Test failure scenarios and recovery
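
Wiring the PerformanceRegressionTester above into CI can be as simple as a rake task. A minimal sketch; the baseline file path and task name are assumptions, and it presumes the tester classes from this section are on the load path:

# lib/tasks/performance.rake
namespace :performance do
  desc 'Fail the build if a performance regression is detected'
  task :regression_check do
    require 'json'

    # Baseline produced by a previous, trusted run (path is an assumption)
    baseline = JSON.parse(File.read('config/performance_baseline.json'), symbolize_names: true)

    report = PerformanceRegressionTester.new(baseline).test_regression

    if report[:regression_detected]
      puts 'Performance regression detected:'
      report[:issues].each { |issue| puts "  - #{issue}" }
      exit 1
    else
      puts 'No performance regression detected.'
    end
  end
end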

Real-World Case Study: E-commerce Black Friday

Problem: Site crashing during peak traffic with 50,000+ concurrent users

Root Cause: No load testing or capacity planning

# Before: no load testing
# - Site crashed during peak traffic
# - No performance baselines
# - Unknown capacity limits
# - 100% downtime during peak

# After: comprehensive load testing

# Load test script
def run_black_friday_load_test
  # Test different scenarios
  scenarios = [
    { name: 'homepage',        url: '/',           concurrency: 100 },
    { name: 'product_listing', url: '/products',   concurrency: 200 },
    { name: 'product_detail',  url: '/products/1', concurrency: 150 },
    { name: 'checkout',        url: '/checkout',   concurrency: 50 }
  ]

  results = {}

  scenarios.each do |scenario|
    tester = LoadTester.new("http://localhost:3000#{scenario[:url]}",
                            concurrency: scenario[:concurrency],
                            duration: 300)
    results[scenario[:name]] = tester.run_test
  end

  results
end

# Capacity planning
capacity = calculate_capacity_requirements
puts "Required servers: #{capacity[:required_servers]}"
puts "Safety factor: #{capacity[:safety_factor]}"

Result: 99.9% uptime during Black Friday, 5x capacity increase

Additional Benefits: Better user experience, increased revenue, improved reliability

System-Level Optimization

20. OS-Level Tuning

What is OS-Level Tuning?

OS-level tuning involves optimizing the operating system configuration to maximize Rails application performance. This includes tuning file descriptors, TCP settings, memory management, and kernel parameters to handle high concurrency and throughput efficiently.

File Descriptor Limits

# Check current limits
ulimit -a

# Increase file descriptor limit (current shell only)
ulimit -n 65536

# Permanent limits in /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
root soft nofile 65536
root hard nofile 65536

# System-wide limit in /etc/sysctl.conf
fs.file-max = 2097152

# Apply changes
sysctl -p

TCP and Network Optimization

# TCP connection optimization
# /etc/sysctl.conf

# Increase TCP connection backlog
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000

# TCP buffer sizes
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216

# TCP keepalive settings
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3

# TCP congestion control
net.ipv4.tcp_congestion_control = bbr

# Apply changes
sysctl -p

Memory and Swap Optimization

# Memory management tuning
# /etc/sysctl.conf

# Swappiness (0-100, lower = less swap usage)
vm.swappiness = 10

# Memory pressure settings
vm.dirty_ratio = 15
vm.dirty_background_ratio = 5
vm.dirty_expire_centisecs = 3000

# Huge pages for better performance
vm.nr_hugepages = 1024
vm.hugetlb_shm_group = 1000

# Memory overcommit settings
vm.overcommit_memory = 1
vm.overcommit_ratio = 50

# Apply changes
sysctl -p

I/O and Disk Optimization

# I/O scheduler optimization

# Check current scheduler
cat /sys/block/sda/queue/scheduler

# Set scheduler to deadline or noop for SSDs
echo 'deadline' > /sys/block/sda/queue/scheduler

# I/O queue depth
echo 1024 > /sys/block/sda/queue/nr_requests

# Read-ahead buffer
blockdev --setra 32768 /dev/sda

# Filesystem optimization (ext4)
mount -o noatime,nodiratime,data=writeback /dev/sda1 /data

# /etc/fstab entry
/dev/sda1 /data ext4 defaults,noatime,nodiratime,data=writeback 0 2

Process and Thread Limits

# Process limits
# /etc/security/limits.conf
* soft nproc 32768
* hard nproc 32768
root soft nproc 32768
root hard nproc 32768

# Thread limits (nproc also caps threads per user; the later,
# higher entry takes precedence if both are present)
* soft nproc 65536
* hard nproc 65536

# Stack size (KB)
* soft stack 32768
* hard stack 32768

# Core dump size
* soft core 0
* hard core 0

System Monitoring and Tuning

#!/bin/bash
# System monitoring script

function monitor_system {
  echo "=== System Resources ==="
  echo "CPU Usage: $(top -bn1 | grep 'Cpu(s)' | awk '{print $2}')"
  echo "Memory Usage: $(free -m | awk 'NR==2{printf "%.2f%%", $3*100/$2}')"
  echo "Disk Usage: $(df -h | awk '$NF=="/"{printf "%s", $5}')"
  echo "Load Average: $(uptime | awk -F'load average:' '{print $2}')"

  echo "=== Network Connections ==="
  netstat -an | grep :80 | wc -l
  netstat -an | grep :443 | wc -l

  echo "=== File Descriptors ==="
  lsof | wc -l
  cat /proc/sys/fs/file-nr
}

# Auto-tuning script
function auto_tune {
  # Adjust based on load
  load=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')

  if (( $(echo "$load > 2.0" | bc -l) )); then
    echo "High load detected, adjusting settings..."
    # Increase TCP backlog
    echo 131072 > /proc/sys/net/core/somaxconn
    # Adjust memory pressure
    echo 5 > /proc/sys/vm/dirty_background_ratio
  fi
}

# Run monitoring
monitor_system
auto_tune

OS-Level Tuning Best Practices

  • Monitor system resources continuously
  • Set appropriate file descriptor limits (see the in-app check after this list)
  • Optimize TCP settings for your workload
  • Configure memory management parameters
  • Use appropriate I/O schedulers for your storage
  • Set up process and thread limits
  • Monitor and tune based on actual usage patterns
  • Test changes in staging before production
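
The file-descriptor bullet above can also be verified from inside the app at boot. A minimal sketch, assuming a Rails initializer; the 4096 threshold is an illustrative assumption:

# config/initializers/fd_limit_check.rb
# Warn at boot if the soft file-descriptor limit looks too low for
# a high-concurrency deployment.
soft_limit, hard_limit = Process.getrlimit(:NOFILE)

if soft_limit < 4096
  Rails.logger.warn(
    "File descriptor soft limit is #{soft_limit} (hard: #{hard_limit}); " \
    "consider raising it via ulimit or /etc/security/limits.conf"
  )
end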

Real-World Case Study: High-Traffic Web Server

Problem: Server hitting file descriptor limits and TCP connection drops with 10,000+ concurrent connections

Root Cause: Default OS limits too low for high-traffic application

# Before: default OS settings
# - File descriptor limit: 1024
# - TCP backlog: 128
# - Memory pressure: default
# - Connection drops: 15%

# After: optimized OS settings

# /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536

# /etc/sysctl.conf
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
vm.swappiness = 10
vm.dirty_ratio = 15
vm.dirty_background_ratio = 5

# Apply changes
sysctl -p
ulimit -n 65536

# Monitor results
function monitor_performance {
  echo "Active connections: $(netstat -an | grep :80 | wc -l)"
  echo "File descriptors: $(lsof | wc -l)"
  echo "Memory usage: $(free -m | awk 'NR==2{printf "%.2f%%", $3*100/$2}')"
}

Result: Zero connection drops, 5x more concurrent connections, 99.9% uptime

Additional Benefits: Better user experience, improved reliability, cost savings on infrastructure

21. Application Server Optimization

What is Application Server Optimization?

Application server optimization involves configuring and tuning your Rails application server (like Puma, Unicorn, or Passenger) to handle maximum concurrent requests efficiently while maintaining stability and performance. This includes worker processes, threading, memory management, and load balancing.

Puma Server Configuration

# config/puma.rb
# Optimized Puma configuration for production

# Worker processes (roughly one per CPU core)
workers ENV.fetch("WEB_CONCURRENCY") { 2 }

# Threads per worker
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count

# Preload application for copy-on-write memory savings
preload_app!

# Timeouts
worker_timeout 3600
worker_boot_timeout 60
worker_shutdown_timeout 30

# Bind to a Unix socket (adjust shared_dir to your deploy layout)
shared_dir = "/var/www/myapp/shared"
bind "unix://#{shared_dir}/tmp/sockets/puma.sock"

# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log",
                "#{shared_dir}/log/puma.stderr.log",
                true

# Worker lifecycle hooks: disconnect in the master before forking,
# reconnect in each worker after it boots
before_fork do
  ActiveRecord::Base.connection_pool.disconnect!
end

on_worker_boot do
  ActiveRecord::Base.establish_connection
end

on_worker_shutdown do
  ActiveRecord::Base.connection_pool.disconnect!
end

Unicorn Server Configuration

# config/unicorn.rb
# Optimized Unicorn configuration

# Worker processes
worker_processes 4

# Application path
app_path = "/var/www/myapp"
working_directory app_path

# Socket file
listen "#{app_path}/tmp/sockets/unicorn.sock", backlog: 64

# Logging
stderr_path "#{app_path}/log/unicorn.stderr.log"
stdout_path "#{app_path}/log/unicorn.stdout.log"

# Process management
pid "#{app_path}/tmp/pids/unicorn.pid"

# Timeout settings
timeout 30

# Worker lifecycle
before_fork do |server, worker|
  ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection
end

# Memory management: load the app once in the master
preload_app true

Passenger Configuration

# Optimized Passenger configuration (Apache directives; Passenger is
# configured in the web server config rather than a standalone file)

# Process management
PassengerMaxPoolSize 6
PassengerMinInstances 2
PassengerMaxInstancesPerApp 4

# Memory management (PassengerMemoryLimit requires Passenger Enterprise)
PassengerMemoryLimit 512
PassengerMaxRequests 1000

# Timeout settings
PassengerPoolIdleTime 300
PassengerMaxPreloaderIdleTime 300

# Security
PassengerUserSwitching on
PassengerDefaultUser www-data

# Logging
PassengerLogLevel 3
PassengerLogFile /var/log/passenger.log

Worker Process Optimization

# Calculate an optimal worker count from CPU and memory
require 'etc'

def optimal_worker_count
  cpu_count = Etc.nprocessors
  memory_gb = `free -g`.split("\n")[1].split[1].to_i

  # Rule of thumb: one worker per CPU core, bounded by available memory
  workers = [cpu_count, memory_gb / 2].min

  # Ensure a minimum of 2 workers
  [workers, 2].max
end

# Memory monitoring for workers
class WorkerMonitor
  def self.monitor_workers
    worker_pids = get_worker_pids # see the helper sketch below

    worker_pids.each do |pid|
      memory_usage = get_process_memory(pid)

      if memory_usage > 500 # MB
        Rails.logger.warn "Worker #{pid} using #{memory_usage}MB memory"
      end
    end
  end

  def self.get_process_memory(pid)
    # RSS in MB for the given process
    `ps -o rss= -p #{pid}`.to_i / 1024
  end
end
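
get_worker_pids and restart_worker are referenced here and in the health check further down but never defined. A minimal sketch, assuming the workers are Puma cluster processes; the pgrep pattern is an assumption about process titles:

class WorkerMonitor
  # Find Puma cluster worker PIDs (assumes process titles like
  # "puma: cluster worker 0: ...")
  def self.get_worker_pids
    `pgrep -f 'puma: cluster worker'`.split.map(&:to_i)
  end

  # Ask a worker to exit gracefully; the Puma master forks a replacement
  def self.restart_worker(pid)
    Process.kill('TERM', pid)
  rescue Errno::ESRCH
    Rails.logger.warn "Worker #{pid} already gone"
  end
end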

Load Balancing Strategies

# Nginx upstream configuration -- the balancing methods below are
# alternatives; use one per upstream block

# Round-robin (default)
upstream rails_app {
    server unix:/var/www/myapp/tmp/sockets/puma.sock;
    server unix:/var/www/myapp/tmp/sockets/puma2.sock;
}

# Weighted round-robin
upstream rails_app_weighted {
    server unix:/var/www/myapp/tmp/sockets/puma.sock weight=3;
    server unix:/var/www/myapp/tmp/sockets/puma2.sock weight=1;
}

# Least connections, with passive health checks
upstream rails_app_least_conn {
    least_conn;
    server unix:/var/www/myapp/tmp/sockets/puma.sock max_fails=3 fail_timeout=30s;
    server unix:/var/www/myapp/tmp/sockets/puma2.sock max_fails=3 fail_timeout=30s;

    # Active health checks (the health_check directive requires NGINX Plus)
    # health_check interval=5s fails=3 passes=2;
}

Application Server Monitoring

# Puma stats endpoint (Puma.stats returns a JSON string, so parse it first)
class PumaStatsController < ApplicationController
  def stats
    stats = JSON.parse(Puma.stats)

    render json: {
      workers: stats['workers'],
      booted_workers: stats['booted_workers'],
      old_workers: stats['old_workers'],
      worker_status: stats['worker_status']
    }
  end
end

# Worker health monitoring
class WorkerHealthCheck
  def self.check_workers
    worker_pids = get_worker_pids

    worker_pids.each do |pid|
      begin
        Process.getpgid(pid) # raises Errno::ESRCH if the process is gone
        memory_usage = get_process_memory(pid)

        restart_worker(pid) if memory_usage > 1000 # 1 GB
      rescue Errno::ESRCH
        Rails.logger.error "Worker #{pid} not found"
      end
    end
  end
end

Application Server Best Practices

  • Use preload_app for better memory efficiency
  • Set appropriate worker and thread counts based on CPU and memory
  • Implement proper worker lifecycle hooks
  • Monitor worker memory usage and restart when needed
  • Use Unix sockets instead of TCP for better performance
  • Implement health checks and load balancing
  • Set appropriate timeouts for your application
  • Use process monitoring tools (God, Monit, systemd); a God sketch follows this list
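
For the last bullet, God configs are themselves Ruby. A minimal watch for Puma; the paths and thresholds are illustrative assumptions, not from the original:

# config/puma.god -- restart Puma if it dies or exceeds memory/CPU limits
God.watch do |w|
  w.name     = 'puma'
  w.dir      = '/var/www/myapp'
  w.start    = 'bundle exec puma -C config/puma.rb'
  w.stop     = 'bundle exec pumactl -P tmp/pids/puma.pid stop'
  w.pid_file = '/var/www/myapp/tmp/pids/puma.pid'

  # Restart when the process exceeds these limits
  w.keepalive(memory_max: 800.megabytes, cpu_max: 80.percent)
end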

Real-World Case Study: High-Traffic E-commerce Site

Problem: Application server crashing under load with 5000+ concurrent users

Root Cause: Poor worker configuration and memory leaks

# Before: poor configuration
# - 1 worker process
# - No memory limits
# - No health checks
# - Crashes every 2 hours

# After: optimized configuration
# config/puma.rb
workers 4     # 4 CPU cores
threads 5, 5  # 5 threads per worker
preload_app!

# Memory management: disconnect before fork, reconnect per worker
before_fork do
  ActiveRecord::Base.connection_pool.disconnect!
end

on_worker_boot do
  ActiveRecord::Base.establish_connection
end

# Health monitoring
class WorkerMonitor
  def self.restart_if_needed
    worker_pids.each do |pid|
      memory = get_process_memory(pid)
      restart_worker(pid) if memory > 800
    end
  end
end

Result: Zero crashes, 10x more concurrent users, 99.9% uptime

Additional Benefits: Better response times, improved reliability, cost savings

22. Web Server Optimization

What is Web Server Optimization?

Web server optimization involves configuring and tuning your web server (like Nginx or Apache) to efficiently serve static content, handle SSL termination, implement caching, and properly proxy requests to your Rails application server. This is crucial for overall application performance and security.

Nginx Configuration Optimization

# /etc/nginx/sites-available/myapp
# Optimized Nginx configuration

server {
    listen 80;
    server_name myapp.com;

    # Redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name myapp.com;

    # SSL configuration
    ssl_certificate /etc/ssl/certs/myapp.crt;
    ssl_certificate_key /etc/ssl/private/myapp.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512;
    ssl_prefer_server_ciphers off;

    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;

    # Static file serving
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Application proxy
    location / {
        proxy_pass http://rails_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 30s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # Health check
    location /health {
        access_log off;
        return 200 "healthy\n";
    }
}

Apache Configuration Optimization

# /etc/apache2/sites-available/myapp.conf
# Optimized Apache configuration

<VirtualHost *:80>
    ServerName myapp.com

    # Redirect to HTTPS
    Redirect permanent / https://myapp.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName myapp.com

    # SSL configuration
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/myapp.crt
    SSLCertificateKeyFile /etc/ssl/private/myapp.key

    # Security headers
    Header always set X-Frame-Options DENY
    Header always set X-Content-Type-Options nosniff
    Header always set X-XSS-Protection "1; mode=block"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

    # Compression (mod_deflate must be enabled globally, e.g. a2enmod deflate)
    SetOutputFilter DEFLATE
    SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|rar|zip|exe|flv|mov|wma|mp3|avi|swf|mp?g|mp4|webm|webp)$ no-gzip dont-vary

    # Static file caching
    <LocationMatch "\.(js|css|png|jpg|jpeg|gif|ico|svg)$">
        ExpiresActive On
        ExpiresDefault "access plus 1 year"
        Header set Cache-Control "public, immutable"
    </LocationMatch>

    # Application proxy
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/

    # Timeouts
    ProxyTimeout 30
    ProxyBadHeader Ignore
</VirtualHost>

Static File Optimization

# Rails asset pipeline optimization
# config/environments/production.rb
config.public_file_server.enabled = true
config.public_file_server.headers = {
  'Cache-Control' => 'public, max-age=31536000'
}

# Asset compression
config.assets.compress = true
config.assets.js_compressor = :terser
config.assets.css_compressor = :sass

# Asset fingerprinting
config.assets.digest = true

# CDN configuration
config.action_controller.asset_host = 'https://cdn.myapp.com'

# Nginx static file serving
location /assets/ {
    expires 1y;
    add_header Cache-Control "public, immutable";
    add_header Vary Accept-Encoding;
    access_log off;

    # Try files in order
    try_files $uri $uri/ @rails;
}

# Serve pre-compressed static files
location ~* \.(css|js)$ {
    gzip_static on;
    expires 1y;
    add_header Cache-Control "public, immutable";
}

Load Balancing and High Availability

# Nginx upstream with passive health checks and backup servers
upstream rails_app {
    # Load balancing method
    least_conn;

    # Primary servers
    server 10.0.1.10:3000 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:3000 max_fails=3 fail_timeout=30s;

    # Backup servers
    server 10.0.1.12:3000 backup;
    server 10.0.1.13:3000 backup;
}

# Rate limiting
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

location /api/ {
    limit_req zone=api burst=20 nodelay;
    proxy_pass http://rails_app;
}

location /login {
    limit_req zone=login burst=5 nodelay;
    proxy_pass http://rails_app;
}

# DDoS protection: cap concurrent connections per client IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
limit_conn conn_limit_per_ip 10;

SSL/TLS Optimization

# Modern SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

# HSTS
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Security headers
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

Web Server Monitoring

# Nginx status monitoring
location /nginx_status {
    stub_status on;
    access_log off;
    allow 127.0.0.1;
    deny all;
}

# Custom monitoring script
#!/bin/bash
# monitor_nginx.sh

function check_nginx_status {
  if ! systemctl is-active --quiet nginx; then
    echo "Nginx is down!"
    systemctl restart nginx
  fi
}

function check_ssl_cert {
  cert_file="/etc/ssl/certs/myapp.crt"
  expiry_date=$(openssl x509 -enddate -noout -in "$cert_file" | cut -d= -f2)
  expiry_epoch=$(date -d "$expiry_date" +%s)
  current_epoch=$(date +%s)
  days_left=$(( (expiry_epoch - current_epoch) / 86400 ))

  if [ $days_left -lt 30 ]; then
    echo "SSL certificate expires in $days_left days"
  fi
}

# Run checks
check_nginx_status
check_ssl_cert
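
If you prefer to scrape the stub_status endpoint from Ruby (for dashboards or alerts), a minimal sketch; the endpoint URL matches the location block above, everything else is an assumption:

require 'net/http'

# Fetch and parse Nginx stub_status output, which looks like:
#   Active connections: 291
#   server accepts handled requests
#    16630948 16630948 31070465
response = Net::HTTP.get(URI('http://127.0.0.1/nginx_status'))

active = response[/Active connections:\s+(\d+)/, 1].to_i
accepts, handled, requests = response.lines[2].split.map(&:to_i)

puts "Active: #{active}, accepted: #{accepts}, handled: #{handled}, requests: #{requests}"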

Web Server Best Practices

  • Use HTTP/2 for better performance
  • Enable gzip compression for text-based content
  • Set appropriate cache headers for static assets
  • Implement rate limiting to prevent abuse (an app-level sketch follows this list)
  • Use modern SSL/TLS configurations
  • Set up monitoring and health checks
  • Configure proper logging and log rotation
  • Use CDN for global content delivery
  • Implement security headers
  • Optimize for mobile and different screen sizes
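
The rate-limiting bullet can also be enforced inside Rails as a complement to the Nginx limit_req zones above. A minimal sketch using the rack-attack gem, which this guide does not otherwise cover; the limits and paths are illustrative:

# config/initializers/rack_attack.rb
# Throttle general API traffic per client IP
Rack::Attack.throttle('api/ip', limit: 300, period: 5.minutes) do |req|
  req.ip if req.path.start_with?('/api/')
end

# Stricter throttle for login attempts
Rack::Attack.throttle('logins/ip', limit: 5, period: 1.minute) do |req|
  req.ip if req.path == '/login' && req.post?
end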

Real-World Case Study: Global Content Delivery

Problem: Slow page loads for international users, 8+ second load times

Root Cause: Single server location, no CDN, poor static file optimization

# Before: single server setup
# - One server in the US
# - No CDN
# - Poor caching
# - 8+ second load times for international users

# After: optimized setup

# 1. CDN configuration (config/environments/production.rb)
config.action_controller.asset_host = 'https://cdn.myapp.com'

# 2. Nginx compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;

# 3. Static file caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
    access_log off;
}

# 4. HTTP/2 and SSL optimization
listen 443 ssl http2;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512;

Result: Page load times reduced to 1.5 seconds globally, 80% improvement

Additional Benefits: Better user experience, improved SEO rankings, reduced server costs
