πŸš€ Sidekiq Complete Guide: High-Performance Background Jobs

πŸ”1. Fundamentals & Core Concepts

πŸ’‘ What is Sidekiq?

Sidekiq is a high-performance background job processing system for Ruby that uses Redis as its storage backend and threads for concurrent job execution. It’s designed for speed, reliability, and scalability.

βœ… Pros

  • Extremely high performance (10,000+ jobs/second)
  • Thread-based concurrency (efficient memory usage)
  • Excellent monitoring with Web UI
  • Rich feature set (batches, scheduled jobs, retries)
  • Mature and well-documented
  • Active community and development

❌ Cons

  • Requires Redis infrastructure
  • Thread safety considerations
  • More complex setup than simpler alternatives
  • Memory management challenges
  • Learning curve for advanced features

πŸ—οΈ Sidekiq Architecture

πŸ”„ Sidekiq Processing Flow:

  1. Job Creation: Worker class defined with include Sidekiq::Worker
  2. Job Enqueueing: Job serialized and stored in Redis
  3. Worker Polling: Sidekiq workers poll Redis for jobs
  4. Thread Execution: Jobs executed in worker threads
  5. Result Handling: Success/failure logged and processed

πŸ”§ Key Components

πŸš€ Core Components:

  • Workers: Job classes that process tasks
  • Redis: Storage backend for job queues
  • Processes: Sidekiq worker processes
  • Threads: Concurrent job execution

πŸ“Š Data Structures:

  • Queues: Redis lists for job storage
  • Sets: Failed jobs, scheduled jobs
  • Hashes: Job metadata and statistics
  • Strings: Configuration and locks
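
As a concrete illustration of these structures, here is roughly the JSON payload that lands in a queue list when a job is enqueued. Field names follow Sidekiq's documented job format, but the exact set varies by version, so treat this as a sketch:

```ruby
require 'json'
require 'securerandom'

# A hash shaped like the JSON payload Sidekiq LPUSHes onto a queue list.
job = {
  "class" => "EmailWorker",        # worker class to instantiate
  "args"  => [42],                 # arguments handed to #perform
  "queue" => "default",            # target queue name
  "jid"   => SecureRandom.hex(12), # unique job id
  "retry" => true,                 # whether failures are retried
  "created_at" => Time.now.to_f    # enqueue time, epoch seconds
}

payload = JSON.generate(job)  # this string is what Redis actually stores
decoded = JSON.parse(payload)
puts decoded["class"]  # => "EmailWorker"
```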

βš™οΈ2. Installation & Setup

πŸ“¦ Installation Steps

1. Add Sidekiq to Gemfile

# Gemfile
gem 'sidekiq', '~> 7.0'
gem 'redis', '~> 5.0'

2. Install Dependencies

# Install gems
bundle install

# Install Redis (Ubuntu/Debian)
sudo apt-get install redis-server

# Install Redis (macOS)
brew install redis

# Start Redis
redis-server

3. Configure Sidekiq

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://localhost:6379/0' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://localhost:6379/0' }
end
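
The hard-coded URL above is fine for development; in production you would normally read it from the environment. Sidekiq already falls back to the `REDIS_URL` environment variable by default, so this variant mostly makes that fallback explicit (adjust the variable name to your deployment):

```ruby
# config/initializers/sidekiq.rb -- environment-driven variant
redis_config = { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0") }

Sidekiq.configure_server do |config|
  config.redis = redis_config
end

Sidekiq.configure_client do |config|
  config.redis = redis_config
end
```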

4. Create Sidekiq Configuration

# config/sidekiq.yml
:concurrency: 10
:queues:
  - [critical, 5]
  - [high_priority, 3]
  - [default, 2]
  - [low_priority, 1]
  - [mailers, 1]

# Optional: cap retry attempts (Sidekiq's default is 25 retries
# with an exponential backoff between attempts)
:max_retries: 3

5. Configure Rails Integration

# config/application.rb
config.active_job.queue_adapter = :sidekiq

# config/routes.rb
require 'sidekiq/web'
Rails.application.routes.draw do
  # Protect this route in production -- the Web UI exposes job data
  # and destructive actions (see Sidekiq's docs on securing the UI)
  mount Sidekiq::Web => '/sidekiq'
end

πŸš€3. Basic Usage & Workers

πŸ“ Creating Your First Worker

1. Generate Worker

# Generate worker (on current Sidekiq versions the generator is named
# sidekiq:job; older releases shipped sidekiq:worker)
rails generate sidekiq:worker EmailWorker

# Or create manually
# app/sidekiq/email_worker.rb

2. Basic Worker Structure

class EmailWorker
  # On Sidekiq 6.3+ you can include Sidekiq::Job instead
  # (Sidekiq::Worker remains as a backwards-compatible alias)
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome_email(user).deliver_now
  end
end

3. Enqueue Jobs

# Enqueue job
EmailWorker.perform_async(user.id)

# Enqueue with delay
EmailWorker.perform_in(5.minutes, user.id)

# Enqueue at specific time
EmailWorker.perform_at(1.hour.from_now, user.id)
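
One caveat worth internalizing early: perform_async arguments are serialized to JSON on their way into Redis, so pass only simple JSON-native values (ids, strings, numbers, arrays, hashes), never live ActiveRecord objects. The round trip also turns symbol keys into strings:

```ruby
require 'json'

# Simulate what happens to arguments between enqueue and perform:
# encoded to JSON on the way into Redis, decoded in the worker process.
args = [{ user_id: 42, locale: :en }]
decoded = JSON.parse(JSON.generate(args))

puts decoded.first["user_id"]  # => 42
# Symbol keys came back as strings -- prefer string keys or plain ids.
```

This is why the examples above pass `user.id` rather than `user`: the worker re-fetches the record inside `perform`.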

πŸ“‹4. Queue Management & Priority

🏷️ Queue Management

Custom Queue Assignment

class UrgentEmailWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'critical'
  
  def perform(user_id)
    # Process urgent email
  end
end

class NewsletterWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'low_priority'
  
  def perform(user_id)
    # Process newsletter
  end
end

🎯 Queue Priority System

πŸ’‘ How Queue Priority Works:

Queue priority determines how worker attention is divided between queues. When queues are declared without weights, Sidekiq checks them strictly in the order listed, so earlier queues always win. When weights are given (as below), Sidekiq instead samples queues randomly in proportion to their weights, so higher-weight queues are simply checked more often.

πŸ“Š Priority Configuration

# config/sidekiq.yml
:concurrency: 10
:queues:
  - [critical, 10]      # checked most often (10/19 of fetches)
  - [high_priority, 5]  # 5/19 of fetches
  - [default, 2]        # 2/19 of fetches
  - [low_priority, 1]   # 1/19 of fetches
  - [mailers, 1]        # 1/19 of fetches

πŸ”’ Understanding Queue Weights

πŸ’‘ What is the Number?

The number in [queue_name, weight] is the queue weight or priority weight. It determines how much attention workers give to each queue relative to other queues.

πŸ“Š How Queue Weights Work:
πŸ”„ Weight Distribution Example:
# Configuration
:queues:
  - [critical, 10]      # 10/18 = 55.6% of worker attention
  - [high_priority, 5]  # 5/18 = 27.8% of worker attention
  - [default, 2]        # 2/18 = 11.1% of worker attention
  - [low_priority, 1]   # 1/18 = 5.6% of worker attention

# Total weight: 10 + 5 + 2 + 1 = 18
# Each queue gets its weight / total weight percentage of attention
🎯 Weight Calculation Formula:
Queue Attention % = (Queue Weight / Total Weights) Γ— 100

# Example with [critical, 10]:
# Critical attention = (10 / 18) Γ— 100 = 55.6%
# High Priority attention = (5 / 18) Γ— 100 = 27.8%
# Default attention = (2 / 18) Γ— 100 = 11.1%
# Low Priority attention = (1 / 18) Γ— 100 = 5.6%
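
The percentages above are easy to script; a tiny plain-Ruby helper (no Sidekiq required) that turns a weight table into attention shares:

```ruby
# Convert a queue-weight table into each queue's share of fetch attention.
def attention_percentages(weights)
  total = weights.values.sum.to_f
  weights.transform_values { |w| (w / total * 100).round(1) }
end

WEIGHTS = { critical: 10, high_priority: 5, default: 2, low_priority: 1 }
shares = attention_percentages(WEIGHTS)
puts shares[:critical]  # => 55.6
```
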
βš–οΈ Weight Guidelines:
🎯 High Priority Strategy:
  • Critical: 10 (50% attention)
  • High: 5 (25% attention)
  • Normal: 3 (15% attention)
  • Low: 2 (10% attention)
βš–οΈ Balanced Strategy:
  • Critical: 4 (25% attention)
  • High: 4 (25% attention)
  • Normal: 4 (25% attention)
  • Low: 4 (25% attention)
πŸ”§ How to Define Queue Weights:
πŸ“‹ Step-by-Step Process:
  1. Identify Queue Types: List all your job types and their importance
  2. Assign Relative Weights: Give higher numbers to more important queues
  3. Calculate Percentages: Ensure weights add up to reasonable total
  4. Test and Monitor: Adjust based on actual processing patterns
  5. Optimize: Fine-tune based on performance metrics
πŸ“Š Weight Configuration Examples:
🎯 E-commerce Platform:
# High-priority order processing
:queues:
  - [order_processing, 10]    # 40% - Critical for business
  - [payment_processing, 8]   # 32% - Financial transactions
  - [email_notifications, 4]  # 16% - User communication
  - [analytics, 2]            # 8% - Background analytics
  - [maintenance, 1]          # 4% - Cleanup tasks
πŸ“± Social Media App:
# User experience focused
:queues:
  - [user_actions, 10]        # 40% - Real-time user interactions
  - [content_processing, 8]   # 32% - Media processing
  - [notifications, 4]        # 16% - Push notifications
  - [feed_generation, 2]      # 8% - Content feeds
  - [background_tasks, 1]     # 4% - Maintenance
πŸ“Š Analytics Platform:
# Data processing focused
:queues:
  - [data_processing, 10]     # 40% - Core data analysis
  - [report_generation, 6]    # 24% - User reports
  - [data_import, 4]          # 16% - External data
  - [alerts, 3]               # 12% - User alerts
  - [cleanup, 2]              # 8% - Data cleanup
🚨 Common Weight Mistakes:
❌ Problem: All queues have weight 1
βœ… Solution: Use different weights to prioritize important jobs
❌ Problem: Too many different weights (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
βœ… Solution: Use 3-5 weight levels for simplicity
❌ Problem: Critical queue has low weight
βœ… Solution: Give highest weight to most important queue
❌ Problem: Weights don’t reflect business priorities
βœ… Solution: Align weights with business criticality
πŸ“ˆ Monitoring Weight Effectiveness:
# Check queue sizes (get_queue_weight is a placeholder -- Sidekiq has no
# built-in weight lookup, so read weights from your own config)
Sidekiq::Queue.all.each do |queue|
  puts "#{queue.name}: #{queue.size} jobs, weight: #{get_queue_weight(queue.name)}"
end

# Monitor processing distribution (get_queue_distribution is likewise a
# placeholder for your own per-queue metric)
stats = Sidekiq::Stats.new
puts "Total processed: #{stats.processed}"
puts "Queue distribution: #{get_queue_distribution}"
πŸ”§ Weight Optimization Tips:
🎯 Best Practices:
  • βœ… Start with 3-5 weight levels for simplicity
  • βœ… Use higher weights (8-10) for critical business functions
  • βœ… Use lower weights (1-3) for background/maintenance tasks
  • βœ… Monitor queue sizes and adjust weights accordingly
  • βœ… Test weight changes in staging before production
  • βœ… Document weight decisions and business rationale

πŸ” How Queue Attention Works Behind the Scenes

πŸ’‘ The Backend Implementation:

Under the hood, Sidekiq builds a list of queue names in which each name appears once per unit of weight, shuffles that list before every fetch, and asks Redis for a job from the first non-empty queue. The net effect is weighted random sampling: weights control how frequently each queue is checked relative to the others.

πŸ”„ Weighted Polling Algorithm:

πŸ“‹ A Simplified Model of Queue Attention:
# Simplified model of Sidekiq's queue polling (the real fetcher shuffles
# a weighted queue list before every fetch; the proportions are the same)
class QueuePoller
  QUEUE_WEIGHTS = {
    'critical'      => 10,
    'high_priority' => 5,
    'default'       => 2,
    'low_priority'  => 1
  }.freeze  # total weight: 18

  def poll_queues
    # Poll each queue 'weight' times per cycle, so a weight-10 queue is
    # checked ten times as often as a weight-1 queue
    QUEUE_WEIGHTS.each do |queue_name, weight|
      weight.times { poll_single_queue(queue_name) }
    end
  end

  def poll_single_queue(queue_name)
    return unless jobs_available?(queue_name)  # skip empty queues

    job = dequeue_job(queue_name)  # oldest job in this queue
    process_job(job)
  end
end
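
The deterministic cycle in the model above is a simplification. What current Sidekiq versions actually do (in their BasicFetch strategy) is build a list in which each queue name appears once per unit of weight, shuffle it before every fetch, and deduplicate. A plain-Ruby sketch of that idea, with no Sidekiq dependency:

```ruby
# Build the weighted queue list: each name repeated `weight` times.
WEIGHTED_QUEUES = {
  "critical" => 10, "high_priority" => 5, "default" => 2, "low_priority" => 1
}.flat_map { |name, weight| [name] * weight }

# One fetch: shuffle the weighted list and dedupe -- that is the order in
# which queues are offered to Redis for this fetch.
def fetch_order(weighted_list)
  weighted_list.shuffle.uniq
end

# Over many fetches, "critical" heads the order roughly 10/18 of the time.
firsts = Array.new(10_000) { fetch_order(WEIGHTED_QUEUES).first }
share = firsts.count("critical") / 10_000.0
puts format("critical polled first in %.1f%% of fetches", share * 100)
```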

βš™οΈ Detailed Polling Mechanism:

🎯 Step-by-Step Process:
πŸ”„ Queue Polling Cycle:
  1. Weight Calculation: Sidekiq calculates total weight (10+5+2+1 = 18)
  2. Polling Frequency: Each queue gets polled based on its weight
  3. Weighted Cycle: Critical polled 10 times, High Priority 5 times, etc. (the order is shuffled in practice)
  4. Job Selection: If queue has jobs, worker picks up the oldest job
  5. Processing: Job is processed by available worker thread
  6. Cycle Repeat: Process repeats continuously
πŸ“Š Example Polling Cycle:
# With weights [critical: 10, high_priority: 5, default: 2, low_priority: 1]
# an average polling cycle looks like the sequence below (the real order is
# re-shuffled on every fetch, but the proportions hold):

Cycle 1:  critical β†’ critical β†’ critical β†’ critical β†’ critical β†’ 
          critical β†’ critical β†’ critical β†’ critical β†’ critical β†’ 
          high_priority β†’ high_priority β†’ high_priority β†’ high_priority β†’ high_priority β†’ 
          default β†’ default β†’ 
          low_priority

# This cycle repeats continuously
# Critical gets 10/18 = 55.6% of polling attention
# High Priority gets 5/18 = 27.8% of polling attention
# Default gets 2/18 = 11.1% of polling attention  
# Low Priority gets 1/18 = 5.6% of polling attention

πŸ” Redis Implementation Details:

πŸ“‹ How Sidekiq Stores and Retrieves Jobs:
# Redis data structure for queues
# Each queue is a Redis LIST
redis.lpush("queue:critical", job_data)      # Add job to critical queue
redis.lpush("queue:high_priority", job_data) # Add job to high priority queue
redis.lpush("queue:default", job_data)       # Add job to default queue
redis.lpush("queue:low_priority", job_data)  # Add job to low priority queue

# Simplified sketch -- real Sidekiq hands all queue names to a single
# blocking BRPOP and lets Redis pick the first non-empty one
def poll_queues_with_weights
  weights = {critical: 10, high_priority: 5, default: 2, low_priority: 1}
  
  weights.each do |queue_name, weight|
    weight.times do
      # Try to get job from this queue
      job_data = redis.brpop("queue:#{queue_name}", timeout: 0.1)
      if job_data
        process_job(job_data)
        break  # Found a job, move to next queue
      end
    end
  end
end
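
One detail the sketch above glosses over: real Sidekiq issues a single blocking BRPOP with every queue key at once, and Redis pops from the first non-empty key in the order given. A plain-Ruby model of that ordering rule (no Redis required; names are illustrative):

```ruby
# Model of BRPOP over multiple keys: Redis scans the keys in the order
# given and pops the oldest job from the first non-empty list. Sidekiq
# passes its shuffled, weight-biased queue list as those keys.
def brpop_model(lists, keys)
  keys.each do |key|
    queue = lists[key]
    return [key, queue.shift] if queue && !queue.empty?
  end
  nil  # timed out: no key had a job
end

lists = {
  "queue:critical" => [],                  # empty, skipped instantly
  "queue:default"  => ["job-1", "job-2"]   # first non-empty key wins
}
key, job = brpop_model(lists, ["queue:critical", "queue:default"])
puts "#{key} -> #{job}"  # => queue:default -> job-1
```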

⚑ Worker Thread Behavior:

🎯 How Workers Handle Queue Attention:
πŸ‘₯ Multi-Worker Queue Distribution:
# Multiple workers polling simultaneously
Worker 1: [critical] β†’ [critical] β†’ [high_priority] β†’ [default]
Worker 2: [critical] β†’ [high_priority] β†’ [high_priority] β†’ [low_priority]  
Worker 3: [critical] β†’ [default] β†’ [default] β†’ [low_priority]

# Each worker follows the same weighted polling pattern
# But they may pick up different jobs due to timing
πŸ“Š Concurrency and Queue Attention:
# With 3 workers and weights [10, 5, 2, 1]
# Total polling attempts per cycle: 3 workers Γ— 18 polls = 54 polls

# Distribution across queues:
# Critical: 3 workers Γ— 10 polls = 30 polling attempts (55.6%)
# High Priority: 3 workers Γ— 5 polls = 15 polling attempts (27.8%)
# Default: 3 workers Γ— 2 polls = 6 polling attempts (11.1%)
# Low Priority: 3 workers Γ— 1 poll = 3 polling attempts (5.6%)

πŸ”§ Advanced Queue Attention Features:

🎯 Dynamic Weight Adjustment:
# Sidekiq itself does not resize weights or polling at runtime; this is a
# sketch of custom adaptive logic you could build (helpers are hypothetical)
def adaptive_polling
  queue_sizes = get_queue_sizes
  
  queue_sizes.each do |queue_name, size|
    if size > threshold
      # Increase polling frequency for busy queues
      increase_polling_frequency(queue_name)
    end
  end
end
πŸ“Š Queue Starvation Prevention:
πŸ›‘οΈ Why Low-Weight Queues Aren't Starved:
  • Random Weighted Sampling: Every queue with a nonzero weight has a chance of being checked on each fetch, so none is ignored indefinitely
  • Queue Size Monitoring: Watch queue depth via the Web UI or Sidekiq::Queue and raise a weight if a backlog keeps growing
  • Dedicated Processes: Pin a separate Sidekiq process to a neglected queue when weight tuning alone isn't enough

🚨 Queue Attention Edge Cases:

Issue: High-weight queue has no jobs
Solution: Sidekiq quickly moves to next queue, maintaining overall attention distribution
Issue: Low-weight queue has many jobs
Solution: Jobs wait longer but eventually get processed; consider adjusting weights
Issue: All queues have jobs
Solution: Higher-weight queues get more processing time, maintaining priority
Issue: Worker threads are busy
Solution: Polling continues but job processing waits for available threads

πŸ“ˆ Monitoring Queue Attention:

πŸ” How to Verify Queue Attention is Working:
# Monitor actual queue processing rates (get_queue_weight and
# get_processing_rate are placeholders for your own config and metrics)
Sidekiq::Queue.all.each do |queue|
  puts "#{queue.name}:"
  puts "  Size: #{queue.size}"
  puts "  Weight: #{get_queue_weight(queue.name)}"
  puts "  Processing rate: #{get_processing_rate(queue.name)}"
end

# Check worker distribution (each entry yields a process id, thread id,
# and the in-progress work hash)
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  puts "Worker #{process_id} / #{thread_id}:"
  puts "  Queue: #{work['queue']}"
  puts "  Payload: #{work['payload']}"
end
πŸ“Š Queue Attention Metrics:
πŸ“‹ Key Metrics to Monitor:
  • Polling Frequency: How often each queue is checked
  • Processing Rate: Jobs processed per minute per queue
  • Queue Wait Time: Average time jobs wait in each queue
  • Worker Distribution: Which queues workers are processing
  • Weight Effectiveness: Whether weights match actual processing

πŸ”§ Tuning Queue Attention:

🎯 Optimization Strategies:
πŸ“Š Performance Tuning:
  1. Monitor Queue Sizes: Track if queues are growing or shrinking
  2. Measure Processing Rates: Calculate jobs/minute per queue
  3. Analyze Wait Times: Check if jobs wait too long in any queue
  4. Adjust Weights: Modify weights based on actual performance
  5. Test Changes: Validate weight changes in staging
  6. Monitor Results: Track improvements after weight adjustments

πŸ” Priority Processing Rules

πŸ”„ Priority Processing Flow:
  1. Queue Check Order: Without weights, queues are checked strictly in configuration order
  2. Job Selection: The first non-empty queue checked supplies the next job
  3. FIFO within Queue: Jobs within the same queue are processed first-in-first-out
  4. Strict Depletion: In strict mode, a lower queue is only reached when every higher queue is empty
  5. Weighted Sampling: With weights, the check order is re-shuffled on every fetch in proportion to the weights

πŸ‘€ Single Worker Priority

🎯 Single Worker Behavior:

With one worker process, jobs are pulled one at a time. If queues are listed without weights (strict mode), the worker drains each queue completely before moving to the next; with weights, it still favors high-weight queues but interleaves the rest.

πŸ“‹ Single Worker Example

# Worker processes jobs in this order:
# 1. All critical jobs (highest priority)
# 2. All high_priority jobs
# 3. All default jobs
# 4. All low_priority jobs
# 5. All mailers jobs (lowest priority)

# Example job processing sequence:
# Critical: Job A, Job B, Job C
# High Priority: Job D, Job E
# Default: Job F, Job G, Job H
# Low Priority: Job I
# Mailers: Job J
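
The draining order above (which is guaranteed when queues are listed without weights, i.e. strict mode) can be simulated with plain arrays:

```ruby
# Simulate one worker in strict mode: drain each queue completely
# (FIFO within the queue) before touching the next one.
queues = {
  "critical"      => ["Job A", "Job B", "Job C"],
  "high_priority" => ["Job D", "Job E"],
  "default"       => ["Job F", "Job G", "Job H"],
  "low_priority"  => ["Job I"],
  "mailers"       => ["Job J"]
}

processed = []
queues.each_value do |jobs|          # hashes iterate in insertion order
  processed << jobs.shift until jobs.empty?
end

puts processed.first(5).join(", ")  # => Job A, Job B, Job C, Job D, Job E
```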

⚑ Single Worker Advantages

βœ… Benefits:
  • Guaranteed priority order
  • Simple to understand and debug
  • No resource contention
  • Predictable processing
❌ Limitations:
  • Lower throughput
  • Slower job processing
  • No parallel processing
  • Bottleneck for high volume

πŸ‘₯ Multiple Workers Priority

πŸš€ Multiple Workers Behavior:

With multiple workers, Sidekiq distributes jobs across workers while maintaining priority order. Each worker follows the same priority rules but can process jobs in parallel.

πŸ“Š Multiple Workers Example

# Configuration with 3 workers
# config/sidekiq.yml
:concurrency: 3
:queues:
  - [critical, 5]
  - [high_priority, 3]
  - [default, 2]
  - [low_priority, 1]

# Job distribution across workers:
# Worker 1: Critical Job A, High Priority Job D
# Worker 2: Critical Job B, High Priority Job E  
# Worker 3: Critical Job C, Default Job F

# Critical jobs are picked up first in most fetches; with weights,
# lower queues are interleaved proportionally

πŸ”„ Multiple Workers Processing

πŸ“‹ Multi-Worker Priority Flow:
  1. Queue Polling: All workers poll all queues simultaneously
  2. Priority Check: Each worker checks highest priority queue first
  3. Job Distribution: Available jobs distributed across workers
  4. Parallel Processing: Multiple jobs processed simultaneously
  5. Queue Depletion: In strict mode workers advance only when a queue is empty; with weights they interleave queues proportionally

⚑ Multiple Workers Advantages

βœ… Benefits:
  • Higher throughput
  • Parallel job processing
  • Better resource utilization
  • Faster job completion
  • Maintains priority order
⚠️ Considerations:
  • More complex monitoring
  • Resource contention possible
  • Memory usage increases
  • Network overhead

πŸ”§ Priority Configuration Strategies

πŸ“ˆ High Priority Strategy

# Prioritize critical jobs
:queues:
  - [critical, 10]      # 50% of worker attention
  - [high_priority, 5]  # 25% of worker attention
  - [default, 3]        # 15% of worker attention
  - [low_priority, 2]   # 10% of worker attention

βš–οΈ Balanced Strategy

# Equal distribution with slight priority
:queues:
  - [critical, 3]       # 30% of worker attention
  - [high_priority, 3]  # 30% of worker attention
  - [default, 2]        # 20% of worker attention
  - [low_priority, 2]   # 20% of worker attention

πŸ“Š Monitoring Priority Processing

# Check queue sizes and priorities
Sidekiq::Queue.all.each do |queue|
  puts "#{queue.name}: #{queue.size} jobs"
end

# Monitor which queues workers are busy on
Sidekiq::Workers.new.each do |process_id, _thread_id, work|
  puts "Worker #{process_id}: #{work['queue']}"
end

# Check processing rates by queue
stats = Sidekiq::Stats.new
puts "Processed: #{stats.processed}"
puts "Failed: #{stats.failed}"

🚨 Priority Troubleshooting

Issue: Low priority jobs never get processed
Solution: Check if high priority queues are constantly full, adjust queue weights, or add dedicated low priority workers
Issue: Critical jobs stuck behind normal jobs
Solution: Verify queue configuration order, check for job serialization issues, ensure proper queue assignment
Issue: Uneven job distribution across workers
Solution: Monitor worker queue assignments, check for network issues, verify Redis connection stability

πŸ”§ Priority Best Practices

🎯 Priority Guidelines:
  • βœ… Use descriptive queue names that reflect priority
  • βœ… Set appropriate queue weights based on business needs
  • βœ… Monitor queue sizes and processing rates
  • βœ… Avoid too many priority levels (3-5 is optimal)
  • βœ… Test priority behavior with realistic job volumes
  • βœ… Use dedicated workers for critical queues if needed

πŸ“‹4. Queue Management & Priority

🏷️ Queue Management

Custom Queue Assignment

class UrgentEmailWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'critical'
  
  def perform(user_id)
    # Process urgent email
  end
end

class NewsletterWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'low_priority'
  
  def perform(user_id)
    # Process newsletter
  end
end

🎯 Queue Priority System

πŸ’‘ How Queue Priority Works:

Queue priority determines the order in which jobs are processed. Sidekiq processes queues in the order they’re defined in the configuration, with higher priority queues processed first.

πŸ“Š Priority Configuration

# config/sidekiq.yml
:concurrency: 10
:queues:
  - [critical, 10]      # Highest priority - processed first
  - [high_priority, 5]  # High priority - processed second
  - [default, 2]        # Normal priority - processed third
  - [low_priority, 1]   # Low priority - processed last
  - [mailers, 1]        # Email jobs - processed last

πŸ”’ Understanding Queue Weights

πŸ’‘ What is the Number?

The number in [queue_name, weight] is the queue weight or priority weight. It determines how much attention workers give to each queue relative to other queues.

πŸ“Š How Queue Weights Work:
πŸ”„ Weight Distribution Example:
# Configuration
:queues:
  - [critical, 10]      # 10/18 = 55.6% of worker attention
  - [high_priority, 5]  # 5/18 = 27.8% of worker attention
  - [default, 2]        # 2/18 = 11.1% of worker attention
  - [low_priority, 1]   # 1/18 = 5.6% of worker attention

# Total weight: 10 + 5 + 2 + 1 = 18
# Each queue gets its weight / total weight percentage of attention
🎯 Weight Calculation Formula:
Queue Attention % = (Queue Weight / Total Weights) Γ— 100

# Example with [critical, 10]:
# Critical attention = (10 / 18) Γ— 100 = 55.6%
# High Priority attention = (5 / 18) Γ— 100 = 27.8%
# Default attention = (2 / 18) Γ— 100 = 11.1%
# Low Priority attention = (1 / 18) Γ— 100 = 5.6%
βš–οΈ Weight Guidelines:
🎯 High Priority Strategy:
  • Critical: 10 (50% attention)
  • High: 5 (25% attention)
  • Normal: 3 (15% attention)
  • Low: 2 (10% attention)
βš–οΈ Balanced Strategy:
  • Critical: 4 (25% attention)
  • High: 4 (25% attention)
  • Normal: 4 (25% attention)
  • Low: 4 (25% attention)
πŸ”§ How to Define Queue Weights:
πŸ“‹ Step-by-Step Process:
  1. Identify Queue Types: List all your job types and their importance
  2. Assign Relative Weights: Give higher numbers to more important queues
  3. Calculate Percentages: Ensure weights add up to reasonable total
  4. Test and Monitor: Adjust based on actual processing patterns
  5. Optimize: Fine-tune based on performance metrics
πŸ“Š Weight Configuration Examples:
🎯 E-commerce Platform:
# High-priority order processing
:queues:
  - [order_processing, 10]    # 40% - Critical for business
  - [payment_processing, 8]   # 32% - Financial transactions
  - [email_notifications, 4]  # 16% - User communication
  - [analytics, 2]            # 8% - Background analytics
  - [maintenance, 1]          # 4% - Cleanup tasks
πŸ“± Social Media App:
# User experience focused
:queues:
  - [user_actions, 10]        # 40% - Real-time user interactions
  - [content_processing, 8]   # 32% - Media processing
  - [notifications, 4]        # 16% - Push notifications
  - [feed_generation, 2]      # 8% - Content feeds
  - [background_tasks, 1]     # 4% - Maintenance
πŸ“Š Analytics Platform:
# Data processing focused
:queues:
  - [data_processing, 10]     # 40% - Core data analysis
  - [report_generation, 6]    # 24% - User reports
  - [data_import, 4]          # 16% - External data
  - [alerts, 3]               # 12% - User alerts
  - [cleanup, 2]              # 8% - Data cleanup
🚨 Common Weight Mistakes:
❌ Problem: All queues have weight 1
βœ… Solution: Use different weights to prioritize important jobs
❌ Problem: Too many different weights (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
βœ… Solution: Use 3-5 weight levels for simplicity
❌ Problem: Critical queue has low weight
βœ… Solution: Give highest weight to most important queue
❌ Problem: Weights don’t reflect business priorities
βœ… Solution: Align weights with business criticality
πŸ“ˆ Monitoring Weight Effectiveness:
# Check queue processing rates
Sidekiq::Queue.all.each do |queue|
  puts "#{queue.name}: #{queue.size} jobs, weight: #{get_queue_weight(queue.name)}"
end

# Monitor processing distribution
stats = Sidekiq::Stats.new
puts "Total processed: #{stats.processed}"
puts "Queue distribution: #{get_queue_distribution}"
πŸ”§ Weight Optimization Tips:
🎯 Best Practices:
  • βœ… Start with 3-5 weight levels for simplicity
  • βœ… Use higher weights (8-10) for critical business functions
  • βœ… Use lower weights (1-3) for background/maintenance tasks
  • βœ… Monitor queue sizes and adjust weights accordingly
  • βœ… Test weight changes in staging before production
  • βœ… Document weight decisions and business rationale

πŸ” How Queue Attention Works Behind the Scenes

πŸ’‘ The Backend Implementation:

Sidekiq uses a weighted round-robin algorithm to distribute worker attention across queues. The weights determine how frequently each queue is polled relative to others.

πŸ”„ Weighted Round-Robin Algorithm:

πŸ“‹ How Sidekiq Implements Queue Attention:
# Sidekiq's internal queue polling logic (simplified)
class QueuePoller
  def poll_queues
    # Calculate polling frequency based on weights
    queue_weights = {
      'critical' => 10,
      'high_priority' => 5,
      'default' => 2,
      'low_priority' => 1
    }
    
    total_weight = queue_weights.values.sum  # 18
    
    # Poll each queue based on its weight
    queue_weights.each do |queue_name, weight|
      polling_frequency = weight.to_f / total_weight
      
      # Poll this queue 'weight' times for every cycle
      weight.times do
        poll_single_queue(queue_name)
      end
    end
  end
  
  def poll_single_queue(queue_name)
    # Check if queue has jobs
    if jobs_available?(queue_name)
      # Pick up job from this queue
      job = dequeue_job(queue_name)
      process_job(job)
    end
  end
end

βš™οΈ Detailed Polling Mechanism:

🎯 Step-by-Step Process:
πŸ”„ Queue Polling Cycle:
  1. Weight Calculation: Sidekiq calculates total weight (10+5+2+1 = 18)
  2. Polling Frequency: Each queue gets polled based on its weight
  3. Round-Robin Cycle: Critical polled 10 times, High Priority 5 times, etc.
  4. Job Selection: If queue has jobs, worker picks up the oldest job
  5. Processing: Job is processed by available worker thread
  6. Cycle Repeat: Process repeats continuously
πŸ“Š Example Polling Cycle:
# With weights [critical: 10, high_priority: 5, default: 2, low_priority: 1]
# One complete polling cycle looks like:

Cycle 1:  critical β†’ critical β†’ critical β†’ critical β†’ critical β†’ 
          critical β†’ critical β†’ critical β†’ critical β†’ critical β†’ 
          high_priority β†’ high_priority β†’ high_priority β†’ high_priority β†’ high_priority β†’ 
          default β†’ default β†’ 
          low_priority

# This cycle repeats continuously
# Critical gets 10/18 = 55.6% of polling attention
# High Priority gets 5/18 = 27.8% of polling attention
# Default gets 2/18 = 11.1% of polling attention  
# Low Priority gets 1/18 = 5.6% of polling attention

πŸ” Redis Implementation Details:

πŸ“‹ How Sidekiq Stores and Retrieves Jobs:
# Redis data structure for queues
# Each queue is a Redis LIST
redis.lpush("queue:critical", job_data)      # Add job to critical queue
redis.lpush("queue:high_priority", job_data) # Add job to high priority queue
redis.lpush("queue:default", job_data)       # Add job to default queue
redis.lpush("queue:low_priority", job_data)  # Add job to low priority queue

# Sidekiq polls queues in weighted order
def poll_queues_with_weights
  weights = {critical: 10, high_priority: 5, default: 2, low_priority: 1}
  
  weights.each do |queue_name, weight|
    weight.times do
      # Try to get job from this queue
      job_data = redis.brpop("queue:#{queue_name}", timeout: 0.1)
      if job_data
        process_job(job_data)
        break  # Found a job, move to next queue
      end
    end
  end
end

⚑ Worker Thread Behavior:

🎯 How Workers Handle Queue Attention:
πŸ‘₯ Multi-Worker Queue Distribution:
# Multiple workers polling simultaneously
Worker 1: [critical] β†’ [critical] β†’ [high_priority] β†’ [default]
Worker 2: [critical] β†’ [high_priority] β†’ [high_priority] β†’ [low_priority]  
Worker 3: [critical] β†’ [default] β†’ [default] β†’ [low_priority]

# Each worker follows the same weighted polling pattern
# But they may pick up different jobs due to timing
πŸ“Š Concurrency and Queue Attention:
# With 3 workers and weights [10, 5, 2, 1]
# Total polling attempts per cycle: 3 workers Γ— 18 polls = 54 polls

# Distribution across queues:
# Critical: 3 workers Γ— 10 polls = 30 polling attempts (55.6%)
# High Priority: 3 workers Γ— 5 polls = 15 polling attempts (27.8%)
# Default: 3 workers Γ— 2 polls = 6 polling attempts (11.1%)
# Low Priority: 3 workers Γ— 1 poll = 3 polling attempts (5.6%)

πŸ”§ Advanced Queue Attention Features:

🎯 Dynamic Weight Adjustment:
# Sidekiq can dynamically adjust polling based on queue size
def adaptive_polling
  queue_sizes = get_queue_sizes
  
  queue_sizes.each do |queue_name, size|
    if size > threshold
      # Increase polling frequency for busy queues
      increase_polling_frequency(queue_name)
    end
  end
end
πŸ“Š Queue Starvation Prevention:
πŸ›‘οΈ Anti-Starvation Mechanisms:
  • Maximum Polling Delay: Prevents low-priority queues from being ignored too long
  • Queue Size Monitoring: Adjusts polling if queues grow too large
  • Fairness Algorithm: Ensures all queues get some attention
  • Dynamic Weight Adjustment: Can modify weights based on queue backlog

🚨 Queue Attention Edge Cases:

Issue: High-weight queue has no jobs
Solution: Sidekiq quickly moves to next queue, maintaining overall attention distribution
Issue: Low-weight queue has many jobs
Solution: Jobs wait longer but eventually get processed; consider adjusting weights
Issue: All queues have jobs
Solution: Higher-weight queues get more processing time, maintaining priority
Issue: Worker threads are busy
Solution: Polling continues but job processing waits for available threads

πŸ“ˆ Monitoring Queue Attention:

πŸ” How to Verify Queue Attention is Working:
# Monitor actual queue processing rates
Sidekiq::Queue.all.each do |queue|
  puts "#{queue.name}:"
  puts "  Size: #{queue.size}"
  puts "  Weight: #{get_queue_weight(queue.name)}"
  puts "  Processing rate: #{get_processing_rate(queue.name)}"
end

# Check worker distribution
Sidekiq::Workers.new.each do |worker|
  puts "Worker #{worker['pid']}:"
  puts "  Current job: #{worker['payload']}"
  puts "  Queue: #{worker['queue']}"
  puts "  Busy: #{worker['busy']}"
end
πŸ“Š Queue Attention Metrics:
πŸ“‹ Key Metrics to Monitor:
  • Polling Frequency: How often each queue is checked
  • Processing Rate: Jobs processed per minute per queue
  • Queue Wait Time: Average time jobs wait in each queue
  • Worker Distribution: Which queues workers are processing
  • Weight Effectiveness: Whether weights match actual processing

πŸ”§ Tuning Queue Attention:

🎯 Optimization Strategies:
πŸ“Š Performance Tuning:
  1. Monitor Queue Sizes: Track if queues are growing or shrinking
  2. Measure Processing Rates: Calculate jobs/minute per queue
  3. Analyze Wait Times: Check if jobs wait too long in any queue
  4. Adjust Weights: Modify weights based on actual performance
  5. Test Changes: Validate weight changes in staging
  6. Monitor Results: Track improvements after weight adjustments

πŸ” Priority Processing Rules

πŸ”„ Priority Processing Flow:
  1. Queue Order: Sidekiq processes queues in configuration order
  2. Job Selection: Takes jobs from highest priority queue first
  3. FIFO within Queue: Jobs within same queue processed first-in-first-out
  4. Queue Depletion: Only moves to next queue when current queue is empty
  5. Round Robin: Cycles through queues based on priority weights

πŸ‘€ Single Worker Priority

🎯 Single Worker Behavior:

With one worker, Sidekiq processes jobs sequentially based on queue priority. The worker focuses on one queue at a time until it’s empty.

πŸ“‹ Single Worker Example

# Worker processes jobs in this order:
# 1. All critical jobs (highest priority)
# 2. All high_priority jobs
# 3. All default jobs
# 4. All low_priority jobs
# 5. All mailers jobs (lowest priority)

# Example job processing sequence:
# Critical: Job A, Job B, Job C
# High Priority: Job D, Job E
# Default: Job F, Job G, Job H
# Low Priority: Job I
# Mailers: Job J

⚑ Single Worker Advantages

βœ… Benefits:
  • Guaranteed priority order
  • Simple to understand and debug
  • No resource contention
  • Predictable processing
❌ Limitations:
  • Lower throughput
  • Slower job processing
  • No parallel processing
  • Bottleneck for high volume

πŸ‘₯ Multiple Workers Priority

πŸš€ Multiple Workers Behavior:

With multiple workers, Sidekiq distributes jobs across workers while maintaining priority order. Each worker follows the same priority rules but can process jobs in parallel.

πŸ“Š Multiple Workers Example

# Configuration with 3 workers
# config/sidekiq.yml
:concurrency: 3
:queues:
  - [critical, 5]
  - [high_priority, 3]
  - [default, 2]
  - [low_priority, 1]

# Job distribution across workers:
# Worker 1: Critical Job A, High Priority Job D
# Worker 2: Critical Job B, High Priority Job E  
# Worker 3: Critical Job C, Default Job F

# With these weights, critical jobs are strongly favored, though
# selection is probabilistic rather than strictly ordered

πŸ”„ Multiple Workers Processing

πŸ“‹ Multi-Worker Priority Flow:
  1. Queue Polling: All workers poll all queues simultaneously
  2. Priority Check: Each worker checks highest priority queue first
  3. Job Distribution: Available jobs distributed across workers
  4. Parallel Processing: Multiple jobs processed simultaneously
  5. Queue Depletion: Workers move to next priority queue when current is empty

⚑ Multiple Workers Advantages

βœ… Benefits:
  • Higher throughput
  • Parallel job processing
  • Better resource utilization
  • Faster job completion
  • Maintains priority order
⚠️ Considerations:
  • More complex monitoring
  • Resource contention possible
  • Memory usage increases
  • Network overhead

πŸ”§ Priority Configuration Strategies

πŸ“ˆ High Priority Strategy

# Prioritize critical jobs
:queues:
  - [critical, 10]      # 50% of worker attention
  - [high_priority, 5]  # 25% of worker attention
  - [default, 3]        # 15% of worker attention
  - [low_priority, 2]   # 10% of worker attention

βš–οΈ Balanced Strategy

# Equal distribution with slight priority
:queues:
  - [critical, 3]       # 30% of worker attention
  - [high_priority, 3]  # 30% of worker attention
  - [default, 2]        # 20% of worker attention
  - [low_priority, 2]   # 20% of worker attention
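The attention percentages in the comments come from normalizing each weight by the sum of all weights. A quick sketch of that arithmetic:

```ruby
# Convert queue weights into the approximate share of polling
# attention each queue receives (weight / sum of weights).
def attention_shares(weights)
  total = weights.values.sum.to_f
  weights.transform_values { |w| (100.0 * w / total).round(1) }
end

shares = attention_shares("critical" => 10, "high_priority" => 5,
                          "default" => 3, "low_priority" => 2)
shares.each { |q, pct| puts "#{q}: #{pct}%" }
# critical: 50.0%, high_priority: 25.0%, default: 15.0%, low_priority: 10.0%
```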

πŸ“Š Monitoring Priority Processing

# Check queue sizes and priorities
Sidekiq::Queue.all.each do |queue|
  puts "#{queue.name}: #{queue.size} jobs"
end

# Monitor worker queue distribution
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  puts "Worker #{process_id}: #{work['queue']}"
end

# Check processing rates by queue
stats = Sidekiq::Stats.new
puts "Processed: #{stats.processed}"
puts "Failed: #{stats.failed}"

🚨 Priority Troubleshooting

Issue: Low priority jobs never get processed
Solution: Check if high priority queues are constantly full, adjust queue weights, or add dedicated low priority workers
Issue: Critical jobs stuck behind normal jobs
Solution: Verify queue configuration order, check for job serialization issues, ensure proper queue assignment
Issue: Uneven job distribution across workers
Solution: Monitor worker queue assignments, check for network issues, verify Redis connection stability

πŸ”§ Priority Best Practices

🎯 Priority Guidelines:
  • βœ… Use descriptive queue names that reflect priority
  • βœ… Set appropriate queue weights based on business needs
  • βœ… Monitor queue sizes and processing rates
  • βœ… Avoid too many priority levels (3-5 is optimal)
  • βœ… Test priority behavior with realistic job volumes
  • βœ… Use dedicated workers for critical queues if needed

βš™οΈ4. Configuration & Optimization

πŸ”§ Advanced Configuration

Redis Configuration

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: 'redis://localhost:6379/0' }
end

Sidekiq.configure_client do |config|
  config.redis = { url: 'redis://localhost:6379/0' }
end

Concurrency Settings

# config/sidekiq.yml
:concurrency: 25        # Number of threads per worker
:queues:
  - [critical, 5]       # Highest priority
  - [high_priority, 3]  # High priority
  - [default, 2]        # Normal priority
  - [low_priority, 1]   # Low priority
  - [mailers, 1]        # Email jobs

:max_retries: 3         # Global default for retry attempts
:timeout: 30            # Shutdown timeout in seconds (not a per-job limit)

Environment-Specific Configuration

# config/environments/production.rb
config.active_job.queue_name_prefix = Rails.env
config.active_job.queue_name_delimiter = '.'

# config/sidekiq.yml
production:
  :concurrency: 50
  :queues:
    - [critical, 10]
    - [high_priority, 5]
    - [default, 3]
    - [low_priority, 1]

development:
  :concurrency: 5
  :queues:
    - [default, 1]
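Sidekiq applies the environment section matching the current environment on top of the top-level defaults. A minimal pure-Ruby sketch of that merge (the YAML here uses plain string keys for simplicity; real sidekiq.yml uses `:symbol:`-style keys, which Sidekiq handles itself):

```ruby
require "yaml"

# Hypothetical sidekiq.yml-style content: top-level defaults plus
# an environment section that overrides them.
raw = <<~YAML
  concurrency: 25
  queues:
    - [default, 1]
  production:
    concurrency: 50
YAML

def config_for(yaml, env)
  all = YAML.safe_load(yaml)
  base = all.reject { |_k, v| v.is_a?(Hash) }  # environment sections are hashes
  base.merge(all[env] || {})
end

puts config_for(raw, "production")["concurrency"]   # 50 (overridden)
puts config_for(raw, "development")["concurrency"]  # 25 (falls back to default)
```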

πŸš€ Performance Optimization

🎯 Optimization Strategies:

  • Concurrency Tuning: Adjust based on CPU cores and memory
  • Queue Prioritization: Use appropriate queue priorities
  • Redis Optimization: Configure Redis for performance
  • Memory Management: Monitor and optimize memory usage
  • Job Batching: Use batch processing for efficiency

Memory Optimization

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  # Enable garbage collection
  config.on(:startup) do
    GC.auto_compact = true
  end
  
  # Memory monitoring
  config.on(:heartbeat) do
    GC.start if GC.stat[:heap_allocated_pages] > 1000
  end
end

Connection Pooling

# config/initializers/sidekiq.rb
# Sidekiq manages its own Redis connection pool internally, sized
# from the concurrency setting; tune it through the redis options
# rather than supplying a pool object.
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDIS_URL'], pool_timeout: 3 }
end

πŸ“Š5. Monitoring & Web UI

πŸ–₯️ Sidekiq Web UI

Setup Web UI

# config/routes.rb
require 'sidekiq/web'

Rails.application.routes.draw do
  # Basic mount
  mount Sidekiq::Web => '/sidekiq'
  
  # With authentication
  authenticate :user, lambda { |u| u.admin? } do
    mount Sidekiq::Web => '/sidekiq'
  end
end

Web UI Features

πŸ“ˆ Dashboard:
  • Real-time statistics
  • Queue sizes and processing rates
  • Worker status and health
  • Failed job counts
πŸ”§ Management:
  • Retry failed jobs
  • Delete jobs from queues
  • View job details and arguments
  • Monitor worker processes

πŸ“Š Monitoring Commands

CLI Monitoring

# Check Sidekiq processes
ps aux | grep sidekiq

# Check Redis connection
redis-cli ping

# Monitor queues, stats, and processes in real time
# (sidekiqmon ships with Sidekiq 6+; sidekiqctl was removed in 6.0)
bundle exec sidekiqmon

Ruby API Monitoring

# Check queue sizes
Sidekiq::Queue.new.size
Sidekiq::Queue.new('critical').size

# Check worker process status (ProcessSet holds per-process info)
Sidekiq::ProcessSet.new.each do |process|
  puts "PID: #{process['pid']}"
  puts "Threads: #{process['concurrency']}"
  puts "Busy: #{process['busy']}"
end

# Count threads currently running jobs
Sidekiq::Workers.new.size

# Check dead jobs (retries exhausted) and jobs awaiting retry
Sidekiq::DeadSet.new.size
Sidekiq::RetrySet.new.size

# Get statistics
Sidekiq::Stats.new.processed
Sidekiq::Stats.new.failed

πŸ” Advanced Monitoring

Custom Monitoring

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.on(:startup) do
    Rails.logger.info "Sidekiq started with #{config[:concurrency]} threads"
  end
  
  config.on(:shutdown) do
    Rails.logger.info "Sidekiq shutting down"
  end
  
  config.on(:heartbeat) do
    # Custom heartbeat logic
    Rails.logger.info "Sidekiq heartbeat"
  end
end

Health Checks

# lib/sidekiq_health_check.rb
class SidekiqHealthCheck
  def self.healthy?
    begin
      # Check Redis connection
      Sidekiq.redis { |conn| conn.ping }
      
      # Check that at least one Sidekiq process is alive
      return false if Sidekiq::ProcessSet.new.size == 0
      
      # Check queue health (flag badly backed-up queues)
      queues = Sidekiq::Queue.all
      return false if queues.any? { |q| q.size > 1000 }
      
      true
    rescue => e
      Rails.logger.error "Sidekiq health check failed: #{e.message}"
      false
    end
  end
end

πŸ”₯6. Advanced Features

πŸ“¦ Batch Processing

Creating Batches (Sidekiq Pro)

# Create a batch (Sidekiq::Batch requires Sidekiq Pro)
batch = Sidekiq::Batch.new
batch.on(:success, BatchCallback, { 'user_id' => user.id })

batch.jobs do
  users.each do |user|
    EmailWorker.perform_async(user.id)
  end
end

Batch Callbacks

class BatchCallback
  def on_success(status, options)
    user_id = options['user_id']
    Rails.logger.info "Batch completed for user #{user_id}"
  end
  
  def on_death(status, options)
    Rails.logger.error "Batch failed: #{status.failures}"
  end
end

⏰ Scheduled Jobs

Delayed Jobs

# Schedule job for later
EmailWorker.perform_in(5.minutes, user.id)

# Schedule job for specific time
EmailWorker.perform_at(1.hour.from_now, user.id)

# Recurring jobs with cron syntax (requires the sidekiq-cron gem)
class DailyReportWorker
  include Sidekiq::Worker
  
  def perform
    # Generate daily report
  end
end

# In initializer
Sidekiq::Cron::Job.create(
  name: 'Daily Report',
  cron: '0 9 * * *',  # Every day at 9 AM
  class: 'DailyReportWorker'
)
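sidekiq-cron handles the cron parsing; for a one-off equivalent using plain perform_at, the next 9 AM occurrence can be computed directly. A sketch (the helper name is ours, and the naive 24-hour addition ignores DST transitions):

```ruby
# Compute the next occurrence of a given hour-of-day, suitable for
# EmailWorker.perform_at(next_run_at(9), user.id)-style scheduling.
def next_run_at(hour, now: Time.now)
  candidate = Time.new(now.year, now.month, now.day, hour, 0, 0)
  # If today's occurrence already passed, schedule tomorrow's
  candidate > now ? candidate : candidate + 24 * 60 * 60
end

now = Time.new(2024, 5, 1, 10, 30, 0)
puts next_run_at(9, now: now)   # tomorrow 09:00 (today's 9 AM has passed)
puts next_run_at(12, now: now)  # today 12:00
```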

πŸ”„ Retry Logic

Custom Retry Strategies

class ApiCallWorker
  include Sidekiq::Worker
  sidekiq_options retry: 5, backtrace: true
  
  # Called once all retries are exhausted
  sidekiq_retries_exhausted do |job, exception|
    Rails.logger.error "Final failure for #{job['class']}: #{exception.message}"
  end
  
  def perform(api_url, data)
    response = HTTP.get(api_url, json: data)
    
    if response.status.success?
      process_response(response)
    else
      # Raising lets Sidekiq's retry machinery take over
      raise "API call failed: #{response.status}"
    end
  end
end
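Between retries, Sidekiq's documented default backoff grows polynomially with the retry count. A deterministic sketch of that curve (jitter omitted; the exact constants vary between Sidekiq versions):

```ruby
# Approximate Sidekiq's default retry delay: (count ** 4) + 15 seconds.
# The real implementation adds random jitter on top, omitted here.
def retry_delay(count)
  (count**4) + 15
end

(0..5).each { |c| puts "retry #{c}: ~#{retry_delay(c)}s" }
# Delays ramp from seconds to minutes: retry 0 waits ~15s, retry 5 ~640s.
```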

πŸ”’ Job Locks

Unique Jobs (Sidekiq Enterprise)

class UniqueEmailWorker
  include Sidekiq::Worker
  # unique_for is a Sidekiq Enterprise option; the sidekiq-unique-jobs
  # gem offers a similar feature for OSS Sidekiq
  sidekiq_options unique_for: 1.hour
  
  def perform(user_id)
    # Only one job per user per hour
    send_email(user_id)
  end
end

Distributed Locks

class DataSyncWorker
  include Sidekiq::Worker
  
  def perform(data_id)
    # Use Redis for distributed locking
    Sidekiq.redis do |conn|
      lock_key = "sync_lock:#{data_id}"
      
      if conn.set(lock_key, 1, nx: true, ex: 300)  # 5 minute lock
        begin
          sync_data(data_id)
        ensure
          conn.del(lock_key)
        end
      else
        Rails.logger.info "Sync already in progress for #{data_id}"
      end
    end
  end
end

πŸ“Š Job Statistics

Custom Metrics

class MetricsWorker
  include Sidekiq::Worker
  
  def perform(metric_name, value)
    # Track custom metrics (assumes a StatsD client such as statsd-instrument)
    StatsD.increment("sidekiq.job.#{metric_name}", value)
  end
  end
end

# Middleware for automatic metrics
class MetricsMiddleware
  def call(worker, job, queue)
    start_time = Time.current
    
    yield
    
    duration = Time.current - start_time
    StatsD.timing("sidekiq.job.duration", duration)
    StatsD.increment("sidekiq.job.success")
  rescue => e
    StatsD.increment("sidekiq.job.failure")
    raise e
  end
end
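Middleware only runs once it is registered on the server chain. A typical registration, assuming the MetricsMiddleware class above lives somewhere autoloadable:

```ruby
# config/initializers/sidekiq.rb
# Register the middleware on the server-side chain so it wraps
# every job execution.
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add MetricsMiddleware
  end
end
```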

πŸ”§7. Troubleshooting & Best Practices

🚨 Common Issues

Issue: High memory usage
Solution: Reduce concurrency, implement garbage collection, monitor memory per worker
Issue: Jobs stuck in queue
Solution: Check worker processes, Redis connection, job serialization
Issue: Redis connection errors
Solution: Check Redis server, connection pool settings, network connectivity
Issue: Jobs failing repeatedly
Solution: Check error logs, implement proper error handling, fix underlying issues

βœ… Best Practices

🎯 Job Design:

  • βœ… Keep jobs idempotent (safe to retry)
  • βœ… Use small, focused jobs
  • βœ… Handle exceptions properly
  • βœ… Use appropriate queue priorities
  • βœ… Implement proper logging

Configuration:

  • βœ… Set appropriate concurrency levels
  • βœ… Configure Redis properly
  • βœ… Use connection pooling
  • βœ… Monitor memory usage
  • βœ… Implement health checks

πŸ“‹ Production Checklist

🟒 System Health:

  • βœ… Workers are running and healthy
  • βœ… Redis connection is stable
  • βœ… Queue sizes are reasonable
  • βœ… Memory usage is under control
  • βœ… Failed jobs are being handled

⚑ Performance:

  • βœ… Job processing times are acceptable
  • βœ… Concurrency is optimized
  • βœ… Monitoring is in place
  • βœ… Error logs are being reviewed
  • βœ… Backup and recovery procedures exist

🚨 Emergency Procedures

Worker Restart

# Quiet workers (finish current jobs, take no new ones)
kill -TSTP <sidekiq_pid>

# Graceful shutdown (waits up to the configured :timeout)
kill -TERM <sidekiq_pid>

# Restart via your process supervisor (sidekiqctl was removed in Sidekiq 6.0)
sudo systemctl restart sidekiq

# Check worker status
ps aux | grep sidekiq

Queue Management

# Clear a specific queue (failures live in the retry/dead sets, not a queue)
Sidekiq::Queue.new('low_priority').clear

# Clear all queues (emergency only)
Sidekiq::Queue.all.each(&:clear)

# Retry failed jobs
Sidekiq::RetrySet.new.each(&:retry)

# Delete dead jobs
Sidekiq::DeadSet.new.clear

πŸ“š8. Reference & Commands

πŸ”§ Sidekiq Commands

Command | Description | Usage
bundle exec sidekiq | Start Sidekiq with default configuration | bundle exec sidekiq -C config/sidekiq.yml
bundle exec sidekiq -q high,default | Start worker for specific queues | Queue-specific processing
bundle exec sidekiq -c 10 | Start worker with specific concurrency | Performance tuning
bundle exec sidekiq -r ./config/initializers/sidekiq.rb | Start with a custom require file | Custom setup
bundle exec sidekiqmon | Monitor processes and stats (sidekiqctl was removed in Sidekiq 6.0) | bundle exec sidekiqmon
kill -TSTP <pid> | Quiet workers (finish current jobs, take no new ones) | Maintenance
kill -TERM <pid> | Gracefully stop Sidekiq workers | Deployment
pgrep -f sidekiq | Check running Sidekiq processes | Monitoring
MyWorker.perform_async(args) | Enqueue job for immediate processing | Job enqueueing
MyWorker.perform_in(3600, args) | Schedule job to run in 1 hour | Delayed jobs
MyWorker.perform_at(Time.now + 1.hour, args) | Schedule job for specific time | Scheduled jobs
Sidekiq::Queue.new.size | Get default queue size | Queue monitoring
Sidekiq::Queue.new('default').size | Get specific queue size | Queue monitoring
Sidekiq::Stats.new.processed | Get total processed jobs count | Statistics

πŸ“Š Monitoring Commands

Queue Monitoring

# Check all queue sizes
Sidekiq::Queue.all.each do |queue|
  puts "#{queue.name}: #{queue.size} jobs"
end

# Check worker process status
Sidekiq::ProcessSet.new.each do |process|
  puts "PID: #{process['pid']}, Busy: #{process['busy']}"
end

# Check dead and retrying jobs
puts "Dead: #{Sidekiq::DeadSet.new.size}"
puts "Retrying: #{Sidekiq::RetrySet.new.size}"

Performance Monitoring

# Get processing statistics
stats = Sidekiq::Stats.new
puts "Processed: #{stats.processed}"
puts "Failed: #{stats.failed}"
puts "Enqueued: #{stats.enqueued}"

# Check Redis memory (Sidekiq.redis_info wraps the Redis INFO command)
redis_info = Sidekiq.redis_info
puts "Redis Memory: #{redis_info['used_memory_human']}"

πŸ”§ Configuration Examples

Production Configuration

# config/sidekiq.yml
:concurrency: 25
:queues:
  - [critical, 10]
  - [high_priority, 5]
  - [default, 3]
  - [low_priority, 1]
  - [mailers, 1]

:max_retries: 3   # Global default for retry attempts
:timeout: 30      # Shutdown timeout in seconds

# Environment-specific
production:
  :concurrency: 50
  :queues:
    - [critical, 15]
    - [high_priority, 10]
    - [default, 5]
    - [low_priority, 2]

development:
  :concurrency: 5
  :queues:
    - [default, 1]

Redis Configuration

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = {
    url: ENV['REDIS_URL'] || 'redis://localhost:6379/0',
    password: ENV['REDIS_PASSWORD'],
    db: 0,
    network_timeout: 5,
    pool_timeout: 5
    # Sidekiq 7 sizes its connection pool from the concurrency
    # setting automatically; the old :size option is unnecessary
  }
end

Sidekiq.configure_client do |config|
  config.redis = {
    url: ENV['REDIS_URL'] || 'redis://localhost:6379/0',
    password: ENV['REDIS_PASSWORD'],
    db: 0
  }
end

πŸ“š Additional Resources

πŸ”— Official Resources:

  • Sidekiq Documentation: https://github.com/sidekiq/sidekiq
  • Sidekiq Wiki: https://github.com/sidekiq/sidekiq/wiki
  • Sidekiq Web UI: Built-in monitoring interface
  • Sidekiq Pro: Commercial version with advanced features

πŸ“– Related Topics:

  • Redis: In-memory data structure store
  • ActiveJob: Rails job framework
  • Background Jobs: Asynchronous processing
  • Performance Optimization: Scaling strategies

❓9. Interview Questions & Answers

πŸ” Basic Questions

Q1: What is Sidekiq and how does it work?
A: Sidekiq is a high-performance background job processing system for Ruby that uses Redis as storage and threads for concurrent execution. Jobs are serialized, stored in Redis queues, and processed by worker threads that poll for available jobs.
Q2: What's the difference between Sidekiq and Resque?
A: Sidekiq uses threads for concurrency (more memory efficient), while Resque uses processes (better fault isolation). Sidekiq has better performance (10,000+ jobs/sec vs 1,000-5,000 jobs/sec) and includes features like batches and scheduled jobs.
Q3: How do you handle job failures in Sidekiq?
A: Sidekiq automatically retries failed jobs with exponential backoff. Use sidekiq_options retry: 5 to set the retry attempts. After max retries, jobs move to the dead set, where they can be inspected and retried manually; the sidekiq_retries_exhausted hook allows custom handling of final failures.
Q4: What is the difference between perform_async and perform_in?
A: perform_async enqueues the job immediately, while perform_in schedules the job to run after a specified delay. perform_at schedules for a specific time.
Q5: How do you monitor Sidekiq in production?
A: Use Sidekiq Web UI, CLI commands like sidekiqctl stats, Ruby API (Sidekiq::Queue.new.size), and external monitoring tools. Monitor queue sizes, worker health, failed jobs, and processing rates.

πŸš€ Advanced Questions

Q6: How does Sidekiq handle memory management?
A: Sidekiq runs jobs in long-lived threads, so it relies on Ruby's garbage collector rather than per-job process teardown. To keep memory in check, pass small job arguments (IDs or GlobalIDs instead of ActiveRecord objects), monitor per-process RSS, tune GC settings, and restart workers that bloat over time.
Q7: What are Sidekiq batches and when would you use them?
A: Batches group multiple jobs together with callbacks. Use them for bulk operations, data processing workflows, or when you need to track completion of related jobs. They provide success/failure callbacks for the entire batch.
Q8: How do you implement job uniqueness in Sidekiq?
A: Use sidekiq_options unique_for: 1.hour or implement custom uniqueness using Redis locks. For distributed locks, use Redis SET with NX flag and expiration time.
Q9: What's the optimal concurrency setting for Sidekiq?
A: Formula: (CPU cores * 2) + 1. For 4 cores, use 9 threads. Adjust based on job type (I/O vs CPU bound), memory availability, and monitoring results. Start conservative and tune based on performance metrics.
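The heuristic in Q9 is easy to codify. A sketch (this is a rule of thumb, not a Sidekiq API; the 50-thread cap is our illustrative guardrail):

```ruby
# Rule-of-thumb starting concurrency: (CPU cores * 2) + 1, capped so
# threads don't outgrow memory and Redis connections.
def suggested_concurrency(cores, max: 50)
  [(cores * 2) + 1, max].min
end

puts suggested_concurrency(4)   # 9
puts suggested_concurrency(32)  # capped at 50
```

Start from this number and tune: I/O-bound workloads tolerate more threads, CPU-bound workloads fewer.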
Q10: How do you handle Redis connection failures in Sidekiq?
A: Configure connection pool settings, implement retry logic, use connection pooling, set appropriate timeouts, and implement health checks. Monitor Redis connection status and implement fallback strategies.

πŸ”§ System Design Questions

Q11: How would you design a system to process 1 million emails per day?
A: Use Sidekiq with multiple workers, implement batching for efficiency, use dedicated email queues, implement rate limiting, monitor queue sizes, use connection pooling, and implement proper error handling and retries.
Q12: How do you ensure job ordering in Sidekiq?
A: Sidekiq doesn't guarantee job ordering. For ordered processing, use job chaining (one job triggers the next), implement custom ordering logic, or use external coordination mechanisms like database locks or Redis sorted sets.
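The chaining approach from Q12 can be illustrated with a toy in-memory queue: each "job" enqueues its successor when it finishes, so the steps run in order even though the queue itself makes no cross-job ordering guarantee. This is a simulation only; in real Sidekiq the successor would be enqueued with perform_async from inside perform.

```ruby
# Toy simulation of job chaining for ordered processing.
QUEUE = []
LOG = []

def enqueue(step, chain)
  QUEUE << [step, chain]
end

def drain
  until QUEUE.empty?
    step, chain = QUEUE.shift
    LOG << step                                           # "perform" the job
    enqueue(chain.first, chain[1..]) unless chain.empty?  # trigger successor
  end
end

enqueue(:extract, [:transform, :load])
drain
puts LOG.inspect  # [:extract, :transform, :load]
```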
Q13: How would you handle a sudden spike in job volume?
A: Implement auto-scaling based on queue size, use multiple worker processes, implement job prioritization, add more Redis instances, implement rate limiting, and use monitoring to detect and respond to spikes quickly.
Q14: What's the difference between Sidekiq and ActiveJob?
A: ActiveJob is Rails' abstraction layer that works with multiple backends (Sidekiq, Resque, etc.). Sidekiq is a specific backend with direct API. ActiveJob provides unified interface but adds overhead, while direct Sidekiq offers better performance and features.
Q15: How do you implement job scheduling in Sidekiq?
A: Use perform_in for delayed jobs, perform_at for specific times, or Sidekiq Cron for recurring jobs. For complex scheduling, implement custom scheduling logic or use external schedulers like cron jobs.

🏒10. Real-World Case Studies

πŸ“§ E-commerce Email System

🎯 Problem:

A large e-commerce platform needed to send 500,000+ transactional emails daily with different priorities and delivery requirements.

πŸ’‘ Solution:

# Queue Configuration
:queues:
  - [critical_emails, 10]    # Order confirmations, password resets
  - [marketing_emails, 3]    # Newsletters, promotions
  - [bulk_emails, 1]         # Mass marketing campaigns

# Worker Classes
class CriticalEmailWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'critical_emails', retry: 3
  
  def perform(user_id, email_type)
    user = User.find(user_id)
    case email_type
    when 'order_confirmation'
      OrderMailer.confirmation(user).deliver_now
    when 'password_reset'
      UserMailer.password_reset(user).deliver_now
    end
  end
end

class MarketingEmailWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'marketing_emails', retry: 2
  
  def perform(user_id, campaign_id)
    user = User.find(user_id)
    campaign = Campaign.find(campaign_id)
    MarketingMailer.newsletter(user, campaign).deliver_now
  end
end

πŸ“Š Results:

  • βœ… 99.9% email delivery rate
  • βœ… Critical emails processed within 30 seconds
  • βœ… Marketing emails processed within 5 minutes
  • βœ… 50% reduction in email processing time
  • βœ… 80% reduction in failed email attempts

πŸ–ΌοΈ Image Processing Platform

🎯 Problem:

A social media platform needed to process user-uploaded images (resize, compress, generate thumbnails) with varying processing requirements and user expectations.

πŸ’‘ Solution:

# Priority-based image processing
class ImageProcessingWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'images', retry: 2, backtrace: true
  
  def perform(image_id, priority = 'normal')
    image = Image.find(image_id)
    
    # Process based on priority
    case priority
    when 'urgent'
      process_urgent_image(image)
    when 'normal'
      process_normal_image(image)
    when 'background'
      process_background_image(image)
    end
    
    # Update image status
    image.update!(processed: true, processed_at: Time.current)
  end
  
  private
  
  def process_urgent_image(image)
    # High-quality processing for profile pictures
    generate_thumbnails(image, sizes: [50, 100, 200])
    compress_image(image, quality: 90)
  end
  
  def process_normal_image(image)
    # Standard processing for regular uploads
    generate_thumbnails(image, sizes: [100, 200])
    compress_image(image, quality: 80)
  end
  
  def process_background_image(image)
    # Background processing for old images
    generate_thumbnails(image, sizes: [200])
    compress_image(image, quality: 70)
  end
end

# Batch processing for bulk operations
class BulkImageProcessingWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'bulk_images'
  
  def perform(image_ids)
    images = Image.where(id: image_ids)
    
    batch = Sidekiq::Batch.new
    batch.on(:success, BulkImageCallback)
    
    batch.jobs do
      images.each do |image|
        ImageProcessingWorker.perform_async(image.id, 'background')
      end
    end
  end
end

πŸ“Š Results:

  • βœ… Profile pictures processed within 10 seconds
  • βœ… Regular uploads processed within 2 minutes
  • βœ… Background processing handled 10,000+ images daily
  • βœ… 70% reduction in storage costs through compression
  • βœ… 95% user satisfaction with image quality

πŸ“Š Data Analytics Pipeline

🎯 Problem:

A SaaS analytics platform needed to process millions of data points daily, generate reports, and provide real-time insights while maintaining data accuracy and processing efficiency.

πŸ’‘ Solution:

# Multi-stage data processing pipeline
class DataCollectionWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'data_collection', retry: 3
  
  def perform(event_data)
    # Collect and validate data
    event = Event.create!(event_data)
    
    # Trigger next stage
    DataProcessingWorker.perform_async(event.id)
  end
end

class DataProcessingWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'data_processing', retry: 2
  
  def perform(event_id)
    event = Event.find(event_id)
    
    # Process and aggregate data
    processed_data = process_event_data(event)
    
    # Store processed data
    ProcessedEvent.create!(processed_data)
    
    # Trigger analytics generation
    AnalyticsWorker.perform_async(event.user_id)
  end
end

class AnalyticsWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'analytics', retry: 1
  
  def perform(user_id)
    user = User.find(user_id)
    
    # Generate user analytics
    analytics = generate_user_analytics(user)
    
    # Update user dashboard
    user.update!(analytics: analytics)
    
    # Send notification if significant changes
    if analytics.significant_change?
      NotificationWorker.perform_async(user_id, 'analytics_update')
    end
  end
end

class ReportGenerationWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'reports', retry: 1
  
  def perform(report_type, date_range)
    # Generate scheduled reports
    report = generate_report(report_type, date_range)
    
    # Store report
    Report.create!(report)
    
    # Send to stakeholders
    ReportMailer.daily_report(report).deliver_now
  end
end

πŸ“Š Results:

  • βœ… 5 million data points processed daily
  • βœ… Real-time analytics updated within 5 minutes
  • βœ… 99.9% data accuracy maintained
  • βœ… 60% reduction in processing time
  • βœ… Automated report generation for 100+ clients

πŸ” Financial Transaction Processing

🎯 Problem:

A fintech company needed to process high-value financial transactions with strict compliance requirements, audit trails, and real-time fraud detection.

πŸ’‘ Solution:

# Secure transaction processing with audit trails
class TransactionProcessingWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'transactions', retry: 0, backtrace: true
  
  def perform(transaction_id)
    transaction = Transaction.find(transaction_id)
    
    # Create audit trail
    AuditLog.create!(
      transaction: transaction,
      action: 'processing_started',
      timestamp: Time.current
    )
    
    begin
      # Validate transaction
      validate_transaction(transaction)
      
      # Run fraud check inline (Sidekiq workers have no perform_now;
      # instantiate and call perform directly for a synchronous result)
      fraud_check = FraudDetectionWorker.new.perform(transaction.id)
      
      if fraud_check[:suspicious]
        raise "Transaction flagged for fraud review"
      end
      
      # Process transaction
      process_transaction(transaction)
      
      # Update status
      transaction.update!(status: 'completed')
      
      # Create success audit log
      AuditLog.create!(
        transaction: transaction,
        action: 'processing_completed',
        timestamp: Time.current
      )
      
      # Send confirmation
      NotificationWorker.perform_async(transaction.user_id, 'transaction_completed')
      
    rescue => e
      # Log failure
      AuditLog.create!(
        transaction: transaction,
        action: 'processing_failed',
        error: e.message,
        timestamp: Time.current
      )
      
      # Update status
      transaction.update!(status: 'failed', error: e.message)
      
      # Send failure notification
      NotificationWorker.perform_async(transaction.user_id, 'transaction_failed')
      
      raise e
    end
  end
end

class FraudDetectionWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'fraud_detection', retry: 1
  
  def perform(transaction_id)
    transaction = Transaction.find(transaction_id)
    
    # Implement fraud detection logic
    risk_score = calculate_risk_score(transaction)
    
    if risk_score > 0.8
      # Flag for manual review
      ManualReviewWorker.perform_async(transaction_id)
      return { suspicious: true, risk_score: risk_score }
    end
    
    { suspicious: false, risk_score: risk_score }
  end
end

class ComplianceReportingWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'compliance', retry: 3
  
  def perform(report_date)
    # Generate compliance reports
    report = generate_compliance_report(report_date)
    
    # Store report
    ComplianceReport.create!(report)
    
    # Submit to regulatory bodies
    submit_to_regulators(report)
  end
end

πŸ“Š Results:

  • βœ… 100% transaction audit trail maintained
  • βœ… Fraud detection within 2 seconds
  • βœ… 99.99% transaction success rate
  • βœ… Regulatory compliance reports automated
  • βœ… 90% reduction in manual review time

πŸ“± Mobile App Notification System

🎯 Problem:

A mobile app needed to send personalized push notifications to millions of users with different engagement levels, time zones, and preferences.

πŸ’‘ Solution:

# Intelligent notification system
class NotificationWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'notifications', retry: 2
  
  def perform(user_id, notification_type, data = {})
    user = User.find(user_id)
    
    # Check user preferences
    return unless user.notifications_enabled?
    
    # Check time zone and quiet hours
    return if user.in_quiet_hours?
    
    # Personalize notification
    notification = personalize_notification(user, notification_type, data)
    
    # Send via multiple channels
    send_push_notification(user, notification)
    send_email_notification(user, notification) if user.email_enabled?
    send_sms_notification(user, notification) if user.sms_enabled?
    
    # Track delivery
    NotificationDelivery.create!(
      user: user,
      notification_type: notification_type,
      sent_at: Time.current
    )
  end
end

class BatchNotificationWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'batch_notifications'
  
  def perform(campaign_id)
    campaign = Campaign.find(campaign_id)
    
    # Get target users
    users = campaign.target_users
    
    # Process in batches
    users.in_batches(of: 1000) do |batch|
      batch.each do |user|
        NotificationWorker.perform_async(
          user.id, 
          'campaign', 
          { campaign_id: campaign.id }
        )
      end
    end
  end
end

class EngagementOptimizationWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'engagement', retry: 1
  
  def perform(user_id)
    user = User.find(user_id)
    
    # Analyze user engagement
    engagement_score = calculate_engagement_score(user)
    
    # Adjust notification frequency
    if engagement_score < 0.3
      # Reduce notifications for low-engagement users
      user.update!(notification_frequency: 'reduced')
    elsif engagement_score > 0.8
      # Increase notifications for high-engagement users
      user.update!(notification_frequency: 'increased')
    end
  end
end

πŸ“Š Results:

  • βœ… 10 million notifications sent daily
  • βœ… 95% delivery rate across all channels
  • βœ… 40% increase in user engagement
  • βœ… 60% reduction in notification fatigue
  • βœ… Personalized delivery timing for each user

πŸ“‹11. Commands & Concepts Reference Table

Core Commands

Command | Description | Usage
bundle exec sidekiq | Start Sidekiq with default configuration | bundle exec sidekiq -C config/sidekiq.yml
bundle exec sidekiq -q high,default | Start worker for specific queues | Queue-specific processing
bundle exec sidekiq -c 10 | Start worker with specific concurrency | Performance tuning
bundle exec sidekiq -r ./config/initializers/sidekiq.rb | Start with a custom require file | Custom setup
bundle exec sidekiqmon | Monitor processes and stats (sidekiqctl was removed in Sidekiq 6.0) | bundle exec sidekiqmon
kill -TSTP <pid> | Quiet workers (finish current jobs, take no new ones) | Maintenance
kill -TERM <pid> | Gracefully stop Sidekiq workers | Deployment
pgrep -f sidekiq | Check running Sidekiq processes | Monitoring
MyWorker.perform_async(args) | Enqueue job for immediate processing | Job enqueueing
MyWorker.perform_in(3600, args) | Schedule job to run in 1 hour | Delayed jobs
MyWorker.perform_at(Time.now + 1.hour, args) | Schedule job for specific time | Scheduled jobs
Sidekiq::Queue.new.size | Get default queue size | Queue monitoring
Sidekiq::Queue.new('default').size | Get specific queue size | Queue monitoring
Sidekiq::Stats.new.processed | Get total processed jobs count | Statistics

Ruby/Rails Commands

| Command | Description | Usage |
| --- | --- | --- |
| `MyWorker.perform_async(args)` | Enqueue job immediately | Basic job enqueueing |
| `MyWorker.perform_in(3600, args)` | Enqueue job with a delay (in seconds) | Scheduled jobs |
| `MyWorker.perform_at(Time.now + 1.hour, args)` | Enqueue job at a specific time | Time-based scheduling |
| `Sidekiq::Queue.new('default').size` | Get a queue's size | Queue monitoring |
| `Sidekiq::Stats.new.processed` | Get total processed jobs | Statistics |
| `Sidekiq::Stats.new.failed` | Get total failed jobs | Error monitoring |
| `Sidekiq::Queue.all.map(&:size)` | Get all queue sizes | Queue analysis |

Redis Commands

| Command | Description | Usage |
| --- | --- | --- |
| `redis-cli ping` | Test the Redis connection | Connection testing |
| `redis-cli llen queue:default` | Check a queue's length (keys gain a `sidekiq:` prefix only when using the deprecated redis-namespace gem) | Queue monitoring |
| `redis-cli keys 'queue:*'` | List queue keys (`KEYS` scans every key — avoid on busy production instances) | Debugging |
| `redis-cli flushall` | Delete ALL data in Redis | Development cleanup only |

Monitoring Commands

| Command | Description | Usage |
| --- | --- | --- |
| `tail -f log/sidekiq.log` | Follow Sidekiq logs (Sidekiq logs to stdout unless you redirect it) | Real-time monitoring |
| `pgrep -f "sidekiq"` | Find Sidekiq processes | Process monitoring |
| `redis-cli info memory` | Check Redis memory usage | Performance monitoring |
| `redis-cli info stats` | Get Redis statistics | System health |

Core Concepts

| Concept | Description | Usage |
| --- | --- | --- |
| Worker | Ruby class that includes `Sidekiq::Worker` and implements a `perform` method | Background task execution |
| Process | OS process that runs worker threads and processes jobs | Job processing |
| Queue | Redis list that holds pending jobs | Job organization |
| Job | Serialized worker class name plus its arguments | Task unit |
| Concurrency | Number of threads processing jobs simultaneously | Performance tuning |
| Retry | Automatic re-execution of failed jobs with exponential backoff | Error handling |
| Middleware | Code that runs before/after job execution | Logging, monitoring, error handling |
| Batch | Group of jobs that can be tracked together (a Sidekiq Pro feature) | Complex workflows |

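Server middleware from the table above is just an object whose `call` method yields to the rest of the chain. A minimal timing sketch (the class name and log format are illustrative; registration requires a Sidekiq initializer):

```ruby
# Minimal server middleware: times each job and prints the duration.
class JobTimingMiddleware
  # worker: the worker instance, job: the job payload hash, queue: queue name
  def call(worker, job, queue)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield  # run the next middleware in the chain / the job itself
  ensure
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    puts format('%s on %s took %.3fs', job['class'], queue, elapsed)
  end
end

# Registration (in config/initializers/sidekiq.rb, requires Sidekiq):
#   Sidekiq.configure_server do |config|
#     config.server_middleware { |chain| chain.add JobTimingMiddleware }
#   end

# Standalone demonstration with a fake job payload:
JobTimingMiddleware.new.call(nil, { 'class' => 'DemoWorker' }, 'default') do
  sleep 0.01  # stand-in for real job work
end
```

Because the timing happens in an `ensure` block, the duration is logged even when the job raises, and the exception still propagates to Sidekiq's retry machinery.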
Configuration Options

| Option | Description | Default |
| --- | --- | --- |
| `concurrency` | Number of threads processing jobs per process | 5 in Sidekiq 7 (10 in 6.x, 25 in 5.x) |
| `queues` | List of queues to process | `['default']` |
| `retry` | Retry failed jobs with exponential backoff | true (up to 25 attempts) |
| `timeout` | Shutdown timeout: seconds to let in-flight jobs finish after `TERM` | 25 |
| `backtrace` | Whether (or how many lines of) the error backtrace is saved with retries | false |
| `dead_max_jobs` | Maximum number of jobs kept in the dead set | 10,000 |
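A typical initializer wiring up the Redis connection might look like the sketch below (the fallback URL is an assumption; concurrency and queue weights usually live in `config/sidekiq.yml` rather than here):

```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  # Connection used by worker processes to fetch and acknowledge jobs.
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0') }
end

Sidekiq.configure_client do |config|
  # Connection used by your Rails app to enqueue jobs.
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0') }
end
```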

Redis Data Structures

| Key Pattern | Data Type | Description |
| --- | --- | --- |
| `queue:default` | List | Pending jobs in the default queue |
| `stat:processed` | String | Total processed jobs counter |
| `stat:failed` | String | Total failed jobs counter |
| `processes` | Set | Active Sidekiq process identities |
| `dead` | Sorted Set | Dead jobs (exhausted all retries) |
| `schedule` | Sorted Set | Scheduled (delayed) jobs |
| `retry` | Sorted Set | Jobs waiting to retry |

(With the deprecated redis-namespace gem these keys gain a `sidekiq:` prefix.)

Environment Variables

| Variable | Description | Example |
| --- | --- | --- |
| `REDIS_URL` | Redis connection URL (read by Sidekiq by default) | `redis://localhost:6379/0` |
| `RAILS_ENV` | Rails environment | `production` |
| `RAILS_MAX_THREADS` | When set, used as the default concurrency | `5` |
| `SIDEKIQ_CONCURRENCY` | App-level convention, typically interpolated via ERB in `config/sidekiq.yml` | `25` |
| `SIDEKIQ_QUEUES` | App-level convention for a comma-separated queue list | `high,default,low` |

Only `REDIS_URL`, `RAILS_ENV`, and `RAILS_MAX_THREADS` are read by Sidekiq itself; the `SIDEKIQ_*` variables are conventions your own configuration must wire up.

Common Job Patterns

| Pattern | Description | Use Case |
| --- | --- | --- |
| Email Worker | Send emails asynchronously | User notifications, marketing emails |
| Data Processing | Process large datasets in the background | Analytics, reports, data imports |
| File Processing | Handle file uploads and processing | Image resizing, document processing |
| API Integration | Make external API calls asynchronously | Third-party integrations, webhooks |
| Batch Processing | Process jobs in batches with tracking | Complex workflows, data pipelines |
| Scheduled Jobs | Execute jobs at specific times | Daily reports, maintenance tasks |

Learn more aboutΒ Rails
Learn more aboutΒ Active Job
Learn more aboutΒ DevOps
