Sidekiq Complete Guide: High-Performance Background Jobs
Table of Contents
- 1. Fundamentals & Core Concepts
- 2. Installation & Setup
- 3. Basic Usage & Workers
- 4. Queue Management & Priority
- 5. Configuration & Optimization
- 6. Monitoring & Web UI
- 7. Advanced Features
- 8. Troubleshooting & Best Practices
- 9. Reference & Commands
- 10. Interview Questions & Answers
- 11. Real-World Case Studies
- 12. Commands & Concepts Reference Table
1. Fundamentals & Core Concepts
What is Sidekiq?
Sidekiq is a high-performance background job processing system for Ruby that uses Redis as its storage backend and threads for concurrent job execution. It's designed for speed, reliability, and scalability.
Pros
- Extremely high performance (10,000+ jobs/second)
- Thread-based concurrency (efficient memory usage)
- Excellent monitoring with Web UI
- Rich feature set (batches, scheduled jobs, retries)
- Mature and well-documented
- Active community and development
Cons
- Requires Redis infrastructure
- Thread safety considerations
- More complex setup than simpler alternatives
- Memory management challenges
- Learning curve for advanced features
Sidekiq Architecture
Sidekiq Processing Flow:
- Job Creation: Worker class defined with include Sidekiq::Worker
- Job Enqueueing: Job serialized and stored in Redis
- Worker Polling: Sidekiq workers poll Redis for jobs
- Thread Execution: Jobs executed in worker threads
- Result Handling: Success/failure logged and processed
Key Components
Core Components:
- Workers: Job classes that process tasks
- Redis: Storage backend for job queues
- Processes: Sidekiq worker processes
- Threads: Concurrent job execution
Data Structures:
- Queues: Redis lists for job storage
- Sets: Failed jobs, scheduled jobs
- Hashes: Job metadata and statistics
- Strings: Configuration and locks
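As a quick sanity check, the snippet below (a minimal sketch assuming Sidekiq's default key names and no Redis namespace configured) peeks at these structures through Sidekiq's own connection pool:
# Run in a Rails console to inspect the raw Redis structures
Sidekiq.redis do |conn|
  puts "default queue length: #{conn.llen('queue:default')}"  # LIST of pending jobs
  puts "scheduled jobs:       #{conn.zcard('schedule')}"       # SORTED SET keyed by run time
  puts "retries pending:      #{conn.zcard('retry')}"          # SORTED SET of jobs awaiting retry
  puts "dead jobs:            #{conn.zcard('dead')}"           # SORTED SET of jobs out of retries
end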
2. Installation & Setup
Installation Steps
1. Add Sidekiq to Gemfile
# Gemfile
gem 'sidekiq', '~> 7.0'
gem 'redis', '~> 5.0'
2. Install Dependencies
# Install gems
bundle install
# Install Redis (Ubuntu/Debian)
sudo apt-get install redis-server
# Install Redis (macOS)
brew install redis
# Start Redis
redis-server
3. Configure Sidekiq
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
config.redis = { url: 'redis://localhost:6379/0' }
end
Sidekiq.configure_client do |config|
config.redis = { url: 'redis://localhost:6379/0' }
end
4. Create Sidekiq Configuration
# config/sidekiq.yml
:concurrency: 10
:queues:
- [critical, 5]
- [high_priority, 3]
- [default, 2]
- [low_priority, 1]
- [mailers, 1]
:max_retries: 3
:retry_interval: 5
5. Configure Rails Integration
# config/application.rb
config.active_job.queue_adapter = :sidekiq
# config/routes.rb
require 'sidekiq/web'
Rails.application.routes.draw do
mount Sidekiq::Web => '/sidekiq'
end
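6. Start the Sidekiq Process
With the configuration in place, start a Sidekiq process alongside your Rails server (the same command appears in the reference tables below):
# Start Sidekiq with the YAML configuration
bundle exec sidekiq -C config/sidekiq.yml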
3. Basic Usage & Workers
Creating Your First Worker
1. Generate Worker
# Generate worker
rails generate sidekiq:worker EmailWorker
# Or create manually
# app/sidekiq/email_worker.rb
2. Basic Worker Structure
class EmailWorker
include Sidekiq::Worker
def perform(user_id)
user = User.find(user_id)
UserMailer.welcome_email(user).deliver_now
end
end
3. Enqueue Jobs
# Enqueue job
EmailWorker.perform_async(user.id)
# Enqueue with delay
EmailWorker.perform_in(5.minutes, user.id)
# Enqueue at specific time
EmailWorker.perform_at(1.hour.from_now, user.id)
4. Queue Management & Priority
Queue Management
Custom Queue Assignment
class UrgentEmailWorker
include Sidekiq::Worker
sidekiq_options queue: 'critical'
def perform(user_id)
# Process urgent email
end
end
class NewsletterWorker
include Sidekiq::Worker
sidekiq_options queue: 'low_priority'
def perform(user_id)
# Process newsletter
end
end
Queue Priority System
How Queue Priority Works:
Queue priority determines the order in which jobs are processed. Sidekiq processes queues in the order they're defined in the configuration, with higher priority queues processed first.
π Priority Configuration
# config/sidekiq.yml
:concurrency: 10
:queues:
- [critical, 10] # Highest priority - processed first
- [high_priority, 5] # High priority - processed second
- [default, 2] # Normal priority - processed third
- [low_priority, 1] # Low priority - processed last
- [mailers, 1] # Email jobs - processed last
Understanding Queue Weights
What is the Number?
The number in [queue_name, weight] is the queue weight or priority weight. It determines how much attention workers give to each queue relative to other queues.
How Queue Weights Work:
Weight Distribution Example:
# Configuration
:queues:
- [critical, 10] # 10/18 = 55.6% of worker attention
- [high_priority, 5] # 5/18 = 27.8% of worker attention
- [default, 2] # 2/18 = 11.1% of worker attention
- [low_priority, 1] # 1/18 = 5.6% of worker attention
# Total weight: 10 + 5 + 2 + 1 = 18
# Each queue gets its weight / total weight percentage of attention
π― Weight Calculation Formula:
Queue Attention % = (Queue Weight / Total Weights) Γ 100
# Example with [critical, 10]:
# Critical attention = (10 / 18) Γ 100 = 55.6%
# High Priority attention = (5 / 18) Γ 100 = 27.8%
# Default attention = (2 / 18) Γ 100 = 11.1%
# Low Priority attention = (1 / 18) Γ 100 = 5.6%
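If you want to sanity-check these percentages for your own configuration, a few lines of plain Ruby reproduce the arithmetic:
weights = { critical: 10, high_priority: 5, default: 2, low_priority: 1 }
total = weights.values.sum # 18
weights.each do |name, weight|
  puts format("%-14s %4.1f%% of worker attention", name, weight * 100.0 / total)
end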
βοΈ Weight Guidelines:
π― High Priority Strategy:
- Critical: 10 (50% attention)
- High: 5 (25% attention)
- Normal: 3 (15% attention)
- Low: 2 (10% attention)
βοΈ Balanced Strategy:
- Critical: 4 (25% attention)
- High: 4 (25% attention)
- Normal: 4 (25% attention)
- Low: 4 (25% attention)
π§ How to Define Queue Weights:
π Step-by-Step Process:
- Identify Queue Types: List all your job types and their importance
- Assign Relative Weights: Give higher numbers to more important queues
- Calculate Percentages: Ensure weights add up to reasonable total
- Test and Monitor: Adjust based on actual processing patterns
- Optimize: Fine-tune based on performance metrics
π Weight Configuration Examples:
π― E-commerce Platform:
# High-priority order processing
:queues:
- [order_processing, 10] # 40% - Critical for business
- [payment_processing, 8] # 32% - Financial transactions
- [email_notifications, 4] # 16% - User communication
- [analytics, 2] # 8% - Background analytics
- [maintenance, 1] # 4% - Cleanup tasks
π± Social Media App:
# User experience focused
:queues:
- [user_actions, 10] # 40% - Real-time user interactions
- [content_processing, 8] # 32% - Media processing
- [notifications, 4] # 16% - Push notifications
- [feed_generation, 2] # 8% - Content feeds
- [background_tasks, 1] # 4% - Maintenance
π Analytics Platform:
# Data processing focused
:queues:
- [data_processing, 10] # 40% - Core data analysis
- [report_generation, 6] # 24% - User reports
- [data_import, 4] # 16% - External data
- [alerts, 3] # 12% - User alerts
- [cleanup, 2] # 8% - Data cleanup
Common Weight Mistakes:
Mistake: Giving every queue the same weight. Solution: Use different weights to prioritize important jobs
Mistake: Defining too many weight levels. Solution: Use 3-5 weight levels for simplicity
Mistake: Putting the highest weight on a non-critical queue. Solution: Give highest weight to most important queue
Mistake: Choosing weights that ignore business priorities. Solution: Align weights with business criticality
π Monitoring Weight Effectiveness:
# Check queue processing rates
Sidekiq::Queue.all.each do |queue|
puts "#{queue.name}: #{queue.size} jobs, weight: #{get_queue_weight(queue.name)}"
end
# Monitor processing distribution
stats = Sidekiq::Stats.new
puts "Total processed: #{stats.processed}"
puts "Queue distribution: #{get_queue_distribution}"
Weight Optimization Tips:
Best Practices:
- Start with 3-5 weight levels for simplicity
- Use higher weights (8-10) for critical business functions
- Use lower weights (1-3) for background/maintenance tasks
- Monitor queue sizes and adjust weights accordingly
- Test weight changes in staging before production
- Document weight decisions and business rationale
How Queue Attention Works Behind the Scenes
The Backend Implementation:
Sidekiq distributes worker attention across queues in proportion to their weights. The sketches below model this as weighted round-robin polling; the real implementation randomizes the queue order in proportion to the weights on each fetch, which produces the same long-run distribution of attention.
π Weighted Round-Robin Algorithm:
π How Sidekiq Implements Queue Attention:
# Sidekiq's internal queue polling logic (simplified)
class QueuePoller
def poll_queues
# Calculate polling frequency based on weights
queue_weights = {
'critical' => 10,
'high_priority' => 5,
'default' => 2,
'low_priority' => 1
}
total_weight = queue_weights.values.sum # 18
# Poll each queue based on its weight
queue_weights.each do |queue_name, weight|
polling_frequency = weight.to_f / total_weight
# Poll this queue 'weight' times for every cycle
weight.times do
poll_single_queue(queue_name)
end
end
end
def poll_single_queue(queue_name)
# Check if queue has jobs
if jobs_available?(queue_name)
# Pick up job from this queue
job = dequeue_job(queue_name)
process_job(job)
end
end
end
βοΈ Detailed Polling Mechanism:
π― Step-by-Step Process:
π Queue Polling Cycle:
- Weight Calculation: Sidekiq calculates total weight (10+5+2+1 = 18)
- Polling Frequency: Each queue gets polled based on its weight
- Round-Robin Cycle: Critical polled 10 times, High Priority 5 times, etc.
- Job Selection: If queue has jobs, worker picks up the oldest job
- Processing: Job is processed by available worker thread
- Cycle Repeat: Process repeats continuously
π Example Polling Cycle:
# With weights [critical: 10, high_priority: 5, default: 2, low_priority: 1]
# One complete polling cycle looks like:
Cycle 1: critical β critical β critical β critical β critical β
critical β critical β critical β critical β critical β
high_priority β high_priority β high_priority β high_priority β high_priority β
default β default β
low_priority
# This cycle repeats continuously
# Critical gets 10/18 = 55.6% of polling attention
# High Priority gets 5/18 = 27.8% of polling attention
# Default gets 2/18 = 11.1% of polling attention
# Low Priority gets 1/18 = 5.6% of polling attention
π Redis Implementation Details:
π How Sidekiq Stores and Retrieves Jobs:
# Redis data structure for queues
# Each queue is a Redis LIST
redis.lpush("queue:critical", job_data) # Add job to critical queue
redis.lpush("queue:high_priority", job_data) # Add job to high priority queue
redis.lpush("queue:default", job_data) # Add job to default queue
redis.lpush("queue:low_priority", job_data) # Add job to low priority queue
# Sidekiq polls queues in weighted order
def poll_queues_with_weights
weights = {critical: 10, high_priority: 5, default: 2, low_priority: 1}
weights.each do |queue_name, weight|
weight.times do
# Try to get job from this queue
job_data = redis.brpop("queue:#{queue_name}", timeout: 0.1)
if job_data
process_job(job_data)
break # Found a job, move to next queue
end
end
end
end
β‘ Worker Thread Behavior:
π― How Workers Handle Queue Attention:
π₯ Multi-Worker Queue Distribution:
# Multiple workers polling simultaneously
Worker 1: [critical] β [critical] β [high_priority] β [default]
Worker 2: [critical] β [high_priority] β [high_priority] β [low_priority]
Worker 3: [critical] β [default] β [default] β [low_priority]
# Each worker follows the same weighted polling pattern
# But they may pick up different jobs due to timing
π Concurrency and Queue Attention:
# With 3 workers and weights [10, 5, 2, 1]
# Total polling attempts per cycle: 3 workers Γ 18 polls = 54 polls
# Distribution across queues:
# Critical: 3 workers Γ 10 polls = 30 polling attempts (55.6%)
# High Priority: 3 workers Γ 5 polls = 15 polling attempts (27.8%)
# Default: 3 workers Γ 2 polls = 6 polling attempts (11.1%)
# Low Priority: 3 workers Γ 1 poll = 3 polling attempts (5.6%)
π§ Advanced Queue Attention Features:
Dynamic Weight Adjustment:
# Conceptual sketch only - open-source Sidekiq does not adjust polling weights at runtime;
# a custom supervisor process could implement something like this
def adaptive_polling
queue_sizes = get_queue_sizes
queue_sizes.each do |queue_name, size|
if size > threshold
# Increase polling frequency for busy queues
increase_polling_frequency(queue_name)
end
end
end
Queue Starvation Prevention:
Anti-Starvation Mechanisms:
- Maximum Polling Delay: Prevents low-priority queues from being ignored too long
- Queue Size Monitoring: Adjusts polling if queues grow too large
- Fairness Algorithm: Ensures all queues get some attention
- Dynamic Weight Adjustment: Can modify weights based on queue backlog
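One safeguard you can add yourself is a small watchdog that checks queue latency (the age of the oldest job) and alerts when a low-weight queue is starving; alert_ops below is a placeholder for whatever alerting you already use:
# Hypothetical starvation watchdog built on Sidekiq's public Queue API
Sidekiq::Queue.all.each do |queue|
  # Queue#latency returns the age in seconds of the oldest job in the queue
  if queue.latency > 600
    alert_ops("#{queue.name} has jobs waiting #{queue.latency.round}s - consider raising its weight")
  end
end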
Queue Attention Edge Cases:
Edge case: A high-weight queue is empty. Solution: Sidekiq quickly moves to next queue, maintaining overall attention distribution
Edge case: A low-weight queue builds a backlog. Solution: Jobs wait longer but eventually get processed; consider adjusting weights
Edge case: Every queue has pending jobs. Solution: Higher-weight queues get more processing time, maintaining priority
Edge case: All worker threads are busy. Solution: Polling continues but job processing waits for available threads
π Monitoring Queue Attention:
π How to Verify Queue Attention is Working:
# Monitor actual queue processing rates
Sidekiq::Queue.all.each do |queue|
puts "#{queue.name}:"
puts " Size: #{queue.size}"
puts " Weight: #{get_queue_weight(queue.name)}"
puts " Processing rate: #{get_processing_rate(queue.name)}"
end
# Check worker distribution
Sidekiq::Workers.new.each do |worker|
puts "Worker #{worker['pid']}:"
puts " Current job: #{worker['payload']}"
puts " Queue: #{worker['queue']}"
puts " Busy: #{worker['busy']}"
end
π Queue Attention Metrics:
π Key Metrics to Monitor:
- Polling Frequency: How often each queue is checked
- Processing Rate: Jobs processed per minute per queue
- Queue Wait Time: Average time jobs wait in each queue
- Worker Distribution: Which queues workers are processing
- Weight Effectiveness: Whether weights match actual processing
π§ Tuning Queue Attention:
π― Optimization Strategies:
π Performance Tuning:
- Monitor Queue Sizes: Track if queues are growing or shrinking
- Measure Processing Rates: Calculate jobs/minute per queue
- Analyze Wait Times: Check if jobs wait too long in any queue
- Adjust Weights: Modify weights based on actual performance
- Test Changes: Validate weight changes in staging
- Monitor Results: Track improvements after weight adjustments
Priority Processing Rules
Priority Processing Flow:
- Queue Order: Sidekiq checks queues in configuration order
- Job Selection: Takes jobs from the highest priority queue first
- FIFO within Queue: Jobs within the same queue are processed first-in-first-out
- Strict vs Weighted: When queues are listed without weights, Sidekiq drains a higher queue completely before touching the next one; when weights are given, it cycles through queues in proportion to those weights (both styles are sketched below)
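A minimal sketch of the two configuration styles (queue names are illustrative):
# config/sidekiq.yml - strict ordering: critical is always drained before default
:queues:
  - critical
  - default
# config/sidekiq.yml - weighted: critical is checked roughly 5x as often as default,
# but default still gets regular attention even while critical is busy
:queues:
  - [critical, 5]
  - [default, 1]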
Single Worker Priority
Single Worker Behavior:
With one worker, Sidekiq processes jobs sequentially based on queue priority. The worker focuses on one queue at a time until it's empty.
π Single Worker Example
# Worker processes jobs in this order:
# 1. All critical jobs (highest priority)
# 2. All high_priority jobs
# 3. All default jobs
# 4. All low_priority jobs
# 5. All mailers jobs (lowest priority)
# Example job processing sequence:
# Critical: Job A, Job B, Job C
# High Priority: Job D, Job E
# Default: Job F, Job G, Job H
# Low Priority: Job I
# Mailers: Job J
β‘ Single Worker Advantages
β Benefits:
- Guaranteed priority order
- Simple to understand and debug
- No resource contention
- Predictable processing
β Limitations:
- Lower throughput
- Slower job processing
- No parallel processing
- Bottleneck for high volume
π₯ Multiple Workers Priority
π Multiple Workers Behavior:
With multiple workers, Sidekiq distributes jobs across workers while maintaining priority order. Each worker follows the same priority rules but can process jobs in parallel.
π Multiple Workers Example
# Configuration with 3 workers
# config/sidekiq.yml
:concurrency: 3
:queues:
- [critical, 5]
- [high_priority, 3]
- [default, 2]
- [low_priority, 1]
# Job distribution across workers:
# Worker 1: Critical Job A, High Priority Job D
# Worker 2: Critical Job B, High Priority Job E
# Worker 3: Critical Job C, Default Job F
# All critical jobs processed first, then high priority, etc.
π Multiple Workers Processing
π Multi-Worker Priority Flow:
- Queue Polling: All workers poll all queues simultaneously
- Priority Check: Each worker checks highest priority queue first
- Job Distribution: Available jobs distributed across workers
- Parallel Processing: Multiple jobs processed simultaneously
- Queue Depletion: Workers move to next priority queue when current is empty
β‘ Multiple Workers Advantages
β Benefits:
- Higher throughput
- Parallel job processing
- Better resource utilization
- Faster job completion
- Maintains priority order
β οΈ Considerations:
- More complex monitoring
- Resource contention possible
- Memory usage increases
- Network overhead
π§ Priority Configuration Strategies
π High Priority Strategy
# Prioritize critical jobs
:queues:
- [critical, 10] # 50% of worker attention
- [high_priority, 5] # 25% of worker attention
- [default, 3] # 15% of worker attention
- [low_priority, 2] # 10% of worker attention
βοΈ Balanced Strategy
# Equal distribution with slight priority
:queues:
- [critical, 3] # 30% of worker attention
- [high_priority, 3] # 30% of worker attention
- [default, 2] # 20% of worker attention
- [low_priority, 2] # 20% of worker attention
π Monitoring Priority Processing
# Check queue sizes and priorities
Sidekiq::Queue.all.each do |queue|
puts "#{queue.name}: #{queue.size} jobs"
end
# Monitor worker queue distribution
Sidekiq::Workers.new.each do |worker|
puts "Worker #{worker['pid']}: #{worker['queues']}"
end
# Check processing rates by queue
stats = Sidekiq::Stats.new
puts "Processed: #{stats.processed}"
puts "Failed: #{stats.failed}"
Priority Troubleshooting
Issue: Low-priority jobs never run. Solution: Check if high priority queues are constantly full, adjust queue weights, or add dedicated low priority workers
Issue: Jobs are not processed in the expected priority order. Solution: Verify queue configuration order, check for job serialization issues, ensure proper queue assignment
Issue: Processing is uneven across workers. Solution: Monitor worker queue assignments, check for network issues, verify Redis connection stability
Priority Best Practices
Priority Guidelines:
- Use descriptive queue names that reflect priority
- Set appropriate queue weights based on business needs
- Monitor queue sizes and processing rates
- Avoid too many priority levels (3-5 is optimal)
- Test priority behavior with realistic job volumes
- Use dedicated workers for critical queues if needed (see the sketch below)
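If one queue must never be delayed by the others, a common pattern (sketched here with the -q and -c flags from the reference tables) is to run a dedicated Sidekiq process for it:
# Process 1: only handles the critical queue
bundle exec sidekiq -q critical -c 10
# Process 2: handles everything else
bundle exec sidekiq -q default -q low_priority -q mailers -c 5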
5. Configuration & Optimization
Advanced Configuration
Redis Configuration
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
config.redis = { url: 'redis://localhost:6379/0' }
end
Sidekiq.configure_client do |config|
config.redis = { url: 'redis://localhost:6379/0' }
end
Concurrency Settings
# config/sidekiq.yml
:concurrency: 25 # Number of threads per worker
:queues:
- [critical, 5] # Highest priority
- [high_priority, 3] # High priority
- [default, 2] # Normal priority
- [low_priority, 1] # Low priority
- [mailers, 1] # Email jobs
:max_retries: 3 # Maximum retry attempts
:retry_interval: 5 # Seconds between retries
:timeout: 30 # Job timeout in seconds
Environment-Specific Configuration
# config/environments/production.rb
config.active_job.queue_name_prefix = Rails.env
config.active_job.queue_name_delimiter = '.'
# config/sidekiq.yml
production:
:concurrency: 50
:queues:
- [critical, 10]
- [high_priority, 5]
- [default, 3]
- [low_priority, 1]
development:
:concurrency: 5
:queues:
- [default, 1]
Performance Optimization
Optimization Strategies:
- Concurrency Tuning: Adjust based on CPU cores and memory
- Queue Prioritization: Use appropriate queue priorities
- Redis Optimization: Configure Redis for performance
- Memory Management: Monitor and optimize memory usage
- Job Batching: Use batch processing for efficiency
Memory Optimization
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
# Enable garbage collection
config.on(:startup) do
GC.auto_compact = true
end
# Memory monitoring
config.on(:heartbeat) do
GC.start if GC.stat[:heap_allocated_pages] > 1000
end
end
Connection Pooling
# config/initializers/sidekiq.rb
require 'connection_pool'
Sidekiq.configure_server do |config|
config.redis_pool = ConnectionPool.new(size: 25, timeout: 3) do
Redis.new(url: ENV['REDIS_URL'])
end
end
6. Monitoring & Web UI
Sidekiq Web UI
Setup Web UI
# config/routes.rb
require 'sidekiq/web'
Rails.application.routes.draw do
# Basic mount
mount Sidekiq::Web => '/sidekiq'
# With authentication
authenticate :user, lambda { |u| u.admin? } do
mount Sidekiq::Web => '/sidekiq'
end
end
Web UI Features
Dashboard:
- Real-time statistics
- Queue sizes and processing rates
- Worker status and health
- Failed job counts
Management:
- Retry failed jobs
- Delete jobs from queues
- View job details and arguments
- Monitor worker processes
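If you are not using Devise, one alternative (a sketch; the environment variable names are assumptions) is plain HTTP basic auth in front of the Rack app:
# config/initializers/sidekiq.rb
require 'sidekiq/web'
Sidekiq::Web.use Rack::Auth::Basic do |username, password|
  # Use & (not &&) so both comparisons always run, avoiding a timing side-channel
  ActiveSupport::SecurityUtils.secure_compare(username, ENV.fetch('SIDEKIQ_WEB_USER', 'admin')) &
    ActiveSupport::SecurityUtils.secure_compare(password, ENV.fetch('SIDEKIQ_WEB_PASSWORD', ''))
end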
Monitoring Commands
CLI Monitoring
# Check Sidekiq processes
ps aux | grep sidekiq
# Check Redis connection
redis-cli ping
# Monitor Sidekiq in real-time
bundle exec sidekiqmon
# Check queue sizes
bundle exec sidekiqctl stats
Ruby API Monitoring
# Check queue sizes
Sidekiq::Queue.new.size
Sidekiq::Queue.new('critical').size
# Check worker status
Sidekiq::Workers.new.size
Sidekiq::Workers.new.each do |worker|
puts "PID: #{worker['pid']}"
puts "Threads: #{worker['concurrency']}"
puts "Busy: #{worker['busy']}"
end
# Check failed jobs
Sidekiq::DeadSet.new.size
Sidekiq::RetrySet.new.size
# Get statistics
Sidekiq::Stats.new.processed
Sidekiq::Stats.new.failed
Advanced Monitoring
Custom Monitoring
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
config.on(:startup) do
Rails.logger.info "Sidekiq started with #{Sidekiq.options[:concurrency]} threads"
end
config.on(:shutdown) do
Rails.logger.info "Sidekiq shutting down"
end
config.on(:heartbeat) do
# Custom heartbeat logic
Rails.logger.info "Sidekiq heartbeat"
end
end
Health Checks
# lib/sidekiq_health_check.rb
class SidekiqHealthCheck
def self.healthy?
begin
# Check Redis connection
Sidekiq.redis { |conn| conn.ping }
# Check worker processes
workers = Sidekiq::Workers.new
return false if workers.size == 0
# Check queue health
queues = Sidekiq::Queue.all
return false if queues.any? { |q| q.size > 1000 }
true
rescue => e
Rails.logger.error "Sidekiq health check failed: #{e.message}"
false
end
end
end
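A hypothetical way to expose that check to a load balancer or uptime monitor is a tiny Rack endpoint in the routes file:
# config/routes.rb
get '/health/sidekiq', to: ->(env) {
  healthy = SidekiqHealthCheck.healthy?
  [healthy ? 200 : 503, { 'Content-Type' => 'text/plain' }, [healthy ? 'ok' : 'unhealthy']]
}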
7. Advanced Features
Batch Processing
Batches are a Sidekiq Pro feature that lets you group related jobs and attach callbacks to the group as a whole.
Creating Batches
# Create a batch
batch = Sidekiq::Batch.new
batch.on(:success, BatchCallback, { 'user_id' => user.id })
batch.jobs do
users.each do |user|
EmailWorker.perform_async(user.id)
end
end
Batch Callbacks
class BatchCallback
def on_success(status, options)
user_id = options['user_id']
Rails.logger.info "Batch completed for user #{user_id}"
end
def on_death(status, options)
Rails.logger.error "Batch failed: #{status.failures}"
end
end
Scheduled Jobs
Delayed Jobs
# Schedule job for later
EmailWorker.perform_in(5.minutes, user.id)
# Schedule job for specific time
EmailWorker.perform_at(1.hour.from_now, user.id)
# Recurring jobs with cron syntax
class DailyReportWorker
include Sidekiq::Worker
def perform
# Generate daily report
end
end
# In an initializer (requires the sidekiq-cron gem)
Sidekiq::Cron::Job.create(
name: 'Daily Report',
cron: '0 9 * * *', # Every day at 9 AM
class: 'DailyReportWorker'
)
Retry Logic
Custom Retry Strategies
class ApiCallWorker
include Sidekiq::Worker
sidekiq_options retry: 5, backtrace: true
# Runs once every retry has been exhausted
sidekiq_retries_exhausted do |job, exception|
Rails.logger.error "Final failure for #{job['class']}: #{exception.message}"
end
def perform(api_url, data)
response = HTTP.get(api_url, json: data)
if response.status.success?
process_response(response)
else
# Raising lets Sidekiq's retry mechanism reschedule the job
raise "API call failed: #{response.status}"
end
end
end
Job Locks
Unique Jobs
class UniqueEmailWorker
include Sidekiq::Worker
sidekiq_options unique_for: 1.hour # unique_for requires Sidekiq Enterprise
def perform(user_id)
# Only one job per user per hour
send_email(user_id)
end
end
Distributed Locks
class DataSyncWorker
include Sidekiq::Worker
def perform(data_id)
# Use Redis for distributed locking
Sidekiq.redis do |conn|
lock_key = "sync_lock:#{data_id}"
if conn.set(lock_key, 1, nx: true, ex: 300) # 5 minute lock
begin
sync_data(data_id)
ensure
conn.del(lock_key)
end
else
Rails.logger.info "Sync already in progress for #{data_id}"
end
end
end
end
Job Statistics
Custom Metrics
class MetricsWorker
include Sidekiq::Worker
def perform(metric_name, value)
# Track custom metrics
StatsD.increment("sidekiq.job.#{metric_name}", value)
end
end
# Middleware for automatic metrics
class MetricsMiddleware
def call(worker, job, queue)
start_time = Time.current
yield
duration = Time.current - start_time
StatsD.timing("sidekiq.job.duration", duration)
StatsD.increment("sidekiq.job.success")
rescue => e
StatsD.increment("sidekiq.job.failure")
raise e
end
end
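For the middleware above to actually run, it must be added to the server middleware chain, e.g.:
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add MetricsMiddleware
  end
end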
8. Troubleshooting & Best Practices
Common Issues
Issue: High memory usage. Solution: Reduce concurrency, implement garbage collection, monitor memory per worker
Issue: Jobs are not being processed. Solution: Check worker processes, Redis connection, job serialization
Issue: Redis connection errors. Solution: Check Redis server, connection pool settings, network connectivity
Issue: Jobs keep failing and retrying. Solution: Check error logs, implement proper error handling, fix underlying issues
Best Practices
Job Design:
- Keep jobs idempotent (safe to retry; see the sketch after these lists)
- Use small, focused jobs
- Handle exceptions properly
- Use appropriate queue priorities
- Implement proper logging
Configuration:
- Set appropriate concurrency levels
- Configure Redis properly
- Use connection pooling
- Monitor memory usage
- Implement health checks
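As an illustration of the idempotency point above, here is a minimal sketch; welcome_email_sent_at is a hypothetical column used as a guard so a retry becomes a no-op:
class WelcomeEmailWorker
  include Sidekiq::Worker
  def perform(user_id)
    user = User.find(user_id)
    # Idempotency guard: if a previous attempt already sent the email,
    # a retry simply returns instead of emailing the user twice
    return if user.welcome_email_sent_at.present?
    UserMailer.welcome_email(user).deliver_now
    user.update!(welcome_email_sent_at: Time.current)
  end
end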
Production Checklist
System Health:
- Workers are running and healthy
- Redis connection is stable
- Queue sizes are reasonable
- Memory usage is under control
- Failed jobs are being handled
Performance:
- Job processing times are acceptable
- Concurrency is optimized
- Monitoring is in place
- Error logs are being reviewed
- Backup and recovery procedures exist
Emergency Procedures
Worker Restart
# Graceful restart
bundle exec sidekiqctl restart
# Force restart
bundle exec sidekiqctl stop
bundle exec sidekiqctl start
# Check worker status
bundle exec sidekiqctl status
Queue Management
# Clear a specific queue (note: failed jobs live in the retry and dead sets, not in a queue named 'failed')
Sidekiq::Queue.new('low_priority').clear
# Clear all queues (emergency only)
Sidekiq::Queue.all.each(&:clear)
# Retry failed jobs
Sidekiq::RetrySet.new.each(&:retry)
# Delete dead jobs
Sidekiq::DeadSet.new.clear
9. Reference & Commands
Sidekiq Commands
Command | Description | Usage |
---|---|---|
bundle exec sidekiq | Start Sidekiq with default configuration | bundle exec sidekiq -C config/sidekiq.yml |
bundle exec sidekiqctl | Show Sidekiq control commands | bundle exec sidekiqctl restart |
bundle exec sidekiqctl restart | Restart Sidekiq processes | bundle exec sidekiqctl restart |
bundle exec sidekiqmon | Monitor Sidekiq processes | bundle exec sidekiqmon |
Sidekiq::Queue.new.size | Get default queue size | Sidekiq::Queue.new.size |
Sidekiq::Queue.new('default').size | Get specific queue size | Sidekiq::Queue.new('default').size |
Sidekiq::Stats.new | Get Sidekiq statistics | Sidekiq::Stats.new.processed |
Sidekiq::Stats.new.processed | Get total processed jobs count | Sidekiq::Stats.new.processed |
bundle exec sidekiq -q high,default | Start worker for specific queues | Queue-specific processing |
bundle exec sidekiq -c 10 | Start worker with specific concurrency | Performance tuning |
bundle exec sidekiq -r ./config/initializers/sidekiq.rb | Start with custom configuration | Custom setup |
sidekiqctl stop | Gracefully stop Sidekiq workers | Maintenance |
sidekiqctl restart | Restart Sidekiq workers | Deployment |
ps aux \| grep sidekiq | Check running Sidekiq processes | Monitoring |
MyWorker.perform_async(args) | Enqueue job for immediate processing | Job enqueueing |
MyWorker.perform_in(3600, args) | Schedule job to run in 1 hour | Delayed jobs |
MyWorker.perform_at(Time.now + 1.hour, args) | Schedule job for specific time | Scheduled jobs |
Sidekiq::Queue.new.size | Get default queue size | Queue monitoring |
Sidekiq::Stats.new.processed | Get total processed jobs count | Statistics |
Monitoring Commands
Queue Monitoring
# Check all queue sizes
Sidekiq::Queue.all.each do |queue|
puts "#{queue.name}: #{queue.size} jobs"
end
# Check worker status
Sidekiq::Workers.new.each do |worker|
puts "PID: #{worker['pid']}, Busy: #{worker['busy']}"
end
# Check failed jobs
puts "Failed: #{Sidekiq::DeadSet.new.size}"
puts "Retrying: #{Sidekiq::RetrySet.new.size}"
Performance Monitoring
# Get processing statistics
stats = Sidekiq::Stats.new
puts "Processed: #{stats.processed}"
puts "Failed: #{stats.failed}"
puts "Enqueued: #{stats.enqueued}"
# Check Redis memory
redis_info = Sidekiq.redis { |conn| conn.info }
puts "Redis Memory: #{redis_info['used_memory_human']}"
Configuration Examples
Production Configuration
# config/sidekiq.yml
:concurrency: 25
:queues:
- [critical, 10]
- [high_priority, 5]
- [default, 3]
- [low_priority, 1]
- [mailers, 1]
:max_retries: 3
:retry_interval: 5
:timeout: 30
# Environment-specific
production:
:concurrency: 50
:queues:
- [critical, 15]
- [high_priority, 10]
- [default, 5]
- [low_priority, 2]
development:
:concurrency: 5
:queues:
- [default, 1]
Redis Configuration
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
config.redis = {
url: ENV['REDIS_URL'] || 'redis://localhost:6379/0',
password: ENV['REDIS_PASSWORD'],
db: 0,
network_timeout: 5,
pool_timeout: 5,
size: 25
}
end
Sidekiq.configure_client do |config|
config.redis = {
url: ENV['REDIS_URL'] || 'redis://localhost:6379/0',
password: ENV['REDIS_PASSWORD'],
db: 0,
size: 5
}
end
Additional Resources
Official Resources:
- Sidekiq Documentation: https://github.com/mperham/sidekiq
- Sidekiq Wiki: https://github.com/mperham/sidekiq/wiki
- Sidekiq Web UI: Built-in monitoring interface
- Sidekiq Pro: Commercial version with advanced features
Related Topics:
- Redis: In-memory data structure store
- ActiveJob: Rails job framework
- Background Jobs: Asynchronous processing
- Performance Optimization: Scaling strategies
10. Interview Questions & Answers
Basic Questions
Q: What is Sidekiq and how does it work?
A: Sidekiq is a high-performance background job processing system for Ruby that uses Redis as storage and threads for concurrent execution. Jobs are serialized, stored in Redis queues, and processed by worker threads that poll for available jobs.
Q: How does Sidekiq differ from Resque?
A: Sidekiq uses threads for concurrency (more memory efficient), while Resque uses processes (better fault isolation). Sidekiq has better performance (10,000+ jobs/sec vs 1,000-5,000 jobs/sec) and includes features like batches and scheduled jobs.
Q: How does Sidekiq handle failed jobs?
A: Sidekiq automatically retries failed jobs based on configuration. Use sidekiq_options retry: 5 to set retry attempts. Failed jobs go to the dead queue after max retries. You can also implement custom retry logic in the job.
Q: What is the difference between perform_async and perform_in?
A: perform_async enqueues the job immediately, while perform_in schedules the job to run after a specified delay. perform_at schedules for a specific time.
Q: How do you monitor Sidekiq?
A: Use the Sidekiq Web UI, CLI commands like sidekiqctl stats, the Ruby API (Sidekiq::Queue.new.size), and external monitoring tools. Monitor queue sizes, worker health, failed jobs, and processing rates.
Advanced Questions
Q: How does Sidekiq manage memory?
A: Sidekiq uses garbage collection after each job. For memory optimization, use GlobalID for ActiveRecord objects, implement garbage collection hooks, monitor memory usage, and use connection pooling for Redis.
Q: What are Sidekiq batches and when would you use them?
A: Batches group multiple jobs together with callbacks. Use them for bulk operations, data processing workflows, or when you need to track completion of related jobs. They provide success/failure callbacks for the entire batch.
Q: How do you ensure a job runs only once?
A: Use sidekiq_options unique_for: 1.hour or implement custom uniqueness using Redis locks. For distributed locks, use Redis SET with NX flag and expiration time.
Q: How do you choose the right concurrency setting?
A: Formula: (CPU cores * 2) + 1. For 4 cores, use 9 threads. Adjust based on job type (I/O vs CPU bound), memory availability, and monitoring results. Start conservative and tune based on performance metrics.
Q: How do you handle Redis connection failures?
A: Configure connection pool settings, implement retry logic, use connection pooling, set appropriate timeouts, and implement health checks. Monitor Redis connection status and implement fallback strategies.
System Design Questions
Q: How would you design a system that sends millions of emails per day?
A: Use Sidekiq with multiple workers, implement batching for efficiency, use dedicated email queues, implement rate limiting, monitor queue sizes, use connection pooling, and implement proper error handling and retries.
Q: How do you handle jobs that must run in a specific order?
A: Sidekiq doesn't guarantee job ordering. For ordered processing, use job chaining (one job triggers the next), implement custom ordering logic, or use external coordination mechanisms like database locks or Redis sorted sets.
Q: How do you handle sudden spikes in job volume?
A: Implement auto-scaling based on queue size, use multiple worker processes, implement job prioritization, add more Redis instances, implement rate limiting, and use monitoring to detect and respond to spikes quickly.
Q: When would you use ActiveJob versus the Sidekiq API directly?
A: ActiveJob is Rails' abstraction layer that works with multiple backends (Sidekiq, Resque, etc.). Sidekiq is a specific backend with a direct API. ActiveJob provides a unified interface but adds overhead, while direct Sidekiq offers better performance and features.
Q: How do you schedule jobs to run later or on a recurring basis?
A: Use perform_in for delayed jobs, perform_at for specific times, or Sidekiq Cron for recurring jobs. For complex scheduling, implement custom scheduling logic or use external schedulers like cron jobs.
11. Real-World Case Studies
π§ E-commerce Email System
π― Problem:
A large e-commerce platform needed to send 500,000+ transactional emails daily with different priorities and delivery requirements.
π‘ Solution:
# Queue Configuration
:queues:
- [critical_emails, 10] # Order confirmations, password resets
- [marketing_emails, 3] # Newsletters, promotions
- [bulk_emails, 1] # Mass marketing campaigns
# Worker Classes
class CriticalEmailWorker
include Sidekiq::Worker
sidekiq_options queue: 'critical_emails', retry: 3
def perform(user_id, email_type)
user = User.find(user_id)
case email_type
when 'order_confirmation'
OrderMailer.confirmation(user).deliver_now
when 'password_reset'
UserMailer.password_reset(user).deliver_now
end
end
end
class MarketingEmailWorker
include Sidekiq::Worker
sidekiq_options queue: 'marketing_emails', retry: 2
def perform(user_id, campaign_id)
user = User.find(user_id)
campaign = Campaign.find(campaign_id)
MarketingMailer.newsletter(user, campaign).deliver_now
end
end
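Enqueuing stays a one-liner at the call site, for example right after an order is saved (the email-type string matches the case statement above):
# e.g. in OrdersController#create, after the order is persisted
CriticalEmailWorker.perform_async(order.user_id, 'order_confirmation')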
π Results:
- β 99.9% email delivery rate
- β Critical emails processed within 30 seconds
- β Marketing emails processed within 5 minutes
- β 50% reduction in email processing time
- β 80% reduction in failed email attempts
πΌοΈ Image Processing Platform
π― Problem:
A social media platform needed to process user-uploaded images (resize, compress, generate thumbnails) with varying processing requirements and user expectations.
π‘ Solution:
# Priority-based image processing
class ImageProcessingWorker
include Sidekiq::Worker
sidekiq_options queue: 'images', retry: 2, backtrace: true
def perform(image_id, priority = 'normal')
image = Image.find(image_id)
# Process based on priority
case priority
when 'urgent'
process_urgent_image(image)
when 'normal'
process_normal_image(image)
when 'background'
process_background_image(image)
end
# Update image status
image.update!(processed: true, processed_at: Time.current)
end
private
def process_urgent_image(image)
# High-quality processing for profile pictures
generate_thumbnails(image, sizes: [50, 100, 200])
compress_image(image, quality: 90)
end
def process_normal_image(image)
# Standard processing for regular uploads
generate_thumbnails(image, sizes: [100, 200])
compress_image(image, quality: 80)
end
def process_background_image(image)
# Background processing for old images
generate_thumbnails(image, sizes: [200])
compress_image(image, quality: 70)
end
end
# Batch processing for bulk operations
class BulkImageProcessingWorker
include Sidekiq::Worker
sidekiq_options queue: 'bulk_images'
def perform(image_ids)
images = Image.where(id: image_ids)
batch = Sidekiq::Batch.new
batch.on(:success, BulkImageCallback)
batch.jobs do
images.each do |image|
ImageProcessingWorker.perform_async(image.id, 'background')
end
end
end
end
π Results:
- β Profile pictures processed within 10 seconds
- β Regular uploads processed within 2 minutes
- β Background processing handled 10,000+ images daily
- β 70% reduction in storage costs through compression
- β 95% user satisfaction with image quality
π Data Analytics Pipeline
π― Problem:
A SaaS analytics platform needed to process millions of data points daily, generate reports, and provide real-time insights while maintaining data accuracy and processing efficiency.
π‘ Solution:
# Multi-stage data processing pipeline
class DataCollectionWorker
include Sidekiq::Worker
sidekiq_options queue: 'data_collection', retry: 3
def perform(event_data)
# Collect and validate data
event = Event.create!(event_data)
# Trigger next stage
DataProcessingWorker.perform_async(event.id)
end
end
class DataProcessingWorker
include Sidekiq::Worker
sidekiq_options queue: 'data_processing', retry: 2
def perform(event_id)
event = Event.find(event_id)
# Process and aggregate data
processed_data = process_event_data(event)
# Store processed data
ProcessedEvent.create!(processed_data)
# Trigger analytics generation
AnalyticsWorker.perform_async(event.user_id)
end
end
class AnalyticsWorker
include Sidekiq::Worker
sidekiq_options queue: 'analytics', retry: 1
def perform(user_id)
user = User.find(user_id)
# Generate user analytics
analytics = generate_user_analytics(user)
# Update user dashboard
user.update!(analytics: analytics)
# Send notification if significant changes
if analytics.significant_change?
NotificationWorker.perform_async(user_id, 'analytics_update')
end
end
end
class ReportGenerationWorker
include Sidekiq::Worker
sidekiq_options queue: 'reports', retry: 1
def perform(report_type, date_range)
# Generate scheduled reports
report = generate_report(report_type, date_range)
# Store report
Report.create!(report)
# Send to stakeholders
ReportMailer.daily_report(report).deliver_now
end
end
π Results:
- β 5 million data points processed daily
- β Real-time analytics updated within 5 minutes
- β 99.9% data accuracy maintained
- β 60% reduction in processing time
- β Automated report generation for 100+ clients
π Financial Transaction Processing
π― Problem:
A fintech company needed to process high-value financial transactions with strict compliance requirements, audit trails, and real-time fraud detection.
π‘ Solution:
# Secure transaction processing with audit trails
class TransactionProcessingWorker
include Sidekiq::Worker
sidekiq_options queue: 'transactions', retry: 0, backtrace: true
def perform(transaction_id)
transaction = Transaction.find(transaction_id)
# Create audit trail
AuditLog.create!(
transaction: transaction,
action: 'processing_started',
timestamp: Time.current
)
begin
# Validate transaction
validate_transaction(transaction)
# Check fraud detection
# Run the fraud check inline (Sidekiq workers have no perform_now; call the instance directly)
fraud_check = FraudDetectionWorker.new.perform(transaction.id)
if fraud_check[:suspicious]
raise "Transaction flagged for fraud review"
end
# Process transaction
process_transaction(transaction)
# Update status
transaction.update!(status: 'completed')
# Create success audit log
AuditLog.create!(
transaction: transaction,
action: 'processing_completed',
timestamp: Time.current
)
# Send confirmation
NotificationWorker.perform_async(transaction.user_id, 'transaction_completed')
rescue => e
# Log failure
AuditLog.create!(
transaction: transaction,
action: 'processing_failed',
error: e.message,
timestamp: Time.current
)
# Update status
transaction.update!(status: 'failed', error: e.message)
# Send failure notification
NotificationWorker.perform_async(transaction.user_id, 'transaction_failed')
raise e
end
end
end
class FraudDetectionWorker
include Sidekiq::Worker
sidekiq_options queue: 'fraud_detection', retry: 1
def perform(transaction_id)
transaction = Transaction.find(transaction_id)
# Implement fraud detection logic
risk_score = calculate_risk_score(transaction)
if risk_score > 0.8
# Flag for manual review
ManualReviewWorker.perform_async(transaction_id)
return { suspicious: true, risk_score: risk_score }
end
{ suspicious: false, risk_score: risk_score }
end
end
class ComplianceReportingWorker
include Sidekiq::Worker
sidekiq_options queue: 'compliance', retry: 3
def perform(report_date)
# Generate compliance reports
report = generate_compliance_report(report_date)
# Store report
ComplianceReport.create!(report)
# Submit to regulatory bodies
submit_to_regulators(report)
end
end
π Results:
- β 100% transaction audit trail maintained
- β Fraud detection within 2 seconds
- β 99.99% transaction success rate
- β Regulatory compliance reports automated
- β 90% reduction in manual review time
π± Mobile App Notification System
π― Problem:
A mobile app needed to send personalized push notifications to millions of users with different engagement levels, time zones, and preferences.
π‘ Solution:
# Intelligent notification system
class NotificationWorker
include Sidekiq::Worker
sidekiq_options queue: 'notifications', retry: 2
def perform(user_id, notification_type, data = {})
user = User.find(user_id)
# Check user preferences
return unless user.notifications_enabled?
# Check time zone and quiet hours
return if user.in_quiet_hours?
# Personalize notification
notification = personalize_notification(user, notification_type, data)
# Send via multiple channels
send_push_notification(user, notification)
send_email_notification(user, notification) if user.email_enabled?
send_sms_notification(user, notification) if user.sms_enabled?
# Track delivery
NotificationDelivery.create!(
user: user,
notification_type: notification_type,
sent_at: Time.current
)
end
end
class BatchNotificationWorker
include Sidekiq::Worker
sidekiq_options queue: 'batch_notifications'
def perform(campaign_id)
campaign = Campaign.find(campaign_id)
# Get target users
users = campaign.target_users
# Process in batches
users.in_batches(of: 1000) do |batch|
batch.each do |user|
NotificationWorker.perform_async(
user.id,
'campaign',
{ campaign_id: campaign.id }
)
end
end
end
end
class EngagementOptimizationWorker
include Sidekiq::Worker
sidekiq_options queue: 'engagement', retry: 1
def perform(user_id)
user = User.find(user_id)
# Analyze user engagement
engagement_score = calculate_engagement_score(user)
# Adjust notification frequency
if engagement_score < 0.3
# Reduce notifications for low-engagement users
user.update!(notification_frequency: 'reduced')
elsif engagement_score > 0.8
# Increase notifications for high-engagement users
user.update!(notification_frequency: 'increased')
end
end
end
π Results:
- β 10 million notifications sent daily
- β 95% delivery rate across all channels
- β 40% increase in user engagement
- β 60% reduction in notification fatigue
- β Personalized delivery timing for each user
12. Commands & Concepts Reference Table
Core Commands
Command | Description | Usage |
---|---|---|
bundle exec sidekiq | Start Sidekiq with default configuration | bundle exec sidekiq -C config/sidekiq.yml |
bundle exec sidekiqctl | Show Sidekiq control commands | bundle exec sidekiqctl restart |
bundle exec sidekiqctl restart | Restart Sidekiq processes | bundle exec sidekiqctl restart |
bundle exec sidekiqmon | Monitor Sidekiq processes | bundle exec sidekiqmon |
Sidekiq::Queue.new.size | Get default queue size | Sidekiq::Queue.new.size |
Sidekiq::Queue.new('default').size | Get specific queue size | Sidekiq::Queue.new('default').size |
Sidekiq::Stats.new | Get Sidekiq statistics | Sidekiq::Stats.new.processed |
Sidekiq::Stats.new.processed | Get total processed jobs count | Sidekiq::Stats.new.processed |
bundle exec sidekiq -q high,default | Start worker for specific queues | Queue-specific processing |
bundle exec sidekiq -c 10 | Start worker with specific concurrency | Performance tuning |
bundle exec sidekiq -r ./config/initializers/sidekiq.rb | Start with custom configuration | Custom setup |
sidekiqctl stop | Gracefully stop Sidekiq workers | Maintenance |
sidekiqctl restart | Restart Sidekiq workers | Deployment |
ps aux \| grep sidekiq | Check running Sidekiq processes | Monitoring |
MyWorker.perform_async(args) | Enqueue job for immediate processing | Job enqueueing |
MyWorker.perform_in(3600, args) | Schedule job to run in 1 hour | Delayed jobs |
MyWorker.perform_at(Time.now + 1.hour, args) | Schedule job for specific time | Scheduled jobs |
Sidekiq::Queue.new.size | Get default queue size | Queue monitoring |
Sidekiq::Stats.new.processed | Get total processed jobs count | Statistics |
Ruby/Rails Commands
Command | Description | Usage |
---|---|---|
MyWorker.perform_async(args) | Enqueue job immediately | Basic job enqueueing |
MyWorker.perform_in(3600, args) | Enqueue job with delay (seconds) | Scheduled jobs |
MyWorker.perform_at(Time.now + 1.hour, args) | Enqueue job at specific time | Time-based scheduling |
Sidekiq::Queue.new.size | Get queue size | Queue monitoring |
Sidekiq::Stats.new.processed | Get total processed jobs | Statistics |
Sidekiq::Stats.new.failed | Get total failed jobs | Error monitoring |
Sidekiq::Queue.all.map(&:size) | Get all queue sizes | Queue analysis |
Redis Commands
Command | Description | Usage |
---|---|---|
redis-cli ping | Test Redis connection | Connection testing |
redis-cli llen sidekiq:queue:default | Check queue length | Queue monitoring |
redis-cli keys sidekiq:* | List all Sidekiq keys | Debugging |
redis-cli flushall | Clear all Redis data | Development cleanup |
Monitoring Commands
Command | Description | Usage |
---|---|---|
tail -f log/sidekiq.log | Monitor Sidekiq logs | Real-time monitoring |
pgrep -f "sidekiq" | Find Sidekiq processes | Process monitoring |
redis-cli info memory | Check Redis memory usage | Performance monitoring |
redis-cli info stats | Get Redis statistics | System health |
Core Concepts
Concept | Description | Usage |
---|---|---|
Worker | Ruby class that includes Sidekiq::Worker and implements perform method | Background task execution |
Process | Sidekiq process that runs workers and processes jobs | Job processing |
Queue | Redis list that holds pending jobs | Job organization |
Job | Serialized worker instance with arguments | Task unit |
Concurrency | Number of threads processing jobs simultaneously | Performance tuning |
Retry | Automatic re-execution of failed jobs | Error handling |
Middleware | Code that runs before/after job execution | Logging, monitoring, error handling |
Batch | Group of jobs that can be tracked together | Complex workflows |
Configuration Options
Option | Description | Default |
---|---|---|
concurrency | Number of threads processing jobs | 25 |
queues | List of queues to process | ['default'] |
retry | Number of retry attempts for failed jobs | 25 |
timeout | Job timeout in seconds | 8 |
backtrace | Number of backtrace lines to log | 0 |
dead_job_max | Maximum number of dead jobs to keep | 10000 |
Redis Data Structures
Key Pattern | Data Type | Description |
---|---|---|
sidekiq:queue:default | List | Pending jobs in default queue |
sidekiq:processed | String | Total processed jobs counter |
sidekiq:failed | String | Total failed jobs counter |
sidekiq:workers | Set | Active worker processes |
sidekiq:dead | Sorted Set | Dead jobs (failed max retries) |
sidekiq:scheduled | Sorted Set | Scheduled jobs |
sidekiq:retry | Sorted Set | Jobs waiting to retry |
Environment Variables
Variable | Description | Example |
---|---|---|
SIDEKIQ_CONCURRENCY | Number of worker threads | 25 |
SIDEKIQ_QUEUES | Comma-separated list of queues | high,default,low |
REDIS_URL | Redis connection URL | redis://localhost:6379/0 |
RAILS_ENV | Rails environment | production |
SIDEKIQ_TIMEOUT | Job timeout in seconds | 8 |
Common Job Patterns
Pattern | Description | Use Case |
---|---|---|
Email Worker | Send emails asynchronously | User notifications, marketing emails |
Data Processing | Process large datasets in background | Analytics, reports, data imports |
File Processing | Handle file uploads and processing | Image resizing, document processing |
API Integration | Make external API calls asynchronously | Third-party integrations, webhooks |
Batch Processing | Process jobs in batches with tracking | Complex workflows, data pipelines |
Scheduled Jobs | Execute jobs at specific times | Daily reports, maintenance tasks |
Learn more about Rails
Learn more about Active Job
Learn more about DevOps