Mastering Resque: Redis-backed Job Queue for Ruby
Table of Contents
- 1. Fundamentals & Core Concepts
- 2. Installation & Setup
- 3. Basic Usage & Workers
- 4. Queue Management & Priority
- 5. Monitoring & Web UI
- 6. Advanced Features
- 7. Troubleshooting & Best Practices
- 8. Common Issues & Debugging
- 9. Interview Questions & Answers
- 10. Real-World Case Studies
- 11. Reference & Commands
- 12. Commands & Concepts Reference Table
1. Fundamentals & Core Concepts
Resque is a Redis-backed job queue for Ruby that provides a simple and reliable way to handle background job processing. It uses Redis for job storage and processes jobs in separate worker processes.
What is Resque?
Resque is a background job processing system that allows you to move time-consuming tasks out of your main application flow. It stores jobs in Redis and processes them using worker processes, making your application more responsive and scalable.
Pros
- Simple and reliable
- Process-based concurrency (better fault isolation)
- Excellent monitoring via the built-in Web UI
- Redis-backed (fast, with optional persistence)
- Mature and battle-tested
- Good documentation and community
Cons
- Lower throughput than Sidekiq
- Higher memory usage (one process per worker)
- No built-in scheduling (requires the resque-scheduler gem)
- Fewer advanced features
- No per-job priorities (priority comes only from queue ordering)
- Less active development
Alternatives
- Sidekiq: Higher performance, thread-based
- Delayed::Job: Database-backed, no Redis needed
- Que: PostgreSQL-based, ACID compliant
- ActiveJob: Rails abstraction layer
Why Use Resque?
- To perform tasks asynchronously (emails, notifications)
- To avoid blocking the main web request/response cycle
- To increase app responsiveness and user experience
- To retry failed jobs automatically
- To improve scalability and performance
- To handle high-volume job processing
Resque Architecture
How Resque Works:
- Job Creation: Jobs are enqueued to Redis
- Queue Storage: Jobs are stored in Redis lists (see the sketch after this list)
- Worker Processes: Separate processes poll the queues
- Job Execution: Workers pick up and execute jobs
- Result Handling: Success/failure is recorded in Redis
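What this looks like inside Redis, as a minimal console sketch (it assumes the EmailWorker class defined in section 3; the JSON payload shape is what Resque itself writes):
# Enqueueing pushes a JSON payload onto a Redis list named after the queue
Resque.enqueue(EmailWorker, 42, 'welcome')
# Under Resque's namespace, the job now sits on the queue:default list
Resque.redis.lrange('queue:default', 0, -1)
# => ["{\"class\":\"EmailWorker\",\"args\":[42,\"welcome\"]}"]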
Key Components
- Redis: In-memory data store for job queues
- Workers: Separate processes that execute jobs
- Queues: Redis lists that hold pending jobs
- Jobs: Ruby classes that implement
perform
method - Web UI: Monitoring interface for queues and workers
2. Installation & Setup
Installation Steps
1. Add Resque to Gemfile
# Gemfile
gem 'resque', '~> 2.0'
gem 'redis', '~> 5.0'
2. Install Dependencies
# Install gems
bundle install
# Install Redis (Ubuntu/Debian)
sudo apt-get install redis-server
# Install Redis (macOS)
brew install redis
# Start Redis
redis-server
3. Configure Resque
# config/initializers/resque.rb
Resque.redis = Redis.new(url: 'redis://localhost:6379/0')
# Set Redis namespace
Resque.redis.namespace = "resque:myapp"
4. Create Resque Configuration
# config/resque.yml
development:
  redis: redis://localhost:6379/0
  namespace: resque:myapp
production:
  redis: redis://your-redis-server:6379/0
  namespace: resque:myapp
5. Configure Rails Integration
# config/application.rb
config.active_job.queue_adapter = :resque
# config/routes.rb
require 'resque/server'
Rails.application.routes.draw do
mount Resque::Server.new, at: '/resque'
end
3. Basic Usage & Workers
Creating Your First Worker
1. Generate Worker
# rails generate job creates an ActiveJob class; for a plain Resque worker
# like the one below, simply add a Ruby class under app/jobs (or app/workers)
rails generate job EmailWorker
2. Define Worker Class
class EmailWorker
@queue = :default
def self.perform(user_id, email_type)
user = User.find(user_id)
case email_type
when 'welcome'
UserMailer.welcome_email(user).deliver_now
when 'reminder'
UserMailer.reminder_email(user).deliver_now
end
puts "Email sent to user #{user_id}"
end
end
3. Enqueue Job
# Enqueue job
Resque.enqueue(EmailWorker, user.id, 'welcome')
# Or, using an ActiveJob class with the :resque adapter (see the sketch below)
WelcomeEmailJob.perform_later(user.id)
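For completeness, here is a minimal sketch of an ActiveJob class routed through Resque by the queue_adapter configured in section 2 (the WelcomeEmailJob name is illustrative, not part of Resque):
# app/jobs/welcome_email_job.rb
class WelcomeEmailJob < ApplicationJob
  queue_as :mailers

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome_email(user).deliver_now
  end
end
# Enqueue through ActiveJob; the :resque adapter pushes it onto the mailers queue
WelcomeEmailJob.perform_later(user.id)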
Queue Management
Queue Configuration
# config/resque.yml
development:
  redis: redis://localhost:6379/0
  namespace: resque:myapp
  queues:
    - default
    - high
    - low
    - mailers
The @queue class instance variable defines which queue the job is pushed to. Jobs in the same queue are processed in FIFO (First In, First Out) order.
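A quick way to see that ordering from a console: Resque.peek returns queued payloads in the order a worker will pop them (this sketch reuses the EmailWorker class from above):
Resque.enqueue(EmailWorker, 1, 'welcome')
Resque.enqueue(EmailWorker, 2, 'reminder')
# Peek at the first 10 payloads without removing them; oldest first
Resque.peek(:default, 0, 10)
# => [{"class"=>"EmailWorker", "args"=>[1, "welcome"]},
#     {"class"=>"EmailWorker", "args"=>[2, "reminder"]}]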
Queue Priority
# Workers check queues in the order they are listed (left to right)
# Example with QUEUE=high,default,low,mailers:
# high is checked first on every poll, so it is always drained first
# default is only served while high is empty
# low is only served while high and default are empty
# mailers is served last, when every other queue is empty
Queue Processing Rules
How Resque Processes Queues:
On each polling cycle a worker walks its queue list in order and pops a job from the first non-empty queue, so earlier queues effectively have higher priority. Jobs within each queue are processed in FIFO order.
How Resque Implements Queue Processing (simplified):
# Resque's internal queue polling logic (simplified)
class QueuePoller
  def poll_queues
    queues = ['high', 'default', 'low', 'mailers']
    queues.each do |queue_name|
      # Try to pop a job from this queue
      job_data = redis.lpop("queue:#{queue_name}")
      if job_data
        process_job(job_data)
        break # Found a job; the next poll starts again from the first queue
      end
    end
  end
end
Example Processing Cycle:
# With QUEUE=high,default,low,mailers
# Every poll starts at the first queue in the list:
Poll 1: high has jobs -> process one job from high
Poll 2: high has jobs -> process one job from high
Poll 3: high is empty -> process one job from default
Poll 4: a new job lands on high -> process it before returning to default
# Earlier queues therefore take precedence whenever they have work
# Jobs within each queue are processed FIFO
Single Worker Processing
Single Worker Behavior:
With one worker, Resque processes jobs sequentially based on queue order: the worker drains earlier queues before it touches later ones.
Single Worker Example
# Worker processes jobs in this order:
# 1. All high priority jobs (first queue)
# 2. All default jobs (second queue)
# 3. All low priority jobs (third queue)
# 4. All mailers jobs (fourth queue)
# Example job processing sequence:
# High: Job A, Job B, Job C
# Default: Job D, Job E
# Low: Job F, Job G, Job H
# Mailers: Job I
Single Worker Trade-offs
Benefits:
- Guaranteed queue order
- Simple to understand and debug
- No resource contention
- Predictable processing
Limitations:
- Lower throughput
- Slower job processing
- No parallel processing
- Bottleneck for high volume
Multiple Workers Processing
Multiple Workers Behavior:
With multiple workers, Resque distributes jobs across workers while maintaining queue order. Each worker follows the same queue rules but can process jobs in parallel.
Multiple Workers Example
# Configuration with 3 workers
# config/resque.yml
development:
  redis: redis://localhost:6379/0
  queues:
    - high
    - default
    - low
    - mailers
# Job distribution across workers:
# Worker 1: High Job A, Default Job D
# Worker 2: High Job B, Default Job E
# Worker 3: High Job C, Low Job F
# All high jobs processed first, then default, etc.
Multi-Worker Queue Flow:
- Queue Polling: All workers poll the same queue list
- Queue Check: Each worker checks the first queue in the list first
- Job Distribution: Available jobs are distributed across workers
- Parallel Processing: Multiple jobs run simultaneously
- Queue Depletion: Workers move to the next queue when the current one is empty
Multiple Workers Trade-offs
Benefits:
- Higher throughput
- Parallel job processing
- Better resource utilization
- Faster job completion
- Maintains queue order
Considerations:
- More complex monitoring
- Resource contention possible
- Memory usage increases
- Network overhead
4. Queue Management & Priority
Queue Basics
How Resque Queues Work:
Resque uses FIFO (First In, First Out) queues stored in Redis. Jobs are processed in the order they were added, with higher priority queues processed first.
Queue Assignment
class HighPriorityJob
@queue = :high_priority
def self.perform(data)
# Process high priority task
end
end
class DefaultJob
@queue = :default
def self.perform(data)
# Process normal task
end
end
Queue Priority System
Priority Processing Order
1. high_priority
2. default
3. low_priority
4. mailers
bundle exec rake resque:work QUEUE=high_priority,default,low_priority
Queue Configuration
QUEUES = %w[high_priority default low_priority mailers]
# Resque has no global default-queue setting; each job class chooses its
# queue via the @queue class instance variable
Queue Management Strategies
Priority-based Strategy
class OrderProcessingJob
@queue = :high_priority
def self.perform(order_id)
# Process payment
end
end
class UserNotificationJob
@queue = :default
def self.perform(user_id, message)
# Send notification
end
end
Resource-based Strategy
class DataAnalysisJob
@queue = :cpu_intensive
def self.perform(data_id)
# Heavy computation
end
end
class FileProcessingJob
@queue = :io_intensive
def self.perform(file_id)
# File operations
end
end
Queue Monitoring
Queue Statistics
Resque.queues.each do |queue|
puts "#{queue}: #{Resque.size(queue)} jobs"
end
Queue Health Monitoring
def check_queue_health
Resque.queues.each do |queue|
size = Resque.size(queue)
if size > 1000
Rails.logger.warn "Queue #{queue} has #{size} jobs"
end
end
end
Queue Optimization
Queue-specific Workers
bundle exec rake resque:workers QUEUE=high_priority COUNT=3
bundle exec rake resque:workers QUEUE=default COUNT=5
Queue Load Balancing
def start_workers_based_on_load
Resque.queues.each do |queue|
size = Resque.size(queue)
workers_needed = size > 1000 ? 5 : 2
start_workers(queue, workers_needed)
end
end
Queue Troubleshooting
Problem: High-priority jobs are not processed first. Solution: Ensure the worker lists queues in the correct order:
QUEUE=high_priority,default,low_priority
Problem: A queue keeps growing. Solution: Add more workers or optimize job processing speed.
Problem: Jobs sit untouched in a queue. Solution: Check whether any worker is listening to that queue:
ps aux | grep resque
Problem: Workers overload the host. Solution: Reduce the number of workers or use queue-specific workers.
Queue Best Practices
Queue Guidelines:
- Use descriptive queue names that reflect priority
- Limit queue names to 3-5 different priorities
- Monitor queue sizes and processing rates
- Use dedicated workers for critical queues
- Balance worker allocation based on queue load
- Set up alerts for queue backlogs
Queue Naming Conventions
:high_priority # Critical business functions
:default # Normal application tasks
:low_priority # Background maintenance tasks
:mailers # Email sending
:notifications # Push notifications
:data_processing # Data analysis and processing
:api_sync # External API synchronization
:cleanup # Maintenance and cleanup tasks
Queue Configuration Examples
E-commerce Platform:
:order_processing # Payment processing
:inventory_sync # Stock updates
:customer_service # Support tickets
:email_notifications # Order confirmations
:analytics # User behavior tracking
:recommendations # Product suggestions
:data_cleanup # Old order cleanup
:report_generation # Daily reports
:maintenance # System maintenance
Social Media App:
:user_actions # Posts, comments, likes
:notifications # Real-time notifications
:content_processing # Media uploads
:feed_generation # News feed updates
:friend_suggestions # User recommendations
:email_digests # Daily summaries
:analytics # Usage statistics
:data_export # User data exports
:cleanup # Old content cleanup
5. Monitoring & Web UI
Monitoring Strategies
Web UI Monitoring
# Access Resque Web UI
# http://localhost:3000/resque
# Features available:
# - Real-time queue monitoring
# - Worker status and statistics
# - Failed job management
# - Job retry functionality
# - Queue size tracking
Command Line Monitoring
# Check queue sizes
Resque.size(:default)
Resque.size(:high)
# Get all queue sizes
Resque.queues.each do |queue|
puts "#{queue}: #{Resque.size(queue)}"
end
# Check worker status
Resque.workers.each do |worker|
puts "Worker #{worker}: #{worker.state}"
end
Health Checks
# config/initializers/resque.rb
class ResqueHealthCheck
def self.healthy?
begin
Resque.redis.ping == "PONG"
rescue => e
Rails.logger.error "Resque health check failed: #{e.message}"
false
end
end
def self.queue_sizes
Resque.queues.map { |queue| [queue, Resque.size(queue)] }.to_h
end
def self.worker_count
Resque.workers.count
end
end
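One way to expose ResqueHealthCheck to a load balancer or uptime monitor, sketched as a bare Rack endpoint in the routes file (the /healthz/resque path is an assumption, not a Resque convention):
# config/routes.rb -- hypothetical health endpoint built on ResqueHealthCheck
Rails.application.routes.draw do
  get '/healthz/resque', to: proc { |_env|
    if ResqueHealthCheck.healthy?
      body = { status: 'ok', queues: ResqueHealthCheck.queue_sizes }.to_json
      [200, { 'Content-Type' => 'application/json' }, [body]]
    else
      [503, { 'Content-Type' => 'application/json' }, [{ status: 'down' }.to_json]]
    end
  }
end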
Performance Monitoring
Key Metrics to Monitor:
- Queue Sizes: Number of jobs waiting in each queue
- Processing Rate: Jobs processed per minute
- Worker Health: Number of active workers
- Failed Jobs: Number of failed jobs
- Memory Usage: Memory consumption per worker
Monitoring Scripts
# Check all queue sizes
Resque.queues.each do |queue|
size = Resque.size(queue)
puts "#{queue}: #{size} jobs"
end
# Get processing statistics
stats = {
total_processed: Resque.info[:processed],
total_failed: Resque.info[:failed],
workers: Resque.workers.count,
queues: Resque.queues.count
}
puts "Statistics: #{stats}"
Alerting & Notifications
Queue Size Alerts
# config/initializers/resque_alerts.rb
class ResqueAlerting
def self.check_queue_sizes
thresholds = {
high: 100,
default: 500,
low: 1000,
mailers: 200
}
thresholds.each do |queue, threshold|
size = Resque.size(queue)
if size > threshold
send_alert("Queue #{queue} has #{size} jobs (threshold: #{threshold})")
end
end
end
def self.send_alert(message)
# Send to your preferred alerting service
# Slack, email, PagerDuty, etc.
Rails.logger.warn "RESQUE ALERT: #{message}"
end
end
Worker Health Monitoring
# Monitor worker health
def check_worker_health
workers = Resque.workers
active_workers = workers.select(&:working?).count
if active_workers == 0
send_alert("No active Resque workers!")
elsif active_workers < 2
send_alert("Only #{active_workers} active workers")
end
end
Dashboard Integration
Custom Dashboard
# app/controllers/admin/resque_controller.rb
class Admin::ResqueController < ApplicationController
def dashboard
@queue_sizes = Resque.queues.map { |q| [q, Resque.size(q)] }.to_h
@worker_count = Resque.workers.count
@processed_count = Resque.info[:processed]
@failed_count = Resque.info[:failed]
end
end
# app/views/admin/resque/dashboard.html.erb
Resque Dashboard
Workers
<%= @worker_count %>
Processed
<%= @processed_count %>
Failed
<%= @failed_count %>
<% @queue_sizes.each do |queue, size| %>
<%= queue %>
<%= size %>
<% end %>
Troubleshooting
Problem: Jobs are enqueued but never run. Solution: Check that workers are running:
ps aux | grep resque
Problem: Nothing can reach the queue. Solution: Verify Redis is running:
redis-cli ping
Problem: Memory usage keeps climbing. Solution: Reduce worker count or implement job cleanup.
Problem: Jobs fail repeatedly. Solution: Check the job logic and implement proper error handling.
6. Advanced Features
Advanced Configuration
Redis Configuration
# config/initializers/resque.rb
require 'resque'
# Configure Redis connection
Resque.redis = Redis.new(
host: ENV['REDIS_HOST'] || 'localhost',
port: ENV['REDIS_PORT'] || 6379,
password: ENV['REDIS_PASSWORD'],
db: ENV['REDIS_DB'] || 0,
timeout: 5,
reconnect_attempts: 3
)
# Set Redis namespace
Resque.redis.namespace = "resque:#{Rails.env}"
# Optional: pool connections with ConnectionPool::Wrapper, which exposes the
# plain Redis interface that Resque.redis= expects (a bare ConnectionPool does not)
require 'connection_pool'
Resque.redis = ConnectionPool::Wrapper.new(size: 25, timeout: 3) do
  Redis.new(url: ENV['REDIS_URL'] || 'redis://localhost:6379/0')
end
Worker Configuration
# config/initializers/resque.rb
# Configure worker settings
Resque.after_fork do |job|
# Reconnect to database after fork
ActiveRecord::Base.establish_connection
end
# Configure job failure handling
Resque::Failure::Multiple.classes = [
Resque::Failure::Redis,
Resque::Failure::Slack
]
# Use the Multiple backend so every class listed above receives failures
Resque::Failure.backend = Resque::Failure::Multiple
Environment Configuration
# config/environments/production.rb
Rails.application.configure do
# Configure Resque for production
config.active_job.queue_adapter = :resque
# Set queue priorities
config.resque_queues = %w[high default low mailers]
# Configure worker count
config.resque_worker_count = ENV['RESQUE_WORKER_COUNT'] || 5
end
Job Scheduling
Resque Scheduler
# Add to Gemfile
gem 'resque-scheduler'
# config/initializers/resque_scheduler.rb
require 'resque-scheduler'
require 'resque/scheduler/server'
# Configure scheduler
Resque::Scheduler.dynamic = true
Resque.schedule = YAML.load_file(Rails.root.join('config', 'resque_schedule.yml'))
# Schedule jobs
# config/resque_schedule.yml
daily_report:
  cron: "0 6 * * *"
  class: "DailyReportJob"
  queue: default
  args: []
weekly_cleanup:
  cron: "0 2 * * 0"
  class: "WeeklyCleanupJob"
  queue: low
  args: []
Delayed Jobs
# Schedule job for later
Resque.enqueue_in(1.hour, EmailJob, user.id, 'reminder')
# Schedule job for specific time
Resque.enqueue_at(Time.now + 1.hour, EmailJob, user.id, 'reminder')
# Using ActiveJob
EmailJob.set(wait: 1.hour).perform_later(user.id, 'reminder')
EmailJob.set(wait_until: Time.now + 1.hour).perform_later(user.id, 'reminder')
Job Security & Reliability
Job Retry Logic
class ReliableJob
@queue = :default
def self.perform(data)
begin
# Job logic here
process_data(data)
rescue => e
Rails.logger.error "Job failed: #{e.message}"
# Retry logic
if retry_count < max_retries
Resque.enqueue_in(retry_delay, self, data)
else
# Move to failed queue
raise e
end
end
end
def self.retry_count
# Implement retry counting logic
end
def self.max_retries
3
end
def self.retry_delay
60 # seconds
end
end
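The retry_count method above is left as a stub; one hedged way to back it is a Redis counter keyed per payload (the retries:* key pattern and the 24-hour expiry are assumptions, not Resque APIs; in practice perform would call increment_retry_count(data) and compare the returned value):
require 'digest'
require 'json'

class ReliableJob
  # Count attempts per payload in Redis so the count survives across workers
  def self.retry_key(data)
    "retries:#{name}:#{Digest::SHA1.hexdigest(data.to_json)}"
  end

  def self.increment_retry_count(data)
    count = Resque.redis.incr(retry_key(data))
    Resque.redis.expire(retry_key(data), 24 * 60 * 60) # drop stale counters
    count
  end

  def self.retry_count_for(data)
    Resque.redis.get(retry_key(data)).to_i
  end
end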
Job Uniqueness
class UniqueJob
@queue = :default
def self.perform(user_id, action)
# Use Redis lock for uniqueness
lock_key = "unique_job:#{user_id}:#{action}"
if Resque.redis.set(lock_key, 1, nx: true, ex: 3600)
begin
# Process the job
process_user_action(user_id, action)
ensure
# Release the lock
Resque.redis.del(lock_key)
end
else
Rails.logger.info "Job already running for user #{user_id}"
end
end
end
Job Batching
class BatchJob
@queue = :default
def self.perform(batch_id)
# Process batch of items
batch_items = get_batch_items(batch_id)
batch_items.each do |item|
# Process individual item
process_item(item)
end
# Mark batch as complete
mark_batch_complete(batch_id)
end
def self.create_batch(items)
batch_id = SecureRandom.uuid
# Store batch items
Resque.redis.hset("batch:#{batch_id}", "items", items.to_json)
Resque.redis.expire("batch:#{batch_id}", 3600)
# Enqueue batch job
Resque.enqueue(self, batch_id)
end
end
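get_batch_items and mark_batch_complete above are application-level helpers; a hedged sketch of implementations that read back the hash create_batch stores (the field names mirror create_batch, and the completed_at field is an assumption):
require 'json'

class BatchJob
  def self.get_batch_items(batch_id)
    raw = Resque.redis.hget("batch:#{batch_id}", "items")
    raw ? JSON.parse(raw) : []
  end

  def self.mark_batch_complete(batch_id)
    Resque.redis.hset("batch:#{batch_id}", "completed_at", Time.now.to_i)
  end
end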
Performance Optimization
Memory Management
# config/initializers/resque.rb
# Configure memory limits
Resque.after_fork do |job|
# Reset memory after each job
GC.start
# Set memory limit
if Process.getrlimit(Process::RLIMIT_AS)[0] > 500.megabytes
Process.setrlimit(Process::RLIMIT_AS, 500.megabytes)
end
end
# Monitor memory usage (requires the get_process_mem gem)
class MemoryMonitor
def self.check_memory
memory_usage = GetProcessMem.new.mb
if memory_usage > 500
Rails.logger.warn "High memory usage: #{memory_usage}MB"
GC.start
end
end
end
Connection Pooling
# config/initializers/resque.rb
require 'connection_pool'
# Pool Redis connections with ConnectionPool::Wrapper so Resque.redis= still
# receives an object that speaks the plain Redis interface
Resque.redis = ConnectionPool::Wrapper.new(size: 25, timeout: 3) do
Redis.new(
url: ENV['REDIS_URL'] || 'redis://localhost:6379/0',
timeout: 5,
reconnect_attempts: 3
)
end
# Configure database connection pool
Resque.after_fork do |job|
ActiveRecord::Base.connection_pool.disconnect!
ActiveRecord::Base.establish_connection
end
Worker Scaling
# config/initializers/resque_scaling.rb
class ResqueScaling
def self.scale_workers
queue_sizes = Resque.queues.map { |q| [q, Resque.size(q)] }.to_h
total_jobs = queue_sizes.values.sum
current_workers = Resque.workers.count
target_workers = calculate_target_workers(total_jobs)
if target_workers > current_workers
start_workers(target_workers - current_workers)
elsif target_workers < current_workers
stop_workers(current_workers - target_workers)
end
end
def self.calculate_target_workers(total_jobs)
# Simple scaling logic
[total_jobs / 10, 1].max
end
end
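start_workers and stop_workers above are left undefined; a hedged sketch of what they might look like when all workers run on the same host (spawning rake tasks directly is an assumption, and most deployments delegate this to a process manager such as systemd, foreman, or Kubernetes):
class ResqueScaling
  def self.start_workers(count)
    count.times do
      # Spawn a detached worker process listening on all queues
      pid = Process.spawn({ 'QUEUE' => '*' }, 'bundle exec rake resque:work',
                          out: 'log/resque.log', err: 'log/resque.log')
      Process.detach(pid)
    end
  end

  def self.stop_workers(count)
    # QUIT asks a worker to finish its current job and exit gracefully;
    # killing by pid only works for workers on this same host
    Resque.workers.take(count).each { |worker| Process.kill('QUIT', worker.pid) }
  end
end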
Custom Extensions
Custom Failure Backend
# lib/resque/failure/slack.rb
module Resque
module Failure
class Slack < Base
def save
data = {
text: "Job failed: #{payload['class']}",
attachments: [{
fields: [
{ title: "Queue", value: queue, short: true },
{ title: "Arguments", value: payload['args'].to_json, short: true },
{ title: "Exception", value: exception.message, short: false }
]
}]
}
# Send to Slack
HTTParty.post(ENV['SLACK_WEBHOOK_URL'], body: data.to_json)
end
end
end
end
Custom Job Classes
# app/jobs/base_job.rb
class BaseJob
@queue = :default
def self.perform(*args)
# Common job logic
before_perform(*args)
begin
execute(*args)
after_perform(*args)
rescue => e
on_failure(e, *args)
raise e
end
end
def self.before_perform(*args)
# Override in subclasses
end
def self.after_perform(*args)
# Override in subclasses
end
def self.on_failure(exception, *args)
Rails.logger.error "Job failed: #{exception.message}"
end
def self.execute(*args)
raise NotImplementedError, "Subclasses must implement execute"
end
end
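A short usage sketch for BaseJob: a subclass sets its own queue and implements execute, inheriting the shared error handling (ReportJob and the Report model are illustrative names, not part of Resque):
# app/jobs/report_job.rb
class ReportJob < BaseJob
  @queue = :low

  def self.execute(report_id)
    Report.find(report_id).generate!
  end
end
# The inherited perform wraps execute with the before/after/on_failure handling
Resque.enqueue(ReportJob, 42)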
7. Troubleshooting & Best Practices
Common Issues & Solutions
Worker Issues
Problem: Workers will not start. Solution: Check the Redis connection and ensure Redis is running.
Problem: Workers crash or are killed. Solution: Check memory usage and implement proper error handling.
Problem: Workers run but pick up nothing. Solution: Verify workers are listening to the correct queues.
Redis Issues
Problem: Connection errors or timeouts. Solution: Check the Redis server status and connection settings.
Problem: Redis memory grows without bound. Solution: Configure Redis memory limits and implement cleanup.
Performance Issues
Problem: Workers consume too much memory. Solution: Reduce worker count and implement memory monitoring.
Problem: Jobs take too long to finish. Solution: Optimize job logic and add more workers.
Monitoring & Debugging
Debug Commands
# Check Redis connection
redis-cli ping
# Check queue sizes
Resque.size(:default)
Resque.size(:high)
# Check worker status
Resque.workers.each do |worker|
puts "Worker #{worker}: #{worker.state}"
end
# Check failed jobs
Resque::Failure.count
# Clear failed jobs
Resque::Failure.clear
Logging Configuration
# config/initializers/resque.rb
# Configure logging
Resque.logger = Rails.logger
Resque.logger.level = Logger::INFO
# Enable verbose logging in development
if Rails.env.development?
Resque.logger.level = Logger::DEBUG
end
# Custom logging
Resque.after_fork do |job|
Rails.logger.info "Processing job: #{job}"
end
Best Practices
Job Design
Job Guidelines:
- Keep jobs idempotent (safe to retry); see the sketch after this list
- Use small, focused jobs
- Handle exceptions properly
- Use appropriate queue priorities
- Implement proper logging
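To make the first guideline concrete, a hedged sketch of an idempotent job: it checks a completion marker before doing any work, so a retry after a crash is a harmless no-op (the Invoice model and its processed_at column are assumptions for illustration):
class InvoiceDeliveryJob
  @queue = :default

  def self.perform(invoice_id)
    invoice = Invoice.find(invoice_id)
    return if invoice.processed_at.present? # already delivered; safe to re-run

    InvoiceMailer.deliver_invoice(invoice).deliver_now
    invoice.update!(processed_at: Time.current)
  end
end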
Configuration
Configuration Guidelines:
- Set an appropriate worker count
- Configure Redis properly
- Use connection pooling
- Monitor memory usage
- Implement health checks
Production Deployment
# config/initializers/resque.rb
# Production configuration
if Rails.env.production?
# Pool Redis connections (ConnectionPool::Wrapper exposes the plain Redis
# interface that Resque expects)
require 'connection_pool'
Resque.redis = ConnectionPool::Wrapper.new(size: 25, timeout: 3) do
  Redis.new(url: ENV['REDIS_URL'])
end
# Configure logging
Resque.logger = Rails.logger
Resque.logger.level = Logger::INFO
# Set up monitoring
Resque.after_fork do |job|
# Reconnect to database
ActiveRecord::Base.establish_connection
end
end
Security Considerations
Redis Security
# Redis security configuration
# redis.conf
requirepass your_strong_password
bind 127.0.0.1
protected-mode yes
# Resque configuration with authentication
Resque.redis = Redis.new(
url: ENV['REDIS_URL'],
password: ENV['REDIS_PASSWORD'],
ssl: true
)
Web UI Security
# config/routes.rb
# Secure Resque Web UI
require 'resque/server'
class SecureResqueServer < Resque::Server
before do
# Add authentication
unless authenticated?
halt 401, 'Unauthorized'
end
end
def authenticated?
# Implement your authentication logic
session[:admin] == true
end
end
Rails.application.routes.draw do
mount SecureResqueServer.new, at: '/resque'
end
Performance Optimization
Worker Optimization
# Optimize worker performance
Resque.after_fork do |job|
# Reconnect to database
ActiveRecord::Base.establish_connection
# Set memory limits
if Process.getrlimit(Process::RLIMIT_AS)[0] > 500.megabytes
Process.setrlimit(Process::RLIMIT_AS, 500.megabytes)
end
end
# Monitor memory usage
class MemoryMonitor
def self.check_memory
memory_usage = GetProcessMem.new.mb
if memory_usage > 500
Rails.logger.warn "High memory usage: #{memory_usage}MB"
GC.start
end
end
end
Queue Optimization
# Queue-specific worker configuration
# Start workers for different queues
bundle exec rake resque:workers QUEUE=high_priority COUNT=3
bundle exec rake resque:workers QUEUE=default COUNT=5
bundle exec rake resque:workers QUEUE=low_priority COUNT=2
# Dynamic worker scaling
class WorkerScaler
def self.scale_workers
queue_sizes = Resque.queues.map { |q| [q, Resque.size(q)] }.to_h
total_jobs = queue_sizes.values.sum
target_workers = [total_jobs / 10, 1].max
current_workers = Resque.workers.count
if target_workers > current_workers
start_workers(target_workers - current_workers)
end
end
end
8. Common Issues & Debugging
Common Issues
Jobs Not Processing
Solution: Check if workers are running and listening to correct queues
# Check worker status from a Rails console (or the Resque Web UI)
Resque.workers.each { |worker| puts "#{worker}: #{worker.state}" }
# Start workers for specific queues
bundle exec rake resque:work QUEUE=high_priority,default
# Check queue sizes
Resque.queues.each { |q| puts "#{q}: #{Resque.size(q)}" }
Memory Issues
Solution: Add garbage collection and memory limits
# config/initializers/resque.rb
Resque.after_fork do |job|
GC.start # Force garbage collection
end
# Monitor memory usage
memory_usage = GetProcessMem.new.mb
if memory_usage > 512
Rails.logger.warn "High memory usage: #{memory_usage}MB"
end
Failed Jobs
Solution: Check error logs and implement proper retry logic
# Inspect failed jobs from a Rails console (or the Failed tab in the Web UI)
Resque::Failure.count
Resque::Failure.all(0, 20)
# Requeue a failed job by its index in the failed list
Resque::Failure.requeue(0)
# Clear all failed jobs (dangerous!)
Resque::Failure.clear
Performance Issues
Slow Job Processing
Solution: Optimize job logic and add more workers
# Profile job performance
class SlowJob
@queue = :default
def self.perform(data)
start_time = Time.current
# Your job logic here
process_data(data)
duration = Time.current - start_time
Rails.logger.info "Job completed in #{duration}s"
end
end
Queue Backlogs
Solution: Add more workers or optimize job processing
# Monitor queue sizes
def check_queue_backlogs
Resque.queues.each do |queue|
size = Resque.size(queue)
if size > 1000
Rails.logger.warn "Queue #{queue} has #{size} jobs"
end
end
end
# Start more workers
bundle exec rake resque:workers QUEUE=* COUNT=5
Configuration Issues
Redis Connection Problems
Solution: Check Redis configuration and connection settings
# Test Redis connection
Resque.redis.ping # Should return "PONG"
# Inspect the underlying Redis client (Resque.redis is a Redis::Namespace)
puts Resque.redis.redis.inspect
# Close the connection; the next command transparently reconnects
Resque.redis.redis.close
Worker Process Issues
Solution: Check worker logs and restart workers
# Check worker processes
ps aux | grep resque
# Gracefully stop running workers (QUIT lets the current job finish)
pkill -QUIT -f resque
# Clear stale worker registrations once all workers are stopped
Resque.workers.each(&:unregister_worker)
# Restart workers
bundle exec rake resque:work QUEUE=*
Debugging Tools
Job Debugging
# Add debugging to jobs
class DebugJob
@queue = :default
def self.perform(data)
Rails.logger.info "Starting job with data: #{data}"
begin
result = process_data(data)
Rails.logger.info "Job completed successfully"
result
rescue => e
Rails.logger.error "Job failed: #{e.message}"
Rails.logger.error e.backtrace.join("\n")
raise e
end
end
end
Worker Debugging
# Resque has no global before_perform/after_perform hooks; per-job hooks are
# defined as class methods on the job itself
Resque.after_fork do |job|
  Rails.logger.info "Worker #{Process.pid} forked for job: #{job}"
end

class DebuggedJob
  @queue = :default

  def self.before_perform_log(*args)
    Rails.logger.info "Starting #{name} with #{args.inspect}"
  end

  def self.after_perform_log(*args)
    Rails.logger.info "Completed #{name}"
  end

  def self.perform(*args)
    # Job logic here
  end
end
Best Practices
General Guidelines:
- Always handle exceptions in jobs
- Use appropriate queue priorities
- Monitor queue sizes and worker health
- Implement proper retry logic
- Clean up resources after job completion
- Use environment-specific configurations
- Set up proper logging and monitoring
- Test jobs thoroughly before deployment
Job Design Best Practices
# Good job design
class WellDesignedJob
@queue = :default
def self.perform(user_id)
user = User.find(user_id)
begin
# Do the work
user.process_data!
# Log success
Rails.logger.info "Processed user #{user_id}"
rescue => e
# Log error and re-raise for retry
Rails.logger.error "Failed to process user #{user_id}: #{e.message}"
raise e
end
end
end
Queue Management Best Practices
# Monitor and manage queues
def manage_queues
Resque.queues.each do |queue|
size = Resque.size(queue)
case queue
when 'high_priority'
alert_if_backlog(queue, size, 100)
when 'default'
alert_if_backlog(queue, size, 500)
when 'low_priority'
alert_if_backlog(queue, size, 1000)
end
end
end
def alert_if_backlog(queue, size, threshold)
if size > threshold
Rails.logger.warn "Queue #{queue} has #{size} jobs"
end
end
Production Checklist
Production Readiness:
- Monitoring: Set up queue and worker monitoring
- Logging: Configure proper job and worker logging
- Retry Logic: Implement appropriate retry strategies
- Error Handling: Handle all possible error cases
- Performance: Test job performance under load
- Security: Secure Redis connections and job data
Environment-specific Tips
Staging: Mirror production settings, test job performance
Production: Use SSL, connection pooling, monitoring
9. Interview Questions & Answers
Basic Questions
Q: What is Resque and how does it work?
A: Resque is a Redis-backed background job system for Ruby. Jobs are enqueued as JSON payloads onto Redis lists (one list per queue); separate worker processes poll those lists in priority order, fork a child process per job, and call the job class's perform method with the stored arguments.
Q: How do you create a Resque job?
class EmailJob
@queue = :default
def self.perform(user_id, message)
user = User.find(user_id)
UserMailer.notification(user, message).deliver_now
end
end
# Enqueue the job
Resque.enqueue(EmailJob, user.id, "Welcome!")
Q: What's the difference between Resque and Sidekiq?
A: Both are Redis-backed, but Resque runs one forked process per job, which gives strong fault isolation at the cost of memory and throughput, while Sidekiq runs many threads inside one process, which is faster and lighter but requires thread-safe job code. Resque also needs the resque-scheduler gem for delayed jobs, whereas Sidekiq ships scheduling and retries built in.
Intermediate Questions
Q: How do you handle job failures in Resque?
class RetryableJob
@queue = :default
def self.perform(data, retry_count = 0)
max_retries = 3
begin
process_data(data)
rescue => e
if retry_count < max_retries
delay = 2 ** retry_count # Exponential backoff
Resque.enqueue_in(delay, self, data, retry_count + 1)
else
Rails.logger.error "Job failed after #{max_retries} retries"
raise e
end
end
end
end
Q: How do you monitor Resque queues?
# Check queue sizes
Resque.queues.each do |queue|
size = Resque.size(queue)
puts "#{queue}: #{size} jobs"
end
# Check worker status
workers = Resque.workers
puts "Total workers: #{workers.count}"
puts "Busy workers: #{workers.select(&:working?).count}"
Advanced Questions
Q: How do you implement job chaining in Resque?
class DataProcessingChain
@queue = :data_processing
def self.perform(data_id)
data = DataRecord.find(data_id)
processed_data = data.process!
# Chain to next job
Resque.enqueue(DataAnalysisJob, processed_data.id)
end
end
class DataAnalysisJob
@queue = :data_processing
def self.perform(processed_data_id)
processed_data = ProcessedData.find(processed_data_id)
analysis = processed_data.analyze!
# Chain to final job
Resque.enqueue(ReportGenerationJob, analysis.id)
end
end
Q: How do you optimize Resque for high throughput?
# Start multiple workers
bundle exec rake resque:workers QUEUE=* COUNT=5
# Use queue-specific workers
bundle exec rake resque:workers QUEUE=high_priority COUNT=3
bundle exec rake resque:workers QUEUE=default COUNT=5
bundle exec rake resque:workers QUEUE=low_priority COUNT=2
# Optimize memory usage
Resque.after_fork do |job|
GC.start # Force garbage collection
end
System Design Questions
Q: Design a job processing system for an e-commerce platform
- high_priority: Payment processing, inventory updates
- default: Order confirmations, email notifications
- low_priority: Analytics, reporting, cleanup tasks
Q: How would you handle job failures in a production system?
# Implement comprehensive error handling
class ProductionJob
@queue = :default
def self.perform(data)
begin
process_data(data)
track_success(data)
rescue NetworkError => e
# Retry network errors
retry_with_backoff(data, e)
rescue ValidationError => e
# Don't retry validation errors
log_validation_error(data, e)
rescue => e
# Retry other errors
retry_with_backoff(data, e)
end
end
end
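retry_with_backoff, NetworkError, and ValidationError above are application-level names; a hedged sketch of the backoff helper (it relies on resque-scheduler's enqueue_in, and a real version would also thread the attempt count through the job arguments, as in the RetryableJob pattern shown earlier):
class ProductionJob
  MAX_RETRIES = 5

  def self.retry_with_backoff(data, error, attempt = 0)
    if attempt < MAX_RETRIES
      delay = (2**attempt) * 60 # 1, 2, 4, 8, 16 minutes
      Resque.enqueue_in(delay, self, data)
      Rails.logger.warn "Retrying in #{delay}s after #{error.class}: #{error.message}"
    else
      Rails.logger.error "Giving up after #{MAX_RETRIES} attempts: #{error.message}"
    end
  end
end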
Best Practices Questions
Q: What are the best practices for Resque job design?
Key Principles:
- Keep jobs idempotent (can be retried safely)
- Handle all exceptions properly
- Use appropriate queue priorities
- Keep jobs focused and single-purpose
- Implement proper logging and monitoring
- Clean up resources after job completion
Q: How do you secure Resque in production?
# Secure Redis connection
Resque.redis = Redis.new(
url: ENV['REDIS_URL'],
password: ENV['REDIS_PASSWORD'],
ssl: ENV['REDIS_SSL'] == 'true'
)
# Secure Web UI
Resque::Server.use(Rack::Auth::Basic) do |username, password|
username == ENV['RESQUE_USERNAME'] &&
password == ENV['RESQUE_PASSWORD']
end
10. Real-World Case Studies
E-commerce Order Processing
Problem
An e-commerce platform needs to process orders asynchronously to handle high traffic and ensure payment processing, inventory updates, and email notifications don't block the user experience.
Solution
# High priority: Payment processing
class PaymentProcessingJob
@queue = :high_priority
def self.perform(order_id)
order = Order.find(order_id)
order.process_payment!
order.update!(status: 'paid')
# Trigger inventory update
Resque.enqueue(InventoryUpdateJob, order.id)
end
end
# Default priority: Inventory and notifications
class InventoryUpdateJob
@queue = :default
def self.perform(order_id)
order = Order.find(order_id)
order.update_inventory!
# Send confirmation email
Resque.enqueue(OrderConfirmationJob, order.id)
end
end
class OrderConfirmationJob
@queue = :default
def self.perform(order_id)
order = Order.find(order_id)
OrderMailer.confirmation(order).deliver_now
end
end
Result
Scalable: Handled 10x more orders during peak times
Reliable: Failed payments automatically retried
Social Media Content Processing
Problem
A social media platform needs to process user uploads (images, videos) asynchronously, generate thumbnails, and update user feeds without blocking the upload experience.
Solution
# High priority: User actions
class ContentUploadJob
@queue = :high_priority
def self.perform(content_id)
content = Content.find(content_id)
content.process_upload!
# Generate thumbnails
Resque.enqueue(ThumbnailGenerationJob, content.id)
end
end
# Default priority: Media processing
class ThumbnailGenerationJob
@queue = :default
def self.perform(content_id)
content = Content.find(content_id)
content.generate_thumbnails!
# Update user feeds
Resque.enqueue(FeedUpdateJob, content.id)
end
end
class FeedUpdateJob
@queue = :default
def self.perform(content_id)
content = Content.find(content_id)
content.update_user_feeds!
end
end
Result
Scalable: Processed 1000+ uploads per minute
Reliable: Failed processing automatically retried
Data Analytics Pipeline
Problem
A SaaS platform needs to process large amounts of user analytics data daily, generate reports, and send insights to users without impacting system performance.
Solution
# Scheduled daily analytics
class DailyAnalyticsJob
@queue = :low_priority
def self.perform
# Process user data in batches
User.find_in_batches(batch_size: 1000) do |batch|
Resque.enqueue(AnalyticsBatchJob, batch.map(&:id))
end
end
end
class AnalyticsBatchJob
@queue = :data_processing
def self.perform(user_ids)
users = User.where(id: user_ids)
users.each do |user|
user.generate_daily_analytics!
# Send insights if user has premium
if user.premium?
Resque.enqueue(InsightEmailJob, user.id)
end
end
end
end
class InsightEmailJob
@queue = :mailers
def self.perform(user_id)
user = User.find(user_id)
AnalyticsMailer.daily_insights(user).deliver_now
end
end
Result
Non-blocking: Analytics didn't impact user experience
Automated: Daily reports generated automatically
API Integration System
Problem
A platform needs to integrate with multiple external APIs (payment gateways, shipping providers, etc.) and handle failures gracefully while maintaining data consistency.
Solution
# API integration with retry logic
class PaymentGatewayJob
@queue = :high_priority
def self.perform(payment_id, retry_count = 0)
payment = Payment.find(payment_id)
begin
result = PaymentGateway.process(payment)
payment.update!(status: 'processed', gateway_response: result)
rescue PaymentGateway::NetworkError => e
if retry_count < 3
delay = 2 ** retry_count
Resque.enqueue_in(delay, self, payment_id, retry_count + 1)
else
payment.update!(status: 'failed', error: e.message)
end
rescue PaymentGateway::ValidationError => e
# Don't retry validation errors
payment.update!(status: 'failed', error: e.message)
end
end
end
class ShippingProviderJob
@queue = :default
def self.perform(order_id)
order = Order.find(order_id)
begin
tracking = ShippingProvider.create_shipment(order)
order.update!(tracking_number: tracking.number)
# Send tracking email
Resque.enqueue(TrackingEmailJob, order.id)
rescue ShippingProvider::Error => e
order.update!(shipping_status: 'failed', error: e.message)
end
end
end
Result
Resilient: Automatic retry for transient failures
Consistent: Data integrity maintained across failures
Email Marketing System
Problem
A marketing platform needs to send thousands of personalized emails daily while respecting rate limits and handling bounces/failures appropriately.
Solution
# Email marketing with rate limiting
class EmailCampaignJob
@queue = :mailers
def self.perform(campaign_id, user_ids)
campaign = EmailCampaign.find(campaign_id)
user_ids.each_slice(100) do |batch|
Resque.enqueue(EmailBatchJob, campaign_id, batch)
end
end
end
class EmailBatchJob
@queue = :mailers
def self.perform(campaign_id, user_ids)
campaign = EmailCampaign.find(campaign_id)
user_ids.each do |user_id|
Resque.enqueue(IndividualEmailJob, campaign_id, user_id)
end
end
end
class IndividualEmailJob
@queue = :mailers
def self.perform(campaign_id, user_id)
campaign = EmailCampaign.find(campaign_id)
user = User.find(user_id)
begin
EmailService.send_personalized_email(campaign, user)
campaign.increment!(:sent_count)
rescue EmailService::BounceError => e
user.update!(email_status: 'bounced')
rescue EmailService::RateLimitError => e
# Re-enqueue with delay
Resque.enqueue_in(300, self, campaign_id, user_id)
end
end
end
Result
Compliant: Respected rate limits and bounce handling
Trackable: Detailed delivery and bounce tracking
11. Reference & Commands
Quick Reference
Job Creation
class MyJob
@queue = :default
def self.perform(arg1, arg2)
# Job logic here
end
end
# Enqueue job
Resque.enqueue(MyJob, arg1, arg2)
Worker Commands
# Start worker
bundle exec rake resque:work QUEUE=*
# Start multiple workers
bundle exec rake resque:workers QUEUE=* COUNT=5
# Start the scheduler (resque-scheduler gem)
bundle exec rake resque:scheduler
# Check workers from a Rails console (or the Web UI)
Resque.workers
Queue Commands
# Check queue size
Resque.size('queue_name')
# List all queues
Resque.queues
# Remove a queue and all of its pending jobs
Resque.remove_queue('queue_name')
Failed Jobs
# List failed jobs
Resque::Failure.all(0, 20)
# Retry a failed job by its index in the failed list
Resque::Failure.requeue(0)
# Clear failed jobs
Resque::Failure.clear
Configuration Reference
Redis Configuration
# config/initializers/resque.rb
Resque.redis = Redis.new(
host: ENV['REDIS_HOST'] || 'localhost',
port: ENV['REDIS_PORT'] || 6379,
password: ENV['REDIS_PASSWORD'],
db: ENV['REDIS_DB'] || 0
)
Worker Configuration
# config/initializers/resque.rb
Resque.after_fork do |job|
ActiveRecord::Base.establish_connection
end
# Per-job logging hooks live on the job class; Resque has no global before_perform/after_perform
class LoggedJob
  @queue = :default
  def self.before_perform_log(*args)
    Rails.logger.info "Starting #{name}"
  end
  def self.after_perform_log(*args)
    Rails.logger.info "Completed #{name}"
  end
  def self.perform(*args); end
end
Web UI Configuration
# config/routes.rb
mount Resque::Server.new, at: '/resque'
# config/initializers/resque.rb
Resque::Server.use(Rack::Auth::Basic) do |username, password|
username == ENV['RESQUE_USERNAME'] &&
password == ENV['RESQUE_PASSWORD']
end
Environment Variables
Required Variables
# .env
REDIS_URL=redis://localhost:6379/0
REDIS_PASSWORD=your_password
RESQUE_USERNAME=admin
RESQUE_PASSWORD=password
Optional Variables
# .env
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
QUEUE=default,high,low
RESQUE_MEMORY_LIMIT=512
Monitoring Commands
Queue Monitoring
# Check all queue sizes
Resque.queues.each do |queue|
puts "#{queue}: #{Resque.size(queue)}"
end
# Check worker status
workers = Resque.workers
puts "Total: #{workers.count}"
puts "Busy: #{workers.select(&:working?).count}"
Performance Monitoring
# Monitor job performance with an around_perform hook on the job class
class TimedJob
  @queue = :default
  def self.around_perform_timing(*args)
    started = Time.current
    yield
  ensure
    Rails.logger.info "#{name} took #{(Time.current - started).round(2)}s"
  end
  def self.perform(*args); end
end
Common Patterns
Retry Pattern
class RetryableJob
@queue = :default
def self.perform(data, retry_count = 0)
max_retries = 3
begin
process_data(data)
rescue => e
if retry_count < max_retries
delay = 2 ** retry_count
Resque.enqueue_in(delay, self, data, retry_count + 1)
else
raise e
end
end
end
end
Batch Processing Pattern
class BatchJob
@queue = :data_processing
def self.perform(batch_size = 1000)
DataRecord.find_in_batches(batch_size: batch_size) do |batch|
batch.each do |record|
record.process!
end
end
end
end
Job Chaining Pattern
class Step1Job
@queue = :default
def self.perform(data_id)
data = DataRecord.find(data_id)
result = data.process!
# Chain to next job
Resque.enqueue(Step2Job, result.id)
end
end
class Step2Job
@queue = :default
def self.perform(result_id)
result = ProcessedData.find(result_id)
result.analyze!
end
end
Best Practices Summary
Key Principles:
- Keep jobs idempotent and focused
- Handle exceptions properly
- Use appropriate queue priorities
- Monitor queue sizes and worker health
- Implement proper retry logic
- Clean up resources after job completion
- Use environment-specific configurations
- Set up comprehensive monitoring
Performance Tips
Workers: Scale workers based on queue load
Queues: Use dedicated queues for different priorities
Monitoring: Set up alerts for queue backlogs
12. Commands & Concepts Reference Table
Core Commands
Command | Description | Usage |
---|---|---|
bundle exec rake resque:work | Start Resque worker process | Development/production |
bundle exec rake resque:work QUEUE=high,default | Start worker for specific queues | Queue-specific processing |
bundle exec rake resque:workers COUNT=3 | Start multiple workers | High throughput |
bundle exec rake resque:scheduler | Start Resque scheduler | Scheduled jobs |
pkill -f "resque" | Stop all Resque workers | Maintenance/restart |
ps aux \| grep resque | Check running Resque processes | Monitoring |
redis-cli ping | Test Redis connection | Connection testing |
Ruby/Rails Commands
Command | Description | Usage |
---|---|---|
Resque.enqueue(JobClass, args) | Enqueue job immediately | Basic job enqueueing |
Resque.enqueue_in(3600, JobClass, args) | Enqueue job with delay (seconds) | Scheduled jobs |
Resque.enqueue_at(Time.now + 1.hour, JobClass, args) | Enqueue job at specific time | Time-based scheduling |
Resque.size(:queue_name) | Get queue size | Queue monitoring |
Resque.queues | List all queues | Queue management |
Resque.workers | List all workers | Worker monitoring |
Resque.redis.get('stat:processed') | Get processed jobs count | Statistics |
Redis Commands
Command | Description | Usage |
---|---|---|
redis-cli llen resque:queue:default | Check queue length | Queue monitoring |
redis-cli keys resque:* | List all Resque keys | Debugging |
redis-cli smembers resque:workers | List active workers | Worker monitoring |
redis-cli zrange resque:delayed_queue_schedule 0 -1 | List scheduled job timestamps (resque-scheduler) | Scheduled job monitoring |
Monitoring Commands
Command | Description | Usage |
---|---|---|
tail -f log/resque.log | Monitor Resque logs | Real-time monitoring |
pgrep -f "resque" | Find Resque processes | Process monitoring |
redis-cli info memory | Check Redis memory usage | Performance monitoring |
redis-cli info stats | Get Redis statistics | System health |
Core Concepts
Concept | Description | Usage |
---|---|---|
Job | Ruby class with @queue and self.perform method | Background task execution |
Worker | Process that polls Redis and executes jobs | Job processing |
Queue | Redis list that holds pending jobs | Job organization |
Process | Separate process running workers | Concurrency model |
Scheduler | Process that handles delayed/scheduled jobs | Time-based job execution |
Retry | Automatic re-execution of failed jobs | Error handling |
Web UI | Web interface for monitoring queues and workers | Monitoring and management |
Hooks | Code that runs before/after job execution | Logging, monitoring, error handling |
Configuration Options
Option | Description | Default |
---|---|---|
interval | Seconds between Redis polls | 5 |
TERM_TIMEOUT | Seconds a worker waits for the current job after TERM before killing it | 4 |
count | Number of workers to start | 1 |
failure_backend | Backend for storing failed jobs | Redis |
redis | Redis connection configuration | localhost:6379 |
logger | Logger instance for worker output | STDOUT |
Redis Data Structures
Key Pattern | Data Type | Description |
---|---|---|
resque:queue:default | List | Pending jobs in default queue |
resque:workers | Set | Active worker processes |
resque:delayed_queue_schedule | Sorted Set | Scheduled job timestamps (resque-scheduler) |
resque:failed | List | Failed jobs |
resque:stat:processed | String | Total processed jobs counter |
resque:stat:failed | String | Total failed jobs counter |
resque:worker:* | Hash | Worker metadata and status |
Environment Variables
Variable | Description | Example |
---|---|---|
QUEUE | Comma-separated list of queues to process | high,default,low |
COUNT | Number of worker processes to start | 3 |
REDIS_URL | Redis connection URL | redis://localhost:6379/0 |
RAILS_ENV | Rails environment | production |
INTERVAL | Seconds between Redis polls | 5 |
Common Job Patterns
Pattern | Description | Use Case |
---|---|---|
Email Job | Send emails asynchronously | User notifications, marketing emails |
Data Processing | Process large datasets in background | Analytics, reports, data imports |
File Processing | Handle file uploads and processing | Image resizing, document processing |
API Integration | Make external API calls asynchronously | Third-party integrations, webhooks |
Retry Pattern | Implement custom retry logic | Error handling, resilience |
Job Chaining | Chain multiple jobs together | Complex workflows, pipelines |