📘 Contents
- What is Nginx?
- Nginx vs Other Web Servers: Pros & Cons
- Real-Time Use Cases of Nginx with Examples
- When to Use Nginx vs Other Servers (With Examples)
- HTTP vs HTTPS — With Ports and Custom Usage
- Can Nginx Use Other Protocols?
- Structure of Nginx (How It Works)
- Common Nginx Terms Explained with Examples
- Basic Setup for Nginx (Ubuntu)
- Setting Up Nginx for a React App (Production Build)
- Setting Up Nginx for a Rails App (with Puma)
- Setting Up Nginx for Microservices
- Docker + Nginx Reverse Proxy Example
- Use Nginx to Access Different Ports via Routes
- Nginx Setup for Domain and Subdomains
- Complete SSL Setup for Nginx (Domain + Subdomains)
- How Caching Works in Nginx
- Enable or Disable Caching in Nginx
- Nginx on One Server to Access Another Server (Load Balancing)
- Best Practices for Using Nginx in Production
- Top 10 Nginx Interview Questions & Answers
📘 What is Nginx?
Nginx (pronounced “engine-x”) is a powerful, lightweight, and high-performance open-source web server that also acts as a reverse proxy, load balancer, and HTTP cache. It is designed to handle a large number of concurrent connections efficiently using an event-driven, asynchronous architecture.
Initially created by Igor Sysoev in 2004 to address the C10K problem (handling 10,000+ concurrent connections), Nginx has become the backbone of many high-traffic websites such as Netflix, Dropbox, WordPress.com, and GitHub.
Nginx is often used to:
- Serve static HTML, CSS, JS, and media files
- Proxy requests to backend servers (Node.js, Rails, Python, PHP)
- Act as a secure gateway for SSL/TLS traffic
- Load balance traffic across multiple servers
- Cache responses to improve performance
Unlike traditional web servers like Apache that use a process-per-connection model, Nginx uses a non-blocking, event-driven model which consumes less memory and provides significantly better performance under load.
🔁 Nginx vs Other Web Servers: Pros & Cons
| Web Server | Pros ✅ | Cons ❌ | Best Use Case |
|---|---|---|---|
| Nginx | – High performance under load – Low memory usage – Great for static content & reverse proxy – SSL, load balancing, caching built-in | – Steeper learning curve – Less flexible with .htaccess-style rules | Web apps, reverse proxy, API gateway |
| Apache | – Mature and widely supported – Powerful .htaccess support – Modular and extensible | – Slower under high concurrency – Higher memory consumption | CMS (e.g., WordPress), legacy PHP apps |
| Caddy | – Auto HTTPS via Let’s Encrypt – Simple config syntax – Modern Go-based server | – Limited community – Less control in large-scale use | Quick SSL sites, small apps, microservices |
| HAProxy | – Advanced load balancing (TCP & HTTP) – Robust health checks – High availability setups | – Not ideal for static files – More complex to configure for beginners | Load balancing, failover, microservice gateway |
| Lighttpd | – Very lightweight – Low CPU usage – Fast for static content | – Fewer features than Nginx – Less active community | Embedded systems, minimal static servers |
⚡ Real-Time Use Cases of Nginx with Examples
1. Reverse Proxy for Backend Apps
Nginx forwards HTTP requests to backend apps like Node.js, Django, or Rails.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://localhost:3000;
    }
}
2. Load Balancing Across Multiple Servers
Distribute traffic between multiple app servers to scale horizontally.
upstream backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
3. SSL Termination (HTTPS)
Terminate HTTPS at Nginx and forward unencrypted traffic to internal services.
server {
    listen 443 ssl;
    server_name secure.example.com;
    ssl_certificate /path/to/fullchain.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
    }
}
4. Hosting a Single-Page Application (SPA)
Serve static files from a React/Vue/Angular build with fallback routing.
server {
    listen 80;
    root /var/www/my-app/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
5. Caching API Responses
Improve performance by caching repeated requests temporarily.
# Requires a cache zone declared in the http context, e.g.:
# proxy_cache_path /tmp/nginx_cache keys_zone=my_cache:10m;
server {
    location /api/ {
        proxy_cache my_cache;
        proxy_pass http://localhost:3000;
    }
}
6. Rate Limiting APIs
Limit requests per client to prevent abuse.
# Requires a zone declared in the http context, e.g.:
# limit_req_zone $binary_remote_addr zone=api_limit rate=10r/s;
server {
    location /api/ {
        limit_req zone=api_limit burst=5;
        proxy_pass http://localhost:3000;
    }
}
7. Redirect HTTP to HTTPS
Force all HTTP traffic to HTTPS for security.
server {
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;
}
8. Docker + Nginx Frontend
Use Nginx as a frontend container for your backend services.
In docker-compose.yml:
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
9. Subdomain Routing
Host multiple apps on different subdomains using virtual hosts.
server {
    listen 80;
    server_name api.example.com;
    location / { proxy_pass http://localhost:3000; }
}
server {
listen 80;
server_name blog.example.com;
location / { proxy_pass http://localhost:4000; }
}
10. CDN for Static Assets
Use Nginx as a static file server with caching and compression.
server {
    listen 80;

    location /assets/ {
        root /cdn/static;
        expires 7d;
        add_header Cache-Control "public";
    }
}
🧭 When to Use Nginx vs Other Servers (With Examples)
✅ Use Nginx When…
- You need a reverse proxy: Nginx is perfect for forwarding requests to app servers (Node.js, Rails, etc.).
- High traffic performance is required: Its non-blocking architecture handles thousands of concurrent connections.
- Serving static files: Nginx is ultra-fast for HTML, JS, CSS, and images.
- You want to terminate SSL: Offload HTTPS at Nginx and forward to backend over HTTP.
- You’re using Docker or microservices: Use Nginx as a frontend or API gateway in containers.
Example: You built a React frontend + Node.js backend. You deploy Nginx in front to:
- Serve the React static files
- Proxy `/api` to Node.js
- Terminate HTTPS at Nginx and use plain HTTP to the backend internally
❌ Don’t Use Nginx When…
- You need .htaccess-based dynamic routing: Use Apache (ideal for WordPress, Laravel, etc.).
- You need native HTTP/2 + auto HTTPS with minimal setup: Use Caddy server.
- You’re load balancing TCP/UDP services (not HTTP): Use HAProxy for advanced routing.
- You’re on constrained embedded systems: Use Lighttpd — minimal footprint, good static performance.
Example: You are hosting WordPress on shared hosting or need per-directory overrides like .htaccess. 👉 Use Apache — it allows granular control and is well-supported in the PHP ecosystem.
🧠 Pro Tip:
You can also use Nginx + Apache together — Nginx handles static content & HTTPS, while Apache handles dynamic content via PHP.
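A minimal sketch of that split, assuming Apache listens internally on localhost:8080 (the port and paths here are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    # Nginx serves static assets directly
    location /static/ {
        root /var/www/example.com;
        expires 7d;
    }

    # Dynamic (PHP) requests are proxied to Apache
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
    }
}
```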
🔐 HTTP vs HTTPS — With Ports and Custom Usage
🌍 What is HTTP?
HTTP (HyperText Transfer Protocol) is the standard protocol for transferring data over the web.
- Data is sent in plain text
- Default port: 80
- Not secure — anyone can intercept the data
- Example: http://example.com
🔒 What is HTTPS?
HTTPS (HTTP Secure) is HTTP layered over SSL/TLS encryption.
- Encrypts data between browser and server
- Default port: 443
- Used for secure websites like banking, logins, e-commerce
- Example: https://example.com
📦 Default Port Mapping
| Protocol | Port | Secure |
|---|---|---|
| HTTP | 80 | ❌ No |
| HTTPS | 443 | ✅ Yes |
🔁 Can You Use Custom Ports?
- ✅ Yes, you can use any port (e.g., 8080, 3000, 8443)
- ⚠️ But users must include the port in the URL (e.g., https://example.com:8443)
- ✅ You can bind SSL to any port — just configure it manually in Nginx or Apache
- 🔐 SSL is not limited to port 443, but browsers treat 443 as default (no port in URL)
- 💡 Port 443 is expected by default for HTTPS, so changing it may cause trust or firewall issues
🔧 Example: Nginx SSL on Port 8443
server {
    listen 8443 ssl;
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        return 200 "Secure connection on custom port 8443!";
    }
}
🧩 Can Nginx Use Other Protocols?
Yes! While Nginx is primarily known for serving HTTP/HTTPS traffic, it also supports other protocols directly or through modules. Here’s a breakdown:
1. 🧪 TCP and UDP (via Stream Module)
You can proxy raw TCP and UDP traffic—useful for databases, game servers, and SSL passthrough.
🔧 Example: Proxy MySQL on Port 3306
stream {
    upstream mysql_upstream {
        server 127.0.0.1:3306;
    }
    server {
        listen 3306;
        proxy_pass mysql_upstream;
    }
}
2. 📡 WebSocket
WebSockets run over HTTP/HTTPS but require proper headers. Nginx supports them as a reverse proxy.
🔧 Example: Proxy WebSocket
location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
3. 📤 gRPC
Nginx can proxy gRPC and gRPC-Web requests with HTTP/2 support.
🔧 Example: Proxy gRPC Server
location / {
    grpc_pass grpc://127.0.0.1:50051;
    error_page 502 = /errorfallback;
}
4. 📩 Mail (SMTP, IMAP, POP3)
Nginx can act as a mail proxy with SSL offloading or auth routing.
🔧 Example: SMTP Proxy
mail {
    server {
        listen 25;
        protocol smtp;
        # Note: a production mail proxy also needs an auth_http
        # endpoint for backend selection; shown here in simplest form.
        proxy_pass smtp.example.com:25;
    }
}
5. 🧪 Custom Internal Protocols
If your service runs over TCP or UDP, you can still use Nginx as a gateway—even if the protocol is custom (e.g., Redis, MQTT).
For example, proxy Redis over port 6379 with TCP:
stream {
    server {
        listen 6379;
        proxy_pass 127.0.0.1:6379;
    }
}
📌 Summary Table
| Protocol | Supported? | Example Use |
|---|---|---|
| HTTP / HTTPS | ✅ | Websites, APIs |
| TCP / UDP | ✅ (with stream) | DBs, Game Servers |
| WebSocket | ✅ | Live chat, dashboards |
| gRPC | ✅ (HTTP/2) | Microservices |
| Mail (SMTP/IMAP) | ✅ | Email routing |
🏗️ Structure of Nginx (How It Works)
Nginx uses a modular and event-driven architecture. When a request is received, it flows through a sequence of **modules and contexts** defined in configuration files. Here’s the high-level structure:
📂 Core Nginx Structure
- Master Process: Manages configuration and spawns worker processes.
- Worker Processes: Handle actual client requests using an event loop.
- Modules: Handle specific tasks like HTTP, proxying, SSL, gzip, etc.
- Event-Driven Loop: Non-blocking I/O for handling thousands of connections efficiently.
🧩 Configuration File Structure
Nginx config files are divided into multiple **context blocks**:
- main – Global settings (e.g., user, worker_processes)
- events – Event-driven settings (e.g., worker connections)
- http – All HTTP-related configurations
- server – Per-domain/host configurations (virtual hosts)
- location – Path-specific routing and rules
worker_processes auto;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://localhost:3000;
}
}
}
🔄 Nginx Request Flow
- User sends request to Nginx server
- Nginx receives request via Worker Process
- Based on config, Nginx matches:
  - Domain → server block
  - URL/path → location block
- Performs action: serve static file, proxy to backend, return error, etc.
- Sends response to client
🌍 Real-World Use Case
You’re hosting a React frontend and a Node.js backend. Nginx structure:
- Master Process manages lifecycle
- Workers serve static files from React build
- location /api/ proxies to Node.js
- SSL terminated at Nginx
📁 Common File Paths
- /etc/nginx/nginx.conf – Main config
- /etc/nginx/sites-available/ – Virtual host files
- /etc/nginx/sites-enabled/ – Enabled host files (symlinks)
- /var/www/html – Default web root (can be customized)
📘 Common Nginx Terms Explained with Examples
| Term | Description | Example |
|---|---|---|
| server | Defines a virtual host block for a domain or subdomain. | server_name example.com; |
| location | Matches a specific request URI and handles it accordingly. | location /api/ { ... } |
| proxy_pass | Forwards incoming requests to another server or port. | proxy_pass http://localhost:3000; |
| listen | Specifies which port and protocol the server should listen on. | listen 80; |
| server_name | Matches the domain name used in the request. | server_name www.example.com; |
| root | Sets the root directory from which to serve static files. | root /var/www/html; |
| index | Sets the default file to serve for directory access. | index index.html; |
| try_files | Checks for file existence and serves fallback if missing. | try_files $uri /index.html; |
| ssl_certificate | Path to the SSL certificate file. | ssl_certificate /etc/letsencrypt/live/domain/fullchain.pem; |
| ssl_certificate_key | Path to the SSL private key file. | ssl_certificate_key /etc/letsencrypt/live/domain/privkey.pem; |
| rewrite | Rewrites the requested URI based on a pattern. | rewrite ^/old$ /new permanent; |
| return | Returns a response code or redirect. | return 301 https://$host$request_uri; |
| gzip | Enables response compression to speed up delivery. | gzip on; |
| upstream | Defines a group of backend servers for load balancing. | upstream backend { server localhost:3000; server localhost:3001; } |
🔧 Basic Setup for Nginx (Ubuntu)
📥 Step 1: Install Nginx
Run these commands in your terminal:
sudo apt update
sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx
🧪 Step 2: Test Nginx
Visit your server’s IP in a browser: http://your-server-ip
You should see the default “Welcome to Nginx” page.
⚙️ Step 3: Configure Reverse Proxy
Edit or create a virtual host file in /etc/nginx/sites-available/ (for example: sudo nano /etc/nginx/sites-available/example.com):
Paste this config to proxy to a backend running on port 3000:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
🔗 Step 4: Enable the Site
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t   # Test config
sudo systemctl reload nginx
🌐 Step 5: Add Domain (Optional)
Point your domain’s A record to the server’s IP.
This allows example.com to reach your app via Nginx.
🔐 Step 6: Add SSL with Certbot (Optional)
sudo certbot --nginx -d example.com -d www.example.com
Certbot will auto-configure HTTPS for your site and renew SSL certificates automatically.
⚛️ Setting Up Nginx for a React App (Production Build)
After building your React app using npm run build or yarn build, Nginx can serve it as a static site.
📁 Step 1: Place Your Build
Copy your React build folder (usually build/) to the server, for example:
sudo mkdir -p /var/www/my-react-app
sudo cp -r build/* /var/www/my-react-app/
📝 Step 2: Configure Nginx
Create a new site config (e.g., /etc/nginx/sites-available/react-app):
server {
    listen 80;
    server_name react.example.com;
    root /var/www/my-react-app;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}
Explanation: The try_files directive ensures that React routes (e.g., /dashboard) fall back to index.html, allowing React Router to work correctly.
🔗 Step 3: Enable and Reload
sudo nginx -t
sudo systemctl reload nginx
🔐 Step 4 (Optional): Add HTTPS with Certbot
sudo certbot --nginx -d react.example.com
Now your React app is live, secure, and properly served with clean URLs!
💎 Setting Up Nginx for a Rails App (with Puma)
In production, Rails apps are typically run with Puma as the app server. Nginx acts as a reverse proxy to handle HTTP/HTTPS, serve static assets, and forward requests to Puma.
📁 Step 1: Rails App + Puma Setup
Your Rails app should include puma in the Gemfile and have a config file like:
directory '/var/www/myrailsapp'
rackup "config.ru"
environment 'production'
bind "tcp://127.0.0.1:3000"
⚙️ Step 2: Configure Nginx
Create a config file in /etc/nginx/sites-available/myrailsapp:
server {
    listen 80;
    server_name myrailsapp.com;
    root /var/www/myrailsapp/public;
    index index.html;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~ ^/(assets|packs)/ {
        expires max;
        add_header Cache-Control public;
    }
}
🔗 Step 3: Enable and Reload Nginx
sudo nginx -t
sudo systemctl reload nginx
🚀 Step 4: Start Your Rails App
Use systemd or a tool like foreman to run Puma in production mode, for example:
RAILS_ENV=production bundle exec puma -C config/puma.rb
🔐 Step 5 (Optional): Add HTTPS with Certbot
If your domain is live and DNS is configured:
sudo certbot --nginx -d myrailsapp.com
🧩 Setting Up Nginx for Microservices
In a microservices architecture, Nginx can act as an API Gateway to route requests to multiple backend services running on different ports or containers.
🔧 Example: 3 Microservices
- Frontend (React): runs on port 3000
- Auth Service: runs on port 4000
- Order Service: runs on port 5000
📝 Nginx Config (Reverse Proxy)
server {
    listen 80;
    server_name micro.example.com;

    location / {
        proxy_pass http://localhost:3000;
    }

    location /auth/ {
        proxy_pass http://localhost:4000/;
        proxy_set_header Host $host;
    }

    location /orders/ {
        proxy_pass http://localhost:5000/;
        proxy_set_header Host $host;
    }
}
This setup routes:
- / → React frontend
- /auth/ → Authentication service
- /orders/ → Order service
🔐 Optional: Add HTTPS
sudo certbot --nginx -d micro.example.com
🌍 Real Use Case
A startup has separate teams for frontend, auth, and order processing. Each service is deployed independently. Nginx is used to:
- Expose all services under one domain
- Centralize SSL termination and logging
- Act as a single entry point (gateway)
🐳 Docker + Nginx Reverse Proxy Example
Use Nginx inside a Docker container to reverse proxy traffic to your backend and frontend services. This is great for production-ready deployment.
📁 File Structure
project-root/
├── nginx/
│   └── nginx.conf
├── backend/      ← Node.js / Rails / Python app
├── frontend/     ← React / Vue / Static site
└── docker-compose.yml
📄 docker-compose.yml
version: '3.8'
services:
nginx:
image: nginx:latest
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- backend
- frontend
backend:
build: ./backend
expose:
- "3000"
frontend:
build: ./frontend
expose:
- "5173"
⚙️ nginx/nginx.conf
events {}
http {
server {
listen 80;
location /api/ {
proxy_pass http://backend:3000/;
proxy_set_header Host $host;
}
location / {
proxy_pass http://frontend:5173/;
proxy_set_header Host $host;
}
}
}
🚀 Run the Project
docker-compose up --build
Nginx will:
- Forward /api to the backend app (e.g., Node.js, Rails)
- Serve the frontend directly from the / path
- Listen on http://localhost
🌍 Real-World Scenario
You’re building a React + Rails microservice. With this setup:
- Frontend runs on port 5173 inside Docker
- Rails backend runs on port 3000
- Nginx container reverse proxies both
🌐 Use Nginx to Access Different Ports via Routes
Suppose you have multiple apps running on different ports:
- App 1 (React): running on localhost:3000
- App 2 (Rails API): running on localhost:4000
- App 3 (Docs or Admin): running on localhost:5000
We want to expose them like this:
- http://example.com/app1 → React
- http://example.com/api → Rails
- http://example.com/admin → Docs/Admin
📝 Nginx Config Example
server {
    listen 80;
    server_name example.com;

    location /app1/ {
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
    }

    location /api/ {
        proxy_pass http://localhost:4000/;
        proxy_set_header Host $host;
    }

    location /admin/ {
        proxy_pass http://localhost:5000/;
        proxy_set_header Host $host;
    }
}
✅ Notes
- Each port must be accessible from the Nginx container/server
- Trailing slashes in proxy_pass matter — use them properly for path rewriting
- You can also replace localhost with container names in Docker Compose
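To illustrate the trailing-slash rule, here is a sketch (assuming a backend on localhost:3000) of how the same request is rewritten differently:

```nginx
# With a trailing slash, the matched /app1/ prefix is replaced:
#   GET /app1/users  →  http://localhost:3000/users
location /app1/ {
    proxy_pass http://localhost:3000/;
}

# Without a trailing slash, the full original URI is passed through:
#   GET /app1/users  →  http://localhost:3000/app1/users
location /app1/ {
    proxy_pass http://localhost:3000;
}
```

Use the first form when the backend app is unaware of the /app1 prefix, and the second when it expects the full path.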
🌍 Real Use Case
You’re running a monorepo with separate services:
- React frontend on port 3000
- Rails API backend on port 4000
- Admin dashboard (e.g., Storybook or Docsify) on 5000
🌐 Nginx Setup for Domain and Subdomains
You can configure Nginx to route traffic to different applications or services using:
- Domain: example.com
- Subdomain 1: api.example.com → for API
- Subdomain 2: admin.example.com → for admin panel
🗂 File Structure (sites-available)
/etc/nginx/sites-available/
├── example.com
├── api.example.com
└── admin.example.com
📝 example.com config
server {
listen 80;
server_name example.com www.example.com;
root /var/www/example.com;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
📝 api.example.com config
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://localhost:4000;
proxy_set_header Host $host;
}
}
📝 admin.example.com config
server {
listen 80;
server_name admin.example.com;
location / {
proxy_pass http://localhost:5000;
proxy_set_header Host $host;
}
}
🔗 Enable Sites and Reload
sudo ln -s /etc/nginx/sites-available/api.example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/admin.example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
🌍 DNS Settings
In your DNS provider (e.g., Cloudflare, GoDaddy), create the following A records pointing to your server IP:
- A record: example.com
- A record: api.example.com
- A record: admin.example.com
🔐 Add HTTPS (Optional)
sudo certbot --nginx -d example.com -d www.example.com
sudo certbot --nginx -d api.example.com
sudo certbot --nginx -d admin.example.com
🔐 Complete SSL Setup for Nginx (Domain + Subdomains)
We’ll secure your main domain and subdomains using Let’s Encrypt and Certbot with Nginx.
📦 Step 1: Install Certbot + Nginx Plugin
sudo apt install certbot python3-certbot-nginx
🌐 Step 2: Issue SSL Certificates
For your domain and subdomains (adjust the domains):
sudo certbot --nginx -d example.com -d www.example.com -d api.example.com -d admin.example.com
Certbot will:
- Detect your existing Nginx server blocks
- Update each to use HTTPS
- Reload Nginx after applying SSL
📝 Example HTTPS Server Block
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
location / {
root /var/www/example.com;
index index.html;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}
🔁 Step 3: Auto Renewal
Let’s Encrypt certs last 90 days. The certbot package installs a systemd timer that renews them automatically; check it with:
sudo systemctl status certbot.timer
Or manually test renewal:
sudo certbot renew --dry-run
🌍 Subdomain Blocks Example
server {
listen 443 ssl;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
location / {
proxy_pass http://localhost:4000;
}
}
server {
listen 443 ssl;
server_name admin.example.com;
ssl_certificate /etc/letsencrypt/live/admin.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/admin.example.com/privkey.pem;
location / {
proxy_pass http://localhost:5000;
}
}
✅ Now your entire site – domain and subdomains – is secured with free HTTPS using Nginx.
🧠 How Caching Works in Nginx
Nginx uses caching to store responses (static files or upstream data) so that repeated client requests are served faster without regenerating the response.
🗂️ 1. Static File Caching (Client-Side Cache)
Nginx adds Cache-Control and Expires headers to static files like CSS, JS, images, etc. These headers instruct the browser to cache the files.
location ~* \.(css|js|png|jpg|jpeg|gif|svg|ico)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}
✅ Response is cached **in the browser** for 30 days. No new request hits the server unless user clears cache or it expires.
🔄 2. Proxy Caching (Server-Side Cache)
Nginx stores responses from backend servers (like Rails, Node.js, APIs) in a local cache directory. Next time the same URL is requested, Nginx serves it directly from cache.
location /api/ {
proxy_cache my_cache;
proxy_cache_valid 200 10m;
proxy_pass http://localhost:4000;
}
✅ Nginx saves the backend response for 10 minutes. If another user hits the same API, it’s served instantly from the cache instead of regenerating.
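For this to work, the my_cache zone referenced above must first be declared in the http context; a sketch (paths and sizes are illustrative starting points):

```nginx
http {
    # One cache zone named "my_cache": 10 MB of keys in memory,
    # up to 1 GB on disk, entries evicted after 60 min of no use
    proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=my_cache:10m
                     max_size=1g inactive=60m use_temp_path=off;
}
```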
📦 Cache Storage
- Browser Cache: Stores static files on the client side
- Nginx Proxy Cache: Stores dynamic response files on the server (e.g., in /tmp/nginx_cache)
⛔ When Caching is Skipped
- Request has Cache-Control: no-cache or Pragma: no-cache
- Response has Set-Cookie or Cache-Control: private
- Backend response is not cacheable (non-200 or POST)
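These bypass conditions can also be controlled explicitly with the proxy_cache_bypass and proxy_no_cache directives; a minimal sketch (zone name and backend port follow the earlier examples):

```nginx
location /api/ {
    proxy_cache my_cache;
    # Serve from the backend (not the cache) when these headers are set
    proxy_cache_bypass $http_pragma $http_authorization;
    # Don't store responses to such requests either
    proxy_no_cache $http_pragma $http_authorization;
    proxy_pass http://localhost:4000;
}
```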
📌 In Summary:
- ✅ Use static caching to offload frontend files to the browser
- ✅ Use proxy caching to reduce backend load
- ⏱ Tune cache times based on content freshness
🚀 Enable or Disable Caching in Nginx
Nginx supports two main types of caching:
- Static File Caching: Uses browser headers like expires
- Proxy (Upstream) Caching: Caches dynamic content from backend apps
✅ Enable Static File Caching
Use this inside your location block for assets like images, CSS, JS:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 30d;
    add_header Cache-Control "public, no-transform";
}
✅ Enable Proxy Caching
Define a cache zone (with proxy_cache_path in the http context) and reference it in your API or app routes:
server {
location /api/ {
proxy_pass http://localhost:4000;
proxy_cache my_cache;
proxy_cache_valid 200 10m;
}
}
🚫 Disable Cache for Sensitive Routes
For login or admin routes, disable all forms of caching:
location ~* ^/(login|admin) {
    add_header Cache-Control "no-store, no-cache, must-revalidate";
    expires off;
}
💡 Best Practices
- Use static caching for files that rarely change
- Use proxy caching for API GET requests only
- Disable cache for dynamic POST, login, or user-specific pages
- Use cache zones to control memory/disk use
📌 Clear Proxy Cache (Optional)
The simplest approach is to delete the cache directory contents (assuming the /tmp/nginx_cache path used earlier) and reload Nginx:
sudo rm -rf /tmp/nginx_cache/*
sudo systemctl reload nginx
🌐 Nginx on One Server to Access Another Server (Load Balancing)
In this setup, Server A runs Nginx as a reverse proxy or load balancer, and it forwards requests to one or more backend servers (Server B, C, etc.) that host your application.
📦 Architecture
- Server A (Nginx): Public-facing proxy/load balancer
- Server B (App Server): Backend app, e.g., Node.js, Rails
- Server C (App Server): Optional second instance for scaling
⚙️ Nginx Config on Server A
Edit /etc/nginx/nginx.conf or a virtual host config:
upstream backend_servers {
    server 192.168.1.101:3000;  # Server B
    server 192.168.1.102:3000;  # Server C
}
server {
listen 80;
server_name yourdomain.com;
location / {
proxy_pass http://backend_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
🧠 How It Works
- User sends a request to Server A
- Nginx matches the request and forwards it to one of the backend servers
- Load is balanced automatically (default is round-robin)
- The client only sees Server A — backend IPs stay hidden
🚀 Use Cases
- Scaling APIs across multiple backend servers
- Reducing load on a single app server
- Failover routing in case one server is down
- Secure one public IP and reverse proxy to private machines
🔁 Optional Enhancements
- Health checks: Use third-party tools or Nginx Plus for automatic failover
- Sticky sessions: Use ip_hash inside the upstream block
- SSL termination: Add HTTPS at Server A only
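Sticky sessions and basic passive failover can be sketched in the upstream block (these are standard open-source Nginx directives; the IPs follow the example above):

```nginx
upstream backend_servers {
    ip_hash;  # requests from the same client IP go to the same backend
    # Mark a backend as down after 3 failed attempts, retry after 30s
    server 192.168.1.101:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.102:3000 max_fails=3 fail_timeout=30s;
}
```

Note that max_fails/fail_timeout provide only passive health checking; active health probes require Nginx Plus or an external tool.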
✅ Best Practices for Using Nginx in Production
🔐 Security
- Use ssl_certificate with Let’s Encrypt or paid certs
- Redirect all HTTP traffic to HTTPS (force secure access)
- Set strong security headers: Strict-Transport-Security, X-Content-Type-Options: nosniff, Content-Security-Policy (CSP)
- Block unwanted bots and user-agents
- Disable directory listing with autoindex off;
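A sketch of these hardening directives inside a server block (the header values are illustrative defaults; tune the CSP to your site):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Security headers ("always" also applies them to error responses)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Content-Security-Policy "default-src 'self'" always;

    autoindex off;  # never list directory contents
}
```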
⚡ Performance
- Enable gzip compression for HTML/CSS/JS
- Use caching headers on static files (expires, Cache-Control)
- Use try_files to avoid unnecessary backend calls
- Use proxy cache for API GET requests to offload backend
- Tune worker processes: worker_processes auto; and worker_connections 1024;
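The gzip and worker tuning settings can be sketched like this (the values are common starting points, not universal recommendations):

```nginx
worker_processes auto;   # one worker per CPU core

events {
    worker_connections 1024;
}

http {
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 1024;  # skip compressing tiny responses
}
```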
🛠 Maintainability
- Use separate config files per site in sites-available
- Keep config organized with include blocks (e.g., gzip.conf)
- Use variables like $host, $remote_addr to avoid hardcoding
- Comment your configs for team clarity
🔁 Scalability
- Use upstream with multiple app servers for load balancing
- Use ip_hash if sticky sessions are needed
- Use reverse proxy in front of containers or app clusters
- Configure failover using tools like Nginx Plus or third-party health checks
📈 Logging & Monitoring
- Use custom access_log and error_log formats
- Send logs to ELK, Grafana, or CloudWatch for analysis
- Monitor status with the stub_status endpoint (read-only)
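The status endpoint can be exposed like this (the path name is arbitrary; restrict access to trusted IPs):

```nginx
location /nginx_status {
    stub_status;       # active connections, accepted/handled requests, etc.
    allow 127.0.0.1;   # only local monitoring agents
    deny all;
}
```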
🧠 Real-World Tips
- Always run nginx -t before reloading
- Use systemctl reload nginx (not restart) for zero downtime
- Keep backups of working configs before changes
- Test with staging domains before going live
💼 Top 10 Nginx Interview Questions & Answers
1. What is Nginx?
Nginx is a high-performance web server that can also act as a reverse proxy, load balancer, and HTTP cache.
2. How is Nginx different from Apache?
Nginx uses an event-driven, asynchronous architecture — better for high concurrency. Apache uses a process/thread-based model, consuming more resources under heavy load.
3. What is a reverse proxy in Nginx?
A reverse proxy forwards client requests to backend servers and returns the server’s response to the client — useful for load balancing and hiding backend servers.
4. How do you enable gzip compression in Nginx?
Use gzip on; in the config, then define gzip_types like text/css, application/json, etc.
5. What is the difference between proxy_pass and root?
proxy_pass forwards requests to another server, while root points to the directory on disk to serve static files.
6. How do you implement load balancing with Nginx?
Use an upstream block to define multiple backend servers, and proxy_pass to forward requests to the group. The default method is round-robin.
7. How can you cache API responses in Nginx?
Define a proxy_cache_path and apply proxy_cache and proxy_cache_valid settings in your location block for API GET routes.
8. How do you test Nginx configuration?
Run nginx -t to validate the syntax and structure before reloading the server.
9. What’s the purpose of try_files?
try_files checks multiple paths or files in order and serves the first match. It’s used for fallback mechanisms like index.html or 404 pages.
10. How do you reload Nginx without downtime?
Use nginx -s reload or systemctl reload nginx to apply configuration changes gracefully, without dropping active connections.
📚 Resources
Learn more about Rails setup
Learn more about React setup
Learn more about MERN stack setup

