Contents
- What is Nginx?
- Nginx vs Other Web Servers: Pros & Cons
- Real-Time Use Cases of Nginx with Examples
- When to Use Nginx vs Other Servers (With Examples)
- HTTP vs HTTPS: With Ports and Custom Usage
- Can Nginx Use Other Protocols?
- Structure of Nginx (How It Works)
- Common Nginx Terms Explained with Examples
- Basic Setup for Nginx (Ubuntu)
- Setting Up Nginx for a React App (Production Build)
- Setting Up Nginx for a Rails App (with Puma)
- Setting Up Nginx for Microservices
- Docker + Nginx Reverse Proxy Example
- Use Nginx to Access Different Ports via Routes
- Nginx Setup for Domain and Subdomains
- Complete SSL Setup for Nginx (Domain + Subdomains)
- How Caching Works in Nginx
- Enable or Disable Caching in Nginx
- Nginx on One Server to Access Another Server (Load Balancing)
- Best Practices for Using Nginx in Production
- Top 10 Nginx Interview Questions & Answers
What is Nginx?
Nginx (pronounced "engine-x") is a powerful, lightweight, and high-performance open-source web server that also acts as a reverse proxy, load balancer, and HTTP cache. It is designed to handle a large number of concurrent connections efficiently using an event-driven, asynchronous architecture.
Initially created by Igor Sysoev in 2004 to address the C10K problem (handling 10,000+ concurrent connections), Nginx has become the backbone of many high-traffic websites such as Netflix, Dropbox, WordPress.com, and GitHub.
Nginx is often used to:
- Serve static HTML, CSS, JS, and media files
- Proxy requests to backend servers (Node.js, Rails, Python, PHP)
- Act as a secure gateway for SSL/TLS traffic
- Load balance traffic across multiple servers
- Cache responses to improve performance
Unlike traditional web servers like Apache that use a process-per-connection model, Nginx uses a non-blocking, event-driven model which consumes less memory and provides significantly better performance under load.
Nginx vs Other Web Servers: Pros & Cons
Web Server | Pros | Cons | Best Use Case |
---|---|---|---|
Nginx | High performance under load; low memory usage; great for static content & reverse proxy; SSL, load balancing, and caching built in | Steeper learning curve; less flexible with .htaccess-style rules | Web apps, reverse proxy, API gateway |
Apache | Mature and widely supported; powerful .htaccess support; modular and extensible | Slower under high concurrency; higher memory consumption | CMS (e.g., WordPress), legacy PHP apps |
Caddy | Auto HTTPS via Let's Encrypt; simple config syntax; modern Go-based server | Smaller community; less control at large scale | Quick SSL sites, small apps, microservices |
HAProxy | Advanced load balancing (TCP & HTTP); robust health checks; high-availability setups | Not ideal for static files; more complex for beginners to configure | Load balancing, failover, microservice gateway |
Lighttpd | Very lightweight; low CPU usage; fast for static content | Fewer features than Nginx; less active community | Embedded systems, minimal static servers |
Real-Time Use Cases of Nginx with Examples
1. Reverse Proxy for Backend Apps
Nginx forwards HTTP requests to backend apps like Node.js, Django, or Rails.
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://localhost:3000;
}
}
2. Load Balancing Across Multiple Servers
Distribute traffic between multiple app servers to scale horizontally.
upstream backend {
server 127.0.0.1:3000;
server 127.0.0.1:3001;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
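The upstream block also supports other balancing strategies. A sketch using standard Nginx directives (the weights and method choice are illustrative):

```nginx
upstream backend {
    least_conn;                      # pick the server with the fewest active connections
    server 127.0.0.1:3000 weight=2;  # receives roughly twice the traffic
    server 127.0.0.1:3001;
}
```

Omitting a method gives round-robin; ip_hash gives sticky sessions by client IP.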
3. SSL Termination (HTTPS)
Terminate HTTPS at Nginx and forward unencrypted traffic to internal services.
server {
listen 443 ssl;
server_name secure.example.com;
ssl_certificate /path/to/fullchain.pem;
ssl_certificate_key /path/to/key.pem;
location / {
proxy_pass http://localhost:3000;
}
}
4. Hosting a Single-Page Application (SPA)
Serve static files from a React/Vue/Angular build with fallback routing.
server {
listen 80;
root /var/www/my-app/dist;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}
5. Caching API Responses
Improve performance by caching repeated requests temporarily.
server {
# my_cache must be defined in the http {} context via proxy_cache_path
location /api/ {
proxy_cache my_cache;
proxy_pass http://localhost:3000;
}
}
6. Rate Limiting APIs
Limit requests per client to prevent abuse.
# api_limit must be defined in the http {} context, e.g.:
# limit_req_zone $binary_remote_addr zone=api_limit rate=10r/s;
server {
location /api/ {
limit_req zone=api_limit burst=5;
proxy_pass http://localhost:3000;
}
}
7. Redirect HTTP to HTTPS
Force all HTTP traffic to HTTPS for security.
server {
listen 80;
server_name www.example.com;
return 301 https://$host$request_uri;
}
8. Docker + Nginx Frontend
Use Nginx as a frontend container for your backend services.
In docker-compose.yml:
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
9. Subdomain Routing
Host multiple apps on different subdomains using virtual hosts.
server {
listen 80;
server_name api.example.com;
location / { proxy_pass http://localhost:3000; }
}
server {
listen 80;
server_name blog.example.com;
location / { proxy_pass http://localhost:4000; }
}
10. CDN for Static Assets
Use Nginx as a static file server with caching and compression.
server {
listen 80;
location /assets/ {
root /cdn/static;
expires 7d;
add_header Cache-Control "public";
}
}
When to Use Nginx vs Other Servers (With Examples)
Use Nginx When…
- You need a reverse proxy: Nginx is perfect for forwarding requests to app servers (Node.js, Rails, etc.).
- High traffic performance is required: Its non-blocking architecture handles thousands of concurrent connections.
- Serving static files: Nginx is ultra-fast for HTML, JS, CSS, and images.
- You want to terminate SSL: Offload HTTPS at Nginx and forward to backend over HTTP.
- You're using Docker or microservices: Use Nginx as a frontend or API gateway in containers.
Example: You built a React frontend + Node.js backend. You deploy Nginx in front to:
- Serve React static files
- Proxy `/api` to Node.js
- Terminate HTTPS at Nginx and use plain HTTP to the backend internally
Don't Use Nginx When…
- You need .htaccess-based dynamic routing: Use Apache (ideal for WordPress, Laravel, etc.).
- You need native HTTP/2 + auto HTTPS with minimal setup: Use Caddy.
- You're load balancing TCP/UDP services (not HTTP): Use HAProxy for advanced routing.
- You're on constrained embedded systems: Use Lighttpd, which has a minimal footprint and good static performance.
Example: You are hosting WordPress on shared hosting or need per-directory overrides like .htaccess. Use Apache: it allows granular control and is well supported in the PHP ecosystem.
Pro Tip:
You can also use Nginx and Apache together: Nginx handles static content and HTTPS, while Apache handles dynamic content via PHP (e.g., listening internally on localhost:8080).
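A minimal sketch of that split, assuming Apache listens internally on port 8080 (the PHP location pattern and paths are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    location / {
        try_files $uri $uri/ =404;        # Nginx serves static files directly
    }
    location ~ \.php$ {
        proxy_pass http://localhost:8080; # dynamic PHP requests go to Apache
        proxy_set_header Host $host;
    }
}
```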
HTTP vs HTTPS: With Ports and Custom Usage
What is HTTP?
HTTP (HyperText Transfer Protocol) is the standard protocol for transferring data over the web.
- Data is sent in plain text
- Default port: 80
- Not secure: anyone on the path can intercept the data
- Example: http://example.com
What is HTTPS?
HTTPS (HTTP Secure) is HTTP layered over SSL/TLS encryption.
- Encrypts data between browser and server
- Default port: 443
- Used for secure websites like banking, logins, e-commerce
- Example: https://example.com
Default Port Mapping
Protocol | Port | Secure |
---|---|---|
HTTP | 80 | No |
HTTPS | 443 | Yes |
Can You Use Custom Ports?
- Yes, you can use any port (e.g., 8080, 3000, 8443)
- But users must include the port in the URL (e.g., https://example.com:8443)
- You can bind SSL to any port; just configure it manually in Nginx or Apache
- SSL is not limited to port 443, but browsers treat 443 as the default (no port needed in the URL)
- Port 443 is expected by default for HTTPS, so changing it may cause trust or firewall issues
Example: Nginx SSL on Port 8443
server {
listen 8443 ssl;
server_name example.com;
ssl_certificate /etc/nginx/ssl/example.crt;
ssl_certificate_key /etc/nginx/ssl/example.key;
location / {
return 200 "Secure connection on custom port 8443!";
}
}
Can Nginx Use Other Protocols?
Yes! While Nginx is primarily known for serving HTTP/HTTPS traffic, it also supports other protocols directly or through modules. Here’s a breakdown:
1. TCP and UDP (via Stream Module)
You can proxy raw TCP and UDP traffic, useful for databases, game servers, and SSL passthrough.
Example: Proxy MySQL on Port 3306
stream {
# stream {} is a top-level block, alongside http {}
upstream mysql_upstream {
server 127.0.0.1:3306;
}
server {
listen 3306;
proxy_pass mysql_upstream;
}
}
2. WebSocket
WebSockets run over HTTP/HTTPS but require proper headers. Nginx supports them as a reverse proxy.
Example: Proxy WebSocket
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
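A common refinement from the standard WebSocket proxying pattern is a map in the http {} context, so that plain HTTP requests get a Connection: close header instead of upgrade:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;   # no Upgrade header: plain HTTP request
}

server {
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```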
3. gRPC
Nginx can proxy gRPC and gRPC-Web requests with HTTP/2 support.
Example: Proxy gRPC Server
location / {
grpc_pass grpc://127.0.0.1:50051;
error_page 502 = /errorfallback;
}
4. Mail (SMTP, IMAP, POP3)
Nginx can act as a mail proxy with SSL offloading or auth routing.
Example: SMTP Proxy
mail {
# in the mail module, the backend server is chosen by the auth_http
# service, not by proxy_pass; the endpoint below is a placeholder
auth_http localhost:9000/auth;
server {
listen 25;
protocol smtp;
}
}
5. Custom Internal Protocols
If your service runs over TCP or UDP, you can still use Nginx as a gateway, even if the protocol is custom (e.g., Redis, MQTT).
For example, proxy Redis on port 6379 with TCP:
stream {
server {
listen 6379;
proxy_pass 127.0.0.1:6379;
}
}
Summary Table
Protocol | Supported? | Example Use |
---|---|---|
HTTP / HTTPS | Yes | Websites, APIs |
TCP / UDP | Yes (stream module) | DBs, game servers |
WebSocket | Yes | Live chat, dashboards |
gRPC | Yes (HTTP/2) | Microservices |
Mail (SMTP/IMAP) | Yes | Email routing |
Structure of Nginx (How It Works)
Nginx uses a modular and event-driven architecture. When a request is received, it flows through a sequence of **modules and contexts** defined in configuration files. Here’s the high-level structure:
Core Nginx Structure
- Master Process: Manages configuration and spawns worker processes.
- Worker Processes: Handle actual client requests using an event loop.
- Modules: Handle specific tasks like HTTP, proxying, SSL, gzip, etc.
- Event-Driven Loop: Non-blocking I/O for handling thousands of connections efficiently.
Configuration File Structure
Nginx config files are divided into multiple **context blocks**:
- main: Global settings (e.g., user, worker_processes)
- events: Event-driven settings (e.g., worker connections)
- http: All HTTP-related configuration
- server: Per-domain/host configuration (virtual hosts)
- location: Path-specific routing and rules
worker_processes auto;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://localhost:3000;
}
}
}
Nginx Request Flow
- The user sends a request to the Nginx server
- A worker process receives the request
- Based on the config, Nginx matches the domain to a server block and the URL path to a location block
- Performs the action: serve a static file, proxy to a backend, return an error, etc.
- Sends the response to the client
Real-World Use Case
You're hosting a React frontend and a Node.js backend. Nginx structure:
- The master process manages the worker lifecycle
- Workers serve static files from the React build
- location /api/ proxies to Node.js
- SSL is terminated at Nginx
Common File Paths
- /etc/nginx/nginx.conf: Main config
- /etc/nginx/sites-available/: Virtual host files
- /etc/nginx/sites-enabled/: Enabled host files (symlinks)
- /var/www/html: Default web root (can be customized)
Common Nginx Terms Explained with Examples
Term | Description | Example |
---|---|---|
server | Defines a virtual host block for a domain or subdomain. | server_name example.com; |
location | Matches a specific request URI and handles it accordingly. | location /api/ { ... } |
proxy_pass | Forwards incoming requests to another server or port. | proxy_pass http://localhost:3000; |
listen | Specifies which port and protocol the server should listen on. | listen 80; |
server_name | Matches the domain name used in the request. | server_name www.example.com; |
root | Sets the root directory from which to serve static files. | root /var/www/html; |
index | Sets the default file to serve for directory access. | index index.html; |
try_files | Checks for file existence and serves fallback if missing. | try_files $uri /index.html; |
ssl_certificate | Path to the SSL certificate file. | ssl_certificate /etc/letsencrypt/live/domain/fullchain.pem; |
ssl_certificate_key | Path to the SSL private key file. | ssl_certificate_key /etc/letsencrypt/live/domain/privkey.pem; |
rewrite | Rewrites the requested URI based on a pattern. | rewrite ^/old$ /new permanent; |
return | Returns a response code or redirect. | return 301 https://$host$request_uri; |
gzip | Enables response compression to speed up delivery. | gzip on; |
upstream | Defines a group of backend servers for load balancing. | upstream backend { server localhost:3000; server localhost:3001; } |
Basic Setup for Nginx (Ubuntu)
Step 1: Install Nginx
Run these commands in your terminal:
sudo apt update
sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx
Step 2: Test Nginx
Visit your server's IP in a browser: http://your-server-ip
You should see the default "Welcome to Nginx" page.
Step 3: Configure Reverse Proxy
Edit or create a virtual host file in /etc/nginx/sites-available/:
Paste this config to proxy to a backend running on port 3000:
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Step 4: Enable the Site
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t # Test config
sudo systemctl reload nginx
Step 5: Add Domain (Optional)
Point your domain's A record to the server's IP. This allows example.com to reach your app via Nginx.
Step 6: Add SSL with Certbot (Optional)
sudo certbot --nginx -d example.com -d www.example.com
Certbot will auto-configure HTTPS for your site and renew SSL certificates automatically.
Setting Up Nginx for a React App (Production Build)
After building your React app with npm run build or yarn build, Nginx can serve it as a static site.
Step 1: Place Your Build
Copy your React build folder (usually build/) to the server, e.g.:
sudo mkdir -p /var/www/my-react-app
sudo cp -r build/* /var/www/my-react-app/
Step 2: Configure Nginx
Create a new site config (e.g., /etc/nginx/sites-available/react-app):
server {
listen 80;
server_name react.example.com;
root /var/www/my-react-app;
index index.html;
location / {
try_files $uri /index.html;
}
}
Explanation: The try_files directive ensures that React routes (e.g., /dashboard) fall back to index.html, allowing React Router to work correctly.
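Since build tools fingerprint asset filenames, long-lived cache headers for those files are safe to add alongside the block above. A sketch (the extension pattern is an assumption about your build output):

```nginx
location ~* \.(js|css|png|svg|woff2)$ {
    root /var/www/my-react-app;
    expires 1y;                              # fingerprinted files never change
    add_header Cache-Control "public, immutable";
}
```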
Step 3: Enable and Reload
sudo ln -s /etc/nginx/sites-available/react-app /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Step 4 (Optional): Add HTTPS with Certbot
sudo certbot --nginx -d react.example.com
Now your React app is live, secure, and served with clean URLs!
Setting Up Nginx for a Rails App (with Puma)
In production, Rails apps are typically run with Puma as the app server. Nginx acts as a reverse proxy to handle HTTP/HTTPS, serve static assets, and forward requests to Puma.
Step 1: Rails App + Puma Setup
Your Rails app should include puma in the Gemfile and have a config file (config/puma.rb) like:
directory '/var/www/myrailsapp'
rackup "config.ru"
environment 'production'
bind "tcp://127.0.0.1:3000"
Step 2: Configure Nginx
Create a config file in /etc/nginx/sites-available/myrailsapp:
server {
listen 80;
server_name myrailsapp.com;
root /var/www/myrailsapp/public;
index index.html;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~ ^/(assets|packs)/ {
expires max;
add_header Cache-Control public;
}
}
Step 3: Enable and Reload Nginx
sudo ln -s /etc/nginx/sites-available/myrailsapp /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Step 4: Start Your Rails App
Use systemd or a tool like foreman to run Puma in production mode, e.g.:
RAILS_ENV=production bundle exec puma -C config/puma.rb
Step 5 (Optional): Add HTTPS with Certbot
If your domain is live and DNS is configured:
sudo certbot --nginx -d myrailsapp.com
Setting Up Nginx for Microservices
In a microservices architecture, Nginx can act as an API Gateway to route requests to multiple backend services running on different ports or containers.
Example: 3 Microservices
- Frontend (React): runs on port 3000
- Auth Service: runs on port 4000
- Order Service: runs on port 5000
Nginx Config (Reverse Proxy)
server {
listen 80;
server_name micro.example.com;
location / {
proxy_pass http://localhost:3000;
}
location /auth/ {
proxy_pass http://localhost:4000/;
proxy_set_header Host $host;
}
location /orders/ {
proxy_pass http://localhost:5000/;
proxy_set_header Host $host;
}
}
This setup routes:
- / → React frontend
- /auth/ → Authentication service
- /orders/ → Order service
Optional: Add HTTPS
sudo certbot --nginx -d micro.example.com
Real Use Case
A startup has separate teams for frontend, auth, and order processing. Each service is deployed independently. Nginx is used to:
- Expose all services under one domain
- Centralize SSL termination and logging
- Act as a single entry point (gateway)
Docker + Nginx Reverse Proxy Example
Use Nginx inside a Docker container to reverse proxy traffic to your backend and frontend services. This is great for production-ready deployment.
File Structure
project-root/
├── nginx/
│   └── nginx.conf
├── backend/        # Node.js / Rails / Python app
├── frontend/       # React / Vue / static site
└── docker-compose.yml
docker-compose.yml
version: '3.8'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - backend
      - frontend
  backend:
    build: ./backend
    expose:
      - "3000"
  frontend:
    build: ./frontend
    expose:
      - "5173"
nginx/nginx.conf
events {}
http {
server {
listen 80;
location /api/ {
proxy_pass http://backend:3000/;
proxy_set_header Host $host;
}
location / {
proxy_pass http://frontend:5173/;
proxy_set_header Host $host;
}
}
}
Run the Project
docker-compose up --build
Nginx will:
- Forward /api to the backend app (e.g., Node.js, Rails)
- Serve the frontend directly from the / path
- Listen on http://localhost
Real-World Scenario
You're building a React + Rails microservice. With this setup:
- Frontend runs on port 5173 inside Docker
- Rails backend runs on port 3000
- Nginx container reverse proxies both
Use Nginx to Access Different Ports via Routes
Suppose you have multiple apps running on different ports:
- App 1 (React): running on localhost:3000
- App 2 (Rails API): running on localhost:4000
- App 3 (Docs or Admin): running on localhost:5000
We want to expose them like this:
- http://example.com/app1 → React
- http://example.com/api → Rails
- http://example.com/admin → Docs/Admin
Nginx Config Example
server {
listen 80;
server_name example.com;
location /app1/ {
proxy_pass http://localhost:3000/;
proxy_set_header Host $host;
}
location /api/ {
proxy_pass http://localhost:4000/;
proxy_set_header Host $host;
}
location /admin/ {
proxy_pass http://localhost:5000/;
proxy_set_header Host $host;
}
}
Notes
- Each port must be accessible from the Nginx container/server
- Trailing slashes in proxy_pass matter; use them deliberately for path rewriting
- You can also replace localhost with container names in Docker Compose
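To see why the trailing slash matters, compare how the request path is rewritten (per the proxy_pass semantics; the URLs are illustrative, and you would pick one variant):

```nginx
# With a trailing slash, the matched /api/ prefix is replaced:
#   /api/users  ->  http://localhost:4000/users
location /api/ {
    proxy_pass http://localhost:4000/;
}

# Without a URI part, the full original path is forwarded:
#   /api/users  ->  http://localhost:4000/api/users
location /api/ {
    proxy_pass http://localhost:4000;
}
```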
Real Use Case
You're running a monorepo with separate services:
- React frontend on port 3000
- Rails API backend on port 4000
- Admin dashboard (e.g., Storybook or Docsify) on 5000
Nginx Setup for Domain and Subdomains
You can configure Nginx to route traffic to different applications or services using:
- Domain: example.com
- Subdomain 1: api.example.com (for the API)
- Subdomain 2: admin.example.com (for the admin panel)
File Structure (sites-available)
/etc/nginx/sites-available/
├── example.com
├── api.example.com
└── admin.example.com
example.com config
server {
listen 80;
server_name example.com www.example.com;
root /var/www/example.com;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
api.example.com config
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://localhost:4000;
proxy_set_header Host $host;
}
}
admin.example.com config
server {
listen 80;
server_name admin.example.com;
location / {
proxy_pass http://localhost:5000;
proxy_set_header Host $host;
}
}
Enable Sites and Reload
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/api.example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/admin.example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
DNS Settings
In your DNS provider (e.g., Cloudflare, GoDaddy), create the following A records pointing to your server IP:
- A record: example.com
- A record: api.example.com
- A record: admin.example.com
Add HTTPS (Optional)
sudo certbot --nginx -d example.com -d www.example.com
sudo certbot --nginx -d api.example.com
sudo certbot --nginx -d admin.example.com
Complete SSL Setup for Nginx (Domain + Subdomains)
We'll secure your main domain and subdomains using Let's Encrypt and Certbot with Nginx.
Step 1: Install Certbot + Nginx Plugin
sudo apt install certbot python3-certbot-nginx
Step 2: Issue SSL Certificates
For your domain and subdomains (adjust the domains):
sudo certbot --nginx -d example.com -d www.example.com -d api.example.com -d admin.example.com
Certbot will:
- Detect your existing Nginx server blocks
- Update each to use HTTPS
- Reload Nginx after applying SSL
Example HTTPS Server Block
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
location / {
root /var/www/example.com;
index index.html;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}
Step 3: Auto Renewal
Let's Encrypt certs last 90 days. On Ubuntu, the certbot package installs a systemd timer that renews them automatically.
To manually test renewal:
sudo certbot renew --dry-run
Subdomain Blocks Example
server {
listen 443 ssl;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
location / {
proxy_pass http://localhost:4000;
}
}
server {
listen 443 ssl;
server_name admin.example.com;
ssl_certificate /etc/letsencrypt/live/admin.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/admin.example.com/privkey.pem;
location / {
proxy_pass http://localhost:5000;
}
}
Now your entire site, domain and subdomains alike, is secured with free HTTPS using Nginx.
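On top of the Certbot-issued certificates, the TLS settings can be tightened. A sketch using standard directives (Certbot's bundled options file may already cover some of this; the values are illustrative):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;      # drop legacy protocol versions
    ssl_session_cache shared:SSL:10m;   # resume TLS sessions cheaply
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```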
How Caching Works in Nginx
Nginx uses caching to store responses (static files or upstream data) so that repeated client requests are served faster without regenerating the response.
1. Static File Caching (Client-Side Cache)
Nginx adds Cache-Control and Expires headers to static files like CSS, JS, and images. These headers instruct the browser to cache the files.
location ~* \.(css|js|png|jpg|gif|svg)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
The response is cached **in the browser** for 30 days. No new request hits the server unless the user clears the cache or the files expire.
2. Proxy Caching (Server-Side Cache)
Nginx stores responses from backend servers (like Rails, Node.js, APIs) in a local cache directory. Next time the same URL is requested, Nginx serves it directly from cache.
location /api/ {
proxy_cache my_cache; # my_cache is defined via proxy_cache_path in the http {} context
proxy_cache_valid 200 10m;
proxy_pass http://localhost:4000;
}
Nginx saves the backend response for 10 minutes. If another user hits the same API, it's served instantly from the cache instead of being regenerated.
Cache Storage
- Browser Cache: Stores static files on the client side
- Nginx Proxy Cache: Stores dynamic response files on the server (e.g., in /tmp/nginx_cache)
When Caching Is Skipped
- The request has Cache-Control: no-cache or Pragma: no-cache
- The response has Set-Cookie or Cache-Control: private
- The backend response is not cacheable (non-200 or POST)
In Summary:
- Use static caching to offload frontend files to the browser
- Use proxy caching to reduce backend load
- Tune cache times based on content freshness
Enable or Disable Caching in Nginx
Nginx supports two main types of caching:
- Static File Caching: Uses browser headers like expires
- Proxy (Upstream) Caching: Caches dynamic content from backend apps
Enable Static File Caching
Use this inside a location block for assets like images, CSS, and JS:
location ~* \.(css|js|png|jpg|gif|svg)$ {
expires 30d;
add_header Cache-Control "public, no-transform";
}
Enable Proxy Caching
Define a cache zone and use it in your API or app routes:
server {
location /api/ {
proxy_pass http://localhost:4000;
proxy_cache my_cache;
proxy_cache_valid 200 10m;
}
}
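For the my_cache zone above to exist, proxy_cache_path must be declared in the http {} context. A sketch (paths and sizes are illustrative), plus a header that makes the cache behavior visible to clients:

```nginx
http {
    proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=my_cache:10m
                     max_size=1g inactive=60m;

    server {
        location /api/ {
            proxy_pass http://localhost:4000;
            proxy_cache my_cache;
            proxy_cache_valid 200 10m;
            add_header X-Cache-Status $upstream_cache_status;  # HIT, MISS, BYPASS...
        }
    }
}
```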
Disable Cache for Sensitive Routes
For login or admin routes, disable all forms of caching:
location /admin/ {
add_header Cache-Control "no-store, no-cache, must-revalidate";
expires off;
}
Best Practices
- Use static caching for files that rarely change
- Use proxy caching for API GET requests only
- Disable cache for dynamic POST, login, or user-specific pages
- Use cache zones to control memory/disk use
Clear Proxy Cache (Optional)
The simplest approach is to delete the cache directory contents and reload:
sudo rm -rf /tmp/nginx_cache/*
sudo systemctl reload nginx
Nginx on One Server to Access Another Server (Load Balancing)
In this setup, Server A runs Nginx as a reverse proxy or load balancer, and it forwards requests to one or more backend servers (Server B, C, etc.) that host your application.
Architecture
- Server A (Nginx): Public-facing proxy/load balancer
- Server B (App Server): Backend app, e.g., Node.js, Rails
- Server C (App Server): Optional second instance for scaling
Nginx Config on Server A
Edit /etc/nginx/nginx.conf or a virtual host config:
upstream backend_servers {
server 192.168.1.101:3000; # Server B
server 192.168.1.102:3000; # Server C
}
server {
listen 80;
server_name yourdomain.com;
location / {
proxy_pass http://backend_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
How It Works
- The user sends a request to Server A
- Nginx matches the request and forwards it to one of the backend servers
- Load is balanced automatically (the default is round-robin)
- The client only sees Server A; backend IPs stay hidden
Use Cases
- Scaling APIs across multiple backend servers
- Reducing load on a single app server
- Failover routing in case one server is down
- Secure one public IP and reverse proxy to private machines
Optional Enhancements
- Health checks: Use third-party tools or Nginx Plus for automatic failover
- Sticky sessions: Use ip_hash inside the upstream block
- SSL termination: Add HTTPS at Server A only
Best Practices for Using Nginx in Production
Security
- Use ssl_certificate with Let's Encrypt or paid certs
- Redirect all HTTP traffic to HTTPS (force secure access)
- Set strong security headers: Strict-Transport-Security, X-Content-Type-Options: nosniff, Content-Security-Policy (CSP)
- Block unwanted bots and user-agents
- Disable directory listing with autoindex off;
Performance
- Enable gzip compression for HTML/CSS/JS
- Use caching headers on static files (expires, Cache-Control)
- Use try_files to avoid unnecessary backend calls
- Use proxy cache for API GET requests to offload the backend
- Tune worker processes: worker_processes auto; worker_connections 1024;
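The gzip and worker tuning bullets above can be sketched as (the compression level and type list are illustrative):

```nginx
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    gzip on;
    gzip_comp_level 5;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 256;   # skip tiny responses where gzip overhead dominates
}
```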
Maintainability
- Use separate config files per site in sites-available
- Keep config organized with include blocks (e.g., gzip.conf)
- Use variables like $host and $remote_addr to avoid hardcoding
- Comment your configs for team clarity
Scalability
- Use upstream with multiple app servers for load balancing
- Use ip_hash if sticky sessions are needed
- Use a reverse proxy in front of containers or app clusters
- Configure failover using tools like Nginx Plus or third-party health checks
Logging & Monitoring
- Use custom access_log and error_log formats
- Send logs to ELK, Grafana, or CloudWatch for analysis
- Monitor status with the stub_status endpoint (read-only)
Real-World Tips
- Always run nginx -t before reloading
- Use systemctl reload nginx (not restart) for zero downtime
- Keep backups of working configs before changes
- Test with staging domains before going live
Top 10 Nginx Interview Questions & Answers
1. What is Nginx?
Nginx is a high-performance web server that can also act as a reverse proxy, load balancer, and HTTP cache.
2. How is Nginx different from Apache?
Nginx uses an event-driven, asynchronous architecture, which is better for high concurrency. Apache uses a process/thread-based model, consuming more resources under heavy load.
3. What is a reverse proxy in Nginx?
A reverse proxy forwards client requests to backend servers and returns the server's response to the client. It is useful for load balancing and hiding backend servers.
4. How do you enable gzip compression in Nginx?
Use gzip on; in the config, then define gzip_types such as text/css, application/json, etc.
5. What is the difference between proxy_pass and root?
proxy_pass forwards requests to another server, while root points to the directory on disk from which to serve static files.
6. How do you implement load balancing with Nginx?
Use an upstream block to define multiple backend servers, and proxy_pass to forward requests to the group. The default method is round-robin.
7. How can you cache API responses in Nginx?
Define a proxy_cache_path and apply proxy_cache and proxy_cache_valid settings in your location block for API GET routes.
8. How do you test Nginx configuration?
Run nginx -t to validate the syntax and structure before reloading the server.
9. What's the purpose of try_files?
try_files checks multiple paths or files in order and serves the first match. It's used for fallback mechanisms like index.html or 404 pages.
10. How do you reload Nginx without downtime?
Use nginx -s reload or systemctl reload nginx to reload configuration without restarting the worker processes abruptly.
Resources
Learn more about Rails setup
Learn more about React setup
Learn more about MERN stack setup