Nginx serves roughly a third of all websites. Whether you’re serving static files, proxying to a Node.js app, or load balancing across containers, Nginx is probably involved. This guide covers everything from basic setup to production-ready configurations.
If you’re brand new, start with our What is Nginx? explainer. For quick reference, keep the Nginx cheat sheet handy.
Installing Nginx
On Ubuntu/Debian:
sudo apt update && sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx
On macOS:
brew install nginx
brew services start nginx
Verify it’s running by visiting http://localhost (or port 8080 on macOS). You should see the default welcome page.
Understanding the Config Structure
Nginx configuration lives in /etc/nginx/ (Linux) or /opt/homebrew/etc/nginx/ (macOS with Homebrew). The main file is nginx.conf, which includes site configs from sites-enabled/ or conf.d/.
# /etc/nginx/nginx.conf — simplified structure
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;

    # Global settings
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    # Include site configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Each server block defines a virtual host. You’ll typically create one file per site in sites-available/ and symlink it to sites-enabled/.
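In practice, enabling a new site looks like this (example.com is a placeholder; paths assume a Debian-style layout):

```shell
# Create the server block in sites-available, then link it into sites-enabled
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t                # validate the config before applying it
sudo systemctl reload nginx  # apply without downtime

# To disable the site later, remove the symlink and reload
sudo rm /etc/nginx/sites-enabled/example.com
sudo systemctl reload nginx
```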
Serving Static Files
The simplest use case — serving an HTML/CSS/JS site:
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets
    location ~* \.(css|js|png|jpg|gif|ico|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}
try_files tells Nginx to look for the exact file, then a directory, then return 404. This is the foundation of most static site configs.
Reverse Proxy to a Backend App
This is the most common production pattern — Nginx handles HTTP, SSL, and static files while proxying API requests to your app. If you need a ready-made config, try our Nginx config generator.
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The proxy_set_header lines are critical — without them, your backend won’t know the client’s real IP or whether the original request was HTTPS. See our Nginx reverse proxy config guide for more advanced setups.
SSL/TLS with Let’s Encrypt
Free SSL certificates with Certbot:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com
Certbot modifies your Nginx config automatically. The result looks like:
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern SSL config
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers off;

    # HSTS
    add_header Strict-Transport-Security "max-age=63072000" always;
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
Auto-renewal is set up by Certbot. Verify with sudo certbot renew --dry-run.
Load Balancing
Distribute traffic across multiple backend instances:
upstream backend {
    least_conn;  # default is round-robin (no directive); ip_hash is another option
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
Strategies:
- round-robin (default) — rotates through servers in order
- least_conn — sends to the server with the fewest active connections
- ip_hash — the same client always hits the same server (useful for sessions)
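Beyond the strategy, per-server parameters tune how traffic is distributed. A sketch with hypothetical local ports:

```nginx
upstream backend {
    least_conn;
    server 127.0.0.1:3001 weight=3;                      # receives ~3x the traffic
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;  # taken out after 3 failed attempts
    server 127.0.0.1:3003 backup;                        # used only when the others are down
}
```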
Performance Tuning
Key settings for high-traffic sites:
worker_processes auto;    # one per CPU core
worker_connections 4096;  # per worker — goes inside the events block
# Compression
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;
gzip_min_length 1000;
# Buffering
proxy_buffering on;
proxy_buffer_size 16k;
proxy_buffers 4 32k;
# Timeouts
proxy_connect_timeout 5s;
proxy_read_timeout 60s;
send_timeout 60s;
# File serving
sendfile on;
tcp_nopush on;
tcp_nodelay on;
WebSocket Support
For real-time apps:
location /ws {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;  # keep long-lived connections open for up to a day
}
Rate Limiting
Protect against abuse:
# Define a rate limit zone
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
server {
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://backend;
    }
}
This allows 10 requests per second per IP, with a burst of 20.
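By default, rejected requests get a 503. You can change the status code and add stricter zones for sensitive endpoints; a sketch (the zone name and path are arbitrary):

```nginx
# Respond 429 Too Many Requests instead of the default 503
limit_req_status 429;

# A stricter zone for login attempts
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

location /login {
    limit_req zone=login burst=5;
    proxy_pass http://backend;
}
```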
Security Headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;  # legacy — modern browsers ignore it
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'" always;
Troubleshooting
Common errors and how to fix them:
- 502 Bad Gateway — your backend isn’t running or Nginx can’t reach it. See our Nginx 502 fix
- Permission denied — Nginx can’t read files or connect to sockets. See our Nginx permission denied fix
- Config test failed — always test before reloading:
sudo nginx -t
Useful debug commands:
# Test config syntax
sudo nginx -t
# Reload without downtime
sudo nginx -s reload
# Check error logs
tail -f /var/log/nginx/error.log
# Check access logs
tail -f /var/log/nginx/access.log
Caching
Nginx can cache backend responses to reduce load:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app:10m max_size=1g inactive=60m;
server {
    location / {
        proxy_cache app;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://backend;
    }
}
The X-Cache-Status header tells you whether a response was a HIT, MISS, or EXPIRED — useful for debugging.
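You can also tell Nginx when not to serve from cache — for example, for logged-in users. A sketch (the session cookie name is an assumption):

```nginx
# Skip the cache for requests carrying a session cookie,
# and don't store responses to those requests either
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
```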
Access Control
Restrict access by IP or with basic auth:
# IP-based access
location /admin {
    allow 10.0.0.0/8;
    deny all;
    proxy_pass http://backend;
}

# Basic auth
location /staging {
    auth_basic "Staging Environment";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://backend;
}
Generate the password file:
sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd admin
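If apache2-utils isn’t available, openssl can produce a compatible entry (assuming openssl is installed; “admin” and “secret” are placeholders):

```shell
# Build a user:hash line in the Apache MD5 format nginx accepts
entry="admin:$(openssl passwd -apr1 'secret')"
echo "$entry"
# Append it to the auth file:
#   echo "$entry" | sudo tee -a /etc/nginx/.htpasswd
```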
Logging
Customize access logs for better observability:
log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';
access_log /var/log/nginx/access.log detailed;

# Disable logging for health checks
location /health {
    access_log off;
    return 200 'ok';
}
Rewrites and Redirects
Handle URL changes and clean URLs:
# Permanent redirect (301)
location /old-page {
    return 301 /new-page;
}

# Redirect with regex
location ~ ^/blog/(\d{4})/(.*)$ {
    return 301 /posts/$2;
}

# Rewrite (internal, URL stays the same in browser)
location /app {
    rewrite ^/app/(.*)$ /index.html break;
}

# SPA fallback — serve index.html for all routes
location / {
    try_files $uri $uri/ /index.html;
}
The SPA fallback is essential for React, Vue, and Angular apps where client-side routing handles the URLs.
CORS Headers
Enable cross-origin requests:
location /api/ {
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
    add_header Access-Control-Allow-Headers "Authorization, Content-Type";

    if ($request_method = OPTIONS) {
        return 204;
    }

    proxy_pass http://backend;
}
For production, replace * with your specific frontend domain.
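One way to allow several known frontends is a map that allow-lists origins (the domains are placeholders; map belongs in the http context):

```nginx
map $http_origin $cors_origin {
    default                       "";
    "https://app.example.com"     $http_origin;
    "https://staging.example.com" $http_origin;
}

server {
    location /api/ {
        add_header Access-Control-Allow-Origin $cors_origin always;
        proxy_pass http://backend;
    }
}
```

When $cors_origin is empty, Nginx omits the header entirely, so unknown origins get no CORS grant.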
Monitoring and Health Checks
# Basic health check endpoint
location /health {
    access_log off;
    default_type text/plain;  # add_header Content-Type won't override the default here
    return 200 'healthy\n';
}

# Nginx status page (for monitoring tools)
location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    deny all;
}
The stub_status module exposes active connections and cumulative accept/handled/request counters — raw numbers that tools like Prometheus can scrape and turn into rates.
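A response from /nginx_status is a small plain-text report along these lines:

```
Active connections: 2
server accepts handled requests
 29 29 140
Reading: 0 Writing: 1 Waiting: 1
```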
Docker and Nginx
Running Nginx in Docker is common for containerized deployments:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY dist/ /usr/share/nginx/html/
EXPOSE 80
For Docker Compose with a backend:
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
  app:
    build: .
    expose:
      - "3000"
In Docker networks, use the service name as the hostname: proxy_pass http://app:3000;
Learn more in our Docker complete guide.
Common Mistakes
A few patterns that trip up even experienced Nginx users:
Trailing slash matters. proxy_pass http://backend and proxy_pass http://backend/ behave differently. Without the trailing slash, Nginx passes the full URI. With it, Nginx strips the matched location prefix.
location /api/ {
    # Without trailing slash — request /api/users reaches the backend as /api/users
    proxy_pass http://backend;

    # With trailing slash, the same request would reach the backend as /users
    # (only one proxy_pass is allowed per location; shown commented for comparison)
    # proxy_pass http://backend/;
}
Don’t use if for routing. Nginx’s if directive is notoriously tricky inside location blocks. Use map or try_files instead when possible.
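For instance, instead of an if that branches on the user agent, derive a variable with map (in the http context) and use it where needed:

```nginx
# Map the condition to a variable once, in the http context
map $http_user_agent $is_bot {
    default         0;
    ~*(bot|crawler) 1;
}

server {
    location / {
        proxy_cache_bypass $is_bot;  # e.g. skip the cache for crawlers
        proxy_pass http://backend;
    }
}
```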
Always test before reloading. nginx -t catches syntax errors. A reload with a syntactically broken config fails safely (the old workers keep running), but a restart with one leaves Nginx down — and nginx -t can’t catch a config that’s valid but wrong, so test first either way.
Don’t forget to open firewall ports. On cloud servers, both the OS firewall (ufw/iptables) and the cloud provider’s security group need port 80/443 open.
Nginx vs Other Options
Nginx isn’t the only game in town. Caddy offers automatic HTTPS with zero config. Traefik integrates natively with Docker and Kubernetes. Apache is still widely used for PHP hosting. But Nginx remains the go-to for its performance, flexibility, and massive ecosystem of documentation and examples.
Next Steps
- Grab the Nginx cheat sheet for quick reference
- Generate configs with our Nginx config generator
- Set up a reverse proxy for your app
- Learn about Docker to containerize your Nginx setup