- Introduction
- External Resources
- Helpers
- Shell aliases
- Debugging
- See the top 5 IP addresses in a web server log
- Analyse web server log and show only 2xx http codes
- Analyse web server log and show only 5xx http codes
- Get range of dates in a web server log
- Get line rates from web server log
- Trace network traffic for all Nginx processes
- List all files accessed by a Nginx
- Base rules
- Organising Nginx configuration
- Separate listen directives for 80 and 443
- Prevent processing requests with undefined server names
- Use only one SSL config for specific listen directive
- Force all connections over TLS
- Use geo/map modules instead of allow/deny
- Map all the things...
- Drop the same root inside location block
- Use debug mode for debugging
- Use custom log formats
- Performance
- Hardening
- Run as an unprivileged user
- Disable unnecessary modules
- Protect sensitive resources
- Hide Nginx version number
- Hide Nginx server signature
- Hide upstream proxy headers
- Use only 4096-bit private keys
- Keep only TLS 1.2 (+ TLS 1.3)
- Use only strong ciphers
- Use more secure ECDH Curve
- Use strong Key Exchange
- Defend against the BEAST attack
- Disable HTTP compression (mitigation of CRIME/BREACH attacks)
- HTTP Strict Transport Security
- Reduce XSS risks (Content-Security-Policy)
- Control the behavior of the Referer header (Referrer-Policy)
- Provide clickjacking protection (X-Frame-Options)
- Prevent some categories of XSS attacks (X-XSS-Protection)
- Prevent Sniff Mimetype middleware (X-Content-Type-Options)
- Deny the use of browser features (Feature-Policy)
- Reject unsafe HTTP methods
- Control Buffer Overflow attacks
- Mitigating Slow HTTP DoS attack (Closing Slow Connections)
- Configuration examples
Before using Nginx, please read the Beginner’s Guide.
Nginx (/ˌɛndʒɪnˈɛks/ EN-jin-EKS) is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by Igor Sysoev. For a long time, it has been running on many heavily loaded Russian sites including Yandex, Mail.Ru, VK, and Rambler.
To increase your knowledge, read Nginx Documentation.
This is not an official handbook. Many of these rules refer to external resources. It is rather a quick collection of rules that I use in production environments (and not only there).
Before you start remember about the two most important things:
Do not follow guides just to get 100% of something. Think about what you actually do on your server!
These guidelines provide recommendations for a very restrictive setup.
If you find something that doesn't make sense, doesn't seem right, or seems really stupid, please make a pull request and add valid, well-reasoned explanations for your changes or comments.
Before adding a pull request, please see this.
Many of these recipes have been applied to the configuration of my private website. I finally got all 100%'s on my scores:
An example configuration is in this chapter.
Hardening checklist based on these recipes (@ssllabs A+ 100%) - High-Res 5000x8200.
:black_small_square: Nginx Project
:black_small_square: Nginx Documentation
:black_small_square: Nginx official read-only mirror
:black_small_square: Nginx boilerplate configs
:black_small_square: Awesome Nginx configuration template
:black_small_square: A collection of resources covering Nginx and more
:black_small_square: Nginx Secure Web Server
:black_small_square: Emiller’s Guide To Nginx Module Development
:black_small_square: Nginx Cheatsheet
:black_small_square: Nginx Quick Reference
:black_small_square: SSL/TLS Deployment Best Practices
:black_small_square: SSL Server Rating Guide
:black_small_square: How to Build a Tough NGINX Server in 15 Steps
:black_small_square: Top 25 Nginx Web Server Best Security Practices
:black_small_square: Strong SSL Security on Nginx
:black_small_square: Nginx Tuning For Best Performance by Denji
:black_small_square: Enable cross-origin resource sharing (CORS)
:black_small_square: TLS has exactly one performance problem: it is not used widely enough
:black_small_square: WAF for Nginx
:black_small_square: ModSecurity for Nginx
:black_small_square: Transport Layer Protection Cheat Sheet
:black_small_square: Security/Server Side TLS
:black_small_square: Nginx config generator on steroids
:black_small_square: Nginx static analyzer
:black_small_square: GoAccess
:black_small_square: Graylog
:black_small_square: Logstash
:black_small_square: ngxtop
:black_small_square: siege
:black_small_square: wrk
:black_small_square: bombardier
:black_small_square: gobench
:black_small_square: SSL Server Test by SSL Labs
:black_small_square: SSL/TLS Capabilities of Your Browser
:black_small_square: Test SSL/TLS (PCI DSS, HIPAA and NIST)
:black_small_square: SSL analyzer and certificate checker
:black_small_square: Test your TLS server configuration (e.g. ciphers)
:black_small_square: Scan your website for non-secure content
:black_small_square: Strong ciphers for Apache, Nginx, Lighttpd and more
:black_small_square: Analyse the HTTP response headers by Security Headers
:black_small_square: Analyze your website by Mozilla Observatory
:black_small_square: Linting tool that will help you with your site's accessibility, speed, security and more
:black_small_square: Service to scan and analyse websites
:black_small_square: Online tool to learn, build, & test Regular Expressions
:black_small_square: Online Regex Tester & Debugger
:black_small_square: User agent compatibility (Cipher suite)
:black_small_square: BBC Digital Media Distribution: How we improved throughput by 4x
:black_small_square: Web cache server performance benchmark: nuster vs nginx vs varnish vs squid
alias ng.test='nginx -t -c /etc/nginx/nginx.conf'
alias ng.stop='ng.test && systemctl stop nginx'
alias ng.reload='ng.test && systemctl reload nginx'
alias ng.restart='ng.test && systemctl restart nginx'
# or
alias ng.restart='ng.test && kill -HUP `cat /var/run/nginx.pid`'
# Top 5 IP addresses in a web server log:
cut -d ' ' -f1 /path/to/logfile | sort | uniq -c | sort -nr | head -5 | nl
# Show only 2xx HTTP codes:
tail -n 100 -f /path/to/logfile | grep "HTTP/[1-2].[0-1]\" [2]"
# Show only 5xx HTTP codes:
tail -n 100 -f /path/to/logfile | grep "HTTP/[1-2].[0-1]\" [5]"
# Get range of dates in a web server log:
# 1) from one hour ago until now:
awk '/'$(date -d "1 hours ago" "+%d\\/%b\\/%Y:%H:%M")'/,/'$(date "+%d\\/%b\\/%Y:%H:%M")'/ { print $0 }' /path/to/logfile
# 2) an explicit range:
awk '/05\/Feb\/2019:09:2.*/,/05\/Feb\/2019:09:5.*/' /path/to/logfile
# Get line rates from a web server log:
tail -F /path/to/logfile | pv -N RAW -lc 1>/dev/null
# Trace network traffic for all Nginx processes:
strace -e trace=network -p `pidof nginx | sed -e 's/ /,/g'`
# List all files accessed by Nginx:
strace -ff -e trace=file nginx 2>&1 | perl -ne 's/^[^"]+"(([^\\"]|\\[\\"nt])*)".*/$1/ && print'
When your configuration grows, the need to organise your code grows with it. Well-organised code is:
- easier to understand
- easier to maintain
- easier to work with
Use the `include` directive to attach your Nginx-specific code to the global config, contexts, and other files.
# Store this configuration in e.g. https-ssl-common.conf
listen 10.240.20.2:443 ssl;
root /etc/nginx/error-pages/other;
ssl_certificate /etc/nginx/domain.com/certs/nginx_domain.com_bundle.crt;
ssl_certificate_key /etc/nginx/domain.com/certs/domain.com.key;
# And include this file in server section:
server {
include /etc/nginx/domain.com/commons/https-ssl-common.conf;
server_name domain.com www.domain.com;
...
}
# For http:
server {
listen 10.240.20.2:80;
...
}
# For https:
server {
listen 10.240.20.2:443 ssl;
...
}
Nginx should prevent processing requests with undefined server names - this also covers requests sent directly to the IP address. It protects against configuration errors, such as passing traffic to incorrect backends. The problem is easily solved by creating a default catch-all server config.
If none of the listen directives have the `default_server` parameter, then the first server with the address:port pair will be the default server for this pair.
If someone makes a request using an IP address instead of a server name, the `Host` request header field will contain the IP address, and the request can be handled using the IP address as the server name.
I think the best solution is `return 444;` for the default server, because this will close the connection and log it internally, for any domain that isn't defined in Nginx.
# Place it at the beginning of the configuration file to prevent mistakes.
server {
# Add default_server to your listen directive in the server that you want to act as the default.
listen 10.240.20.2:443 default_server ssl;
# We catch invalid domain names, requests without the "Host" header and all others (also due to the above setting).
server_name _ "" default_server;
...
return 444;
# We can also serve:
# location / {
# static file (error page):
# root /etc/nginx/error-pages/404;
# or redirect:
# return 301 https://badssl.com;
# return 444;
# }
}
server {
listen 10.240.20.2:443 ssl;
server_name domain.com;
...
}
server {
listen 10.240.20.2:443 ssl;
server_name domain.org;
...
}
When sharing a single IP address between several HTTPS servers, you should use one SSL config (e.g. protocols, ciphers, curves), because changes to these parameters take effect only for the default server.
Remember that, regardless of the shared SSL parameters, each server is still able to use its own SSL certificate (see the sketch after the example below).
Setting up different SSL configurations for the same IP address will fail. This is important because the SSL configuration is taken from the default server - and if none of the listen directives have the `default_server` parameter, that is simply the first server in your configuration. So you should use only one SSL setup for several names on the same IP address.
# Store this configuration in e.g. https.conf
listen 192.168.252.10:443 default_server ssl http2;
ssl_protocols TLSv1.2;
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384";
ssl_prefer_server_ciphers on;
ssl_ecdh_curve secp521r1:secp384r1;
...
# Include this file to the server context (attach domain-a.com for specific listen directive)
server {
include /etc/nginx/https.conf;
server_name domain-a.com;
...
}
# Include this file to the server context (attach domain-b.com for specific listen directive)
server {
include /etc/nginx/https.conf;
server_name domain-b.com;
...
}
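As noted above, each server block can still load its own SSL certificate while sharing the remaining SSL setup. A minimal sketch (the certificate paths are illustrative):

server {
  include /etc/nginx/https.conf;
  server_name domain-a.com;
  # Hypothetical per-domain certificate paths:
  ssl_certificate /etc/nginx/domain-a.com/certs/nginx_domain-a.com_bundle.crt;
  ssl_certificate_key /etc/nginx/domain-a.com/certs/domain-a.com.key;
  ...
}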
You should always use HTTPS instead of HTTP to protect your website, even if it doesn’t handle sensitive communications.
server {
listen 10.240.20.2:80;
server_name domain.com;
return 301 https://$host$request_uri;
}
server {
listen 10.240.20.2:443 ssl;
server_name domain.com;
...
}
The geo and map modules create variables whose values depend on the client IP address. Use one of them to prevent users from abusing your servers.
# Map module:
map $remote_addr $globals_internal_map_acl {
# Status code:
# - 0 = false
# - 1 = true
default 0;
### INTERNAL ###
10.255.10.0/24 1;
10.255.20.0/24 1;
10.255.30.0/24 1;
192.168.0.0/16 1;
}
# Geo module:
geo $globals_internal_geo_acl {
# Status code:
# - 0 = false
# - 1 = true
default 0;
### INTERNAL ###
10.255.10.0/24 1;
10.255.20.0/24 1;
10.255.30.0/24 1;
192.168.0.0/16 1;
}
Manage a large number of redirects with Nginx maps (see the sketch after the example below).
The map module also provides a more elegant solution for cleanly parsing a big list of regexes, e.g. User-Agents.
map $http_user_agent $device_redirect {
default "desktop";
~(?i)ip(hone|od) "mobile";
~(?i)android.*(mobile|mini) "mobile";
~Mobile.+Firefox "mobile";
~^HTC "mobile";
~Fennec "mobile";
~IEMobile "mobile";
~BB10 "mobile";
~SymbianOS.*AppleWebKit "mobile";
~Opera\sMobi "mobile";
}
if ($device_redirect = "mobile") {
return 301 https://m.domain.com$request_uri;
}
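Returning to the redirect use case mentioned above, a minimal sketch might look like this (the map name and URIs are illustrative):

# Hypothetical example - map old URIs to their new locations:
map $request_uri $redirect_uri {
  default           "";
  /old-page         /new-page;
  /old-blog/post-1  /blog/post-1;
}

server {
  ...
  # Issue a permanent redirect only when the map produced a target:
  if ($redirect_uri) {
    return 301 $redirect_uri;
  }
}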
If you add a root to every location block, then a location block that isn't matched will have no root. Set a global `root` inside the server directive.
server {
server_name domain.com;
root /var/www/domain.com/public;
location / {
...
}
location /api {
...
}
location /static {
root /var/www/domain.com/static;
...
}
}
There's probably more detail than you want, but that can sometimes be a lifesaver (note that the log file grows rapidly on very high-traffic sites).
rewrite_log on;
error_log /var/log/nginx/error-debug.log debug;
Anything you can access as a variable in the Nginx config can be logged, including non-standard HTTP headers, so it's a simple way to create your own log format for specific situations. This is extremely helpful for debugging specific `location` directives (a usage sketch follows the formats below).
# Default main log format from nginx repository:
log_format main
'$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Extended main log format:
log_format main-level-0
'$remote_addr - $remote_user [$time_local] '
'"$request_method $scheme://$host$request_uri '
'$server_protocol" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$request_time';
# Debug log formats:
log_format debug-level-0
'$remote_addr - $remote_user [$time_local] '
'"$request_method $scheme://$host$request_uri '
'$server_protocol" $status $body_bytes_sent '
'$request_id $pid $msec $request_time '
'$upstream_connect_time $upstream_header_time '
'$upstream_response_time "$request_filename" '
'$request_completion';
log_format debug-level-1
'$remote_addr - $remote_user [$time_local] '
'"$request_method $scheme://$host$request_uri '
'$server_protocol" $status $body_bytes_sent '
'$request_id $pid $msec $request_time '
'$upstream_connect_time $upstream_header_time '
'$upstream_response_time "$request_filename" $request_length '
'$request_completion $connection $connection_requests';
log_format debug-level-2
'$remote_addr - $remote_user [$time_local] '
'"$request_method $scheme://$host$request_uri '
'$server_protocol" $status $body_bytes_sent '
'$request_id $pid $msec $request_time '
'$upstream_connect_time $upstream_header_time '
'$upstream_response_time "$request_filename" $request_length '
'$request_completion $connection $connection_requests '
'$server_addr $server_port $remote_addr $remote_port';
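Any of these formats can then be attached to an access_log wherever it is needed, for example only inside the location being debugged (the log path and location below are illustrative):

server {
  ...
  location /api {
    # Log requests hitting this location with the extended debug format:
    access_log /var/log/nginx/domain.com-api-debug.log debug-level-0;
    ...
  }
}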
- Module ngx_http_log_module
- Nginx: Custom access log format and error levels
- nginx: Log complete request/response with all headers?
The `worker_processes` directive is the sturdy spine of life for Nginx. This directive is responsible for letting our virtual server know how many workers to spawn once it has become bound to the proper IP and port(s).
The official Nginx documentation says:
When one is in doubt, setting it to the number of available CPU cores would be a good start (the value "auto" will try to autodetect it).
I think that for high-load proxy servers (and standalone servers) a good value is `ALL_CORES - 1` (please test it before use).
# VCPU = 4 , expr $(nproc --all) - 1
worker_processes 3;
HTTP/2 will make our applications faster, simpler, and more robust.
The primary goals for HTTP/2 are to reduce latency by enabling full request and response multiplexing, minimize protocol overhead via efficient compression of HTTP header fields, and add support for request prioritization and server push.
HTTP/2 is backwards-compatible with HTTP/1.1, so it would be possible to ignore it completely and everything will continue to work as before.
# For https:
server {
listen 10.240.20.2:443 ssl http2;
...
}
- Introduction to HTTP/2
- What is HTTP/2 - The Ultimate Guide
- The HTTP/2 Protocol: Its Pros & Cons and How to Start Using It
Enabling SSL session resumption improves performance from the clients’ perspective, because it eliminates the need for a new (and time-consuming) SSL handshake each time a request is made.
Most servers do not purge sessions or ticket keys, thus increasing the risk that a server compromise would leak data from previous (and future) connections.
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 24h;
ssl_session_tickets off;
ssl_buffer_size 1400;
Exact names, wildcard names starting with an asterisk, and wildcard names ending with an asterisk are stored in three hash tables bound to the listen ports.
The exact names hash table is searched first. If a name is not found, the hash table with wildcard names starting with an asterisk is searched. If the name is not found there, the hash table with wildcard names ending with an asterisk is searched. Searching wildcard names hash table is slower than searching exact names hash table because names are searched by domain parts.
Regular expressions are tested sequentially and therefore are the slowest method and are non-scalable. For these reasons, it is better to use exact names where possible.
# It is more efficient to define them explicitly:
server {
listen 80;
server_name example.org www.example.org *.example.org;
...
}
# than to use the simplified form:
server {
listen 80;
server_name .example.org;
...
}
There is no real security difference just from changing the process owner name. On the other hand, the principle of least privilege states that an entity should be given no more permission than necessary to accomplish its goals within a given system. This way, only the master process runs as root.
# Edit nginx.conf:
user www-data;
# Set owner and group for root (app, default) directory:
chown -R www-data:www-data /var/www/domain.com
It is recommended to disable any modules which are not required as this will minimize the risk of any potential attacks by limiting the operations allowed by the web server.
# During installation:
./configure --without-http_autoindex_module
# Comment modules in the configuration file e.g. modules.conf:
# load_module /usr/share/nginx/modules/ndk_http_module.so;
# load_module /usr/share/nginx/modules/ngx_http_auth_pam_module.so;
# load_module /usr/share/nginx/modules/ngx_http_cache_purge_module.so;
# load_module /usr/share/nginx/modules/ngx_http_dav_ext_module.so;
load_module /usr/share/nginx/modules/ngx_http_echo_module.so;
# load_module /usr/share/nginx/modules/ngx_http_fancyindex_module.so;
load_module /usr/share/nginx/modules/ngx_http_geoip_module.so;
load_module /usr/share/nginx/modules/ngx_http_headers_more_filter_module.so;
# load_module /usr/share/nginx/modules/ngx_http_image_filter_module.so;
# load_module /usr/share/nginx/modules/ngx_http_lua_module.so;
load_module /usr/share/nginx/modules/ngx_http_perl_module.so;
# load_module /usr/share/nginx/modules/ngx_mail_module.so;
# load_module /usr/share/nginx/modules/ngx_nchan_module.so;
# load_module /usr/share/nginx/modules/ngx_stream_module.so;
Hidden directories and files should never be web accessible.
if ($request_uri ~ "/\.git") {
return 403;
}
# or
location ~ /\.git {
deny all;
}
# or
location ~* ^.*(\.(?:git|svn|htaccess))$ {
return 403;
}
# or all . directories/files excepted .well-known
location ~ /\.(?!well-known\/) {
deny all;
}
Disclosing the version of Nginx running can be undesirable, particularly in environments sensitive to information disclosure.
The "Official Apache Documentation (Apache Core Features)" says:
Setting ServerTokens to less than minimal is not recommended because it makes it more difficult to debug interoperational problems. Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of "security through obscurity" is a myth and leads to a false sense of safety.
server_tokens off;
In my opinion there is no real reason or need to show this much information about your server. It is easy to look up particular vulnerabilities once you know the version number.
You should compile Nginx from source with `ngx_headers_more` to use the `more_set_headers` directive (a build sketch follows the example below).
more_set_headers "Server: Unknown";
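A rough sketch of building Nginx with this module (the module version and paths are illustrative - adjust them to your environment):

# Download and unpack the headers-more module:
wget https://github.com/openresty/headers-more-nginx-module/archive/v0.33.tar.gz
tar -xzvf v0.33.tar.gz

# Compile the module into Nginx:
./configure --add-module=/path/to/headers-more-nginx-module-0.33
make && make install

# Or build it as a dynamic module (Nginx >= 1.9.11) and load it in nginx.conf:
# ./configure --add-dynamic-module=/path/to/headers-more-nginx-module-0.33
# load_module /usr/share/nginx/modules/ngx_http_headers_more_filter_module.so;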
- Shhh... don’t let your response headers talk too loudly
- How to change (hide) the Nginx Server Signature?
When Nginx is used to proxy requests to an upstream server (such as a PHP-FPM instance), it can be beneficial to hide certain headers sent in the upstream response (e.g. the version of PHP running).
proxy_hide_header X-Powered-By;
proxy_hide_header X-AspNetMvc-Version;
proxy_hide_header X-AspNet-Version;
proxy_hide_header X-Drupal-Cache;
Advisories recommend 2048 for now. Security experts are projecting that 2048 bits will be sufficient for commercial use until around the year 2030.
Generally there is no compelling reason to choose 4096 bit keys over 2048 provided you use sane expiration intervals.
If you want to get A+ with 100%s on SSL Labs, you should definitely use a 4096-bit private key.
I always generate 4096-bit keys for low-traffic sites, since the downside is minimal (slightly lower performance) and the security is slightly higher (although not as high as one would like).
An alternative solution is an ECC Certificate Signing Request (CSR).
The "SSL/TLS Deployment Best Practices" book says:
The cryptographic handshake, which is used to establish secure connections, is an operation whose cost is highly influenced by private key size. Using a key that is too short is insecure, but using a key that is too long will result in “too much” security and slow operation. For most web sites, using RSA keys stronger than 2048 bits and ECDSA keys stronger than 256 bits is a waste of CPU power and might impair user experience. Similarly, there is little benefit to increasing the strength of the ephemeral key exchange beyond 2048 bits for DHE and 256 bits for ECDHE.
Konstantin Ryabitsev (Reddit):
Generally speaking, if we ever find ourselves in a world where 2048-bit keys are no longer good enough, it won't be because of improvements in brute-force capabilities of current computers, but because RSA will be made obsolete as a technology due to revolutionary computing advances. If that ever happens, 3072 or 4096 bits won't make much of a difference anyway. This is why anything above 2048 bits is generally regarded as a sort of feel-good hedging theatre.
### Example (RSA):
( _fd="domain.com.key" ; _len="4096" ; openssl genrsa -out ${_fd} ${_len} )
# Let's Encrypt:
certbot certonly -d domain.com -d www.domain.com --rsa-key-size 4096
### Example (ECC):
# _curve: prime256v1, secp521r1, secp384r1
( _fd="domain.com.key" ; _fd_csr="domain.com.csr" ; _curve="prime256v1" ; \
openssl ecparam -out ${_fd} -name ${_curve} -genkey ; openssl req -new -key ${_fd} -out ${_fd_csr} -sha256 )
# Let's Encrypt (from above):
certbot --csr ${_fd_csr} -[other-args]
For `x25519`:
( _fd="private.key" ; _curve="x25519" ; \
openssl genpkey -algorithm ${_curve} -out ${_fd} )
ssllabs score: 100
( _fd="domain.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} )
# Let's Encrypt:
certbot certonly -d domain.com -d www.domain.com
ssllabs score: 90
It is recommended to keep only TLS 1.2 (and TLS 1.3) and to fully disable SSLv2, SSLv3 and TLS 1.0, which have protocol weaknesses.
TLS 1.1 and 1.2 have no known protocol weaknesses, but only 1.2 provides modern cryptographic algorithms, and the TLS 1.0 and TLS 1.1 protocols will be removed from browsers at the beginning of 2020.
ssl_protocols TLSv1.2;
# For TLS 1.3
ssl_protocols TLSv1.2 TLSv1.3;
ssllabs score: 100
ssl_protocols TLSv1.2 TLSv1.1;
ssllabs score: 95
- TLS/SSL Explained – Examples of a TLS Vulnerability and Attack, Final Part
- Deprecating TLS 1.0 and 1.1 - Enhancing Security for Everyone
- TLS1.3 - OpenSSLWiki
- How to enable TLS 1.3 on Nginx
This parameter changes quite often; the recommended configuration for today may be out of date tomorrow.
For more security, use only strong and non-vulnerable cipher suites (but if you use HTTP/2 you can get a `Server sent fatal alert: handshake_failure` error).
Place `ECDHE` and `DHE` suites at the top of your list. The order is important; because `ECDHE` suites are faster, you want to use them whenever clients support them.
For backward compatibility with older software components, you should use less restrictive ciphers.
You should definitely disable weak ciphers like those with `DSS`, `DSA`, `DES/3DES`, `RC4`, `MD5`, `SHA1`, `null`, or anon in the name.
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384";
ssllabs score: 100
# 1)
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256";
# 2)
ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:!AES256-GCM-SHA256:!AES256-GCM-SHA128:!aNULL:!MD5";
ssllabs score: 90
Ciphersuite for TLS 1.3:
ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256";
- SSL/TLS: How to choose your cipher suite
- HTTP/2 and ECDSA Cipher Suites
- Which SSL/TLS Protocol Versions and Cipher Suites Should I Use?
- Why use Ephemeral Diffie-Hellman
- Differences between TLS 1.2 and TLS 1.3
For an SSL server certificate, an "elliptic curve" certificate will be used only with digital signatures (`ECDSA` algorithm).
`x25519` is a more secure but slightly less compatible option. To maximise interoperability with existing browsers and servers, stick to the `P-256` (`prime256v1`) and `P-384` (`secp384r1`) curves.
NSA Suite B says that the NSA uses curves `P-256` and `P-384` (in OpenSSL, they are designated as, respectively, `prime256v1` and `secp384r1`). There is nothing wrong with `P-521`, except that it is, in practice, useless. Arguably, `P-384` is also useless, because the more efficient `P-256` curve already provides security that cannot be broken through accumulation of computing power.
Use `P-256` to minimize trouble. If you feel that your manhood is threatened by using a 256-bit curve where a 384-bit curve is available, then use `P-384`: it will increase your computational and network costs.
If you do not set `ssl_ecdh_curve`, then Nginx will use its default settings, e.g. Chrome will prefer `x25519`, but this is not recommended because you cannot control Nginx's default settings (it seems to be `P-256`).
Explicitly setting `ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1;` decreases the Key Exchange SSL Labs rating.
Definitely do not use the `secp112r1`, `secp112r2`, `secp128r1`, `secp128r2`, `secp160k1`, `secp160r1`, `secp160r2`, `secp192k1` curves. They have too small a size for security applications according to NIST recommendations.
ssl_ecdh_curve secp521r1:secp384r1;
ssllabs score: 100
# Alternative (this one doesn’t affect compatibility, by the way; it’s just a question of the preferred order). This setup downgrades the Key Exchange score:
ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1;
- Standards for Efficient Cryptography Group
- SafeCurves: choosing safe curves for elliptic-curve cryptography
- P-521 is pretty nice prime
- Safe ECC curves for HTTPS are coming sooner than you think
- Cryptographic Key Length Recommendations
- Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection (OTG-CRYPST-001)
- Elliptic Curve performance: NIST vs Brainpool
- Which elliptic curve should I use?
The DH key is only used if DH ciphers are used. Modern clients prefer `ECDHE` instead, and if your Nginx accepts this preference then the handshake will not use the DH param at all, since it will not do a `DHE` key exchange but an `ECDHE` key exchange.
Most of the "modern" profiles from places like Mozilla's ssl config generator no longer recommend using this.
The default key size in OpenSSL is `1024 bits` - it's vulnerable and breakable. For the best security configuration, use your own `4096 bit` DH group or use one of the known-safe pre-defined DH groups (recommended) from Mozilla.
# To generate a DH key:
openssl dhparam -out /etc/nginx/ssl/dhparam_4096.pem 4096
# To produce "DSA-like" DH parameters:
openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam_4096.pem 4096
# To generate a ECDH key:
openssl ecparam -out /etc/nginx/ssl/ecparam.pem -name prime256v1
# Nginx configuration:
ssl_dhparam /etc/nginx/ssl/dhparam_4096.pem;
ssllabs score: 100
- Weak Diffie-Hellman and the Logjam Attack
- Guide to Deploying Diffie-Hellman for TLS
- Pre-defined DHE groups
- Instructs OpenSSL to produce "DSA-like" DH parameters
- OpenSSL generate different types of self signed certificate
Enables server-side protection from BEAST attacks.
ssl_prefer_server_ciphers on;
You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyways. Disabling SSL/TLS compression stops the attack very effectively.
Some attacks are possible because of gzip (HTTP compression not TLS compression) being enabled on SSL requests. In most cases, the best action is to simply disable gzip for SSL.
You shouldn't use HTTP compression on private responses when using TLS.
Compression can (I think) be okay for publicly available static content like CSS or JS, and for HTML content with zero sensitive info (like an "About Us" page); a sketch of this split follows below.
gzip off;
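If you do want to keep compression for public, non-sensitive static assets only, a minimal sketch could look like this (the location and MIME types are illustrative):

# Keep compression off globally so TLS responses stay uncompressed:
gzip off;

server {
  ...
  # Re-enable gzip only for public static assets that contain no sensitive data:
  location /static/ {
    gzip on;
    gzip_types text/css application/javascript image/svg+xml;
    ...
  }
}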
- Is HTTP compression safe?
- HTTP compression continues to put encrypted communications at risk
- SSL/TLS attacks: Part 2 – CRIME Attack
- To avoid BREACH, can we use gzip on non-token responses?
The header indicates how long a browser should unconditionally refuse to take part in unsecured HTTP connections for a specific domain.
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always;
ssllabs score: A+
CSP reduces the risk and impact of XSS attacks in modern browsers.
# This policy allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load.
add_header Content-Security-Policy "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';" always;
Determines what information is sent along with the requests.
add_header Referrer-Policy "no-referrer";
Helps to protect your visitors against clickjacking attacks. It is recommended that you use the `X-Frame-Options` header on pages which should not be allowed to be rendered inside a frame.
add_header X-Frame-Options "SAMEORIGIN" always;
Enable the cross-site scripting (XSS) filter built into modern web browsers.
add_header X-XSS-Protection "1; mode=block" always;
It prevents the browser from doing MIME-type sniffing (prevents "mime" based attacks).
add_header X-Content-Type-Options "nosniff" always;
This header protects your site from third parties using APIs that have security and privacy implications, and also from your own team adding outdated APIs or poorly optimized images.
add_header Feature-Policy "geolocation 'none'; midi 'none'; notifications 'none'; push 'none'; sync-xhr 'none'; microphone 'none'; camera 'none'; magnetometer 'none'; gyroscope 'none'; speaker 'none'; vibrate 'none'; fullscreen 'self'; payment 'none'; usb 'none';";
Set of methods supported by a resource. An ordinary web server supports the `HEAD`, `GET` and `POST` methods to retrieve static and dynamic content. Other methods (e.g. `OPTIONS`, `TRACE`) should not be supported on public web servers, as they increase the attack surface.
add_header Allow "GET, POST, HEAD" always;
if ($request_method !~ ^(GET|POST|HEAD)$) {
return 405;
}
Buffer overflow attacks are made possible by writing data to a buffer and exceeding that buffer’s boundary, overwriting memory fragments of a process. To prevent this in Nginx we can set buffer size limitations for all clients.
client_body_buffer_size 100k;
client_header_buffer_size 1k;
client_max_body_size 100k;
large_client_header_buffers 2 1k;
Close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible.
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 5s 5s;
send_timeout 10s;
Remember to make a copy of the current configuration and all files/directories.
Before reading this configuration, remember the Nginx context structure (an illustrative skeleton follows the list below):
Core Contexts:
- Global/Main Context
- Events Context
- HTTP Context
- Server Context
- Location Context
- Upstream Context
- Mail Context
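An illustrative skeleton of how these contexts nest (not a complete, working configuration):

user www-data;                   # Global/Main context

events {                         # Events context
  worker_connections 1024;
}

http {                           # HTTP context
  upstream backend {             # Upstream context
    server 127.0.0.1:8080;
  }

  server {                       # Server context
    listen 80;

    location / {                 # Location context
      proxy_pass http://backend;
    }
  }
}

# mail { ... }                   # Mail context (only if mail proxying is used)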
This chapter describes the basic configuration of my proxy server (for blkcipher.info domain).
It's very simple - clone the repo and perform full directory sync:
git clone https://github.com/trimstray/nginx-quick-reference.git
rsync -avur --delete lib/nginx/ /etc/nginx/
To keep your existing configuration (not recommended), remove the `--delete` rsync parameter.
cd /etc/nginx
find . -depth -name '*192.168.252.2*' -execdir bash -c 'mv -v "$1" "${1//192.168.252.2/xxx.xxx.xxx.xxx}"' _ {} \;
cd /etc/nginx
find . -type f -print0 | xargs -0 sed -i 's/192.168.252.2/xxx.xxx.xxx.xxx/g'
cd /etc/nginx
find . -depth -name '*blkcipher.info*' -execdir bash -c 'mv -v "$1" "${1//blkcipher.info/example.com}"' _ {} \;
cd /etc/nginx
find . -type f -print0 | xargs -0 sed -i 's/blkcipher_info/example_com/g'
find . -type f -print0 | xargs -0 sed -i 's/blkcipher.info/example.com/g'
cd /etc/nginx/master/_server/localhost/certs
# Private key + Self-signed certificate
( _fd="localhost.key" ; _fd_crt="nginx_localhost_bundle.crt" ; \
openssl req -x509 -newkey rsa:4096 -keyout ${_fd} -out ${_fd_crt} -days 365 -nodes \
-subj "/C=X0/ST=localhost/L=localhost/O=localhost/OU=X00/CN=localhost" )
cd /etc/nginx/master/_server/defaults/certs
# Private key + Self-signed certificate
( _fd="defaults.key" ; _fd_crt="nginx_defaults_bundle.crt" ; \
openssl req -x509 -newkey rsa:4096 -keyout ${_fd} -out ${_fd_crt} -days 365 -nodes \
-subj "/C=X1/ST=default/L=default/O=default/OU=X11/CN=default_server" )
cd /etc/nginx/master/_server/example.com/certs
# For multidomain:
certbot certonly -d example.com -d www.example.com --rsa-key-size 4096
# For wildcard:
certbot certonly --manual --preferred-challenges=dns -d example.com -d *.example.com --rsa-key-size 4096
# Copy private key and chain:
cp /etc/letsencrypt/live/example.com/fullchain.pem nginx_example.com_bundle.crt
cp /etc/letsencrypt/live/example.com/privkey.pem example.com.key
# At the end of the file (in 'IPS/DOMAINS' section)
include /etc/nginx/master/_server/domain.com/servers.conf;
include /etc/nginx/master/_server/domain.com/backends.conf;
cd /etc/nginx/master/_server
cp -R example.com domain.com
cd domain.com
find . -depth -name '*example.com*' -execdir bash -c 'mv -v "$1" "${1//example.com/domain.com}"' _ {} \;
find . -type f -print0 | xargs -0 sed -i 's/example_com/domain_com/g'
find . -type f -print0 | xargs -0 sed -i 's/example.com/domain.com/g'
nginx -t -c /etc/nginx/nginx.conf