Add the following inside the http {} block:
proxy_cache_path D:/nginx-1.18.0/cache levels=1:2 keys_zone=cache_one:500m inactive=1d max_size=30g;
proxy_cache_key "$host$request_uri$cookie_user";
Then configure the server {} block:
server {
    listen 80;
    server_name vpseo.com www.vpseo.com;

    proxy_redirect off;
    server_name_in_redirect off;
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header REMOTE-HOST $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;

    location / {
        proxy_cache cache_one;
        proxy_cache_valid 200 304 302 6s;
        proxy_pass http://127.0.0.1:81;

        # Fine-tuning the cache and improving performance
        # https://www.nginx.com/blog/nginx-caching-guide/
        proxy_cache_revalidate on;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_hide_header Cache-Control;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
    }
}
Delivering Cached Content When the Origin is Down
A powerful feature of NGINX content caching is that NGINX can be configured to deliver stale content from its cache when it can’t get fresh content from the origin servers. This can happen if all the origin servers for a cached resource are down or temporarily busy. Rather than relay the error to the client, NGINX delivers the stale version of the file from its cache. This provides an extra level of fault tolerance for the servers that NGINX is proxying, and ensures uptime in the case of server failures or traffic spikes. To enable this functionality, include the proxy_cache_use_stale
directive:
location / {
# ...
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
}
With this sample configuration, if NGINX receives an error, timeout, or any of the specified 5xx errors from the origin server and it has a stale version of the requested file in its cache, it delivers the stale file instead of relaying the error to the client.
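To verify that stale content is actually being served, you can expose NGINX's built-in $upstream_cache_status variable in a response header; a value of STALE means the cached copy was delivered while the origin was unreachable. A minimal sketch (the cache zone name my_cache and the upstream address are placeholders, not part of the configuration above):

```nginx
location / {
    proxy_cache my_cache;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    proxy_pass http://127.0.0.1:81;

    # $upstream_cache_status reports MISS, HIT, EXPIRED, STALE, and so on,
    # making cache behavior observable from the client side.
    add_header X-Cache-Status $upstream_cache_status;
}
```

After reloading, `curl -I` against the site shows the X-Cache-Status header; stopping the origin and repeating the request should flip it from HIT to STALE.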
Fine‑Tuning the Cache and Improving Performance
NGINX has a wealth of optional settings for fine‑tuning the performance of the cache. Here is an example that activates a few of them:
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
inactive=60m use_temp_path=off;
server {
# ...
location / {
proxy_cache my_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 3;
proxy_cache_use_stale error timeout updating http_500 http_502
http_503 http_504;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_pass http://my_upstream;
}
}
These directives configure the following behavior:
- proxy_cache_revalidate instructs NGINX to use conditional GET requests when refreshing content from the origin servers. If a client requests an item that is cached but expired as defined by the cache control headers, NGINX includes the If-Modified-Since field in the header of the GET request it sends to the origin server. This saves on bandwidth, because the server sends the full item only if it has been modified since the time recorded in the Last-Modified header attached to the file when NGINX originally cached it.
- proxy_cache_min_uses sets the number of times an item must be requested by clients before NGINX caches it. This is useful if the cache is constantly filling up, as it ensures that only the most frequently accessed items are added to the cache. By default, proxy_cache_min_uses is set to 1.
- The updating parameter to the proxy_cache_use_stale directive, combined with enabling the proxy_cache_background_update directive, instructs NGINX to deliver stale content when clients request an item that is expired or is in the process of being updated from the origin server. All updates are done in the background. The stale file is returned for all requests until the updated file is fully downloaded.
- With proxy_cache_lock enabled, if multiple clients request a file that is not current in the cache (a MISS), only the first of those requests is allowed through to the origin server. The remaining requests wait for that request to be satisfied and then pull the file from the cache. Without proxy_cache_lock enabled, all requests that result in cache misses go straight to the origin server.
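The conditional-GET handshake that proxy_cache_revalidate relies on can be sketched outside NGINX. The following is a hypothetical, self-contained Python simulation of the If-Modified-Since / Last-Modified exchange between a cache and an origin; it is an illustration of the HTTP mechanism, not NGINX code:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# Hypothetical origin server: its resource was last modified at this time.
LAST_MODIFIED = datetime(2024, 1, 1, tzinfo=timezone.utc)

def origin_get(if_modified_since=None):
    """Return (status, body) for a GET, honoring If-Modified-Since."""
    if if_modified_since is not None:
        client_time = parsedate_to_datetime(if_modified_since)
        if LAST_MODIFIED <= client_time:
            return 304, None           # Not Modified: headers only, no body
    return 200, b"full response body"  # send the complete item

# First fetch: nothing cached yet, so the cache sends an unconditional GET
# and stores both the body and the Last-Modified validator.
status, body = origin_get()
cached_body = body
cached_validator = format_datetime(LAST_MODIFIED, usegmt=True)

# Later, the cached copy has expired: revalidate with a conditional GET
# carrying the stored Last-Modified time, exactly as proxy_cache_revalidate does.
status, body = origin_get(if_modified_since=cached_validator)
if status == 304:
    body = cached_body  # reuse the cached copy; only headers crossed the wire
print(status, body)
```

The bandwidth saving comes from the 304 branch: the origin confirms the cached copy is still current without retransmitting the body.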