
I am getting this kind of error:

2014/05/24 11:49:06 [error] 8376#0: *54031 upstream sent too big header while reading response header from upstream, client:, server:, request: "GET /the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20https:/,%20ht

It is always the same: a URL repeated over and over, separated by commas. I can't figure out what is causing this. Anyone have an idea?

Update: Another error:

http request count is zero while sending response to client

Here is the config. There are other, irrelevant parts, but this section was added/edited:

fastcgi_cache_path /var/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;

    # Upstream to abstract backend connection(s) for PHP.
    upstream php {
            # This should match the value of the "listen" directive in the php-fpm pool.
            server unix:/var/run/php5-fpm.sock;
    }

And then in the server block: set $skip_cache 0;

    # POST requests and URLs with a query string should always go to PHP
    if ($request_method = POST) {
            set $skip_cache 1;
    }
    if ($query_string != "") {
            set $skip_cache 1;
    }

    # Don't cache URIs containing the following segments
    if ($request_uri ~* "/wp-admin/|/xmlrpc\.php|wp-.*\.php|/feed/|index\.php|sitemap(_index)?\.xml") {
            set $skip_cache 1;
    }

    # Don't use the cache for logged-in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
            set $skip_cache 1;
    }

    location / {
            # This is cool because no PHP is touched for static content.
            # Include the "?$args" part so non-default permalinks don't break when using a query string.
            try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
            try_files $uri /index.php;
            include fastcgi_params;
            fastcgi_pass php;
            fastcgi_read_timeout 3000;

            fastcgi_cache_bypass $skip_cache;
            fastcgi_no_cache $skip_cache;

            fastcgi_cache WORDPRESS;
            fastcgi_cache_valid 60m;
    }

    location ~ /purge(/.*) {
            fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }



Add the following to your conf file:

fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
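The error occurs when the response headers sent by the FastCGI backend do not fit in nginx's header buffer. A minimal sketch of where these directives can live (placing them in the `http` block, as shown here, is one option; they are also valid at `server` or `location` level):

```nginx
http {
    # Buffer for the first part of the upstream response, which contains
    # the headers. Raise this if the backend sends large headers
    # (e.g. many or very long Set-Cookie headers).
    fastcgi_buffer_size 32k;

    # Number and size of buffers for the rest of the response body.
    fastcgi_buffers 16 16k;
}
```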
Wednesday, March 31, 2021

You can invoke a named location as the default action of your try_files statement.

For example:

location / {
    try_files $uri $uri/ @proxy;
}

location @proxy {
    proxy_pass http://backend;
}

See this document for details.

Tuesday, June 15, 2021

I think that error from Nginx indicates that the connection was closed by your Node.js server (i.e., the "upstream"). How is Node.js configured?

Wednesday, July 28, 2021

I found the issue.

My [runserver] socket (app.sock) should be pointed under upstream django and my [wsserver] socket (django.sock) should be pointed under location /ws/ like so:

upstream django {
    server unix:/opt/django/app.sock;
}

server {
    listen 80 default_server;
    charset utf-8;
    client_max_body_size 20M;
    sendfile on;
    keepalive_timeout 0;
    large_client_header_buffers 8 32k;

    location /media {
        alias /opt/django/app/media/media;
    }

    location /static {
        alias /opt/django/app/static;
    }

    location / {
        include /opt/django/uwsgi_params;
        uwsgi_pass django;  # the [runserver] upstream defined above
    }

    location /ws/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://unix:/opt/django/django.sock;
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
    }
}
Monday, August 9, 2021
Ali Tor

A reverse proxy setup (e.g. nginx forwarding HTTP requests to Starman) has the following advantages:

  • things are a bit easier to debug, since you can easily hit directly the backend server;

  • if you need to scale your backend server, you can easily use something like pound/haproxy between the frontend (static-serving) HTTP and your backends (Zope is often deployed like that);

  • it can be a nice sidekick if you are also using some kind of outward-facing, caching, reverse proxy (like Varnish or Squid), since it allows you to bypass it very easily.

However, it has the following downsides:

  • the backend server has to figure out the real originating IP, since all it will see is the frontend server's address (generally localhost); there is almost always an easy way to pass the client IP address in the HTTP headers, but that's something extra to figure out;

  • the backend server does not generally know the original "Host:" HTTP header, and therefore cannot automatically generate an absolute URL to a local resource; Zope addresses this with special URLs that embed the original protocol, host and port in the request to the backend, but it's something you don't have to do with FastCGI/Plack/...;

  • the frontend cannot automatically spawn backend processes, like it could do with FastCGI for instance.
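The first downside is usually handled by having the frontend forward the client's address and original host in headers. A minimal nginx sketch (the upstream name `http://backend` is an assumption):

```nginx
location / {
    proxy_pass http://backend;

    # Forward the real client address and original host to the backend,
    # which otherwise only sees the frontend's address (localhost).
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host              $host;
}
```

The backend then reads `X-Forwarded-For` (or `X-Real-IP`) instead of the socket's peer address.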

Pick your favourite pros and cons and make your choice, I guess ;-)

Sunday, September 5, 2021