Django-Filebrowser, TinyMCE, Amazon S3, Nginx 502 Bad Gateway Error

Billy Reynolds

Nov 26, 2014, 5:51:59 PM
to mezzani...@googlegroups.com

I'm having some issues with django-filebrowser, and from what I can tell they seem to be related to nginx. The two primary issues are that django-filebrowser fails to load directories containing large numbers of Amazon S3 files in the Mezzanine admin, and that I get an HTTP error when trying to upload large files (~500 MB) to Amazon S3 through filebrowser. I have multiple directories with 400+ large audio files (several hundred MB each) hosted on S3; when I attempt to load one of them in the Mezzanine admin/media-library, my server returns an nginx 502 (Bad Gateway) error. I didn't have any issues with this until the directories started getting bigger. I can also upload normal-sized files (images, small audio files, etc.) without any issue. It's only when I try to upload large files that I get an error.

It's probably worth noting a few things:

  1. I only use Amazon S3 to serve the media files for the project, all static files are served locally through nginx.
  2. All django-filebrowser functionality works correctly in directories that will actually load. (with the exception of large file uploads)
  3. I created a test directory with 1000 small files and django-filebrowser loads the directory correctly.
  4. In the nginx.conf settings listed below (proxy_buffer_size, proxy_connect_timeout, etc.), I've tested multiple values, multiple times, and I can never get the pages to load consistently. Now that the directories are larger, I can't get them to load at all.
  5. I've tried adding an additional location block to my nginx conf for "admin/media-library/" with increased timeouts and other settings, but nginx still did not load these large directories correctly.

I believe my primary issue is with nginx or possibly gunicorn, as I have no trouble loading these directories or uploading large files in a local environment without nginx/gunicorn. My nginx error log shows the following:

2014/11/24 15:53:25 [error] 30816#0: *1 upstream prematurely closed connection while reading response header from upstream, client: xx.xxx.xxx.xxx, server: server, request: "GET /admin/media-library/browse/ HTTP/1.1", upstream: "http://127.0.0.1:8001/admin/media-library/browse/", host: "server name", referrer: "https://example/admin/"
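
For what it's worth, "upstream prematurely closed connection" usually means the upstream process exited or was killed before finishing its response, which points at gunicorn rather than nginx: gunicorn's sync workers are killed after `timeout` seconds of silence (30 by default), and that fits "works locally without gunicorn". A minimal `gunicorn.conf.py` sketch — the values here are illustrative assumptions, not taken from the deployment in this thread, apart from the 127.0.0.1:8001 bind shown in the nginx conf below:

```python
# gunicorn.conf.py -- illustrative sketch; workers/timeout values are
# assumptions.  Gunicorn kills any sync worker that is silent longer
# than `timeout` seconds (default 30), and nginx then reports that
# kill as a 502 "upstream prematurely closed connection".
bind = "127.0.0.1:8001"   # matches the upstream in the nginx conf
workers = 3
timeout = 300             # give slow S3 directory listings time to finish
graceful_timeout = 300
```

Raising the nginx proxy timeouts alone can't help if the worker behind them is being killed at 30 seconds.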
 

I've researched that error, which led me to add these lines to my nginx conf file:

proxy_buffer_size 128k;
proxy_buffers 100 128k;
proxy_busy_buffers_size 256k;
proxy_connect_timeout 75s;
proxy_read_timeout 75s;
client_max_body_size 9999M;
keepalive_timeout 60s;
 

Despite trying multiple nginx timeout configurations, I'm still stuck exactly where I started. My production server will not load large directories from Amazon S3 through django-filebrowser, nor can I upload large files through it.

Here are some other lines from settings/conf files that are relevant.

settings.py

DEFAULT_FILE_STORAGE = 's3utils.S3MediaStorage'
AWS_S3_SECURE_URLS = True # use https instead of http
AWS_QUERYSTRING_AUTH = False # don't add complex authentication-related query parameters for requests
#AWS_PRELOAD_METADATA = True
AWS_S3_ACCESS_KEY_ID = 'key' # enter your access key id
AWS_S3_SECRET_ACCESS_KEY = 'secret key' # enter your secret access key
AWS_STORAGE_BUCKET_NAME = 'bucket'
AWS_S3_CUSTOM_DOMAIN = 's3.amazonaws.com/bucket'
S3_URL = 'https://s3.amazonaws.com/bucket/'
MEDIA_URL = S3_URL + 'media/'
MEDIA_ROOT = 'media/uploads/'
FILEBROWSER_DIRECTORY = 'uploads'
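
The commented-out AWS_PRELOAD_METADATA line above may be worth revisiting. With the old boto backend in django-storages, enabling it makes the storage fetch the bucket listing once and answer size/modified-time queries from that cache, instead of issuing one signed HEAD request per file — which is exactly the per-file traffic that makes large directories crawl. A sketch, assuming the django-storages s3boto backend:

```python
# settings.py -- sketch, assuming the django-storages s3boto backend.
# With preloaded metadata, the storage lists the bucket once up front
# and serves size()/modified_time() from the cached listing rather
# than making one HEAD request per file.
AWS_PRELOAD_METADATA = True
```

The trade-off is one full bucket listing at startup/first access, so it helps most when a page touches many files.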
 

/etc/nginx/sites-enabled/production.conf

upstream name {
    server 127.0.0.1:8001;
}

server {
    listen 80;
    server_name www.example.com;
    rewrite ^(.*) http://example.com$1 permanent;
}

server {

    listen 80;
    listen 443 default ssl;
    server_name example.com;
    client_max_body_size 999M;
    keepalive_timeout 60;

    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/key.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_pass http://name;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_buffer_size 128k;
        proxy_buffers 100 128k;
        proxy_busy_buffers_size 256k;
        proxy_connect_timeout 75s;
        proxy_read_timeout 75s;
        client_max_body_size 9999M;
        keepalive_timeout 60s;
    }

    location /static/ {
        root /path/to/static;
    }

    location /robots.txt {
        root /path/to/robots;
        access_log off;
        log_not_found off;
    }

    location /favicon.ico {
        root /path/to/favicon;
        access_log off;
        log_not_found off;
    }

}

Is this even an nginx issue? If so, does anyone have any suggestions for resolving this error? If not, what am I missing that would cause timeouts only on these large directories/large file uploads?

Is there a better way to approach this problem than my current setup?

Any help would be greatly appreciated.

Thanks

Mario Gudelj

Nov 26, 2014, 10:20:09 PM
to mezzani...@googlegroups.com
Is it possible that the app is making loads of DB queries? Have you had a look at django-debug-toolbar? I have a Mezza site with a large number of products, and as that number got larger the admin slowed down drastically. I looked at DDT and it was making 400 DB queries because of some Django sites query, so I disabled that and everything sped up. Perhaps something similar is happening here.

Cheers,

Mario

--
You received this message because you are subscribed to the Google Groups "Mezzanine Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mezzanine-use...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Billy Reynolds

Dec 8, 2014, 4:29:42 PM
to mezzani...@googlegroups.com
I looked at DDT, and it appears that the queries aren't the issue. There only seem to be ~10 DB queries on each page in django-filebrowser.

However, the DDT log files are huge. On a single page where django-filebrowser loads ~10 images plus the directories where the audio is stored, the log file is 2,000 lines. When I load the large directory of audio files on my local dev server, the log file is 290,000+ lines and it slows my virtual machine to a crawl.

Boto appears to be logging the following for each file in filebrowser:

DEBUG   03:54:11 12/08/2014  boto  path=/media/uploads/img_0057_3_-_reduced.jpg
DEBUG   03:54:11 12/08/2014  boto  auth_path=/bucket/media/uploads/img_0057_3_-_reduced.jpg  path/to/boto/s3/connection.py:653
DEBUG   03:54:11 12/08/2014  boto  Method: HEAD
DEBUG   03:54:11 12/08/2014  boto  Path: /media/uploads/img_0057_3_-_reduced.jpg  path/to/boto/connection.py:898
DEBUG   03:54:11 12/08/2014  boto  Data:  path/to/boto/connection.py:899
DEBUG   03:54:11 12/08/2014  boto  Headers: {}
DEBUG   03:54:11 12/08/2014  boto  Host: bucket.s3.amazonaws.com
DEBUG   03:54:11 12/08/2014  boto  Port: 443  path/to/boto/connection.py:902
DEBUG   03:54:11 12/08/2014  boto  Params: {}  path/to/boto/connection.py:903
DEBUG   03:54:11 12/08/2014  boto  Token: None
DEBUG   03:54:11 12/08/2014  boto  StringToSign:
HEAD


Mon, 08 Dec 2014 20:54:11 GMT
/bucket/media/uploads/img_0057_3_-_reduced.jpg
path/to/boto/auth.py:144
DEBUG   03:54:11 12/08/2014  boto  Signature:
AWS key:key=  path/to/boto/auth.py:148
DEBUG   03:54:11 12/08/2014  boto  Final headers: {'Date': 'Mon, 08 Dec 2014 20:54:11 GMT'}
DEBUG   03:54:11 12/08/2014  boto  Response headers: [('content-length', '132168')
So I guess that's how my log files get so long in a directory with 3,000 audio files. Is this standard behavior for boto?
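
boto does log every step of request signing at DEBUG level — roughly a dozen lines per request — so a logging setup that leaves the "boto" logger at DEBUG will produce exactly this. A small sketch to quiet it without touching your own loggers (the logger name "boto" is the one visible in the excerpt above):

```python
import logging

# boto emits roughly a dozen DEBUG lines (path, signing, headers,
# response) for every request.  Raising just its logger's threshold
# keeps warnings and errors while leaving other loggers untouched.
logging.getLogger("boto").setLevel(logging.WARNING)
```

This only trims the log noise, of course — the underlying per-file HEAD requests still happen.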

At this point I'm at a loss as to how to solve this issue, and I feel like my only option moving forward is to bypass django-filebrowser for my audio files (which currently use mezzanine's FileField) and just use a standard django FileField. I didn't want to do this, as I like being able to change the audio file for each product from the admin without uploading a new file to the bucket every time I want to change that particular product.

Mario Gudelj

Dec 8, 2014, 9:11:18 PM
to mezzani...@googlegroups.com

wongo888

Dec 9, 2014, 2:19:56 PM
to mezzani...@googlegroups.com
I ran into this a few months ago. I believe it was solved by adding the following parameters (for a 150 MB limit).

In nginx:

server {
...
client_max_body_size 150M;
...
}

In settings.py:

FILEBROWSER_MAX_UPLOAD_SIZE = 157286400

This was a while ago, but I believe the issue I was having was that the Flash uploader used by filebrowser has its own limit in addition to the limit imposed by nginx. Our upstream was uwsgi.
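
For anyone copying these numbers: FILEBROWSER_MAX_UPLOAD_SIZE is given in bytes, so the two limits line up like this (a quick sanity check, nothing project-specific):

```python
# nginx's client_max_body_size takes a suffix ("150M"); filebrowser's
# FILEBROWSER_MAX_UPLOAD_SIZE is a plain byte count.  150M in bytes:
limit_bytes = 150 * 1024 * 1024
print(limit_bytes)  # 157286400
```

If the two limits disagree, the smaller one wins, so it's worth keeping them derived from the same number.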

K