I'm having some issues with Django-Filebrowser and from what I can tell they seem to be related to Nginx. The two primary issues are: Django-Filebrowser fails to load directories containing large numbers of Amazon S3 files in the Mezzanine admin, and I get an HTTP error when trying to upload large files (500 MB) to Amazon S3 through filebrowser. I have multiple directories with 400+ large audio files (several hundred MB each) hosted on S3; when I attempt to load them in the Mezzanine admin/media-library, my server returns an nginx 502 (Bad Gateway) error. I didn't have any issues with this until the directories started getting bigger. I can also upload normal-sized files (images, small audio files, etc.) without any issue — it's only when I try to upload large files that I get an error.
It's probably worth noting a few things:
I believe my primary issue is an nginx or possibly a gunicorn problem, as I have no trouble loading these directories or uploading large files in a local environment without nginx/gunicorn. My nginx error log throws the following error:
2014/11/24 15:53:25 [error] 30816#0: *1 upstream prematurely closed connection while reading response header from upstream, client: xx.xxx.xxx.xxx, server: server, request: "GET /admin/media-library/browse/ HTTP/1.1", upstream: "http://127.0.0.1:8001/admin/media-library/browse/", host: "server name", referrer: "https://example/admin/"

I've researched that error, which led me to add these lines to my nginx conf file:
proxy_buffer_size 128k;
proxy_buffers 100 128k;
proxy_busy_buffers_size 256k;
proxy_connect_timeout 75s;
proxy_read_timeout 75s;
client_max_body_size 9999M;
keepalive_timeout 60s;
Despite trying multiple nginx timeout configurations, I'm still stuck exactly where I started. My production server will not load large directories from Amazon S3 through django-filebrowser nor can I upload large files through django-filebrowser.
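One thing I haven't ruled out is gunicorn's own worker timeout: sync workers are killed after 30 seconds by default, which would match nginx reporting that the upstream closed the connection prematurely, and no nginx-side timeout tweak would fix that. A sketch of what raising it might look like (the module path, bind address, and worker count below are placeholders, not my real values):

```shell
# Hypothetical gunicorn invocation -- myproject.wsgi, the bind address and
# the worker count are placeholders. The relevant change is --timeout,
# which defaults to 30s for sync workers and kills any request that runs
# longer, causing nginx's "upstream prematurely closed connection".
gunicorn myproject.wsgi:application \
    --bind 127.0.0.1:8001 \
    --workers 3 \
    --timeout 300
```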
Here are some other lines from settings/conf files that are relevant.
settings.py
DEFAULT_FILE_STORAGE = 's3utils.S3MediaStorage'
AWS_S3_SECURE_URLS = True        # use https instead of http
AWS_QUERYSTRING_AUTH = False     # don't add complex authentication-related query parameters to requests
#AWS_PRELOAD_METADATA = True
AWS_S3_ACCESS_KEY_ID = 'key'             # your access key id
AWS_S3_SECRET_ACCESS_KEY = 'secret key'  # your secret access key
AWS_STORAGE_BUCKET_NAME = 'bucket'
AWS_S3_CUSTOM_DOMAIN = 's3.amazonaws.com/bucket'
S3_URL = 'https://s3.amazonaws.com/bucket/'
MEDIA_URL = S3_URL + 'media/'
MEDIA_ROOT = 'media/uploads/'
FILEBROWSER_DIRECTORY = 'uploads'
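Side note on the directory-listing half of the problem: as I understand it, django-storages' boto backend can issue a separate HEAD request to S3 for each file when listing a directory, which with 400+ files could easily exceed any proxy timeout. I have the relevant setting commented out above; re-enabling it would look like this (assuming django-storages' S3BotoStorage backend — I haven't confirmed it fixes my case):

```python
# settings.py -- cache S3 object metadata from a single bucket listing
# instead of issuing one HEAD request per file (django-storages boto
# backend option, currently commented out in my settings above).
AWS_PRELOAD_METADATA = True
```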
/etc/nginx/sites-enabled/production.conf
upstream name {
    server 127.0.0.1:8001;
}

server {
    listen 80;
    server_name www.example.com;
    rewrite ^(.*) http://example.com$1 permanent;
}

server {
    listen 80;
    listen 443 default ssl;
    server_name example.com;
    client_max_body_size 999M;
    keepalive_timeout 60;
    ssl on;
    ssl_certificate /etc/nginx/ssl/cert.crt;
    ssl_certificate_key /etc/nginx/ssl/key.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_pass http://example;
        add_header X-Frame-Options "SAMEORIGIN";
        proxy_buffer_size 128k;
        proxy_buffers 100 128k;
        proxy_busy_buffers_size 256k;
        proxy_connect_timeout 75s;
        proxy_read_timeout 75s;
        client_max_body_size 9999M;
        keepalive_timeout 60s;
    }

    location /static/ {
        root /path/to/static;
    }

    location /robots.txt {
        root /path/to/robots;
        access_log off;
        log_not_found off;
    }

    location /favicon.ico {
        root /path/to/favicon;
        access_log off;
        log_not_found off;
    }
}
Is this even an nginx issue? If so, does anyone have any suggestions for resolving this error? If not, what am I missing that would cause timeouts only on these large directories/large file uploads?
Is there a better way to approach this problem than my current setup?
Any help would be greatly appreciated.
Thanks
--
You received this message because you are subscribed to the Google Groups "Mezzanine Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mezzanine-use...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
DEBUG 03:54:11 12/08/2014 | boto | path=/media/uploads/img_0057_3_-_reduced.jpg
DEBUG 03:54:11 12/08/2014 | boto | auth_path=/bucket/media/uploads/img_0057_3_-_reduced.jpg path/to/boto/s3/connection.py:653
DEBUG 03:54:11 12/08/2014 | boto | Method: HEAD
DEBUG 03:54:11 12/08/2014 | boto | Path: /media/uploads/img_0057_3_-_reduced.jpg path/to/boto/connection.py:898
DEBUG 03:54:11 12/08/2014 | boto | Data: path/to/boto/connection.py:899
DEBUG 03:54:11 12/08/2014 | boto | Headers: {}
DEBUG 03:54:11 12/08/2014 | boto | Host: bucket.s3.amazonaws.com
DEBUG 03:54:11 12/08/2014 | boto | Port: 443 path/to/boto/connection.py:902
DEBUG 03:54:11 12/08/2014 | boto | Params: {} path/to/boto/connection.py:903
DEBUG 03:54:11 12/08/2014 | boto | Token: None
DEBUG 03:54:11 12/08/2014 | boto | StringToSign:
HEAD


Mon, 08 Dec 2014 20:54:11 GMT
/bucket/media/uploads/img_0057_3_-_reduced.jpg path/to/boto/auth.py:144
DEBUG 03:54:11 12/08/2014 | boto | Signature:
AWS key:key= path/to/boto/auth.py:148
DEBUG 03:54:11 12/08/2014 | boto | Final headers: {'Date': 'Mon, 08 Dec 2014 20:54:11 GMT'
DEBUG 03:54:11 12/08/2014 | boto | Response headers: [('content-length', '132168')
Have you tried disabling the logging? Here's a good tip: http://stackoverflow.com/questions/1661275/disable-boto-logging-without-modifying-the-boto-files
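The approach from that link boils down to raising the boto logger's level so its per-request DEBUG output (like the HEAD-request dump above) is suppressed — a minimal sketch:

```python
import logging

# Silence boto's per-request DEBUG output without modifying boto itself;
# only CRITICAL-level records from the 'boto' logger will get through.
logging.getLogger('boto').setLevel(logging.CRITICAL)
```

You can drop this in settings.py or wherever your logging is configured.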