Access to the web interface using the IPv6 address instead of DNS fails to reach the NetBox web page

Ricardo Rodriguez

Nov 14, 2016, 10:50:29 AM
to NetBox

Hello, I want to say up front that I really do not think this is a problem with the NetBox software; I think it is a problem with gunicorn.

I will begin by explaining my issue: I am testing IPv6-only and IPv4-only connectivity without using a DNS server at all to resolve names.

So I set up everything to work with the IPv6 module, and everything is working fine in the IPv4 world. I made a simple test.

I started with the IPv6 link-local address and tried to go to http://[fe80::20c:29ff:fe45:9d8c]/. Later, to test whether the problem was with the link-local address, I added a global unicast IPv6 address to the Linux interface and to my virtual machine adapter, made sure to stop the firewall on both the physical machine and the VM, and disabled SELinux. But I was still not able to reach the NetBox web page; instead I was landing on the nginx default index.html page.

I tried the IPv4 address http://192.168.220.6/ and the browser reached the NetBox page. Over IPv6 I was getting a few errors, which I got rid of after making a few modifications to the files below. It also works if I use the DNS name, by adding an entry to the hosts file on my Windows machine:

# Windows hosts file (the equivalent of /etc/hosts)
2016:fade::22 ipam.lab.local

I am using CentOS 7.0.
Linux ipam 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

cd /opt/netbox/netbox/netbox
vim configuration.py

ALLOWED_HOSTS = ['fe80::20c:29ff:fe45:9d8c', '2016:fade::22', '192.168.220.6', 'ipam.lab.local']

and in the file located at /etc/nginx/sites-available/netbox I put in some other parameters missing from the guide:

server {
    listen 80;
    listen [::]:80;
    server_name ipam.lab.local 192.168.220.6 2016:fade::22 fe80::20c:29ff:fe45:9d8c;

    access_log off;

    location /static/ {
        alias /opt/netbox/netbox/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
I made some tests, which I will explain now.

In the file /etc/nginx/sites-available/netbox I added the listen [::]:80; directive and tried different server_name combinations to see what effect they had:

#server_name 192.168.220.6;
#server_name 192.168.220.6 fe80::20c:29ff:fe45:9d8c 2016:fade::22 ipam.lab.local;
#server_name ipam.lab.local 192.168.220.6;
server_name ipam.lab.local 2016:fade::22 fe80::20c:29ff:fe45:9d8c 192.168.220.6;
#server_name 2016:fade::22;
#server_name 2016:fade::22 fe80::20c:29ff:fe45:9d8c ipam.lab.local 192.168.220.6;

location / {
    #proxy_pass http://127.0.0.1:8001;
    #proxy_pass http://192.168.220.6:8001;
    #proxy_pass http://localhost:8001;
    proxy_pass http://[::1]:8001;
    #proxy_pass http://[2016:fade::22]:8001;

With these changes, the direct IPv6 addresses still go to the nginx default page, but the IPv6 DNS name http://ipam.lab.local/ and the IPv4 address http://192.168.220.6/ return Bad Request (400).

Parameter combination:
server_name 2016:fade::22 fe80::20c:29ff:fe45:9d8c ipam.lab.local 192.168.220.6;
proxy_pass http://localhost:8001;
proxy_pass http://127.0.0.1:8001;

If I use the following in the gunicorn config file /opt/netbox-1.7.0/gunicorn_config.py:
bind = '127.0.0.1:8001'

I get 502 Bad Gateway in the web browser using the IPv6 DNS name http://ipam.lab.local/ or the IPv4 address http://192.168.220.6/, and http://[2016:fade::22]/ still goes to the nginx default index.html.

So I tried all the possible combinations to see whether one would work, but none of them made me happy.

Then I added localhost and the IPv6 loopback to the bind: I uncommented the IPv6 bind and commented out the IPv4 one. With that, it works using the IPv6 DNS name and the IPv4 address, but the IPv6 link-local and global addresses still go only to the nginx default index.html:

[root@ipam ~]# vim /opt/netbox/gunicorn_config.py

command = '/usr/bin/gunicorn'
pythonpath = '/opt/netbox/netbox'
#bind = '127.0.0.1:8001'
bind = '[::1]:8001'
#bind = 'localhost:8001'
workers = 3
user = 'nginx'

So, for nginx to serve both IPv4 and IPv6 by default, the proxy target must be:
proxy_pass http://[::1]:8001;

and the bind in gunicorn_config.py must be:
bind = '[::1]:8001'
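
For what it's worth, gunicorn's bind setting also accepts a list of addresses, so it should be possible to listen on both loopbacks at once. A minimal sketch of gunicorn_config.py along those lines (untested on my setup):

# gunicorn_config.py -- sketch: bind to both loopbacks so nginx can
# proxy_pass to either 127.0.0.1:8001 or [::1]:8001.
command = '/usr/bin/gunicorn'
pythonpath = '/opt/netbox/netbox'
bind = ['127.0.0.1:8001', '[::1]:8001']  # a list of bind addresses is accepted
workers = 3
user = 'nginx'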

The problem is that the direct IPv6 addresses

http://[2016:fade::22]/
or
http://[fe80::20c:29ff:fe45:9d8c]/

are not being proxied to NetBox, so what or where is the problem?

For me, it seems like the nginx config for NetBox is not reading or taking into consideration the IPv6 addresses. I also discovered that if I put the IPv6 address first, then the name, then the IPv4 address, it gives me Bad Request (400).

A developer friend told me that HTTP/1.1 does not support IPv6 natively, only HTTP/2.x, so he recommended that I use the DNS hostname to test:

./manage.py runserver [::0]:8000 --insecure

Gatis Visnevskis

Dec 19, 2017, 3:32:12 PM
to NetBox
Hello,

I was trying to set up NetBox as an IPv6-only LXC container. Things are not going easily, but it is stupid to develop and implement new services IPv4-only in the year 2017.
The Postgres database and the application work after changing localhost to [::0]:8000.

However, I suspect that gunicorn is broken with IPv6 only. I replaced ALL occurrences of 127.0.0.1 with localhost6, and it just silently starts while netstat -nat shows no LISTEN ports.
Apache shows the plain default page.
Any ideas how to debug?

Gasha

Brian Candler

Dec 19, 2017, 5:01:44 PM
to NetBox
For development I run netbox locally using

    /usr/bin/python3 manage.py runserver '[::]:8000' --insecure

and that's fine over both IPv4 and IPv6.

For your production environment, I see two different issues:

1. gunicorn binding to ::1/localhost6 rather than 127.0.0.1/localhost
2. access to nginx over IPv6

If (1) is your only problem, then I'd be inclined to leave it as 127.0.0.1. This does not expose any IPv4 outside of your machine. However, if you can prove that gunicorn won't listen properly on [::1], then by all means take it up with them.
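
A quick way to check whether gunicorn really is listening is a small probe from Python (a sketch; 8001 is the port used elsewhere in this thread, adjust to yours):

# probe_loopback.py - check whether anything accepts connections on the gunicorn port
import socket

for host in ('127.0.0.1', '::1'):
    try:
        # create_connection() accepts both IPv4 and IPv6 literals
        sock = socket.create_connection((host, 8001), timeout=2)
        sock.close()
        print('%s:8001 -> listener present' % host)
    except OSError as exc:
        print('%s:8001 -> no listener (%s)' % (host, exc))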

But it sounds like your real problem is (2). You write:

> The problem is that the direct IPv6 addresses

> http://[2016:fade::22]/
> or
> http://[fe80::20c:29ff:fe45:9d8c]/

> are not being proxied to NetBox, so what or where is the problem?


Well firstly, the address http://[fe80::20c:29ff:fe45:9d8c]/ is unlikely to work, ever.  This is a link-local address and only has significance when combined with an interface name.  Something like http://[fe80::20c:29ff:fe45:9d8c%eth0]/ *might* work, but most web clients don't support it (not even wget!)
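
At the socket level you can see why: a link-local address needs a zone (scope) ID before the OS knows which interface to use. A short Python illustration; the interface name eth0 is an assumption, substitute your own:

import socket

# getaddrinfo() accepts the %zone suffix and fills in the scope ID --
# exactly the piece of information a plain http://[fe80::...]/ URL cannot convey.
infos = socket.getaddrinfo('fe80::20c:29ff:fe45:9d8c%eth0', 80,
                           socket.AF_INET6, socket.SOCK_STREAM)
print(infos[0][4])  # sockaddr 4-tuple; the last element is the numeric scope ID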


As for http://[2016:fade::22]/, if you are getting a 400 bad request for a netbox URL, then you can use tcpdump to capture the traffic between nginx and gunicorn:

tcpdump -i lo -nn -s0 -A tcp port 8001

This will show you if:

1. The 400 error is coming directly from nginx (without being proxied); or
2. The request is being proxied to gunicorn, and the 400 response is coming from gunicorn.

Either way, you should have nginx logs and gunicorn logs to look at as well.
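
You can also take nginx out of the loop entirely and send gunicorn the exact Host header a browser would use for an IPv6-literal URL (a Python sketch; assumes gunicorn is listening on 127.0.0.1:8001 as configured above):

# direct_gunicorn.py - bypass nginx and talk to gunicorn directly
import http.client

conn = http.client.HTTPConnection('127.0.0.1', 8001, timeout=5)
conn.request('GET', '/', headers={'Host': '[2016:fade::22]'})
resp = conn.getresponse()
print(resp.status, resp.reason)  # a 400 here points at Django's ALLOWED_HOSTS, not nginx
conn.close()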

Also, what about static files, e.g. http://[2016:fade::22]/static/css/base.css? Do they work?

None of this sounds like a problem with Netbox, especially if you leave gunicorn listening on 127.0.0.1 and proxy to that.

Chris

Dec 20, 2017, 3:16:02 AM
to NetBox
The '400 Bad Request' error usually means ALLOWED_HOSTS is wrong. Could it be that you need to put brackets around your IPv6 addresses, like you have to do in the URL? I.e. have you tried ALLOWED_HOSTS like this:

ALLOWED_HOSTS = ['[fe80::20c:29ff:fe45:9d8c]', '[2016:fade::22]', '192.168.220.6', 'ipam.lab.local']

At least curl sends the IPv6 address in the Host header wrapped in brackets:

$ curl -v "http://[::1]:8080"
* Rebuilt URL to: http://[::1]:8080/
*   Trying ::1...
* TCP_NODELAY set
* Connected to ::1 (::1) port 8080 (#0)
> GET / HTTP/1.1
> Host: [::1]:8080
> User-Agent: curl/7.57.0
> Accept: */*
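
Django's own host validation behaves the same way: it strips the port but keeps the brackets before comparing against ALLOWED_HOSTS. A quick sketch using Django's internal helpers (split_domain_port and validate_host live in django.http.request in the versions current at the time):

from django.http.request import split_domain_port, validate_host

# '[2016:fade::22]:80' splits into ('[2016:fade::22]', '80'), so the
# ALLOWED_HOSTS entry must include the brackets to match.
domain, port = split_domain_port('[2016:fade::22]:80')
print(domain, port)                               # [2016:fade::22] 80
print(validate_host(domain, ['2016:fade::22']))   # False - brackets missing
print(validate_host(domain, ['[2016:fade::22]'])) # True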

~Chris

On Tuesday, December 19, 2017 at 11:09:01 PM UTC+1, Joshua Miller wrote:
For unrelated reasons I decided to remove gunicorn and use only the mod_wsgi module in Apache. I had issues with the PATH_INFO environment variable, so I had to wrap the application function in wsgi.py. Evidently gunicorn handles it differently than mod_wsgi in Apache.

"""
WSGI config for do_ipam project.

It exposes the WSGI callable as a module-level variable named ``application``.

For more information on this file, see
"""

import os
import site
import sys
from django.core.wsgi import get_wsgi_application

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(BASE_DIR, '..'))
site.addsitedir('/usr/lib/python2.7/site-packages')
os.environ["DJANGO_SETTINGS_MODULE"] = "netbox.settings"
_application = get_wsgi_application()

def application(environ, start_response):
    # Concatenate SCRIPT_NAME and PATH_INFO into PATH_INFO to force gunicorn behavior.
    # For some reason Django or NetBox doesn't like it when mod_wsgi splits the path between them.
    environ['PATH_INFO'] = environ.get('SCRIPT_NAME', '') + environ.get('PATH_INFO', '')
    environ['SCRIPT_NAME'] = ''
    return _application(environ, start_response)



Here is my sanitized Apache config:

<VirtualHost *:443>
    ProxyPreserveHost On
    ServerName ipam.test.lan

    Alias /netbox/static /opt/netbox/netbox/static
    <Location /netbox>
        WSGIProcessGroup netbox
    </Location>
    <Location /netbox/api>
        WSGIPassAuthorization on
    </Location>
    <Directory /opt/netbox/netbox/static>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Require all granted
    </Directory>
    <Directory /opt/netbox/netbox/netbox>
        <Files "wsgi.py">
            Require all granted
        </Files>
    </Directory>
    <Location /netbox/login/>
        AuthType Kerberos
        AuthName "Netbox Login"
        KrbMethodNegotiate on
        KrbSaveCredentials on
        KrbVerifyKDC off
        KrbMethodK5Passwd off
        KrbAuthoritative off
        Krb5Keytab /etc/httpd/conf/keytab
        KrbServiceName HTTP
        KrbAuthRealms GDOT.AD.LOCAL
        Require valid-user
    </Location>
    WSGIScriptAlias /netbox /opt/netbox/netbox/netbox/wsgi.py

    SSLEngine on
    SSLCertificateFile /etc/httpd/host.crt
    SSLCertificateKeyFile /etc/httpd/host.key
</VirtualHost>

Gatis Visnevskis

Dec 20, 2017, 7:27:07 AM
to NetBox

There are not too many forum topics related to IPv6, so I will share my experiences here, as they are related.
NetBox is great, and I learned a lot yesterday.

1) It looks like gunicorn does not resolve DNS (or the hosts file), so if you write bind = 'localhost:8001' it is silently ignored. I figured this out just by browsing the forum archive. Now it listens on [::1]:8001 (see the resolver check sketched below).
2) It is a good idea to put '*' in ALLOWED_HOSTS, as it is used for access control. localhost and localhost6 are not the same, regardless of how they are resolved in /etc/hosts; and indeed, they are not resolved.
3) Perhaps the same problem exists in /etc/apache2/sites-enabled/netbox.conf.
I will try to replace [::1] with 127.0.0.1 again, just to see the difference. Apache mod_proxy can also be tricky to set up with IPv6.
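
A quick way to see what those loopback names actually resolve to on the box (a sketch; this tests only the system resolver, not gunicorn itself):

import socket

# Compare how the resolver sees the two loopback names from /etc/hosts.
for name in ('localhost', 'localhost6'):
    try:
        infos = socket.getaddrinfo(name, 8001, 0, socket.SOCK_STREAM)
        print(name, '->', sorted({info[4][0] for info in infos}))
    except socket.gaierror as exc:
        print(name, '-> resolution failed:', exc)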


Gasha

Brian Candler

Dec 21, 2017, 5:53:43 AM
to NetBox
On Wednesday, 20 December 2017 12:27:07 UTC, Gatis Visnevskis wrote:


2) It is a good idea to put '*' in ALLOWED_HOSTS, as it is used for access control. localhost and localhost6 are not the same, regardless of how they are resolved in /etc/hosts; and indeed, they are not resolved.

Incidentally, ALLOWED_HOSTS is not about access control, in the sense of which source addresses are allowed to connect.

It's about which Host: headers a browser may send; in other words, what names your NetBox instance can be accessed as. See: https://docs.djangoproject.com/en/2.0/ref/settings/#allowed-hosts

The idea is to protect against DNS rebinding attacks:

For example, if you allow people to access the server as netbox.localdomain, but the webserver is also visible at the private address 192.168.1.1, setting ALLOWED_HOSTS to ['netbox.localdomain'] prevents people from accessing it directly as http://192.168.1.1/.

The DNS rebinding attack involves someone browsing a malicious site attacker.com, downloading some Javascript, and a subsequent DNS request for attacker.com resolving to 192.168.1.1. This gives malicious Javascript a way to access the private internal site and relay its contents to the outside world, breaking the common assumption that "sites on private IP addresses cannot be accessed from the Internet".

This means it's generally *not* a good idea to put '*' in ALLOWED_HOSTS, unless you don't care about this sort of firewall bypassing - e.g. if your Netbox instance is already accessible to the Internet on a public IP address.
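
In configuration.py terms, the safer pattern is to list only the names you intend to publish. A sketch using the example names from this thread:

# configuration.py - allow only the published names; a request arriving with
# any other Host header (a direct IP, or a rebound DNS name) gets a 400.
ALLOWED_HOSTS = ['ipam.lab.local', '[2016:fade::22]']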