CSRF verification failed. Request aborted.


Samuel Mutel

Jul 16, 2021, 3:23:42 PM
to NetBox
Hello,

I have set up two NetBox instances. These two instances share the same PostgreSQL database and the same Redis cluster (sentinels).

On the first one, everything is working fine. On the second one, after entering my login and password, the message "CSRF verification failed. Request aborted." is displayed.

Reason given for failure:

    Referer checking failed - https://xxx/login/?next=/ does not match any trusted origins.
    
The NetBox configuration is the same on both nodes.

ALLOWED_HOSTS is set to:
ALLOWED_HOSTS = json.loads(r'''["*"]''')

So I don't understand what's wrong with my config ...

Thanks.


Brian Candler

Jul 17, 2021, 12:50:19 PM
to NetBox
Any particular reason for using json.loads there, instead of the simpler:

ALLOWED_HOSTS = ['*']

?  Presumably you also have "import json" earlier in the file?
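
(As an aside: in production you'd normally pin this to the actual hostname rather than wildcarding, e.g. ALLOWED_HOSTS = ['netbox.example.com'] (hostname invented for illustration), since '*' disables Django's Host header validation entirely.)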

Can you explain why you have two NetBox "instances" using the same backend database and Redis? The normal way to scale NetBox is just to change the number of gunicorn workers. Is this for some redundancy/failover scenario?
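
For reference, the worker count lives in the gunicorn config file. Paraphrasing the stock /opt/netbox/gunicorn.py from the standard deployment layout (treat the exact values as illustrative, not recommendations):

# gunicorn.py - stock values, paraphrased; tune for your hardware
bind = '127.0.0.1:8001'   # address/port the WSGI process listens on
workers = 5               # raise to scale; rule of thumb is 2n+1 for n CPU cores
threads = 3               # threads per worker
timeout = 120             # seconds before a stuck request is killed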

I'm just trying to get to the bottom of what's different between your config and a standard one.

If you do:

cd /opt/netbox/netbox/netbox
diff -u configuration.example.py configuration.py

then it may give some clues.

One other thing to check is for misconfiguration of your front-end proxy (apache2 or nginx) on one of the instances; it may not be passing the Host: or X-Forwarded-Host: header through.

Samuel Mutel

Jul 19, 2021, 4:53:59 AM
to NetBox
Any particular reason for using json.loads there, instead of the simpler:
ALLOWED_HOSTS = ['*']
?  Presumably you also have "import json" earlier in the file?

=> I am using an Ansible role to set up NetBox.

Can you explain why you have two NetBox "instances" using the same backend database and Redis? The normal way to scale NetBox is just to change the number of gunicorn workers. Is this for some redundancy/failover scenario?

=> Yes, for high-availability purposes.

cd /opt/netbox/netbox/netbox
diff -u configuration.example.py configuration.py
=> file attached

One other thing to check is for misconfiguration of your front-end proxy (apache2 or nginx) on one of the instances; it may not be passing the Host: or X-Forwarded-Host: header through.
=>
location / {
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}
(Attachment: netbox_config.py)

Brian Candler

Jul 19, 2021, 11:45:54 AM
to NetBox
I notice that

     proxy_set_header X-Forwarded-Host $server_name;

is not what's in the recommended nginx config (it uses $http_host instead); but as I don't use nginx, I can't really comment further.
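
From memory, the documented location block is along these lines (an unverified sketch, since as I say I don't run nginx myself; the port and any extra headers will be whatever your install uses):

location / {
    proxy_pass http://127.0.0.1:8001;
    proxy_set_header X-Forwarded-Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}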

Other than that: if it works when you hit one server but not the other, then the only explanation I can think of is that there is some difference between them. You'll need to go through the servers, side by side, until you find that difference.

Also, I note that the default config runs gunicorn on port 8001, not port 8000, although obviously you're free to diverge from recommended configs if you know what you're doing.

You may also find it helpful to run tcpdump on the loopback interface to see the requests hitting gunicorn, and compare them between the two hosts (in particular the Host: and Referer: headers, since those are what the CSRF referer check compares):

tcpdump -i lo -nn -s0 -A tcp port 8000 or tcp port 8001

Samuel Mutel

Jul 20, 2021, 4:59:01 AM
to NetBox
Hello,

Thank you for your help. I resolved my issue. I think it was due to the wrong proxy_set_header parameter.
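
For anyone else who hits this: the corrected line, per Brian's pointer to the documented config, would be

proxy_set_header X-Forwarded-Host $http_host;

in place of the $server_name version above.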

Thanks.