I just recently posted on the same issue:
I definitely would like to see some change to make communicating between trusted subdomains easier. In my case it's https://example.com posting data to https://api.example.com, which currently fails CSRF referer validation when using a secure (https) site.
My thoughts on the proposed fix:
I don't have any issues with validating against the CSRF_COOKIE_DOMAIN. I'd like to say I'll always own my subdomains, but who knows, that could change in the future, and it may not hold for everyone else either. I have worked with companies where subdomains were handed off to different organizations within the company, so technically not all sites had the same owner. Yes, we should trust other orgs in our own company, but the point is, the root owner of https://example.com was not in fact the owner of all its subdomains. In that case I wouldn't want CSRF_COOKIE_DOMAIN checking, but instead a more explicit approach (see below).
The other thought here would be to add an additional setting that is more explicit than CSRF_COOKIE_DOMAIN: a new setting, something like CSRF_WHITELIST_ORIGINS, that explicitly calls out which origins are legit. This would handle the case better than the "one-size-fits-all" CSRF_COOKIE_DOMAIN matching. Inside CSRF_WHITELIST_ORIGINS you could define wildcards if you'd like to do the same thing as the proposed change, but it's more explicit about the behavior you expect.
Match all subdomains example:
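Something along these lines, purely as a sketch (the wildcard syntax is just what I'd imagine, nothing implemented yet):

    # settings.py - hypothetical setting, wildcard matches any subdomain
    CSRF_WHITELIST_ORIGINS = ['https://*.example.com']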
More explicit whitelisting:
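Or list each trusted origin out individually (again just a sketch using my example domains):

    # settings.py - hypothetical setting, only these exact origins pass
    CSRF_WHITELIST_ORIGINS = [
        'https://example.com',
        'https://api.example.com',
    ]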
Security should always be more explicit than implicit IMHO, but just to be clear, I am strongly in favor of a change, as I need it for my sites and there are too many hacky solutions [1][2][3][4] that currently get around this by manually changing request.META['HTTP_REFERER']. Eeeek! (Roughly the kind of thing sketched below.)
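Just to illustrate the kind of hack I mean, this is roughly what those workarounds boil down to (the middleware name and domains are made up, and it would have to be listed before CsrfViewMiddleware):

    class ForceRefererMiddleware(object):
        # Hacky workaround sketch: rewrite the Referer header so the
        # CSRF middleware's strict referer check passes for requests
        # coming from a trusted subdomain.
        def process_request(self, request):
            referer = request.META.get('HTTP_REFERER', '')
            if referer.startswith('https://api.example.com'):
                # Pretend the request came from the main site instead.
                request.META['HTTP_REFERER'] = 'https://example.com/'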
Troy