This is kind of a lose-lose for us: if we follow the redirect it serves you well, and if we don't, it serves others well.
I "guess" we could add an option for it 🤷♂️ Though that's kinda questionable as well.
Here's how Google treats it, though I couldn't find a specific standard or RFC that states whether redirects are or aren't supported. Arguably, from my point of view, if it isn't a 200 right off the bat then it isn't in the expected location and shouldn't be found/obeyed. (Yes, I understand that ZAP isn't using it the same way as web crawlers, but we've gotta be reasonable somehow.)
> When requesting a robots.txt file, the HTTP status code of the server's response affects how the robots.txt file will be used by Google's crawlers. The following table summarizes how Googlebot treats robots.txt files for different HTTP status codes.
>
> 3xx (redirection)
>
> Google follows at least five redirect hops as defined by RFC 1945 and then stops and treats it as a 404 for the robots.txt file. This also applies to any disallowed URLs in the redirect chain, since the crawler couldn't fetch rules due to the redirects.
>
> Google doesn't follow logical redirects in robots.txt files (frames, JavaScript, or meta refresh-type redirects).
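
If we did add an option, the behavior Google describes is straightforward to mirror: follow a bounded number of redirect hops and otherwise fall back to "no robots.txt". Here's a minimal sketch of that policy using the stock `java.net.http` client (Java 11+) rather than ZAP's own HTTP stack; the class and method names are just illustrative, not ZAP code:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Optional;

public class RobotsTxtFetcher {

    // Mirrors Googlebot's documented limit of five redirect hops.
    private static final int MAX_REDIRECT_HOPS = 5;

    /**
     * Fetches robots.txt, following at most MAX_REDIRECT_HOPS redirects.
     * Anything that doesn't end in a 200 is treated as "no robots.txt",
     * the same way Googlebot falls back to a 404.
     */
    public static Optional<String> fetch(URI base) throws Exception {
        // Disable the client's built-in redirect handling so we can cap hops ourselves.
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER)
                .build();

        URI target = base.resolve("/robots.txt");
        // hop 0 is the initial request; hops 1..5 are redirects.
        for (int hop = 0; hop <= MAX_REDIRECT_HOPS; hop++) {
            HttpRequest request = HttpRequest.newBuilder(target).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            int status = response.statusCode();
            if (status == 200) {
                return Optional.of(response.body());
            }
            if (status >= 300 && status < 400) {
                Optional<String> location = response.headers().firstValue("Location");
                if (location.isEmpty()) {
                    break; // malformed redirect: give up
                }
                // Resolve relative Location headers against the current URL.
                target = target.resolve(location.get());
                continue;
            }
            break; // 4xx/5xx: no usable robots.txt
        }
        return Optional.empty(); // treated as a 404, i.e. no rules to obey
    }
}
```

Capping the hop count avoids redirect loops while still tolerating the common http→https or apex→www hop, so it sits between "200-only" strictness and blindly following redirects forever.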