Others may call me over-cautious, but if this were for a service I was running in production on the public Internet, running nginx worker processes as root would make me feel very anxious.
nginx (et al) drop as many privileges as they can, as soon as they can, precisely to avoid exposing root-privileged processes to any old client on the Internet who might be fuzzing for whatever they can find, brute-force searching for exploits. Or people might have nasty zero-days. We don't know.
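To make that concrete, here's the stock pattern (an illustrative fragment, not your actual config - the `www-data` account name is just an assumption, it's `nginx` on some distros): the master starts as root only so it can bind the low ports, and the `user` directive tells it to drop the workers - the processes that actually talk to clients - to an unprivileged account:

```
# /etc/nginx/nginx.conf (illustrative fragment)

# Master runs as root (needed to bind ports 80/443);
# workers - the processes exposed to clients - run as this
# unprivileged user instead:
user www-data;

worker_processes auto;
```

So even in the normal setup there's one root process, but it never reads client traffic directly; the worry in this thread is specifically about the workers, which do.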
It might be a trade-off you're happy to make in order to be able to stop the server as you need, and that's totally your decision :-) But for what it's worth, I wouldn't want to do that, and would try to find some other way if possible. I don't know exactly how, and haven't looked into it deeply - maybe you could do something in your orchestration layer, or systemd, or whatever you have starting and running nginx, or even think around the edges and coordinate somehow with some other service. I don't know. I just know that I wouldn't want to run services open to Internet abuse as root.
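One direction along those lines, just as a sketch (assuming systemd manages nginx, and assuming your unprivileged deploy account is called `deploy` - both names are mine, adjust to taste): leave the workers unprivileged and instead grant that one account permission to run just the stop/reload commands, via a sudoers drop-in:

```
# /etc/sudoers.d/nginx-control  (hypothetical - edit with visudo)
# Lets the 'deploy' user stop or reload nginx without a full
# root shell, and without running any worker as root:
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl stop nginx, /usr/bin/systemctl reload nginx
```

Then `sudo systemctl stop nginx` works from that account, and the only root involvement is the signal sent to the master - the Internet-facing workers stay unprivileged.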
nginx and OpenResty are certainly solid and battle-tested, and given OpenResty's heritage it seems very unlikely that many buffer overflows are lying in wait - but nginx itself is not immune to vulnerabilities (https://www.cvedetails.com/product/17956/Nginx-Nginx.html?vendor_id=10048), and OpenResty is so much lower-profile than nginx that anything like that hiding inside it may simply not have been discovered yet.
Of course it's entirely your call, and I'd be interested to hear other opinions if people here disagree with me, but, just my 2 cents: I would try to avoid doing that.
cheers,