Thanks for the reply Yves. Yes, even when deployed out to Azure the Azure queues performed very slowly (even with nagling turned off, the connection count bumped, etc...). Understood that the emulator isn't the same thing...but I think it's clear this is somewhere in the NSB transport adapter, not the transport itself. I've used Azure queues in the past and they have latency of ~10ms in the same virtual network with nagling off. We're running on pretty basic A1 VMs on Azure...but it's not like we're loading them either. Even the SQL transport was substantially slower going against both a local SQL Express instance and SQL Azure when hosted.
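For reference, this is roughly what I mean by "nagling turned off and the connection count bumped" — the standard .NET client tuning for Azure Storage, set before any storage calls are made (a sketch; the exact limit value is whatever you've sized for your workload):

```csharp
using System.Net;

// Process-wide settings; must run before the first request to the storage endpoint.
ServicePointManager.UseNagleAlgorithm = false;   // avoid ~200ms Nagle delays on small queue messages
ServicePointManager.Expect100Continue = false;   // skip the extra 100-continue round trip
ServicePointManager.DefaultConnectionLimit = 100; // default of 2 throttles concurrent queue operations
```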
I'm guessing some of this is due to how those transports poll for messages. One of my coworkers said they'd used SQL as a transport in the past and didn't remember seeing that level of latency. It seems like SQL is definitely faster when it's getting a constant stream of messages (perhaps due to connection/polling backoff when no messages are present), but even in that case I was seeing pretty slow perf a good bit of the time when I fed it a constant stream of messages.
Rabbit, on the other hand, was very performant (both locally and hosted in Azure), more along the lines of what I'd expect. I installed Rabbit in Azure via Docker on a low-power A1 VM running Linux. It blew away the other transports...hands down. Don't get me wrong, I love Rabbit, but we were hoping to use Azure Queues to keep costs down, as we'd otherwise need to either go with an expensive third-party Rabbit Azure service or create our own Rabbit clusters on our own VMs. Using Azure queues is way cheaper.
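In case anyone wants to reproduce the Rabbit-on-Docker setup, it was essentially just the stock image (a sketch; ports, container name, and whether you want the management UI are up to you):

```shell
# RabbitMQ with the management plugin on a single Linux VM.
# 5672 = AMQP for the NSB transport, 15672 = management web UI.
docker run -d --name rabbitmq \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management
```

Remember to open those ports in the VM's network security group if you're connecting from other machines.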
I'm also going to try Azure Service Bus to see what that yields...unfortunately there isn't really a good way to test that locally and get any real idea of perf.