Product name: NServiceBus.Azure
Version: 4.1.1
We are in the process of upgrading from NServiceBus and NServiceBus.Azure 4.0.4 to 4.1.1 (I know, we are way behind). One thing we've noticed is that, while the input queue name for each endpoint is honored, the tertiary queues (endpoint.retries, endpoint.timeouts, endpoint.timeoutsdispatcher) are not. For instance, we have an endpoint with the following setting in its app.config:
<AzureServiceBusQueueConfig QueueName="senders" IssuerKey="our issuer key" ServiceNamespace="our namespace" />
The fully qualified name of the endpoint configuration class for this endpoint is lhpt.backend.senders.azurehost.endpointconfiguration. We find that on deployment the input queue is named "senders" as expected, but the other queues are named:
lhpt.backend.senders.azurehost.endpointconfiguration_v1.2.3.4.retries
lhpt.backend.senders.azurehost.endpointconfiguration_v1.2.3.4.timeouts
lhpt.backend.senders.azurehost.endpointconfiguration_v1.2.3.4.timeoutsdispatcher
That's a concern: each time we deploy a new revision, new queues are created rather than the old queues being reused, potentially abandoning messages. The same pattern appears to be in effect when creating topic subscriptions.

We have a topic "OurPublisher.Events". When we referenced NServiceBus 4.0.4 and deployed, our topic subscription was named "senders.IMyEventMessage". Now the subscription name is a GUID. I'm assuming the GUID is the MD5 hash of a generated name that exceeded the 50-character limit on topic subscription names, but we're seeing a new subscription created with each revision we deploy, rather than the existing subscription being reused. This has led to a large number of topic subscriptions, many with unprocessed messages, because the newly deployed instance appears to read only from the newly created subscription and not from the previous one.
Is there a way to control the naming of the topic subscriptions and tertiary queues?
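For context, the only naming hook we're aware of is pinning the endpoint name in code, roughly along these lines (a sketch only; we're assuming DefineEndpointName is still the right hook in 4.1, and we don't know whether it also drives the .retries/.timeouts/.timeoutsdispatcher queue names or the subscription name):

```
using NServiceBus;

// Hosted-endpoint configuration class; the role and names here mirror our setup
// and are illustrative, not a confirmed fix.
public class EndpointConfiguration : IConfigureThisEndpoint, AsA_Worker, IWantCustomInitialization
{
    public void Init()
    {
        Configure.With()
                 // Pin the endpoint name instead of deriving it from the
                 // namespace of this class (plus the assembly version).
                 .DefineEndpointName("senders")
                 .DefaultBuilder();
    }
}
```

If this is the intended mechanism, does it apply to the tertiary queues and subscription names as well, or only to the input queue?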
Finally, we have one topic subscriber that appears to be deadlocking, but we're having a terrible time getting any details on why. This endpoint is subscribed to multiple topics. It operates normally at first, processing both messages sent directly to the endpoint's input queue and messages received via topic subscriptions. After a few minutes of operation, the endpoint hangs. All of our endpoints are child workers of a shared parent host. We can see that the other child workers are not affected by this hang and continue to run normally, but the troubled endpoint stays hung. The hang does not appear to correlate with message processing, as we've observed it both in the middle of processing a message and when no messages are queued. Once hung, the endpoint no longer retrieves messages from the input queue or the topic subscriptions.
What next steps should we take to investigate this hang?