That's a question better asked on the Particular forum. They have a feature coming up to help with this.
Without that feature, you'll have to make all your DB writes idempotent. If the handler fails, the messages still won't be sent/published, because you have a transaction with the queue. A transaction with a single resource is fine. It's when you're trying to coordinate the transaction across two or more distributed resources that... pain ensues. When your message gets retried, if the data for a step is already in the DB, you can proceed. Once you get past the original failure, everything can continue normally.
Say you have a command handler flow like this:
Command -> Write DB1 -> Write DB2 -> Publish Event
Let's say the write to DB2 fails:
Command -> Write DB1 -> Write DB2 -X Publish Event
The message is in a transaction with the queue, so it rolls back.
The next time the message is processed, the DB2 write can happen.
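To make the mechanics concrete, here's a minimal sketch (plain Python, not NServiceBus) of that retry behaviour: the failed handler throws, the queue transaction rolls back so the event is never published, and because the DB1 write is idempotent the redelivery sails past the step that already succeeded. All names here (`handle_command`, `write_db1`, etc.) are illustrative, not a real API.

```python
# In-memory stand-ins for the two databases and the outgoing queue.
db1, db2, published = {}, {}, []

def write_db1(msg_id, data):
    # Idempotent: writing the same key twice is harmless.
    db1[msg_id] = data

def write_db2(msg_id, data, fail=False):
    if fail:
        raise RuntimeError("DB2 unavailable")
    db2[msg_id] = data

def handle_command(msg_id, data, fail_db2=False):
    write_db1(msg_id, data)            # succeeds on both attempts
    write_db2(msg_id, data, fail_db2)  # fails on the first attempt
    published.append(msg_id)           # only reached when everything worked

# First delivery: DB2 fails, the handler throws, and the queue
# transaction rolls back -- the event is never published.
try:
    handle_command("msg-1", "payload", fail_db2=True)
except RuntimeError:
    pass  # message goes back on the queue for retry

# Redelivery: DB1 is written again (idempotently), DB2 succeeds,
# and the event is published exactly once.
handle_command("msg-1", "payload")
```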
The key here is that you have to handle the DB1 write, since it succeeded previously. If you're doing inserts with a known key, the retry will fail. You'd have to handle the key conflict and proceed. If it's an update, you may be OK just updating again, taking into account newer messages that may have written since, but you'd need to do that anyway. If you were, say, storing to RavenDB a document that this message instance is uniquely responsible for, you can just write it again and not worry.
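One common way to handle that key conflict on retry, sketched here with SQLite (the same idea exists elsewhere as `ON CONFLICT DO NOTHING` or `MERGE`): let the duplicate insert become a no-op instead of an error, so the handler can carry on to the step that failed last time. The table and IDs are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")

def insert_order(order_id, total):
    # On a retry the row already exists; OR IGNORE turns the
    # primary-key violation into a no-op so processing can proceed.
    conn.execute(
        "INSERT OR IGNORE INTO orders (id, total) VALUES (?, ?)",
        (order_id, total),
    )
    conn.commit()

insert_order("order-42", 99.95)  # original (partially failed) attempt
insert_order("order-42", 99.95)  # retry: conflict ignored, no error

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```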
DTC is convenient, but it ain't fun. :-D