Alternative to pessimistic locking in high contention environment


Alexandre Potvin Latreille

unread,
Apr 17, 2015, 3:22:40 PM4/17/15
to ddd...@googlegroups.com
I read somewhere that CQRS is a good candidate when a large number of users interact with a small number of resources.

Let's take the concept of a Channel in a chat server. There would be a lot of contention around a Channel aggregate, since many users could join, part, be banned, or be made moderator at the same time, and it would be hard to reduce contention by breaking the Channel aggregate apart, since many invariants have to be enforced, such as: a banned user shall not be able to speak on that channel.

We could use application-level locks to make a Channel aggregate thread-safe, but I was wondering if some CQRS principles could be helpful for simplifying concurrency.

I was thinking that commands could be received concurrently and be concurrently acknowledged while processed on a single thread, therefore eliminating concurrency, but a single command taking too long to execute would be problematic.

From what I understood, the Actor Model could be the solution here, where every Channel would be an actor with its own mailbox, but is there another alternative that doesn't involve actors?
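One non-actor sketch of the "mailbox per channel" idea, assuming a JVM and plain java.util.concurrent (all names here are illustrative, not from the thread): each channel lazily gets its own single-thread executor, so commands for the same channel run strictly one at a time and in submission order, while different channels proceed in parallel.

```java
import java.util.List;
import java.util.concurrent.*;

// Per-channel "mailbox" without an actor framework: one single-thread
// executor per channel serializes that channel's commands; distinct
// channels still run concurrently with each other.
class ChannelDispatcher {
    private final ConcurrentMap<String, ExecutorService> mailboxes =
            new ConcurrentHashMap<>();

    // Enqueue a command for a channel; same-channel commands never overlap.
    public Future<?> dispatch(String channelId, Runnable command) {
        ExecutorService mailbox = mailboxes.computeIfAbsent(
                channelId, id -> Executors.newSingleThreadExecutor());
        return mailbox.submit(command);
    }

    public void shutdown() {
        mailboxes.values().forEach(ExecutorService::shutdown);
    }
}

class Demo {
    public static void main(String[] args) throws Exception {
        ChannelDispatcher dispatcher = new ChannelDispatcher();
        List<Integer> processed = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 5; i++) {
            final int n = i;
            dispatcher.dispatch("general", () -> processed.add(n));
        }
        // Waiting on a later command proves the earlier ones completed in order.
        dispatcher.dispatch("general", () -> {}).get();
        System.out.println(processed); // [0, 1, 2, 3, 4]
        dispatcher.shutdown();
    }
}
```

Note the downside anticipated in the question still applies: one slow command blocks its channel's queue, although other channels are unaffected.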


David Ackerman

unread,
Apr 17, 2015, 11:15:40 PM4/17/15
to ddd...@googlegroups.com
I think you still want to organize your write model for less contention.  I don't see a ton of value in "channel" being an aggregate with the states of every user in there.  You could just have a "user" aggregate with their information for all channels stored in it.  That would have way less contention, and then the read model could be built up using user and channel to get the list of users currently online.  I think in the case you describe, if someone is banned in the same millisecond their message is being posted, it wouldn't be the end of the world if the message got out and then they were banned.  What I am saying is, I think not being 100% atomic with these operations is fine.


--
You received this message because you are subscribed to the Google Groups "DDD/CQRS" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dddcqrs+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Manuel Rascioni

unread,
Apr 18, 2015, 1:52:59 PM4/18/15
to ddd...@googlegroups.com
Some CQRS and DDD principles can help you.
If you want strong consistency, you need to manage the operations (commands) inside the boundaries of an aggregate. The aggregate is what gives you the consistency.
How you achieve this is a different problem. Akka can help by having an aggregate handle a single message at a time.

When you say: 

"I was thinking that commands could be received concurrently and be concurrently acknowledged while processed on a single thread, therefore eliminating concurrency, but a single command taking too long to execute would be problematic."

That's what Akka brings you, and the "commands taking too long" case should never happen. If it does, it's because you need to do a lot of work, and the results of those operations will influence the execution of the following ones (ban user x, publish message of user x).
So if you have to wait too long, you have to evaluate whether weak consistency is acceptable.
As David says, maybe it's not a problem if a message is published that was sent at the same moment the user was banned.
In that case you prefer "speed" over "consistency". In some cases you need both; then you can choose weak consistency but also implement a component that detects the inconsistency and corrects it (compensating actions).

Akka is just a model for having many pieces of code that each execute one message at a time; it is then your choice to partition these operations in a way that is acceptable for your needs.

Tom Janssens

unread,
Apr 18, 2015, 5:34:37 PM4/18/15
to ddd...@googlegroups.com
If you are concerned about contention, a chat channel is not a good candidate IMO; think about how many messages get posted in a single room/second: I'd assume you could even run a chat server on a mobile phone. Also, ordering in chat servers is usually not life-threatening afaik, so you could just repost on failure.

Try to opt for the simplest thing possible.


Alexandre Potvin Latreille

unread,
Apr 20, 2015, 10:05:31 AM4/20/15
to ddd...@googlegroups.com
Yes, it looks like making User the AR would reduce contention a lot. Therefore, the User would be responsible for maintaining the list of channels he's on as well as the ones he's been banned from.

Let's say we have a rule stating that a Channel's topic can only be changed by a user who is on that channel and is also an operator. What I find weird is that User would be responsible for enforcing the invariants, while the mutation actually occurs on the Channel aggregate. Does that make any sense?

E.g. (pseudo-code)

class UserChannel {
    int channelId;
    UserChannelRole role;
}

class User {
    Set<UserChannel> channels;
    Set<Integer> channelsBannedFrom;
    ...

    // Here, I'm often wondering if the AR instance should be passed even when we are
    // not holding onto the reference, but only its id. The ubiquitous language is not
    // really well expressed by the following signature, and join(Channel channel) seems
    // to make more sense, but at the same time the reference is unnecessary.
    public void join(int channelId) {
        // check if not banned
        // add to channels
    }

    public void changeTopicOf(int channelId, String newTopic) {
        // assert that user is on the channel
        // assert that user is an operator
        // emit a ChannelTopicChanged event
    }
}



A subscriber would then listen to ChannelTopicChanged and make Channel consistent with the change. The design seems weird to me, but more usable at the same time...
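The subscriber side could be sketched with a trivial in-process event bus standing in for real messaging infrastructure (a minimal sketch; the bus and handler names are illustrative assumptions, not from the thread):

```java
import java.util.*;
import java.util.function.Consumer;

// User.changeTopicOf emits a ChannelTopicChanged event; a subscriber later
// brings the Channel side in line (eventual consistency). A trivial
// synchronous in-process bus stands in for real messaging infrastructure.
class ChannelTopicChanged {
    final int channelId;
    final String newTopic;

    ChannelTopicChanged(int channelId, String newTopic) {
        this.channelId = channelId;
        this.newTopic = newTopic;
    }
}

class EventBus {
    private final List<Consumer<ChannelTopicChanged>> subscribers = new ArrayList<>();

    void subscribe(Consumer<ChannelTopicChanged> handler) {
        subscribers.add(handler);
    }

    void publish(ChannelTopicChanged event) {
        subscribers.forEach(s -> s.accept(event));
    }
}
```

A subscriber registered with subscribe(...) would load the Channel and apply the new topic; until it runs, the Channel is briefly stale, which is exactly the eventual-consistency trade-off being discussed.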





David Ackerman

unread,
Apr 21, 2015, 12:05:04 AM4/21/15
to ddd...@googlegroups.com
What about Channel.changeTopic(String newTopic, User user)?  The command could be on the channel, and the application provides the user to the channel so it can check the proper invariants.  Then the channel is updating itself (and emitting its own ChannelTopicChanged event), and just collaborating with the User.  This does mean, of course, that the User may concurrently be banned or demoted from operator while this is going on, but I don't see a huge issue with that.

You might even be able to do something like have the channel listen for UserBanned events and if someone was banned within 5 seconds of updating the channel topic, it is reverted to the previous value (by keeping around the history and checking timestamps etc).  That way you could mute someone retroactively. 
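The Channel.changeTopic(String newTopic, User user) suggestion might look like this sketch, where the channel checks the invariants against the collaborating User and records its own event (the supporting classes and field names are illustrative assumptions):

```java
import java.util.*;

// The changeTopic command lives on Channel, which collaborates with the
// User handed in by the application layer to check invariants before
// mutating its own state and recording its own ChannelTopicChanged event.
class User {
    final int id;
    final Set<Integer> channels = new HashSet<>();      // channels the user is on
    final Set<Integer> operatorOf = new HashSet<>();    // channels the user moderates

    User(int id) { this.id = id; }

    boolean isOn(int channelId) { return channels.contains(channelId); }
    boolean isOperatorOf(int channelId) { return operatorOf.contains(channelId); }
}

class ChannelTopicChanged {
    final int channelId;
    final String newTopic;

    ChannelTopicChanged(int channelId, String newTopic) {
        this.channelId = channelId;
        this.newTopic = newTopic;
    }
}

class Channel {
    final int id;
    String topic = "";
    final List<Object> pendingEvents = new ArrayList<>();

    Channel(int id) { this.id = id; }

    // The channel enforces its own invariants and updates itself.
    void changeTopic(String newTopic, User user) {
        if (!user.isOn(id))
            throw new IllegalStateException("user is not on this channel");
        if (!user.isOperatorOf(id))
            throw new IllegalStateException("user is not an operator");
        this.topic = newTopic;
        pendingEvents.add(new ChannelTopicChanged(id, newTopic));
    }
}
```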

Alexandre Potvin Latreille

unread,
Apr 21, 2015, 12:43:04 PM4/21/15
to ddd...@googlegroups.com
I could, but placing the operation on Channel sacrifices the strong consistency we managed to enforce for some operations, and we now have additional complexity in order to be able to issue compensating actions. Some actions also cannot be easily compensated. Imagine one user flooding a channel with many messages per second and another one automatically issuing a ban command.

It would be weird for clients to see the following, no? The only compensation that could be made is to tell clients, after the fact, that some messages should be ignored.

flooder> FLOOD
flooder> FLOOD
flooder has been banned
flooder> FLOOD
flooder> FLOOD
...

It looks like modeling a scalable chat server is much more of a challenge than I expected. At the same time, I guess I should perhaps start with a large Channel aggregate, as an actor or with pessimistic locking, and eventually compromise on consistency as scalability issues arise, if they ever do...



Greg Young

unread,
Apr 21, 2015, 12:58:48 PM4/21/15
to ddd...@googlegroups.com
To be fair, building a scalable chat server is very difficult, but not
for any of the reasons you suggest.

1) Why would you ever allow more than, say, 3-5 messages/second from a
client? At 200 clients/sec doing this, your chat room would become
useless, so it would likely be limited anyway.

Even at that point you could have a channel/room and have strong
consistency. This would do well up to a few hundred thousand chat
messages/second/room ... and seriously who is reading this chat room?
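The per-client message cap could be sketched as a small token bucket per user; the clock is passed in explicitly so the sketch is deterministic and testable (the class, names, and numbers are illustrative assumptions, not from the thread):

```java
// Token bucket: each user gets `capacity` messages of burst and refills at
// `perSecond` messages/second. A message is allowed only if a token is
// available; the clock is injected rather than read from System time.
class TokenBucket {
    private final double capacity;     // burst size, e.g. 5 messages
    private final double refillPerMs;  // e.g. 5.0 / 1000 for 5 msg/sec
    private double tokens;
    private long lastRefillMs;

    TokenBucket(double capacity, double perSecond, long nowMs) {
        this.capacity = capacity;
        this.refillPerMs = perSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefillMs = nowMs;
    }

    // Returns true if a message is allowed at time nowMs.
    synchronized boolean tryAcquire(long nowMs) {
        tokens = Math.min(capacity, tokens + (nowMs - lastRefillMs) * refillPerMs);
        lastRefillMs = nowMs;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

On each incoming message, look up (or create) the sender's bucket and drop or reject the message when tryAcquire returns false.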

If you want to handle high contention, the easiest way is ... don't.
Instead, put boundaries in your problem and try to keep everything
within a boundary single-threaded. A real-world example of this: an
instrument in a financial market.

Cheers,

Greg



--
Studying for the Turing test

Alexandre Potvin Latreille

unread,
Apr 21, 2015, 2:15:16 PM4/21/15
to ddd...@googlegroups.com
@Greg, yeah, I guess that my performance concerns in having a large cluster Channel aggregate were unfounded. How would I go about, in practice, processing every channel's commands in a dedicated thread? Should I set up a message queue per channel and, as commands are received, place them in the appropriate queue?

"To be fair building a scalable chat server is very difficult but not for any of the reasons you suggest"

I would be interested to know what aspects you consider to be difficult?

Greg Young

unread,
Apr 21, 2015, 2:27:36 PM4/21/15
to ddd...@googlegroups.com
On Tue, Apr 21, 2015 at 9:15 PM, Alexandre Potvin Latreille
<alexandre.pot...@gmail.com> wrote:
> @Greg, yeah, I guess that my performance concerns in having a large cluster
> Channel aggregate were unfounded. How would I go about, in practice,
> processing every channel's commands in a dedicated thread? Should I set up
> a message queue per channel and, as commands are received, place them in
> the appropriate queue?
>

You wouldn't have a thread per channel, but conceptually each channel
runs in its own thread (e.g. no concurrency, and everything happens on
the current thread). Processing models such as Rx or actor models give
you this easily.
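The "no thread per channel, but conceptually single-threaded" point could be sketched by hashing channel ids onto a fixed pool of single-thread workers, so same-channel commands always land on the same worker and serialize while the total thread count stays small (an illustrative sketch under those assumptions, not a prescribed implementation):

```java
import java.util.concurrent.*;

// Partition channels across a fixed number of single-thread workers.
// Commands for a given channel always map to the same worker, so they run
// one at a time and in order, without a dedicated thread per channel.
class PartitionedExecutor {
    private final ExecutorService[] workers;

    PartitionedExecutor(int partitions) {
        workers = new ExecutorService[partitions];
        for (int i = 0; i < partitions; i++)
            workers[i] = Executors.newSingleThreadExecutor();
    }

    Future<?> submit(int channelId, Runnable command) {
        int slot = Math.floorMod(channelId, workers.length);
        return workers[slot].submit(command);
    }

    void shutdown() {
        for (ExecutorService w : workers) w.shutdown();
    }
}
```

Actor frameworks and Rx schedulers give the same guarantee with better ergonomics (supervision, backpressure); this only shows the underlying idea.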


> "To be fair building a scalable chat server is very difficult but not for
> any of the reasons you suggest"
>
> I would be interested to know what aspects you consider to be difficult?

Google c1m (1 million connections) used to be c10k. Keeping lots of
mostly idle connections open is tough :)