Domain Unique Constraints With Event Sourcing

Jonathan Matheus

Aug 12, 2010, 12:32:55 AM
to ddd...@googlegroups.com
I'm having trouble validating non-id unique constraints in my domain without loading every aggregate, using the proposed event store schema linked below. I can use a view cache to validate the constraint client-side, but I don't know how to perform the validation in my domain.

Proposed Event Store:
http://ddddreviewdiscussion.googlegroups.com/web/Building+an+Event+Storage.docx

Here's the example I'm using:
A system that handles user registration for 2 million+ active users. These users should be able to log in with their email address and password. They should also be able to change their associated email address.

I have the following design:

Entities:
User(Guid Id, string email, string hashedPassword)

Events:
UserRegistered(Guid Id, string email, string hashedPassword)
UserEmailAddressUpdated(Guid Id, string email)

Commands:
RegisterUser(Guid Id, string email, string hashedPassword)
UpdateEmailAddressForUser(Guid Id, string email)

ViewCaches:
RegisteredEmailAddresses(emailAddress) - used for client-side validation of the email prior to sending a RegisterUser command

When processing a RegisterUser command, I need to validate that no other user has registered with that email. How can I do that without loading every user in the system? I could use a view cache as on the client side, but then I would have business logic outside of my domain. Any suggestions?

Thanks,
Jonathan

Rinat Abdullin

Aug 12, 2010, 2:00:21 AM
to ddd...@googlegroups.com
Why don't you use an indexed table in the DB just for emails (moving it into memory on some server if it gets too large)? 

Rinat

Scott Reynolds

Aug 12, 2010, 2:03:39 AM
to ddd...@googlegroups.com
This may be a really stupid question, but do you not have a read model data store containing this info? And if so, can't you just figure out a way to read from that? I mean, duplication here doesn't seem right. 

Jonathan Matheus

Aug 12, 2010, 2:08:50 AM
to ddd...@googlegroups.com
I could deploy an instance of the RegisteredEmailAddresses view cache for use in my RegisterUserHandler, but then I've pulled business rules out of my domain and into my command handlers. That doesn't feel right. The view cache is also eventually consistent, which means that I could potentially register a duplicate user in my domain. I'd like to be as consistent as possible in my domain (write side).

Jonathan Matheus

Aug 12, 2010, 2:11:23 AM
to ddd...@googlegroups.com
What DB? My event store? Based on the proposed schema, it doesn't have a place to index on aggregate data, only on the id (guid). On the client side, I think that's what I'm doing in my view cache: it's a data store that holds all emails so they can be validated on the client before sending a RegisterUser command. I need a solution to validate on the domain side (using the event store) that is as consistent as possible.

Rinat Abdullin

Aug 12, 2010, 2:25:46 AM
to ddd...@googlegroups.com
No, you can't place an entire index on aggregate data. You need a separate entity to do the secondary indexing. Pushing the logic further, it can be exposed as a service accessible to the domain.

There will be a slight chance of inconsistency, which could be handled by sending an email: "are you trying to register twice with this email using different browser sessions at the same instant in time?"

Rinat

Jonathan Matheus

Aug 12, 2010, 2:37:15 AM
to ddd...@googlegroups.com
Sorry, I'm not really following you. What operations would this service have? What does this secondary entity look like?

Rinat Abdullin

Aug 12, 2010, 2:55:53 AM
to ddd...@googlegroups.com
Sorry for being confusing here. By "secondary entity" I just meant another entity outside of the current aggregate.

The service might just have these methods, which would essentially be blocking and synchronous: 

bool TryEnlistEmail(string email);
void RemoveEmail(string email);

Underlying storage - whatever you need.
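
For example, something like this (just a sketch; the class and table names are made up, and a real implementation would check the specific SQL error number rather than catching every SqlException):

using System.Data.SqlClient;

public class EmailReservationService
{
    private readonly string _connectionString;

    public EmailReservationService(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Returns false if the email has already been enlisted by someone else.
    public bool TryEnlistEmail(string email)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO ReservedEmails (Email) VALUES (@email)", connection))
        {
            command.Parameters.AddWithValue("@email", email.Trim().ToLowerInvariant());
            connection.Open();
            try
            {
                command.ExecuteNonQuery();
                return true;
            }
            catch (SqlException) // unique index on Email violated => already taken
            {
                return false;
            }
        }
    }

    public void RemoveEmail(string email)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "DELETE FROM ReservedEmails WHERE Email = @email", connection))
        {
            command.Parameters.AddWithValue("@email", email.Trim().ToLowerInvariant());
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}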

Rinat

Jonathan Matheus

Aug 12, 2010, 3:08:33 AM
to ddd...@googlegroups.com
So in your example, would this secondary entity use an event store as its underlying storage? If so, then I think I'm back at square one. I can't enlist an email without loading all my entities to ensure that the email hasn't been enlisted. Using the proposed event store, the unique identifier is not the email address; it's a guid (like all other entities). Sorry if I'm missing something.

Scott Reynolds

Aug 12, 2010, 3:09:22 AM
to ddd...@googlegroups.com
I'd create an EmailValidationService with a method EmailAddressExists. This can read from anywhere - e.g. the read model.

Then, in the command handler or the domain - e.g. in the domain:

public class Customer
{
    public static Customer CreateNewCustomer(EmailValidationService emailValidation, string email, string name)
    {
        if (emailValidation.EmailAddressExists(email))
            throw new EmailAlreadyUsedException();

        // ... create and return the new Customer
    }
}

or something like that.

There's likely a better way, but this works and keeps the rules inside your domain and the implementation details outside the core. 

Rinat Abdullin

Aug 12, 2010, 3:15:47 AM
to ddd...@googlegroups.com
It might be better to use something other than an event store for the email service (unless email validation is at the core of your business and requires a proper domain model).

For a start I would use a relational database.

Rinat

Jonathan Matheus

Aug 12, 2010, 3:29:39 AM
to ddd...@googlegroups.com
Interesting suggestion. That brings up a more fundamental question. Where should event sourcing be used? Is it only in services backed by a domain? I think one of the main benefits of event sourcing is knowing the context of state changes ("how did I get here"). In my mind, I'd want to use an event store anywhere I'm processing commands / events. Even though email validation probably doesn't warrant a domain model, I'd still want to know the context of its changes in state. Therefore, it would need to use event sourcing.

Jason Stevens

Aug 12, 2010, 3:34:48 AM
to ddd...@googlegroups.com
Good question, how about this:

Create a domain object, say, UniqueEmailConstraint, which contains a HashSet of emails and can check the uniqueness constraint efficiently.  This can be lazy-loaded by the User AR, of which it is a part.

Pulling numbers out of my AR*, 2 million emails is about 50 MB, compressed roughly 5 MB, which is manageable in an event store I'd say, if cached.  HashSets are serializable, I believe.

On the client, you can check the read-side efficiently for uniqueness, which will handle 99% of cases, but that last 1% will certainly be caught server-side with this check.

Dunno, what do you reckon?  I'd say that if the memory footprint was too high, then generalise this domain object (in case of other uniqueness constraints) and back it with a relational database.  So logically it's still a domain object, but physically it uses a dirty old db.

Rinat Abdullin

Aug 12, 2010, 3:40:47 AM
to ddd...@googlegroups.com
I'd use a domain model and all the DDD things in the areas that are the focus of the business and are important for competitive advantage - where it pays off to invest the effort to understand and to model.

Customer registration process - in focus - use a domain model and event sourcing.
Email uniqueness - not in focus - forget the word "domain" and use a service backed by whatever tech is available, as long as it does the job.

Best regards,

Jonathan Matheus

Aug 12, 2010, 3:49:05 AM
to ddd...@googlegroups.com
I definitely agree that you should only invest in a domain model in your core competency. However, for risk management and agility, wouldn't you want to use event sourcing in ANY service (write model) that you're implementing - domain or no domain? This would enable you to retrace your changes in state and make it easier to implement future requirements (like a new report for the email validation service).

Jonathan Matheus

Aug 12, 2010, 4:05:01 AM
to ddd...@googlegroups.com
Is UniqueEmailConstraint a sub-entity of User? There are 2 million+ instances of the User AR - would that be 5 MB * 2 million?

We could make this hash an AR like RegisteredUsers. It could be a single instance with a long lifetime (backed by snapshots) that would ensure consistency and enable validation of the unique constraint. Thoughts?

Rinat Abdullin

Aug 12, 2010, 4:16:59 AM
to ddd...@googlegroups.com
I wouldn't. Besides, events for the user will contain all email changes anyway.

Jason Stevens

Aug 12, 2010, 4:30:45 AM
to ddd...@googlegroups.com
Yeah, I was thinking a sub-entity of User since it seems to be only a User concern, but just one instance of it, otherwise it could get out of hand.  So I think this points to using a factory for the unique constraint entity which loads and maintains just one instance.  The 2 million+ instances of User would share the same object, flyweight style.  Hmmm... is this okay or is it bad DDD?

Definitely snapshotted and cached so that it's quick to instantiate.  It would need to be made thread-safe, of course.

Stefan Holmberg

Aug 12, 2010, 4:51:40 AM
to ddd...@googlegroups.com
Just thinking out loud around the original problem - hope that is allowed, and don't take it as what I truly think, but rather as something we could discuss:

One different way of seeing it is to simply accept the fact: someone tries to do a registration with the same email twice. It probably won't happen often at all. So what happens if we let it go through?

When you (eventually) update your read model (view cache, as you call it) you will get a unique key error for the second one. There you can decide how to handle it - for example, send a command back to the domain (HandleDoubleRegistration, which could send an email stating "double reg error, your first registration with ID bla bla is the valid one"), or whatever.

There will never be any more commands sent to the second AR since it never makes it to the read model (and therefore never to the client).

This, however, would of course only apply in scenarios where registration is a really cheap process, with no money transactions or whatever involved.

Thoughts?

--
Stefan Holmberg

Systementor AB
Blåklockans väg 8
132 45  Saltsjö-Boo
Sweden
Cellphone : +46 709 221 694
Web: http://www.systementor.se

Jonathan Matheus

Aug 12, 2010, 5:09:18 AM
to ddd...@googlegroups.com
I struggled with that as well. In this example, you're probably right. However, let me play devil's advocate to get back to the non-id unique constraint validation in a domain.

Would there only be 1 physical read model? Specifically in this case, I think that we might want to physically deploy the read model locally on 10 web nodes. Would all of them issue this command?

Also, by allowing duplicate user registrations in our domain, we raised a UserRegistered event. What if we had an event handler that sent a "registration successful" email upon receiving a UserRegistered event? This could lead to bad user experiences: Joe Smith gets a "you've registered" email because John Smith mistyped his email address. The likelihood of a uniqueness violation in this example is low, but that's not the case in all domains.

Now every other view model / service that depends on UserRegistered would also have to handle another event that describes the rollback of an account due to duplication.

Jason Stevens

Aug 12, 2010, 5:19:48 AM
to ddd...@googlegroups.com
Good angle but not sure about that one.  I've seen some CQRS/ES diagrams that look like they do this but personally I'd try to avoid it.  Business concerns have kind of leaked out of the domain.

Moreover, the read model is assumed to be stale data and not the source of truth, and so having constraints there to check business concerns is a bit flimsy.  Not that it's flimsy in this particular case, but in general.

It would be good to work out a general approach that can handle critical situations too, since it's bound to come up.

Richard Dingwall

Aug 12, 2010, 7:33:17 AM
to ddd...@googlegroups.com

If I recall correctly, Udi Dahan often suggests the following for
cases like these:

1. Do a "best attempt" checking username/email uniqueness against the read model
2. Perform insert
3. Catch any database/unique constraint violation exception and bubble
back to the user

I like this approach. Unique username/email race conditions are rare;
don't bend over backwards to handle them. Sometimes an error is ok!

Obviously this is geared towards having users in an RDBMS; with event
sourcing you would need to set up your own index somewhere alongside
(maybe make that part of your read model transaction/fully consistent
with the domain).
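
In code, that check-then-catch flow might look roughly like this (a sketch only - the IUserReadModel/IUserWriteStore dependencies and the exception type are placeholder names I've invented, not an established API):

using System.Data.SqlClient;

public class RegisterUserHandler
{
    private readonly IUserReadModel _readModel;   // eventually consistent view
    private readonly IUserWriteStore _userStore;  // table with a unique index on Email

    public RegisterUserHandler(IUserReadModel readModel, IUserWriteStore userStore)
    {
        _readModel = readModel;
        _userStore = userStore;
    }

    public void Handle(RegisterUser command)
    {
        // 1. Best-attempt check against the (possibly stale) read model.
        if (_readModel.EmailExists(command.Email))
            throw new EmailAlreadyRegisteredException(command.Email);

        try
        {
            // 2. Perform the insert; the unique index is the real guard.
            _userStore.Insert(command.Id, command.Email, command.HashedPassword);
        }
        catch (SqlException ex)
        {
            // 3. 2627/2601 are SQL Server's unique constraint/index violation numbers;
            //    bubble a meaningful error back to the user.
            if (ex.Number == 2627 || ex.Number == 2601)
                throw new EmailAlreadyRegisteredException(command.Email, ex);
            throw;
        }
    }
}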

Original: http://tech.groups.yahoo.com/group/domaindrivendesign/message/16403

--
Richard Dingwall
http://richarddingwall.name

Greg Young

Aug 12, 2010, 10:27:44 AM
to ddd...@googlegroups.com
I am just replying to the last one on the list after reading through.

To me the most important concepts have been completely missed in this
thread and they are a big part of why eventual consistency is so cool
(it makes you think about things).

*What is the business impact of having a failure*

This is the key question we need to ask and it will drive our solution
in how to handle this issue as we have many choices of varying degrees
of difficulty.

Most of the time the business impact of such a failure is low and the probability of it happening is low. If we query the eventually consistent store at the time of submission (either from the client or from the server, as this is a big part of how one-way commands work) then our probability of receiving a duplication is directly calculable based on the amount of eventual consistency. We can drive this probability down by lowering our SLA; very often this is enough.

We can detect asynchronously if we broke our invariant. Imagine an event handler that inserts into a table with a constraint. If it gets an exception, we broke the constraint (note this is not really the "read model", but the same db can be used if convenient; it is important to note the distinction because if we scale to 5 read models we don't have 5 of these ...).

What do we do if we break the constraint? We need to come back to that business impact statement above. In most circumstances just raising an alert to an admin etc. is enough; these things have a very low probability of happening and are often not worth the time/cost of implementing automatic recovery. Just imagine: 1 username creation out of 1,000,000 fails this way. How long would it take to automate the process of handling that situation? Consider discussions with domain experts etc. Five minutes of admin time once a year is much better ROI in most of these situations than a week of developer time to automate.

Continuing along: say it has now been decided that this has a large enough impact that it should be automated. The process that finds the duplicates could either raise an event DuplicateUsernameDetected or directly call a command ResolveDuplicateUsername (which involves more discussion). It is important to note that in either of these cases we are discussing the "what", not the "how" - it would never issue a command like "DeleteUser"; how to handle these situations is core domain logic and should be modeled within the domain. In the username example, perhaps ResolveDuplicateUsername marks the user as not being able to log in (and as a duplicate) and sends an email to the user saying "Hey, we screwed up, but it's your lucky day! You get to create a new username ..."

But even after all of this, if from a business perspective the impact is too high, we can still make things consistent. We could drop a service into the domain that deals with a consistent set. This would of course be the last resort, as it's the most complicated of these solutions and brings with it many limiting factors in terms of our architecture.

Udi had a great example of this in his explanation of 1-way commands. It was an ATM that would spit out money having only read your balance from an eventually consistent read model. The reason this can work is that from a business perspective the risk is low (and it is built into the business model itself). You have a bank account with me; I know your SS# and all of your information. For people who overdraw their accounts I will recover at least 90% of the money that has been overdrawn. On top of that I charge a fee for each overdraw that occurs. For these reasons the business impact of such a problem is low.

To sum up I just want to reiterate that this is a *good* thing.
Eventual consistency is forcing us to learn more about our domain. It
is forcing us to ask questions that are otherwise often not asked.

Consistency is over-rated.

HTH,

Greg

--
Grammar and syntax errors have been included to make sure I have your attention

Adam Dymitruk

Aug 12, 2010, 3:02:55 PM
to ddd...@googlegroups.com
To contribute to the unnecessary battle:

We can store another "entity" based on the SHA-1 of the email itself.
The existence of this tells you whether that email was already used.
This requires very little coding. This is how Git works when it stores
anything. Simple and fast and guaranteed to be unique for a very large
set of objects stored. This scheme can be repeated for many other
non-id uniqueness requirements in the domain.
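
Something along these lines (a sketch; deriving a Guid from the hash and the TryCreateStream call are my own assumptions about the store, not part of the proposed schema):

using System;
using System.Security.Cryptography;
using System.Text;

public static class EmailConstraint
{
    // Deterministic identity derived from the (normalized) email itself.
    public static Guid IdFor(string email)
    {
        using (var sha1 = SHA1.Create())
        {
            byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(email.Trim().ToLowerInvariant()));
            byte[] guidBytes = new byte[16];
            Array.Copy(hash, guidBytes, 16); // SHA-1 gives 20 bytes; take the first 16
            return new Guid(guidBytes);
        }
    }
}

// Usage: try to create a marker stream/row under that id; if one already exists,
// the email has been used before.
//   var id = EmailConstraint.IdFor(command.Email);
//   if (!eventStore.TryCreateStream(id, new EmailEnlisted(id, command.Email)))
//       throw new EmailAlreadyRegisteredException(command.Email);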

HTH,

Adam

--
Adam

http://adventuresinagile.blogspot.com/
http://twitter.com/adymitruk/
http://www.agilevancouver.net/
http://altnetvancouver.ning.com/

Greg Young

Aug 12, 2010, 3:08:58 PM
to ddd...@googlegroups.com
Yes but this is just a heuristic. The key to the problem is looking at it in a different way.

Adam Dymitruk

Aug 12, 2010, 3:19:07 PM
to ddd...@googlegroups.com

Hence my first sentence was the qualifier :)

Jason Stevens

Aug 12, 2010, 5:50:29 PM
to ddd...@googlegroups.com
This architecture seems to make some previously trivial things more difficult.  The response, "yes, but do you really need that from a business perspective?" feels like I'm being guided to bury my head in the sand with respect to fundamental architectural problems.  I'm thinking, "of course this is only email registration, but what if..."

This is just how it feels.

But you're right Greg, we've been asking the wrong questions.  This architecture isn't a silver bullet - some things will be easier and other things harder - but it looks very promising and has a lot to offer including getting us to think outside our (read: me) developer bubbles.  The question, "do you need that from a business perspective?" isn't burying our heads in the sand, it's getting our heads out of the sand and jolly well going and talking to the business.  It pulls the wool from beneath the problem: "what problem?"

Cheers

Jason Stevens

Aug 12, 2010, 5:52:33 PM
to ddd...@googlegroups.com
Nice trick, thanks for that.

Adam Dymitruk

Aug 12, 2010, 5:56:37 PM
to ddd...@googlegroups.com
For further reading, see how git stores the snapshots of files,
commits, trees and signed tags. It's part of the content addressable
system idea. It can be used to solve lots of issues of this nature:
http://progit.org/book/ch9-2.html

HTH,

Adam

Jason Stevens

Aug 12, 2010, 6:06:31 PM
to ddd...@googlegroups.com
Great, thanks for that.  That's a good point too, Git is very event-source like.  I'll have a read.

Rinat Abdullin

Aug 12, 2010, 6:16:23 PM
to ddd...@googlegroups.com
BTW, one of our developers wrote an article on content-based storage for the cloud:

This might work nicely for aggregate snapshots.

Best regards,
Rinat

Stefan Holmberg

Aug 13, 2010, 1:55:28 AM
to ddd...@googlegroups.com
@adam

>>We can store another "entity" based on the SHA-1

just checking if I understand the effects and "costs" of that:
Wouldn't that introduce the need for a UoW in our command handler?

Adam Dymitruk

Aug 13, 2010, 1:59:34 AM
to ddd...@googlegroups.com

At the basic level. But for the sake of argument, I'll say "no"

Stefan Holmberg

Aug 13, 2010, 2:26:17 AM
to ddd...@googlegroups.com
> At the basic level. But for the sake of argument, I'll say "no"

lol...Care to explain the "no" a little more?

Adam Dymitruk

Aug 13, 2010, 2:51:19 AM
to ddd...@googlegroups.com

Because it is so closely tied to a particular AR. NOT really a UoW.

Stefan Holmberg

Aug 13, 2010, 3:14:28 AM
to ddd...@googlegroups.com
Sorry, I'm slow, still not completely getting it... Without a transaction?

query sha1
if not exist
      insert sha1
      save ar

or is the secret to never query, but to always insert the sha1 - and if we get a Unique Key Exception then we know it's already existing and should not save the AR?


Adam Dymitruk

Aug 13, 2010, 3:26:37 AM
to ddd...@googlegroups.com

The idea of the transaction is purely in your hands. So go ahead and get a mutex if it's living on one server. If you need more we'll talk.


Stefan Holmberg

Aug 13, 2010, 3:31:43 AM
to ddd...@googlegroups.com
>If you need more we'll talk
no, I finally get it. Thanks for your patience :)

--
Stefan Holmberg

Adam Dymitruk

Aug 13, 2010, 3:33:22 AM
to ddd...@googlegroups.com

Now I'm worried ;)

Greg Young

Aug 13, 2010, 9:26:15 AM
to ddd...@googlegroups.com
There are lots of ways of solving the problem; each has its own properties. We tend to try to avoid the consistency as it forces a lock, but it can be done if absolutely needed by the business.

seagile

Aug 15, 2010, 10:02:48 AM
to DDD/CQRS
"Looking at it in a different way." - I believe this is were most
people battle with the conservative thinking of most organizations (to
them it's just a database constraint). We're so used to automating
this stuff that "going back" seems like "giving up" on something that
used to be so easy (from a technical perspective). Developers will
have to start discussing the business with domain experts (if there is
one in the form of a BA or PO) and come up with convincing arguments
(how are they going to back this up?). It requires "good will" on both
sides, when there shouldn't even be "sides".

To me consistency is indeed overrated, if it does not cause human
burden that was not there in "the old way" of doing things (but I
admit this is highly subjective). If allowing inconsistency causes too
much manual labor which is not natural to the process you are trying
to automate, more discussion is needed to get to the bottom of things.
And that's when I like to talk to the end-users, bcoz they know when
it hurts ;-) - not my PO/BA.

developmentalmadness

Aug 16, 2010, 1:17:31 PM
to DDD/CQRS
I'm still pretty new to all this, but why hasn't anyone brought up the
probability of an actual consistency error in this case?

What are the odds that two people are going to type in the same value for an email address during the window of time where the read store is out of sync? How many requests are there to either create a new account or update an email address within that time frame? This is where the comment "consistency is overrated" comes in. I really don't think this is an issue; 2 million users is certainly a number to be respected, but in the global scheme of things the number of possible collisions in the window here is still relatively small. I would think that reducing the window of time the two are out of sync for this case would be less expensive than ensuring that the case could never possibly happen. Then, create a process to alert you when it does and provide a means to manually correct it, or alert the user and allow them to correct it. But IMHO trying to make sure the case never happens is going to cost you more than it is worth.

Greg Young

Aug 16, 2010, 1:20:37 PM
to ddd...@googlegroups.com
How does this differ from what I suggested above?

Neil Robbins

Aug 16, 2010, 1:21:05 PM
to ddd...@googlegroups.com
Check Greg's earlier comment in this thread. I think he has discussed this.

developmentalmadness

Aug 17, 2010, 10:49:04 AM
to DDD/CQRS
[Remove foot from mouth]....yeah, well maybe I'll read a little slower
next time :p.

James Hicks

Aug 19, 2010, 7:39:30 PM
to ddd...@googlegroups.com
As an alternative to having a table in a database with a unique constraint on the username field, you could have a UsernameRegistry aggregate root.  The registry would contain a collection of all usernames in the system.  During the process of handling a RegisterUser command, the handler would load the UsernameRegistry from the event store and send the command to it.  If no exception is thrown, the handler would create a new User.  There would be only one UsernameRegistry in your domain. 

Snapshots could be generated on the UsernameRegistry to keep the rehydration time low.

Thoughts?

Sample code

public class UsernameRegistry : AggregateRoot
{
    private readonly List<string> usernames = new List<string>();

    // Applied when the event is raised or replayed from the event store.
    private void Handle(UsernameRegistered @event)
    {
        usernames.Add(@event.Username);
    }

    public void RegisterUsername(RegisterUserCommand command)
    {
        if (usernames.Contains(command.Username))
        {
            throw new UsernameAlreadyInUseException(command);
        }

        Raise(new UsernameRegistered(command.Username));
    }
}

public class RegisterUserHandler : ICommandHandler<RegisterUserCommand>
{
    private readonly IRepository _repository; // injected

    public void Handle(RegisterUserCommand command)
    {
        var registry = _repository.GetById<UsernameRegistry>(Application.UserRegistryId);
        registry.RegisterUsername(command);
        var user = User.RegisterUser(command);
        _repository.Add(user);
    }
}

Jason Stevens

Aug 19, 2010, 8:55:59 PM
to ddd...@googlegroups.com
Use a HashSet rather than List.  It's designed for this kind of thing and will give significantly better performance.  Otherwise looks good to my CQRS-noob eyes.

I do prefer Adam's approach of hashing the username to the aggregate id though, much simpler.  I've adopted this approach using SHA-1 and trimming off the excess bytes to form a guid.  Although... come to think of it... I'm not sure this is such a great plan, because you really want the guids to be mostly sequential for read performance.  So maybe what you're doing is the way to go.  Greg suggested throwing an error when you get a collision on the read side - worth exploring.

Udi Dahan

Aug 20, 2010, 2:30:00 AM
to ddd...@googlegroups.com

Use a saga to create users to handle the pending data until the user clicks the confirmation link in the email.

 

Cheers,

-- Udi

Adam Dymitruk

Aug 20, 2010, 2:34:33 AM
to ddd...@googlegroups.com

You're almost there. The last piece is to form a guid comb. Flip it for random distribution storage.

Greg Young

Aug 20, 2010, 3:20:25 AM
to ddd...@googlegroups.com
Not on the "read side" conceptually ... if you have two read models you don't want to throw 2 errors.

Greg Young

Aug 20, 2010, 3:24:19 AM
to ddd...@googlegroups.com
If there is an email (that was not a constraint upon the original problem)