Another way of seeing it is simply to accept the fact: someone
tries to register with the same email twice. It probably won't
happen often at all.
So what happens if we let it go through?
When you (eventually) update your read model (view cache, as you call
it), you will get a unique key error for the second one.
There you can decide how to handle it - for example, send a command
back to the domain (HandleDoubleRegistration, which could send an
email stating "double registration error, your first registration
with ID bla bla is the valid one"), or whatever.
There will never be any more commands sent to the second AR, since it
never makes it to the read model (and therefore never to the client).
This would of course only apply in scenarios where registration is a
really cheap process, with no money transactions or the like involved.
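A minimal sketch of the idea in Python (the event/command shapes and the HandleDoubleRegistration name are illustrative, using sqlite as a stand-in read-model store):

```python
import sqlite3

# Let both registrations through; the read-model projection's unique
# index detects the duplicate when the second event is applied.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registrations (email TEXT PRIMARY KEY, ar_id TEXT)")

def project_user_registered(event, send_command):
    """Apply a UserRegistered event to the read model."""
    try:
        conn.execute(
            "INSERT INTO registrations (email, ar_id) VALUES (?, ?)",
            (event["email"], event["ar_id"]),
        )
        conn.commit()
    except sqlite3.IntegrityError:
        # Second registration with the same email: route a compensating
        # command back to the domain instead of updating the read model.
        send_command({"type": "HandleDoubleRegistration",
                      "email": event["email"],
                      "duplicate_ar_id": event["ar_id"]})

commands = []
project_user_registered({"email": "a@example.com", "ar_id": "ar-1"}, commands.append)
project_user_registered({"email": "a@example.com", "ar_id": "ar-2"}, commands.append)
# commands now holds one HandleDoubleRegistration for ar-2
```

Since the second AR never reaches the read model, it is invisible to clients from then on.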
Thoughts?
--
Stefan Holmberg
Systementor AB
Blåklockans väg 8
132 45 Saltsjö-Boo
Sweden
Cellphone : +46 709 221 694
Web: http://www.systementor.se
If I recall correctly, Udi Dahan often suggests the following for
cases like these:
1. Do a "best attempt" checking username/email uniqueness against the read model
2. Perform insert
3. Catch any database/unique-constraint violation exception and bubble
it back to the user
I like this approach. Unique username/email race conditions are rare;
don't bend over backwards to handle them. Sometimes an error is ok!
Obviously this is geared towards keeping users in an RDBMS; with event
sourcing you would need to set up your own index somewhere alongside
it (maybe make that part of your read model transaction, fully
consistent with the domain).
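The three steps above can be sketched like this (names invented, sqlite standing in for the RDBMS):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT PRIMARY KEY)")

def register(username):
    # 1. Best-attempt check against the (possibly stale) read model.
    row = db.execute("SELECT 1 FROM users WHERE username = ?",
                     (username,)).fetchone()
    if row:
        return "username taken"
    # 2. Perform the insert.
    try:
        db.execute("INSERT INTO users (username) VALUES (?)", (username,))
        db.commit()
        return "ok"
    # 3. The rare race: someone inserted between the check and the insert.
    except sqlite3.IntegrityError:
        return "username taken"   # bubble the error back to the user

print(register("alice"))   # ok
print(register("alice"))   # username taken
```

Step 1 catches almost everything cheaply; the constraint in step 3 is only there for the rare race.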
Original: http://tech.groups.yahoo.com/group/domaindrivendesign/message/16403
--
Richard Dingwall
http://richarddingwall.name
To me, the most important concepts have been completely missed in this
thread, and they are a big part of why eventual consistency is so cool
(it makes you think about things).
*What is the business impact of having a failure?*
This is the key question we need to ask, and it will drive our solution
for handling this issue, as we have many choices of varying degrees
of difficulty.
Most of the time the business impact of such a failure is low, and the
probability of it happening is low. If we query the eventually
consistent store at the time of submission (either from the client or
from the server, as this is a big part of how one-way commands work),
then our probability of receiving a duplicate is directly calculable
based on the amount of eventual consistency. We can drive this
probability down by lowering our SLA; very often this is enough.
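A back-of-the-envelope illustration of that calculation (all numbers invented):

```python
# Invented numbers, just to show the probability is directly calculable.
registrations_per_day = 10_000
seconds_per_day = 86_400
rate = registrations_per_day / seconds_per_day   # ~0.12 registrations/sec

sla_lag_seconds = 2.0    # how stale the read model may be (our SLA)
p_same_email = 1e-4      # chance a concurrent registration uses the same email

# Expected duplicates slipping past the check, per registration:
p_duplicate = rate * sla_lag_seconds * p_same_email   # ~2.3e-05
```

Halving the SLA lag halves the probability, which is why tightening the SLA is often all the mitigation needed.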
We can also detect asynchronously whether we broke our invariant.
Imagine an event handler that inserts into a table with a constraint.
If it gets an exception, we broke the constraint. (Note this is not
really the "read model", though the same db can be used if convenient;
the distinction matters because if we scale out to 5 read models
we don't have 5 of these ...)
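A sketch of such a detector, assuming invented event and handler names (sqlite standing in for the constraint table):

```python
import sqlite3

# A separate subscriber (not the read model) whose only job is to
# notice broken uniqueness invariants after the fact.
constraint_db = sqlite3.connect(":memory:")
constraint_db.execute("CREATE TABLE usernames (name TEXT PRIMARY KEY)")

def on_user_created(event, alert_admin):
    try:
        constraint_db.execute("INSERT INTO usernames (name) VALUES (?)",
                              (event["username"],))
        constraint_db.commit()
    except sqlite3.IntegrityError:
        # Invariant broken - for most domains, alerting a human is enough.
        alert_admin(f"duplicate username detected: {event['username']}")

alerts = []
on_user_created({"username": "greg"}, alerts.append)
on_user_created({"username": "greg"}, alerts.append)
# alerts == ["duplicate username detected: greg"]
```

There is exactly one of these regardless of how many read models exist.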
What do we do if we break the constraint? We need to come back to that
business impact statement above. In most circumstances, just raising
an alert to an admin or similar is enough; these things have a very low
probability of happening and are often not worth the time/cost of
implementing automatic recovery. Just imagine 1 username creation out of
1,000,000 failing this way. How long would it take to automate the
process of handling that situation? Consider the discussions with domain
experts, etc. Five minutes of admin time once a year is a much better ROI
in most of these situations than a week of developer time to automate.
Continuing along: suppose it has now been decided that this has a large
enough impact that it should be automated. The process that finds the
duplicates could either raise an event (DuplicateUsernameDetected) or
directly call a command (ResolveDuplicateUsername; which of the two
involves more discussion). It is important to note that in either case
we are discussing the "what", not the "how": it would never issue a
command like DeleteUser. How to handle these situations is core domain
logic and should be modeled within the domain. In the username example,
perhaps ResolveDuplicateUsername marks the user as unable to log in
(and as a duplicate), and sends the user an email saying
"Hey, we screwed up, but it's your lucky day! You get to create a new
username ..."
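How ResolveDuplicateUsername might look inside the domain, as a hypothetical aggregate sketch (all names invented):

```python
# The "what" is modeled in the domain: disable the duplicate and invite
# the user to pick a new name, never a blind DeleteUser.
class User:
    def __init__(self, user_id, username):
        self.user_id = user_id
        self.username = username
        self.can_login = True
        self.flagged_duplicate = False
        self.pending_events = []

    def resolve_duplicate_username(self):
        self.can_login = False
        self.flagged_duplicate = True
        self.pending_events.append(
            {"type": "DuplicateUsernameResolved",
             "user_id": self.user_id,
             "notify": "email user to choose a new username"})

user = User("ar-2", "greg")
user.resolve_duplicate_username()
```

The detector only says what happened; the domain decides the resolution policy.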
But even after all of this, if from a business perspective the impact
is still too high, we can make things consistent. We could drop a
service into the domain that deals with a consistent set. This would of
course be the last resort, as it is the most complicated of these
solutions and brings with it many limiting factors in terms of our
architecture.
Udi had a great example of this in his explanation of one-way commands:
an ATM that would spit out money having only read your balance
from an eventually consistent read model. The reason this can work is
that from a business perspective the risk is low (and it is built into
the business model itself). You have a bank account with me; I know
your SSN and all of your information. From people who overdraw their
accounts I will recover at least 90% of the money that has been
overdrawn. On top of that, I charge a fee for each overdraft that
occurs. For these reasons the business impact of such a problem is
low.
To sum up, I just want to reiterate that this is a *good* thing.
Eventual consistency forces us to learn more about our domain. It
forces us to ask questions that otherwise often go unasked.
Consistency is overrated.
HTH,
Greg
--
Grammar and syntax errors have been included to make sure I have
your attention
We can store another "entity" keyed by the SHA-1 of the email itself.
The existence of this entity tells you whether that email was already
used. This requires very little coding. This is how Git works when it
stores anything: simple, fast, and guaranteed to be unique for a very
large set of stored objects. The scheme can be repeated for many other
non-id uniqueness requirements in the domain.
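A sketch of the content-addressed key (the normalization step and the in-memory store are my additions for illustration):

```python
import hashlib

# Derive the uniqueness-entity's id from the email itself, the way Git
# derives object ids from content.
def email_key(email):
    # Normalization (strip/lower) is an assumption, not part of Git's scheme.
    return hashlib.sha1(email.strip().lower().encode("utf-8")).hexdigest()

claimed = set()   # stand-in for a store keyed by the SHA-1

def try_claim(email):
    key = email_key(email)
    if key in claimed:
        return False          # email already used
    claimed.add(key)
    return True

assert try_claim("Bob@Example.com")
assert not try_claim("bob@example.com")   # same email, same SHA-1 key
```

Because the key is derived from the value, existence of the entity *is* the uniqueness check.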
HTH,
Adam
--
Adam
http://adventuresinagile.blogspot.com/
http://twitter.com/adymitruk/
http://www.agilevancouver.net/
http://altnetvancouver.ning.com/
Hence my first sentence was the qualifier :)
HTH,
Adam
>>We can store another "entity" based on the SHA-1
Just checking that I understand the effects and "costs" of that:
wouldn't it introduce the need for a unit of work (UoW) in our command handler?
At the basic level. But for the sake of argument, I'll say "no"
lol... Care to explain the "no" a little more?
Because it is so closely tied to a particular AR. Not really a UoW.
Sorry, I'm slow, still not completely getting it... Without a transaction?

  query sha1
  if not exist
    insert sha1
  save ar

Or is the secret to never query, but to always insert the sha1, so that
if we get a unique key exception we know it already exists and should
not save the AR?
--
The idea of the transaction is purely in your hands. So go ahead and grab a mutex if it's all living on one server. If you need more, we'll talk.
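The second option Stefan describes (never query, always insert) can be sketched like this; it needs no separate mutex or transaction on a single store, because the unique key makes the insert itself the atomic check (a sketch of that option, not necessarily what Adam had in mind):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE email_keys (sha1 TEXT PRIMARY KEY)")

def claim_then_save(sha1_key, save_ar):
    try:
        # The insert doubles as the existence check: no query-then-insert race.
        db.execute("INSERT INTO email_keys (sha1) VALUES (?)", (sha1_key,))
        db.commit()
    except sqlite3.IntegrityError:
        return False          # already claimed: do not save the AR
    save_ar()                 # claim succeeded, safe to save the AR
    return True

saved = []
claim_then_save("abc123", lambda: saved.append("ar-1"))
claim_then_save("abc123", lambda: saved.append("ar-2"))
# saved == ["ar-1"]
```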
Now I'm worried ;)
Use a saga to create users, handling the pending data until the user clicks the confirmation link in the email.
Cheers,
-- Udi
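A minimal sketch of such a saga (class and handler names invented; a real saga would also dispatch commands and persist its state):

```python
# Registration stays pending until the confirmation link is clicked;
# a timeout releases the claim if the user never confirms.
class UserRegistrationSaga:
    def __init__(self, email):
        self.email = email
        self.state = "pending"

    def handle_confirmation_clicked(self):
        if self.state == "pending":
            self.state = "confirmed"   # now actually create the user

    def handle_timeout(self):
        if self.state == "pending":
            self.state = "expired"     # release the email for reuse

saga = UserRegistrationSaga("a@example.com")
saga.handle_confirmation_clicked()
# saga.state == "confirmed"
```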
You're almost there. The last piece is to form a guid comb. Flip it for random distribution storage.