IME, domains geared towards data analysis and the like are better served by
a more traditional architecture.
Anyway, in your scenario I would suggest a few things:
You can have CQRS without event sourcing. Just external events and
command handlers doing (for example) transaction scripts.
If you do have a rich analysis domain, I would try to work only with an
expression domain and leave the actual calculation to the query side.
For example, if the user wants to run an analysis to compare some
trends or create a what-if scenario, you can just build a bunch of
expression value objects and interpret them on the query side to
generate some smart queries in SQL, or map/reduce functions in a NoSQL
solution.
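To make the idea concrete, here is a minimal sketch of expression value objects interpreted into SQL on the query side. The class names and the `prices` table schema (symbol/day/price) are invented for illustration, not from the original poster's model:

```python
from dataclasses import dataclass

# Hypothetical expression value objects; names and schema are assumptions.
@dataclass(frozen=True)
class MovingAverage:
    symbol: str
    days: int

@dataclass(frozen=True)
class Compare:
    left: object
    right: object

def to_sql(expr) -> str:
    """Interpret an expression tree into a query-side SQL statement."""
    if isinstance(expr, MovingAverage):
        return (f"SELECT AVG(price) FROM (SELECT price FROM prices "
                f"WHERE symbol = '{expr.symbol}' "
                f"ORDER BY day DESC LIMIT {expr.days})")
    if isinstance(expr, Compare):
        return f"SELECT ({to_sql(expr.left)}) - ({to_sql(expr.right)})"
    raise TypeError(f"unknown expression: {expr!r}")

# A what-if comparison of a 30-day vs. a 90-day moving average:
sql = to_sql(Compare(MovingAverage("ACME", 30), MovingAverage("ACME", 90)))
print(sql)
```

The write side never sees prices here; it only stores the expression objects, and the read side decides how to evaluate them.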
> How would you handle this type of setup in CQRS. When the
> 'CalculateTechnicalIndicator' command arrives, and requires 'n' prices
> for a stock. If I have Stock as an aggregate root, with a collection
> of prices, then this will result in a performance issue because all
> prices will be loaded into memory when in fact I might only need 30 of
> them.
You should consider CQRS useful only when you need multiple representations of the same fact space. One of the reasons for this need might be algorithmic. So if your problem is as you describe, you need a more efficient representation to perform calculations.
A traditional CQRS configuration is the following:
- Transaction Model (a domain model responsible only for executing transactions, AGGREGATES), OLTP
- Presentation Model (a model built towards presentation needs, VIEWS)
You can add other models:
- Analysis Model (OLAP and so on)
- Reporting Model
- Archive Model
and so on.
The Transaction Model is the core. This model feeds all other models.
On another note, you may consider that each model uses different storage schemes for performance reasons.
> I have a command like "CalculateThirtyDaySimpleMovingAverage". This
> command will need to retrieve the past 30 days of prices (for a stock)
> to calculate.
The term command in CQRS is usually reserved for the Transaction Model. Unless the calculation needs to be done within a transaction and changes state, it should not be called a Command but a Query.
> Once the calculation is done, an entity
> is created ('TechnicalIndicator') and stored in the database.
On another note, you should probably consider that you have more than one bounded context.
Usually Analysis supports Decisions, yet each is done in a distinct bounded context, and they aren't necessarily consistent to the split second.
Cheers,
Nuno
> This line confuses me a bit:
>
>> The term command in CQRS is usually reserved for the Transaction Model. Unless the calculation needs to be done within a transaction and changes state, it should not be called a Command but a Query.
>
> The "calculation" will create a new piece of data (whether that is an
> entity or a value object is up for debate), that needs to be persisted
> somewhere. I don't see how this is not a 'Command'. The result of
> the calculation can be retrieved in a Query later on when used for
> analysis/ to make trading decisions.
Just because some value needs to be persisted in some database, it does not mean that it needs to be part of the Domain Model, or that it is necessarily transactional. It seems to me that you aren't describing any business rules that need to be enforced at all times. Yes, it is business logic, but these aren't really business rules (when this, do that; if this, then that; and so on).
From a data point of view, a Domain Model is mainly effective for fetching very specific facts about the domain: what happened, when, and why on a specific Aggregate at a specific time T. What was the total of an Order made 3 months ago (T)?
The scenario you are describing seems more like an overview of what happened across time, involving multiple dimensions over a single type of fact (PriceChanged).
To this a different model might be more suited: http://www.dwreview.com/OLAP/Introduction_OLAP.html
http://msdn.microsoft.com/en-us/library/aa902683(v=sql.80).aspx
You can build yourself a generic analytical engine, and for that matter use domain modeling, or you can use an OLAP tool from some vendor. Either way, I would probably tackle it as a separate Bounded Context from, say, Client Portfolio Management.
Trance.
These types of systems are generally not "CQRS" but are purely event-based.
Also, a "Technical Indicator" is generally associated with a "Group of
Instruments".
2011/5/18 Laurynas Pečiūra <laurynas...@gmail.com>:
--
Grammar and syntax errors have been included to make sure
I have your attention
Nuno
--
You need to understand that the proper approach is highly dependent on your algorithms.
Typically in CQRS, you don't query your Transactional Model; you issue commands.
> 1. queryService.CalculateTechnicalIndicator(... ) [Query]
As I said before, I don't know if CalculateTechnicalIndicator is a command or a query. But assuming you want indicators to be consistent with each other along a timeline, I would assume that you need an Aggregate somewhere.
I have never been on a project related to stocks, so sorry if this example looks naive.
For instance, say you want to calculate the average of prices since the beginning of time for some stock. Would you need to process millions of StockPriceChanged events every time there is a change? NOOOOOOO.
Consider an Aggregate, AverageStockPrice, for the stock of some company. When a new price change comes in, it updates the total and easily computes the average. It can keep track of averages across time. It does not need to analyze millions of price changes; it's done incrementally.
AverageStockPrice.Handle(stockPriceChanged) -> AverageStockPriceChanged(); // NOT CQRS.
Or if you want to go for the CQRS way:
StockPriceChanged Handler:
AverageStockPrice.Calculate(stockPriceChanged.newPrice) -> AverageStockPriceCalculated(....);
I prefer the first.
You can have a massive number of indicators being computed in parallel like this.
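The incremental AverageStockPrice idea can be sketched like this in Python (a minimal sketch; the class and method names are only illustrative, and event publishing is reduced to a return value):

```python
class AverageStockPrice:
    """Aggregate that maintains a running average incrementally,
    so it never has to replay millions of StockPriceChanged events."""

    def __init__(self, symbol: str):
        self.symbol = symbol
        self.total = 0.0
        self.count = 0

    def handle(self, new_price: float) -> float:
        # Update the running total and count; the new average falls out
        # directly (in the pseudocode above this is where
        # AverageStockPriceChanged would be raised).
        self.total += new_price
        self.count += 1
        return self.total / self.count

avg = AverageStockPrice("ACME")
avg.handle(10.0)
latest = avg.handle(20.0)
print(latest)  # 15.0
```

Each price change is O(1) work regardless of how much history exists, which is the whole point of keeping the computation incremental.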
My advice would be to focus on those algorithms and find ways to optimize their computations fed with StockPriceChanged events. In other words, make a computation model for those. The difficulty is when parameters are arbitrary. Say, in the above, the timeline over which the average is to be computed is fully arbitrary; then you are left with calculating the average over all the events every time.
Does it really matter if CalculateTechnicalIndicator is a Query or Command?
You can then use CQRS on top to push results to Views or whatever other Bounded Context.
Cheers,
Nuno
Live models will update a running total as every tick arrives (i.e.
triggered by events), in the fastest way possible (caching etc). This
is suited to things like HFT and algo trading and might trigger
further events - e.g. you might have entry/exit strategies as DDD
sagas waiting for particular technical indicator events indicating
favourable conditions to execute buy/sell orders.
Offline models, on the other hand, are simply queries over the read
model (select avg(price) from tick...). This is what you would use for
simpler ad hoc reporting/analysis.
Both styles are a good example of where it is acceptable to use domain
logic on the read side.
--
Richard Dingwall
http://richarddingwall.name
> For step 2: "The CalculateThirtyDaySimpleMovingAverage EventHandler picks up the
> event. It then queries a table on the Read side to get the historical
> prices it needs to do its calculation."
I think you probably mean the CalculateThirtyDaySimpleMovingAverage CommandHandler.
CalculateThirtyDaySimpleMovingAverage is definitely not an event (what fact or occurrence does it represent?)
Nuno
Domain logic doesn't have to only be on the write side. Calculating
technical indicators is definitely part of your domain model.
And it is no problem publishing events from event handlers.
Again, no experience in this industry.
James
In reality it's just a small FSM to do the work.
--
As I mentioned previously, this sounds like a pure eventing system. The
best way to think about it is in terms of cascading. Events come into
your system (PriceChanged); you have a series of FSMs (finite state
machines) that listen to these events and on their own produce a
series of new events (let's say a 30-minute moving average changed).
Then these events might go to other handlers who in turn produce their
own events (BuySignalFound). Etc etc etc. When we model the system we
look at it in terms of cascading ... Does this make sense?
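The cascading can be sketched as a tiny in-process event bus with two handlers chained through events (all names, the window size, and the buy threshold are invented for illustration):

```python
from collections import defaultdict, deque

class EventBus:
    """Toy synchronous bus: handlers subscribe to event types and may
    publish new events, which cascade to further handlers."""

    def __init__(self):
        self.handlers = defaultdict(list)
        self.queue = deque()
        self.dispatching = False

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        self.queue.append((event_type, payload))
        if self.dispatching:  # re-entrant publish from a handler: enqueue only
            return
        self.dispatching = True
        while self.queue:
            etype, data = self.queue.popleft()
            for handler in self.handlers[etype]:
                handler(data, self.publish)
        self.dispatching = False

signals = []
window = deque(maxlen=3)  # state of a tiny moving-average FSM

def moving_average_fsm(price, publish):
    # First FSM: listens to PriceChanged, emits MovingAverageChanged.
    window.append(price)
    if len(window) == window.maxlen:
        publish("MovingAverageChanged", sum(window) / len(window))

def buy_signal_fsm(average, publish):
    # Second FSM: listens to MovingAverageChanged, records BuySignalFound.
    if average < 10:  # arbitrary threshold for the sketch
        signals.append(("BuySignalFound", average))

bus = EventBus()
bus.subscribe("PriceChanged", moving_average_fsm)
bus.subscribe("MovingAverageChanged", buy_signal_fsm)
for price in [12, 11, 9, 8]:
    bus.publish("PriceChanged", price)
```

PriceChanged cascades into MovingAverageChanged, which in turn cascades into BuySignalFound, exactly the chain of handlers described above; a real system would just have many more FSMs hanging off each event type.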
Greg
--