|MongoDB 1.7.3 (unstable) Released||Eliot||11/16/10 11:18 PM|
MongoDB 1.7.3 is now available for testing.
This is part of the 1.7 development series, which is not intended for
production use; 1.8 will be the culmination of the 1.7 series.
As always, please let us know of any issues.
|Re: [mongodb-user] MongoDB 1.7.3 (unstable) Released||kevin||11/17/10 5:54 AM|
please can we get this in the next one!
|RE: [mongodb-user] MongoDB 1.7.3 (unstable) Released||AndrewK||11/17/10 6:02 AM|
I second this... if I could vote more than once, I’d do that too :)
|Re: MongoDB 1.7.3 (unstable) Released||bingomanatee||11/17/10 4:31 PM|
This feels like slippery-slope stuff: once you move past simple
i/o for key/value sets, you end up coding application logic in the model
and committing and maintaining things in the Mongo stack that are best
kept in the application.
If you are going to clip an array (stack) to a maximum size, why not
write input filters in MongoDB? Or allow arrays of arrays and limit
the size of the stack to the sum of the members' sizes?
Why not write model-centric schemas with defaults?
I like mongo the way I like my women - fast, stupid and numerous. If
it starts doing too much on its own, then what will we do for a
living? :)
|Re: MongoDB 1.7.3 (unstable) Released||jannick||11/30/10 12:31 PM|
Assuming that the array-length limit is supplied by the application
code (i.e. no schema or metadata in Mongo), I fail to see how this is
different from the atomic update operations we already have access to.
They could all be emulated using findAndModify + a versioning field,
but by having them as server-side operations we get:
- A simpler programming model
- Better performance for some use cases, since the record doesn't have
to be sent to the app and we don't have to do optimistic concurrency
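To make the comparison concrete, here is a minimal sketch of the emulation described above, using a plain in-memory dict to stand in for a document and a hypothetical `compare_and_swap` helper to stand in for a version-guarded findAndModify; none of these names are a real MongoDB API, they only illustrate the read-modify-write loop an app would otherwise have to run itself:

```python
MAX_ENTRIES = 3

# Stand-in for a MongoDB document with a versioning field.
store = {"_id": 1, "version": 0, "events": []}

def compare_and_swap(doc, expected_version, new_events):
    """Stand-in for findAndModify: apply the update only if the
    version field still matches, bumping the version on success."""
    if doc["version"] != expected_version:
        return False  # another writer got there first; caller retries
    doc["events"] = new_events
    doc["version"] += 1
    return True

def capped_push(doc, item, max_entries=MAX_ENTRIES, max_retries=5):
    """Append item, dropping the oldest entries past max_entries.

    This is the optimistic-concurrency loop the app must run because
    no atomic capped-push operation exists server-side: read, clip
    the array in application code, then write back conditionally.
    """
    for _ in range(max_retries):
        version = doc["version"]
        events = (doc["events"] + [item])[-max_entries:]
        if compare_and_swap(doc, version, events):
            return True
    return False  # gave up after repeated write conflicts

for i in range(5):
    capped_push(store, i)
print(store["events"])  # only the three most recent entries survive
```

Note that the whole document has to round-trip through the app on every push, and concurrent writers force retries; a server-side operation would avoid both costs.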
Given the document's "hard" size limit, a lot of apps have to either
pray that the most extreme edge cases never show up, or switch to a more
complex and lower-performing approach in order to handle the "max
entries" constraint themselves.