MongoDB 1.7.3 (unstable) Released

MongoDB 1.7.3 (unstable) Released Eliot 11/16/10 11:18 PM
MongoDB 1.7.3 is now available for testing.
This is part of the 1.7 development series, which is not intended for
production use.
1.8 will be the culmination of the 1.7 series.

Notable Changes:
- initial covered index support
- distinct can use data from indexes when possible
- map/reduce can merge or reduce results into an existing collection
- mongod tracks and mongostat displays network usage
- sharding stability improvements
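One item above, map/reduce merging or reducing results into an existing collection, can be sketched conceptually. This is not the mongo shell API, just a plain-Python model (dicts keyed by `_id` standing in for collections, and a hypothetical `sum_counts` reduce function) of how the two output modes differ: "merge" overwrites documents with the same key, while "reduce" re-runs the reduce function over the old and new values for each key.

```python
# Conceptual sketch only (not the mongo shell API): how map/reduce
# output can be combined with an existing results collection,
# modeled here as plain dicts keyed by _id.

def merge_out(existing, new_results):
    """'merge' mode: new results overwrite existing docs with the same key."""
    out = dict(existing)
    out.update(new_results)
    return out

def reduce_out(existing, new_results, reduce_fn):
    """'reduce' mode: re-run the reduce function on old + new values per key."""
    out = dict(existing)
    for key, value in new_results.items():
        if key in out:
            out[key] = reduce_fn(key, [out[key], value])
        else:
            out[key] = value
    return out

# Illustrative reduce function: sum the per-key counts.
def sum_counts(key, values):
    return sum(values)

existing = {"a": 3, "b": 1}
new = {"b": 2, "c": 5}
print(merge_out(existing, new))               # {'a': 3, 'b': 2, 'c': 5}
print(reduce_out(existing, new, sum_counts))  # {'a': 3, 'b': 3, 'c': 5}
```

The practical upshot is incremental map/reduce: you can re-run a job over only new input and fold the results into last run's output instead of recomputing everything.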

Change Log:

As always, please let us know of any issues.

Re: [mongodb-user] MongoDB 1.7.3 (unstable) Released kevin 11/17/10 5:54 AM
Please can we get this in the next one!


RE: [mongodb-user] MongoDB 1.7.3 (unstable) Released AndrewK 11/17/10 6:02 AM

I second this... if I could vote more than once, I’d do that too :)



Re: MongoDB 1.7.3 (unstable) Released bingomanatee 11/17/10 4:31 PM
This feels like slippery-slope stuff here - once you move past simple
i/o for key/value sets, you end up coding application logic in the model and
committing and maintaining things in the Mongo stack that are best
left to, and easily accomplished with, JavaScript or server-side code.

If you are going to clip an array (stack) to a maximum size, why not
write input filters in MongoDB? Or allow arrays of arrays and limit
the size of the stack to the sum of the members' sizes?

Why not write model-centric schemas with defaults?

I like mongo the way I like my women - fast, stupid and numerous. If
it starts doing too much on its own then what will we do for a
living ? :)
Re: MongoDB 1.7.3 (unstable) Released jannick 11/30/10 12:31 PM
Assuming that the array-length limit is supplied by the application
code (i.e. no schema or metadata in mongo), I fail to see how this is
different from the atomic update operations we already have access to.
They could all be emulated using findAndModify + a versioning field,
but by having them as operations we get:
 - Simpler programming model
 - Better performance for some use cases, as the record doesn't have
to be sent to the app, and we don't have to do optimistic concurrency
with retries.
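The emulation described above can be sketched in code. This is a hypothetical client-side model, not a real driver API: the document store is an in-memory dict, and `find_one` / `compare_and_swap` are illustrative names for "read the document" and "write it back only if the version field is unchanged". A capped push built this way needs a read, a modify, a conditional write, and a retry loop - exactly the machinery a server-side operation would make unnecessary.

```python
# Hypothetical sketch of emulating an atomic "push and cap to N" with
# optimistic concurrency: a version field plus retries. The store is an
# in-memory dict; function names are illustrative, not a real driver API.

store = {"doc1": {"_id": "doc1", "version": 0, "tags": ["a", "b"]}}

def find_one(doc_id):
    # Return a copy, as a driver would hand back a fresh deserialized doc.
    return dict(store[doc_id])

def compare_and_swap(doc_id, expected_version, new_doc):
    """Write succeeds only if no concurrent writer bumped the version."""
    current = store[doc_id]
    if current["version"] != expected_version:
        return False  # lost the race; caller must re-read and retry
    new_doc["version"] = expected_version + 1
    store[doc_id] = new_doc
    return True

def capped_push(doc_id, field, value, max_len, max_retries=5):
    """Append value to doc[field], keeping only the newest max_len entries."""
    for _ in range(max_retries):
        doc = find_one(doc_id)
        version = doc["version"]
        doc[field] = (doc[field] + [value])[-max_len:]
        if compare_and_swap(doc_id, version, doc):
            return True
    return False  # gave up after repeated write conflicts

capped_push("doc1", "tags", "c", max_len=3)
capped_push("doc1", "tags", "d", max_len=3)
print(store["doc1"]["tags"])  # ['b', 'c', 'd']
```

Note that every push ships the whole document to the client and back, and under contention the retry loop can spin - the two costs the bullet points above say a server-side operation avoids.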

Given the document "hard" size limit, a lot of apps have to either
pray the most extreme edge cases never show up, or switch to a more
complex and lower-performing approach in order to handle the "max
entries" constraint themselves.