Second bound ignored in a range query

Igor Berger

Oct 26, 2011, 12:29:28 PM
to mongod...@googlegroups.com
Hello,

It looks like MongoDB is ignoring the second bound in a range query.
Running MongoDB 2.0.0 on 64-bit Windows. The data is structured as follows:

{ f: [
    { t1: "abc"},
    { t2: 123 },
    { t5: 16.0 }
]}
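
For reference, the index on f.t5 referred to below is created along these lines (the exact command is an assumption):

> db.docs.ensureIndex({ "f.t5": 1 })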

I created an index on f.t5 and ran the following query:

> db.docs.find({"f.t5": { $gt: 0.15999, $lt: 0.160001 }}).explain()
{
        "cursor" : "BtreeCursor f.t5_1",
        "nscanned" : 407246,
        "nscannedObjects" : 407246,
        "n" : 68,
        "millis" : 2761,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : true,
        "indexOnly" : false,
        "indexBounds" : {
                "f.t5" : [
                        [
                                0.15999,
                                1.7976931348623157e+308
                        ]
                ]
        }
}

As you can see, the $lt bound has been replaced with the maximum double value. When I swap $gt and $lt, it is the lower bound that gets ignored instead:

> db.docs.find({"f.t5": { $lt: 0.16001, $gt: 0.15999 }}).explain()
{
        "cursor" : "BtreeCursor f.t5_1",
        "nscanned" : 186951,
        "nscannedObjects" : 186951,
        "n" : 68,
        "millis" : 1201,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : true,
        "indexOnly" : false,
        "indexBounds" : {
                "f.t5" : [
                        [
                                -1.7976931348623157e+308,
                                0.16001
                        ]
                ]
        }
}

Using $elemMatch has exactly the same problem:

> db.docs.find({f: {$elemMatch: {t5: { $gt: 0.15999, $lt: 0.160001 } }}}).explain()
{
        "cursor" : "BtreeCursor f.t5_1",
        "nscanned" : 407246,
        "nscannedObjects" : 407246,
        "n" : 68,
        "millis" : 1591,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : true,
        "indexOnly" : false,
        "indexBounds" : {
                "f.t5" : [
                        [
                                0.15999,
                                1.7976931348623157e+308
                        ]
                ]
        }
}

Is this a bug? Any workarounds?

Thank you,
Igor.

Igor Berger

Oct 26, 2011, 12:50:43 PM
to mongod...@googlegroups.com
To clarify: the returned results are correct.
The second bound is ignored only for indexing purposes.
So, this is a performance problem for me.

Kyle Banker

Oct 26, 2011, 3:34:35 PM
to mongodb-user
Might be a floating-point issue. Created a ticket here:
https://jira.mongodb.org/browse/SERVER-4155

aaron

Oct 27, 2011, 2:36:25 AM
to mongodb-user
Hi Igor,

Right now this is "works as designed" for multikey indexes. Please
see further comments in SERVER-4155.

Thanks,
Aaron

Igor Berger

Oct 27, 2011, 10:03:09 AM
to mongod...@googlegroups.com
Thanks for figuring it out.

Is my only option now to downgrade to 1.9.1?


Igor Berger

Oct 27, 2011, 10:04:26 AM
to mongod...@googlegroups.com
I meant 1.8.4 of course.

Kyle Banker

Oct 27, 2011, 10:07:08 AM
to mongod...@googlegroups.com
Another option is not to store the data as multi-key. That is, instead of this:

{ f: [
    { t1: "abc"},
    { t2: 123 },
    { t5: 16.0 }
]}

Do this:

{ f: {
    t1: "abc",
    t2: 123,
    t5: 16.0
}}
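
With that layout, the index on f.t5 is no longer multikey, so both bounds of the range can be used. A sketch of what that looks like (values reused from the earlier examples; the expected bounds shown are an assumption based on single-key index behavior):

> db.docs.ensureIndex({ "f.t5": 1 })
> db.docs.find({ "f.t5": { $gt: 0.15999, $lt: 0.16001 } }).explain()

The "indexBounds" in the explain output should then contain both limits, e.g. [ [ 0.15999, 0.16001 ] ].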

Kyle Banker

unread,
Oct 27, 2011, 10:15:24 AM10/27/11
to mongod...@googlegroups.com
If you downgrade, you may not always get the correct results, if I understand the tickets correctly.

Igor Berger

Oct 27, 2011, 10:22:25 AM
to mongod...@googlegroups.com
Unfortunately, I can't do that. The original example was just to demonstrate the problem.

I have a lot of t's, so they can't all be indexed individually. So I tried multikey documents structured in the two ways below (a sketch of the corresponding index and query follows the second layout).
Unfortunately, both are very slow when doing a range search with both bounds specified.

{ f: [
    { t1: "abc"},
    { t2: 123 },
    { t5: 16.0 }
]}

and

{ f: [
    { t: 1, v: "abc"},
    { t: 2, v: 123 },
    { t: 5, v: 16.0 }
]}
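
For the second layout, the index and query would be along these lines (a sketch; the exact index definition and values are assumptions):

> db.docs.ensureIndex({ "f.t": 1, "f.v": 1 })
> db.docs.find({ f: { $elemMatch: { t: 5, v: { $gt: 0.15999, $lt: 0.16001 } } } })

Since f is an array, the index is multikey in both layouts, which is why both run into the bounds behavior described above.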

Kyle Banker

Oct 27, 2011, 10:34:01 AM
to mongod...@googlegroups.com
In that case, your best option might be to store all of the "f" values as separate documents in a new collection. That way the query will use the index appropriately.
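
A sketch of that restructuring (the collection name, field names, and parent reference are assumptions):

{ parent_id: ObjectId("..."), t: 5, v: 16.0 }

> db.fvalues.ensureIndex({ t: 1, v: 1 })
> db.fvalues.find({ t: 5, v: { $gt: 0.15999, $lt: 0.16001 } })

Because each value lives in its own document, the index is no longer multikey and both range bounds can be applied.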

Igor Berger

Nov 4, 2011, 9:38:35 AM
to mongod...@googlegroups.com
It's an interesting idea. That would mean many more, much smaller documents.
But then the queries become more complicated. I'll give it a try.

Thanks for everyone's help.

Sergejs Degtjars

Aug 7, 2012, 4:14:30 AM
to mongod...@googlegroups.com
I also ran into this performance problem, but in my case it is more complicated.
I can't go to production because it raises query execution times by a factor of 10 to 100 or more.
See https://jira.mongodb.org/browse/SERVER-6720
