Is there any possibility that MongoDB will allow documents of any size, rather than limiting them to 16 MB?


Sherry Ummen

Aug 6, 2014, 4:40:36 AM8/6/14
to mongod...@googlegroups.com
Is there any possibility that MongoDB will allow documents of any size, rather than limiting them to 16 MB?

In our case a document can grow beyond 16 MB; if MongoDB allowed us to store documents of any size, it would be faster for us. Right now we split each document into smaller chunks and then read the chunks back to reassemble them into one document, which is a time-consuming operation.
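The chunking approach described above can be sketched roughly like this. `split_payload` and `reassemble` are hypothetical helper names, and the chunk documents are plain dicts standing in for what would actually be written to and read from a collection:

```python
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per piece, comfortably under the 16 MB limit

def split_payload(doc_id, payload):
    """Split a large binary payload into small chunk documents."""
    return [
        {"doc_id": doc_id, "n": i // CHUNK_SIZE, "data": payload[i:i + CHUNK_SIZE]}
        for i in range(0, len(payload), CHUNK_SIZE)
    ]

def reassemble(chunks):
    """Rebuild the original payload from its chunk documents,
    sorting by chunk number in case they come back out of order."""
    return b"".join(c["data"] for c in sorted(chunks, key=lambda c: c["n"]))
```

Storing the chunk index (`n`) in each piece is what makes the read side order-independent.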

Could someone please comment or suggest an approach?


Tugdual Grall

Aug 6, 2014, 4:58:51 AM8/6/14
to mongod...@googlegroups.com
Hello Sherry,

The short answer is "No" (at least in the short term).

First of all, 16 MB is a lot! You can already store a great deal of information in a JSON document of that size. Most of the time, when you have such a big document, it is something you can (and should) change by using a different document design.

The main reason for this limitation is memory management on your server. As you probably know, when you want to manipulate a document, the database server (mongod) has to load it into memory, and the bigger the document, the more RAM a single document consumes ( http://docs.mongodb.org/manual/faq/diagnostics/#faq-memory ). There is also the overall impact on networking, for example during replication.
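As a rough illustration of the limit, you can ballpark a document's size before inserting it. Note that JSON size is only an approximation of BSON size (BSON adds type tags and length prefixes), so treat this as a sanity check, not an exact measurement:

```python
import json

MAX_BSON_SIZE = 16 * 1024 * 1024  # MongoDB's 16 MiB document size limit

def approx_doc_size(doc):
    """Rough size estimate via JSON encoding; real BSON size will differ somewhat."""
    return len(json.dumps(doc).encode("utf-8"))

doc = {"vessel": "demo", "samples": list(range(1000))}
print(approx_doc_size(doc) < MAX_BSON_SIZE)  # True: a small document fits easily
```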

This subject is discussed quite often in the community. While we understand the benefits of "removing the limit", I still think it is safer to keep it, for better memory management and performance, even if it sometimes means more work for the developer.

PS: If you look at the history of MongoDB you'll see that this limitation has evolved over time (it moved from 8 MB to 16 MB), so based on the evolution of the infrastructure this could change in the future, but nothing is planned for now.

Tug
@tgrall

Jeff Lee

Aug 6, 2014, 12:33:21 PM8/6/14
to mongod...@googlegroups.com
Hi Sherry,

There is an existing ticket for increasing the maximum document size beyond 16 MB. You may want to take a look at, and vote on, SERVER-5923.

Regards


--
You received this message because you are subscribed to the Google Groups "mongodb-user"
group.
 
For other MongoDB technical support options, see: http://www.mongodb.org/about/support/.
---
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user...@googlegroups.com.
To post to this group, send email to mongod...@googlegroups.com.
Visit this group at http://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/478084a2-0754-42f6-b327-d69baefb58f5%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

Sherry Ummen

Aug 7, 2014, 5:18:52 AM8/7/14
to mongod...@googlegroups.com
Thanks Jeff,

That's exactly the answer I was looking for.

Because of this limitation I was wondering: an RDBMS such as MySQL lets us store text larger than 16 MB, for example with the LONGTEXT data type, which can hold up to ~4 GB. So why does MongoDB have this kind of limitation?

The previous answers say it is for memory management, replication, and so on, but MySQL, for example, also has those features.



s.molinari

Aug 7, 2014, 5:28:37 AM8/7/14
to mongod...@googlegroups.com
Out of curiosity, what kind of text do you have that is more than 16 MB and needs to be stored in one piece for retrieval?

Scott

Sherry Ummen

Aug 7, 2014, 5:35:51 AM8/7/14
to mongod...@googlegroups.com
Hi Scott,

    It's ship-related data: FEM parameters, calculation results, dimensions, curves, or anything else, and depending on the project this data can be huge.

Sherry

s.molinari

Aug 7, 2014, 10:16:13 AM8/7/14
to mongod...@googlegroups.com
Can't the data be logically split up into smaller pieces in some way? And why couldn't GridFS work for your needs? You can store very large file data in GridFS.

Scott

Stephen Steneker

Aug 8, 2014, 1:07:15 AM8/8/14
to mongod...@googlegroups.com
On Friday, 8 August 2014 00:16:13 UTC+10, s.molinari wrote:
Can't the data be logically split up in smaller pieces in some way? And why couldn't GridFS work for your needs? You can store very large file data in GridFS.

GridFS might work if there are large blob fields in the document, but if you want to search structured data, a more likely approach is to reconsider the data model, as per your first suggestion.

As Tug suggested earlier, large documents will have a performance impact. For example, if you have a 15 MB document and typically only work with a small subset of that data, the MongoDB server still has to load the full document into memory. Just because you *might* be able to store everything in a single document does not mean you should ;-).

There are some data model considerations listed in the documentation as well: http://docs.mongodb.org/manual/core/data-model-operations/.

Regards,
Stephen

s.molinari

Aug 8, 2014, 1:16:55 AM8/8/14
to mongod...@googlegroups.com
Yeah, this is all of us trying to give a solution without really knowing the problem or the real structure and content of the data (which seems to be part of the problem). I guess to give a better answer, we'd need to know more about the data than we know right now.

Scott

Sherry Ummen

Aug 8, 2014, 7:22:25 AM8/8/14
to mongod...@googlegroups.com
Thanks for the reply.

Yes, currently I am splitting the large document into smaller documents, but it's a performance hit. One more issue I was facing is that when doing a bulk write, at random times the write operation fails for some of the documents in that bulk operation. So I am a bit worried about whether I should go with this splitting approach.

What do you suggest?

As for the data, I have already said it is ship-related; the structure of the data I cannot reveal due to confidentiality issues. It's a proprietary way of storing data.
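One defensive pattern for intermittent failures like the bulk-write problem above is to collect the documents whose insert failed and retry only those. This sketch uses a stand-in `insert_fn` callable rather than a real driver call (the function names here are made up for illustration):

```python
def bulk_insert_with_retry(insert_fn, docs, max_retries=3):
    """Try to insert every doc; retry only the ones that failed.

    insert_fn(doc) stands in for the real write call and is
    expected to raise an exception on failure.
    Returns the list of documents that still failed after all retries.
    """
    pending = list(docs)
    for _ in range(max_retries):
        failed = []
        for doc in pending:
            try:
                insert_fn(doc)
            except Exception:
                failed.append(doc)
        if not failed:
            return []
        pending = failed
    return pending
```

If the returned list is non-empty, you know exactly which chunks to report or re-queue instead of rewriting the whole batch.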


s.molinari

Aug 8, 2014, 7:55:36 AM8/8/14
to mongod...@googlegroups.com
Out of that data, do you need to search or query for any certain part of it, to find it again?

Scott

Sherry Ummen

Aug 8, 2014, 8:38:30 AM8/8/14
to mongod...@googlegroups.com
Not to search, just to get some properties of it, and sometimes the whole document.

s.molinari

Aug 8, 2014, 8:44:03 AM8/8/14
to mongod...@googlegroups.com
Then I would suggest storing the attributes of the file data, along with the file's location (in GridFS), in a normal collection, and storing the actual file data in GridFS. You could probably model your proprietary storage format in the normal collection and just retrieve the data out of GridFS when needed. That way there are no real bounds on the size of the data or how it's stored.

Scott
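To make the metadata-plus-chunks pattern concrete, here is a toy, in-memory model of what GridFS does under the hood: a `files` store for metadata and a `chunks` store for fixed-size binary pieces. `ToyGridFS` is an illustrative name, not a real driver class; in practice you would use your driver's GridFS support (e.g. PyMongo's `gridfs` module) rather than hand-rolling this:

```python
CHUNK_SIZE = 255 * 1024  # GridFS's default chunk size (255 KiB)

class ToyGridFS:
    """In-memory sketch of the GridFS layout: metadata in 'files',
    binary pieces in 'chunks'."""

    def __init__(self):
        self.files = {}   # file_id -> metadata document
        self.chunks = {}  # (file_id, n) -> bytes

    def put(self, file_id, data, **metadata):
        """Store data split into fixed-size chunks, plus one metadata doc."""
        n = 0
        for i in range(0, len(data), CHUNK_SIZE):
            self.chunks[(file_id, n)] = data[i:i + CHUNK_SIZE]
            n += 1
        self.files[file_id] = {"length": len(data), "n_chunks": n, **metadata}

    def get(self, file_id):
        """Reassemble the original bytes from the stored chunks."""
        meta = self.files[file_id]
        return b"".join(self.chunks[(file_id, k)] for k in range(meta["n_chunks"]))
```

The queryable attributes (project, ship name, and so on) live in the small metadata document, so you can find and inspect a file without ever touching its chunks.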

Sherry Ummen

Aug 8, 2014, 8:54:49 AM8/8/14
to mongod...@googlegroups.com
OK, I have not tried it, but just to know: is the read and write speed of GridFS good enough? And how does replication work in this case?

s.molinari

Aug 8, 2014, 9:58:48 AM8/8/14
to mongod...@googlegroups.com
Replication shouldn't be a problem, as it is done just like for any other Mongo database. If you can afford it, you could have a separate Mongo instance serving just GridFS. This article might be of interest:

http://java.dzone.com/articles/when-use-gridfs-mongodb

Scott