preallocation strategy for daily log documents


jtoberon

Nov 4, 2011, 1:48:40 PM
to mongodb-user
We have a collection of log data, where each document in the
collection is identified by a MAC address and a calendar day.
Basically:

{
_id: <generated>,
mac: <string>,
day: <date>,
data: [ "value1", "value2" ]
}

Every five minutes, we append a new log entry to the data array within
the current day's document. The document rolls over at midnight UTC
when we create a new document for each MAC.
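
For concreteness, the five-minute append is roughly the following
(2011-era pymongo API; the database and collection names here are
made up for illustration):

from datetime import datetime
from pymongo import Connection

db = Connection("localhost", 27017)["logdb"]

def append_entry(mac, value):
    # one document per (mac, day); day is truncated to midnight UTC
    now = datetime.utcnow()
    db.logs.update({"mac": mac,
                    "day": datetime(now.year, now.month, now.day)},
                   {"$push": {"data": value}},
                   upsert=True)

An upsert like this would create the day's document on the first
append after midnight; alternatively the documents can be created
explicitly at rollover.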

We've noticed that IO, as measured by bytes written, increases all day
long, and then drops back down at midnight UTC. This shouldn't happen
because the rate of log messages is constant. We believe that the
unexpected behavior is due to Mongo moving documents, as opposed to
updating their log arrays in place. For what it's worth, stats() shows
that the paddingFactor is 1.0299999997858227.

Several questions:

1. Is there a way to confirm whether Mongo is updating in place or
moving? We see some moves in the slow query log, but this seems like
anecdotal evidence. I know I can db.setProfilingLevel(2), then
db.system.profile.find(), and finally look for "moved:true", but I'm
not sure whether it's ok to do this on a busy production system.
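
For later readers, the check itself is only a few lines in pymongo,
assuming a 2.0-era server where profiler entries for updates that
relocated a document carry moved: true (database and collection
names are illustrative):

from pymongo import Connection

db = Connection("localhost", 27017)["logdb"]

db.set_profiling_level(2)          # profile every operation
# ... run under normal load for a few minutes ...
moved = db["system.profile"].find({"moved": True}).count()
total = db["system.profile"].count()
print "%d of %d profiled operations moved a document" % (moved, total)
db.set_profiling_level(0)          # profiling back off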

2. The size of each document is very predictable and regular. Assuming
that Mongo is doing a lot of moves, what's the best way to figure out
why Mongo isn't able to presize more accurately, or to make it presize
more accurately? Assuming that the above description of the problem is
right, tweaking the padding factor does not seem like it would do the
trick.

3. It should be easy enough for me to presize the document and remove
any guesswork from Mongo. I know the padding factor docs
(http://www.mongodb.org/display/DOCS/Padding+Factor) say that I
shouldn't have to do this, but I just need to put this issue behind
me. What's the best way to presize a document? It seems simple to
write a document with a garbage byte array field, and then
immediately remove that field from the document, but are there any
gotchas that I should be aware of? For example, I can imagine having
to wait on the server for the write operation (i.e. do a safe write)
before removing the garbage field.
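
A sketch of that presize-then-unset idea in pymongo, in case it is
useful to later readers; the field name "pad", the entry count (288
five-minute samples per day), and the 300-byte entry size are all
illustrative, and safe=True is the "wait for the write" step
mentioned above:

from pymongo import Connection

db = Connection("localhost", 27017)["logdb"]

def preallocate(mac, day, entries=288, entry_size=300):
    # insert the document at roughly its final size, wait for the
    # write to apply, then strip the padding back out; the record
    # keeps its allocated space on disk
    pad = "x" * (entries * entry_size)
    db.logs.insert({"mac": mac, "day": day, "data": [], "pad": pad},
                   safe=True)
    db.logs.update({"mac": mac, "day": day},
                   {"$unset": {"pad": 1}},
                   safe=True)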

4. I was concerned about preallocating all of a day's documents at
around the same time because it seems like this would saturate the
disk at that time. Is this a valid concern? Should I try to spread out
the preallocation costs over the previous day?

This is cross posted from stackoverflow (http://stackoverflow.com/
questions/8010643/in-mongodb-preallocation-strategy-for-daily-log-
documents), even though I know this is somewhat bad form. If anybody
answers here, I'll repost them over to there!

Scott Hernandez

Nov 4, 2011, 3:41:22 PM
to mongod...@googlegroups.com
On Fri, Nov 4, 2011 at 1:48 PM, jtoberon <jtob...@gmail.com> wrote:
> We have a collection of log data, where each document in the
> collection is identified by a MAC address and a calendar day.
> Basically:
>
> {
>  _id: <generated>,
>  mac: <string>,
>  day: <date>,
>  data: [ "value1", "value2" ]
> }
>
> Every five minutes, we append a new log entry to the data array within
> the current day's document. The document rolls over at midnight UTC
> when we create a new document for each MAC.
>
> We've noticed that IO, as measured by bytes written, increases all day
> long, and then drops back down at midnight UTC. This shouldn't happen
> because the rate of log messages is constant. We believe that the
> unexpected behavior is due to Mongo moving documents, as opposed to
> updating their log arrays in place. For what it's worth, stats() shows
> that the paddingFactor is 1.0299999997858227.
>
> Several questions:
>
> 1. Is there a way to confirm whether Mongo is updating in place or
> moving? We see some moves in the slow query log, but this seems like
> anecdotal evidence. I know I can db.setProfilingLevel(2), then
> db.system.profile.find(), and finally look for "moved:true", but I'm
> not sure whether it's ok to do this on a busy production system.

Yes, it should be fine as long as you aren't at the edge of disk
performance. The overhead is mostly in IO, as the timing is done
anyway.

> 2. The size of each document is very predictable and regular. Assuming
> that Mongo is doing a lot of moves, what's the best way to figure out
> why Mongo isn't able to presize more accurately, or to make it presize
> more accurately? Assuming that the above description of the problem is
> right, tweaking the padding factor does not seem like it would do the
> trick.

That is what the padding factor attempts to do, but if you really know
the full structure of the doc, just create a new one the day before,
or trickle them in, and then update by the date/time of the event as
it happens.

In MMS we do a nested tree, starting with 6 groups of hours (0-3,
4-7, ..., 20-23), then quarter and minute breakdowns. It leads to
something like this:

{ ..., minutes : { "00" : { "01" : { "01" : 14 } } } }
// first hour, second quarter (30 minutes), 2nd minute in that
// quarter (the 31st minute past the top of the first hour)

This makes the updates faster but might not be a big issue if you
aren't doing tens/hundreds of thousands of updates per second.

Without this you might see CPU usage going up throughout the day.
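
A rough pymongo sketch of that idea, with the whole tree pre-built at
insert time so that later writes are in-place $sets addressed by
dot-notation path. The nesting here (6 four-hour groups -> hour
within group -> minute) is a simplified variant, not the exact MMS
schema, and the names are illustrative:

from pymongo import Connection

db = Connection("localhost", 27017)["logdb"]

def preallocate_day(mac, day):
    # pre-fill every slot so the document never grows afterwards
    groups = {}
    for g in range(6):
        hours = {}
        for h in range(4):
            hours["%02d" % h] = dict(("%02d" % m, 0) for m in range(60))
        groups["%02d" % g] = hours
    db.logs.insert({"mac": mac, "day": day, "minutes": groups}, safe=True)

def record(mac, day, hour, minute, value):
    # in-place update by dot notation, no document growth
    path = "minutes.%02d.%02d.%02d" % (hour // 4, hour % 4, minute)
    db.logs.update({"mac": mac, "day": day}, {"$set": {path: value}})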

> 3. It should be easy enough for me to presize the document and remove
> any guesswork from Mongo. I know the padding factor docs
> (http://www.mongodb.org/display/DOCS/Padding+Factor) say that I
> shouldn't have to do this, but I just need to put this issue behind
> me. What's the best way to presize a document? It seems simple to
> write a document with a garbage byte array field, and then
> immediately remove that field from the document, but are there any
> gotchas that I should be aware of? For example, I can imagine having
> to wait on the server for the write operation (i.e. do a safe write)
> before removing the garbage field.

See above.

> 4. I was concerned about preallocating all of a day's documents at
> around the same time because it seems like this would saturate the
> disk at that time. Is this a valid concern? Should I try to spread out
> the preallocation costs over the previous day?

Yes, but you probably just need to do it over a few hours; best to
test the load. But inserts are less costly than the moves are.
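
A simple way to trickle the preallocation is to sleep between
inserts, e.g. something like this, where preallocate() is the
per-MAC insert sketched in the question above and the four-hour
window is only an example:

import time

def preallocate_tomorrow(macs, day, hours_to_spread=4):
    # spread the inserts evenly instead of issuing them all at midnight
    delay = (hours_to_spread * 3600.0) / max(len(macs), 1)
    for mac in macs:
        preallocate(mac, day)
        time.sleep(delay)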

> This is cross posted from stackoverflow (http://stackoverflow.com/
> questions/8010643/in-mongodb-preallocation-strategy-for-daily-log-
> documents), even though I know this is somewhat bad form. If anybody
> answers here, I'll repost them over to there!

jtoberon

Nov 4, 2011, 4:15:31 PM
to mongodb-user
Thanks, Scott. It's great to know that we're on the right track.

jtoberon

Nov 11, 2011, 3:02:27 PM
to mongodb-user
We're still stuck:

We took a script that reproduces our IO problems to the 10Gen office
hours, and folks there seemed as surprised by the problems as we were.
The only workaround that seems to help is to reduce the size of the
array we're appending to. We're going to use one document per hour
instead of one document per day. This is similar to the approach used
for MMS.

Breaking log entries into a document per time interval is one of the
recommended use cases for mongodb. It would be nice to understand why
we failed when attempting to implement this by creating a fairly small
array per document. The IO behavior is inexplicably terrible even when
we're way below the threshold of 5000 suggested here:
http://blog.axant.it/archives/236.

Scott Hernandez

Nov 11, 2011, 3:15:56 PM
to mongod...@googlegroups.com
Can you provide a reproduction case, or something we can run locally?

Also, just out of curiosity, which office hours did you attend
(city/time), and who did you speak to?

jtoberon

Nov 11, 2011, 5:16:42 PM
to mongodb-user
Yes, definitely. This evening, we'll post or email a python script
that can be run locally.

We went to the NYC office hours on Wednesday November 9. Having just
reread my post above, I noticed that "surprised" probably was a bad
choice of words, and we very much appreciate you taking the time to
respond on this forum.

Scott Hernandez

Nov 11, 2011, 6:02:38 PM
to mongod...@googlegroups.com
Okay, that should give more insight into what is going on. I think
Mike, whom you met with Wednesday night, was also looking into this
and may have something after the weekend.

Doing this type of pattern is not trivial, and there are many
different ways of doing it, each with different performance
characteristics depending on your usage. It might be good to wrap
this up into a cookbook example if that sounds good.

jtoberon

Nov 11, 2011, 6:22:02 PM
to mongodb-user
Dan is going to post or email his script in a few minutes.

A cookbook entry sounds like a good idea. I'm kind of surprised that
our use case isn't more common, but you're right that the details
matter when fine-tuning for performance. If the cookbook could explain
some of the performance characteristics and trade-offs, then readers
might start thinking in a more Mongo-friendly way.

Please thank Mike for his time, too!

Dan Riegel

Nov 11, 2011, 6:31:59 PM
to mongodb-user
Hi Scott and Mike,

I cleaned up the Python script I gave to Mike, and I'm pasting it in
here (I will put it up on GitHub as soon as I get a chance). In
particular, look at the effect of -l 100,1000. It is also interesting
to look at what I think should be equivalent runs, like "-l 100 -n
1000" vs "-l 1000 -n 100" (both do 100,000 updates, but they show
decidedly different profiles). I admit there is slightly sloppy
timing due to the threads and the blocking queue, but it affects
nothing more than the last few updates, which should be
insignificant, especially with just 2 threads. The -v flag gives you
a better real-time feel for the rate of updating.

Look at the usage for other data structures. I threw in a bunch, with
varying results. I would be very interested in your thoughts on which
ones should be efficient.

If it doesn't paste well, I can email it to you...


Filename is test_mongodb_performance.py

--------------------------------------begin file
#!/usr/bin/python

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of the
# License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program. If not, see
# <http://www.gnu.org/licenses/>.
#
# Author: Daniel Riegel


import sys
import pymongo
import time
import getopt
import copy
from pymongo import Connection
from pymongo.database import Database
from datetime import datetime, timedelta
from locale import atoi
import random
import math
import threading
import logging

from Queue import Queue


def get_mongo_db(host):
    connection = Connection(host, port=27017)
    db = Database(connection, "atest")
    db.set_profiling_level(0)
    return db


def generate_uuids(num, length=6):
    results = []
    for i in range(num):
        val = random.randint(0, 2 ** (length * 8))
        results.append("%x" % val)
    return results


def generate_and_insert_ids(coll, doc_template, num_docs, prepad,
                            associative, logger):
    logger.debug("Starting uuid generation and insertion")
    uuids = generate_uuids(num_docs)

    start = time.time()
    for uuid in uuids:
        doc_template["uuid"] = uuid
        coll.insert(doc_template, None, safe=False)
        if prepad:
            coll.update({"uuid": uuid}, {"$set": {"logs": []}})
        if associative:
            coll.update({"uuid": uuid}, {"$set": {"logs": {}}})

    logger.debug("done inserting initial documents: %s ms\n" %
                 ((time.time() - start) * 1000))
    return uuids


class DBUpdater(threading.Thread):
    def __init__(self, collection_name, queue, host):
        self.queue = queue
        threading.Thread.__init__(self)
        self.db = get_mongo_db(host)
        self.coll = self.db[collection_name]
        self.daemon = True

    def stop(self):
        self.queue.put(None)

    def run(self):
        while True:
            args = self.queue.get(True)
            if args is None:
                break
            # self.coll.find(args[0])[0]  # just to warm the cache, don't think we need it
            self.coll.update(args[0], args[1], upsert=args[2],
                             safe=args[3])


def run_test(num_docs=100,
             list_length=1000,
             entry_size=300,
             prepad=False,
             in_place=False,
             safe_write=False,
             associative=False,
             group_sub_list=0,
             num_threads=2,
             verbose=False,
             host="localhost"):

    # settings / constants
    time_buckets = 15
    outer_loop_delay_ms = 100
    inner_loop_delay_ms = 0
    logger = logging.Logger(__name__)
    handler = logging.StreamHandler(sys.stdout)
    logger.addHandler(handler)
    if verbose:
        logger.setLevel(logging.DEBUG)
    else:
        logger.setLevel(logging.INFO)

    # locals
    filler = "x" * entry_size
    padding = []
    dt = datetime.today()
    db = get_mongo_db(host)

    args = """num_docs: %d
array_element_size: %d
list_length: %d
prepad: %s
in_place: %s
safe_write: %s
associative: %s
group_sub_list: %d
num_threads: %d
""" % (num_docs,
       entry_size,
       list_length,
       prepad,
       in_place,
       safe_write,
       associative,
       group_sub_list,
       num_threads)

    # argument processing
    if group_sub_list == 0:
        group_sub_list = list_length
    if in_place:
        padding = map(lambda f: filler, range(list_length))
    elif prepad:
        padding = "x" * (list_length * entry_size)
    elif group_sub_list < list_length:
        padding = {}
        for hr in range(int(math.ceil(list_length / group_sub_list))):
            padding["%d" % hr] = {"hr": hr, "vals": []}

    doc_template = {
        "day": dt.day,
        "month": dt.month,
        "year": dt.year,
        "logs": padding,
    }

    # loop variables
    counter = 0
    dbwait_table = 0
    times = []
    coll = db.arraytest
    time_div = float(list_length) / time_buckets
    start = time.time()

    # start doing something
    coll.drop()  # clean it out
    coll.ensure_index([("uuid", pymongo.ASCENDING)])
    uuids = generate_and_insert_ids(coll,
                                    doc_template,
                                    num_docs,
                                    prepad,
                                    associative,
                                    logger)

    oldidx = 0
    i = 0
    max_i = list_length * num_docs
    update_start = time.time()

    update_queue = Queue(num_threads * 2)
    updaters = []

    for j in range(num_threads):
        updaters.append(DBUpdater("arraytest", update_queue, host))
        updaters[j].start()
        j += 1

    while counter < list_length:
        t1 = time.time()
        idx = int(math.floor(i * time_buckets / max_i))
        sublist_idx = counter / group_sub_list
        if oldidx < idx:
            logger.debug("\n%d of %d (%d of %d updates)" %
                         (idx + 1, time_buckets, i, max_i))
            oldidx = idx
        for uuid in uuids:
            upsert_dict = {"$push": {"logs": filler}}
            query = {"uuid": uuid}

            if group_sub_list < list_length:
                query = {"uuid": uuid}
                upsert_dict = {"$push": {"logs.%d.vals" % sublist_idx: filler}}
            if associative:
                upsert_dict = {"$set": {"logs.%d" % counter: filler}}
            if in_place:
                upsert_dict = {"$set": {"logs.%d" % counter: filler}}
            if group_sub_list == 1:
                # just insert, no updates
                doc_template["uuid"] = uuid + "%d" % counter
                doc_template["logs"] = filler
                coll.insert(doc_template)
            else:
                update_queue.put([query, upsert_dict, True, safe_write],
                                 True)

            if verbose and (i % 100 == 0):
                sys.stdout.write(".")
                sys.stdout.flush()
            i += 1
            time.sleep(inner_loop_delay_ms / 1000)

        insert_time = time.time() - t1

        if len(times) <= idx:
            times.append(0)
        times[idx] += insert_time * 1000 / time_div / num_docs
        counter += 1
        time.sleep(outer_loop_delay_ms / 1000)

    # shut down worker threads
    logger.debug("stopping threads...")
    for updater_thread in updaters:
        updater_thread.stop()

    logger.debug("joining threads...")
    for updater_thread in updaters:
        updater_thread.join()

    logger.info("updates took %d ms" % ((time.time() - update_start) * 1000))
    print args
    for i, timey in enumerate(times):
        logger.info("%d: %f" % (i, timey))

    return times


def expand_array(arglist):
    i = 0
    result = []
    for arg in arglist:
        if type(arg) == list:
            results = []
            for val in arg:
                subarg = copy.deepcopy(arglist)
                subarg[i] = val
                results.extend(expand_dict(subarg))
            return results
        else:
            result.append(arg)
        i += 1
    return [result]


def expand_dict(target):
    i = 0
    result = {}
    for key, val in target.iteritems():
        if type(val) == list:
            results = []
            for subval in val:
                # split it into one dict for each value in the array and recurse
                target_clone = copy.deepcopy(target)
                target_clone[key] = subval
                results.extend(expand_dict(target_clone))
            return results
        else:
            result[key] = val
        i += 1
    return [result]


def usage():
    print """
NAME
    %s - measure mongo $push performance on arrays of a set size
    for a specified collection size

SYNOPSIS
    %s [ OPTIONS ]

DESCRIPTION
    Run a test of a mongo database with a variety of parameters.
    Allows simple comparison of different parameter values. If
    multiple parameters are passed in for any arguments, run multiple
    tests on the cross product of all possible combinations and print
    out a summary of the results on completion.

    -a, --associative { y | n | yn | y,n }      default False
        add entries as key-value pairs under the logs field instead
        of pushing them onto an array

    -g, --group_sub_list=sub_list_size
        place entries in multiple lists under hash keys. List length
        is limited to sub_list_size.

    -h, --help
        print this usage info

    -o, --host
        mongodb host name or ip

    -i, --in_place { y | n | yn | y,n }         default False
        create the entire document at the start, and simply $set the
        values in the loop; not compatible with -a

    -l, --list_length=length                    default 1000
        how many entries to add to the list in each document

    -n, --num_docs=num                          default 100
        total number of independent documents to create and fill

    -p, --prepad { y | n | yn | y,n }           default False
        create documents with their ultimate size from the start,
        then immediately delete the padding

    -s, --entry_size=size                       default 300
        the size, in bytes, of each entry in the arrays. It is just
        a string of 'x' characters

    -t, --num_threads=num                       default 2
        the number of threads to use to update

    -v, --verbose
        print verbose info to console

    -w, --safe_write { y | n | yn | y,n }       default False
        use the safe write flag (safe=True) for all updates and
        inserts

""" % (__file__, __file__)


def main():

    dt = datetime.today()  # - timedelta(days=5)

    argv = sys.argv

    # if this fails, add this to the environment:
    # export PYTHONPATH=$PYTHONPATH:.. (or wherever ears_tools is)

    try:
        opts, args = getopt.getopt(argv[1:], "hn:l:s:p:i:w:a:g:t:o:v",
                                   ["help",
                                    "num_docs=",
                                    "list_length=",
                                    "entry_size=",
                                    "prepad=",
                                    "in_place=",
                                    "safe_write=",
                                    "associative=",
                                    "group_sub_list=",
                                    "num_threads=",
                                    "host=",
                                    "verbose",
                                    ])
    except getopt.GetoptError:
        usage()
        sys.exit(2)

    args = {"num_docs": 100,
            "list_length": 1000,
            "entry_size": 300,
            "prepad": False,
            "in_place": False,
            "safe_write": False,
            "associative": False,
            "group_sub_list": 0,
            "num_threads": 2,
            "verbose": False,
            "host": "localhost"}

    bool_map = {"y": True, "n": False, "yn": [True, False],
                "y,n": [True, False]}

    try:
        for opt, arg in opts:
            if opt in ("-h", "--help"):
                usage()
                sys.exit()
            elif opt in ("-n", "--num_docs"):
                args["num_docs"] = map(lambda x: atoi(x), arg.split(","))
            elif opt in ("-l", "--list_length"):
                args["list_length"] = map(lambda x: atoi(x), arg.split(","))
            elif opt in ("-s", "--entry_size"):
                args["entry_size"] = map(lambda x: atoi(x), arg.split(","))
            elif opt in ("-p", "--prepad"):
                args["prepad"] = bool_map.get(arg, "True")
            elif opt in ("-i", "--in_place"):
                args["in_place"] = bool_map.get(arg, "True")
            elif opt in ("-w", "--safe_write"):
                args["safe_write"] = bool_map.get(arg, "True")
            elif opt in ("-a", "--associative"):
                args["associative"] = bool_map.get(arg, "True")
            elif opt in ("-g", "--group_sub_list"):
                args["group_sub_list"] = map(lambda x: atoi(x), arg.split(","))
            elif opt in ("-o", "--host"):
                args["host"] = arg
            elif opt in ("-t", "--num_threads"):
                args["num_threads"] = map(lambda x: atoi(x), arg.split(","))
            elif opt in ("-v", "--verbose"):
                args["verbose"] = True

    except Exception:
        usage()
        sys.exit(2)

    argsetlist = expand_dict(args)
    print "Running %d times, with the following argument sets: " % len(argsetlist)
    for i, argset in enumerate(argsetlist):
        print "%d: %r" % (i, argset)

    times = []
    for argset in argsetlist:
        print "now running %r" % argset
        times.append(run_test(**argset))

    for i, run in enumerate(argsetlist):
        print "run #%d: %r" % (i, run)
    print "Average time per insert operation, in ms"
    for i, row in enumerate(zip(*times)):
        print "%d:\t%r" % (i, row)


if __name__ == "__main__":
    main()



Dan Riegel

Nov 15, 2011, 11:12:58 AM
to mongodb-user
Apologies for the hurried script paste. That script is now on github:

https://gist.github.com/1367378

We would greatly appreciate feedback from anybody on this behavior,
whether they have seen it, or found a solution. At the moment, our
solution is simply to limit the size of our arrays to about 20
entries.

Dan Riegel
EnergyHub

Scott Hernandez

Nov 15, 2011, 11:18:50 AM
to mongod...@googlegroups.com
Mike did some more testing using a cleaned-up (reduced) test script
and will be responding soon, I expect.

Mike O'Brien

Nov 15, 2011, 12:21:24 PM
to mongodb-user
Hey, sorry for the delay. I had posted in here yesterday afternoon,
but I don't see the post anymore, so it got wiped somehow (or I
forgot to hit send?).

Anyway, over the weekend I hacked up my own script as well, which has
some examples of how to use a tree-like schema:
https://gist.github.com/1365096

By switching from an array to a nested tree I got about a 20% boost
in performance when the array length is long; this improves
throughput by decreasing the amount of CPU time wasted. Using a tree
is probably the way to go, I think.

I also got a slight boost in performance (about 10-15 seconds off
test runs in the range of 300-400 seconds) by using padding, but this
suggests that moves aren't really the bottleneck, I guess. It seems
that, since your documents get quite large, the rate of updates is
basically hammering the disk and saturating I/O.

A couple of things that might be worth playing with:

- turn off journaling to see if there's any dramatic difference, as
it takes some burden off the disk
- play with the value for --syncdelay, which controls the frequency
of flushes to disk
- try without safe writes

Also, this seems like a great use case for sharding -- you could use
a UUID as part of the shard key. Splitting documents down to more
granular units (by hour, for example, as you mentioned) may also help
by limiting the total document size.
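
For reference, the first two knobs above are mongod startup options,
along the lines of:

mongod --dbpath /data/db --nojournal --syncdelay 120

--nojournal disables journaling entirely, and --syncdelay changes the
background flush interval from its default of 60 seconds (the data
path here is just an example).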

jtoberon

Nov 15, 2011, 1:20:49 PM
to mongodb-user
Thanks, Mike. Regarding your various suggestions, from top to bottom:
- For now, we are probably going to pursue the strategy of more
granular units. For various reasons, this is easier for us to
implement than a nested tree. Also, given the growth pattern that we
see during one day, ~20% wouldn't help.
- We found the same results for manual padding: it basically didn't
help. This implies that Mongo's built-in padding works as advertised!
- We found that --syncdelay smooths out the IO stats, but doesn't
solve the underlying problem.
- We already do without safe writes for the log data in question.
- Yes, sharding would help. We're on it. But we still want to
understand what's going on.

Just to follow up on the original question, Dan and I don't understand
the underlying behavior. We do updates at a fairly constant rate all
day long. Yet mongo is not doing a constant amount of IO. Why?

In other words, can you explain what you meant by "since your
documents get quite large, the rate of updates is basically hammering
the disk and saturating I/O"? The rate of updates (number of bytes
we're adding to a collection) IS constant. Yet Mongo's IO is NOT
constant, but rather it grows. Why? Since we're just appending data to
an array, it should be possible to write the updates incrementally
(i.e. at a constant rate), too.

(Apologies if this question is obvious: one thing just occurred to me
when I read your note about journaling. What's written to the journal,
the incremental update or the fully updated entry? If it's the latter,
then that would explain the behavior that we're seeing.)

Mike O'Brien

Nov 22, 2011, 10:50:00 AM
to mongodb-user
Hey. Larger documents span many more pages on disk, which means that
there are more dirty pages that need to be flushed. So in general
you'll get better performance with smaller documents, especially if
your writes are changing the document size (which they are in this
case).

I tried modifying the script I posted above to do hourly documents
(i.e. 24 per day) by adding an {hour: ...} field to the upsert, and
it made a very significant difference, about 30-40% (!). I think
this is the way to go.
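
Back-of-the-envelope, using the numbers from this thread (288
five-minute entries of ~300 bytes each), a full daily document is on
the order of 86 KB, i.e. roughly twenty 4 KB pages that a
late-in-the-day append can dirty, versus one or two pages for an
hourly document. In the daily-log schema from the top of the thread,
the change is roughly just an extra hour field in the upsert query
(field names as before, database/collection names illustrative):

from datetime import datetime
from pymongo import Connection

db = Connection("localhost", 27017)["logdb"]

def append_entry_hourly(mac, value):
    # one (much smaller) document per (mac, day, hour)
    now = datetime.utcnow()
    db.logs.update({"mac": mac,
                    "day": datetime(now.year, now.month, now.day),
                    "hour": now.hour},
                   {"$push": {"data": value}},
                   upsert=True)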

jtoberon

Nov 22, 2011, 12:26:13 PM
to mongodb-user
Thanks, Mike. The pages-on-disk explanation makes sense, as long as
Mongo flushes the entire document to disk rather than just flushing
the pages containing changes. (I don't think we're changing the
document size, since Mongo pads AND we've tried preallocating, but at
this point the question is somewhat academic.)

Our test results agree with yours: switching to hourly documents helps
a lot. The performance seems to be determined by the size of the
document rather than by the length of the array.

When we tested journaling, we found that the combination of journaling
AND large documents (measured in bytes, not number of entries in the
array) is really bad. We actually don't see Mongo's performance drop
off a cliff if we change either of those two variables (journaling or
document size) on its own. So we're going to turn journaling off
first, since we already have enough replicas, and restructure the
documents second.

Thanks again for all your help.

