Think you're running into https://jira.mongodb.org/browse/SERVER-2253.
The sharding config is cleared at first, but the db entry is
re-inserted. Workaround is to do:
config.databases.remove({ _id : <database> }) after you drop a sharded
db. Fix may be simple to backport, looking into this as well for
1.9.2.
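For example, a rough sketch of that workaround from a mongos shell ("mydb" below is just a stand-in for whichever database you dropped):

    // via mongos, right after dropping the sharded database
    use config
    db.databases.find({ _id : "mydb" })    // the stale entry that gets re-inserted (SERVER-2253)
    db.databases.remove({ _id : "mydb" })  // clear it so the db can be re-created/re-sharded cleanly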
> All our collections report correct indexes when we call getIndexes(),
> like you see in Theos post.
> However, if we call db.printCollectionStats() all but one collection
> only report the _id index.
getIndexes() pulls from the <database>.system.indexes metadata
collection, which seems to have stale data. printCollectionStats() is
reporting the true values from the collstats command run on the servers.
Explicitly dropping and re-creating the index should work.
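For instance, a sketch only ("exposures" and { ts : 1 } stand in for whichever collection and index are affected):

    // via mongos, against the affected collection
    db.exposures.getIndexes()              // what <database>.system.indexes claims
    db.exposures.ensureIndex({ ts : 1 })   // re-declare the index; builds it on shards where it's missing
    db.exposures.reIndex()                 // or rebuild all indexes outright (run on each shard primary if mongos won't forward it)
    db.printCollectionStats()              // re-check what the shards actually report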
Another thing to try - does config.locks.find({ _id : "balancer",
state : 1 }) return the same document when you run it twice 20 mins
apart?
You'll want to double-check that the lock ts value hasn't changed, and
then do a config.locks.update({ _id : "balancer", ts : <ts> },
{ $set : { state : 0 } }).
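Roughly, as a sketch (this assumes the ts really hasn't moved between the two checks; <ts> is the value you noted from the first find()):

    use config
    db.locks.find({ _id : "balancer", state : 1 })   // note the ts value
    // ...wait ~20 minutes and run the same find() again...
    // if the identical document (same ts) comes back, clear the stale lock:
    db.locks.update({ _id : "balancer", ts : <ts> }, { $set : { state : 0 } })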
Can you send config.locks.find({ state : 1 }) - looking for all active
locks that don't change in ~20 mins.
The lock data on different config servers may be inconsistent - can you
log in to each one individually and check config.locks.find()?
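Something like this (sketch; the host:port values are hypothetical stand-ins for your three config servers):

    // from a mongo shell, hit each config server directly rather than through mongos
    [ "cfgA:27019", "cfgB:27019", "cfgC:27019" ].forEach(function(h) {
        var cfg = new Mongo(h).getDB("config");
        print("--- " + h + " ---");
        cfg.locks.find({ state : 1 }).forEach(printjson);
    });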
What do you mean by absolutely no utilization? Just that there are no
collection chunks on that shard for that db?
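If it's the latter, a quick way to count chunks per shard for one namespace (a sketch; complete.exposures is used as the example, adjust the ns as needed):

    use config
    db.chunks.group({
        cond    : { ns : "complete.exposures" },
        key     : { shard : 1 },
        initial : { n : 0 },
        reduce  : function(doc, out) { out.n++; }
    })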
Also, is the balancer still running in at least one mongos process?
Primary refers to the database, not to the collection; it's the shard on
which new (unsharded) collections will be created, so I don't think this
is necessarily an issue. You can test this by creating a new collection
for that db.
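e.g., a throwaway test (sketch; "baltest" is just a placeholder collection name):

    use complete
    db.getSisterDB("config").databases.find({ _id : "complete" })  // which shard is primary for the db
    db.baltest.insert({ x : 1 })   // a new unsharded collection should land on that primary shard
    db.baltest.stats()             // collstats for an unsharded collection comes from the primary shard
    db.baltest.drop()              // clean up afterwards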
> Greg Studer <g....@10gen.com> wrote:
> > Something may be interfering with balancing / migrations - you can check
> > the status of balancing by running grep "[balancer]" <logfile> on the
> > mongos logs, and migrations by grepping "migrate". You can also turn on
> > more verbose logging temporarily with adminCommand( { "setParameter" :
> > 1 , logLevel : 1 } ).
> >
> Another thing to try - does config.locks.find({ _id : "balancer",
> state : 1 }) return the same document when you run it twice 20 mins
> apart?
>
> On Fri, 2011-07-08 at 00:53 -0700, David Tollmyr wrote:
> > Rebuilding the indexes helped somewhat. Now our collections are
> > sharded evenly on shards 3 and 4. The "couldn't find index over
> > splitting key" errors are gone. But for some reason shards 1 and 2
> > are hardly being utilized at all. All servers are up and responding
> > properly.
> >
> > Ideas?
>
> > --- Sharding Status ---
> > sharding version: { "_id" : 1, "version" : 3 }
> > shards:
> >   { "_id" : "rfmshard1", "host" : "rfmshard1/rfmcolldb03:27017,rfmcolldb02:27017,rfmcolldb01:27017" }
> >   { "_id" : "rfmshard2", "host" : "rfmshard2/rfmcolldb01:27117,rfmcolldb03:27117,rfmcolldb02:27117" }
> >   { "_id" : "rfmshard3", "host" : "rfmshard3/rfmcolldb04:27017,rfmcolldb06:27017,rfmcolldb05:27017" }
> >   { "_id" : "rfmshard4", "host" : "rfmshard4/rfmcolldb04:27117,rfmcolldb06:27117,rfmcolldb05:27117" }
> >
> > { "_id" : "complete", "partitioned" : true, "primary" : "rfmshard4" }
> >   complete.exposures chunks:
> >     rfmshard3  162
> >     rfmshard4  396
> >     rfmshard2  396
> >     too many chunks to print, use verbose if you want to force print
> >   complete.pageviews chunks:
> >     rfmshard3  1
> >     rfmshard4  18
> >     rfmshard2  1
> >     too many chunks to print, use verbose if you want to force print
> >
> >   fragments.exposure_fragments chunks:
> >     rfmshard4  344
> >     rfmshard3  344
> >     rfmshard1  1
> On 8 July, 05:36, Greg Studer <g...@10gen.com> wrote:
> > > sharding configuration (most of the sharding commands said that the
> > > database and collections were already sharded -- seems like sharding
> > > configuration is not removed when you drop a database).
> >
> > Think you're running into https://jira.mongodb.org/browse/SERVER-2253.
> > The sharding config is cleared at first, but the db entry is
> > re-inserted. Workaround is to do:
> > config.databases.remove({ _id : <database> }) after you drop a sharded
> > db. Fix may be simple to backport, looking into this as well for 1.9.2.
> >
> > > All our collections report correct indexes when we call getIndexes(),
> > > like you see in Theos post.
> > > However, if we call db.printCollectionStats() all but one collection
> > > only report the _id index.
> >
> > getIndexes() pulls from the <database>.system.indexes metadata
> > collection, which seems to have stale data. printCollectionStats() is
> > reporting the true values from the collstats command run on the servers.
> > Explicitly dropping and re-creating the index should work.
> On Thu, 2011-07-07 at 05:26 -0700, David Tollmyr wrote:
> > Hi. I'm David, Theos colleague.
> >
> > We've done some further digging and have discovered what seems an
> > inconsistency in the indexes.
> > All our collections report correct indexes when we call getIndexes(),
> > like you see in Theos post.
> > However, if we call db.printCollectionStats() all but one collection
> > only report the _id index.
> >
> > > db.printCollectionStats()
> > exposure_fragments
> > {
Guessing the "had to change attempt:0" indicates some connections
dropped/re-established at some point which may have made the situation
more confusing.
On Mon, 2011-07-11 at 23:19 +0200, David Tollmyr wrote:
> The "collection metadata lock" messages are not as common as i
> thought. I see about 6-8 in the past 12 hours.
> As far as balancer goes i see this regularly but not much else:
> Mon Jul 11 20:29:54 [Balancer] SyncClusterConnection connecting to
> [rfmcolldb03:28100]
>
>
> I see a lot of these if they're relevant:
> Mon Jul 11 21:14:52 [conn35] ns: fragments.pageview_fragments
> ClusteredCursor::query ShardConnection had to change attempt: 0
>
If stale writebacks are queued, you'll need to bounce the shard
primaries to remove them.
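For a replica-set shard, a bounce might look roughly like this (a sketch only; run it against each shard primary in turn, and restart the mongod however you normally manage the processes):

    // on the shard's current primary
    rs.status()       // confirm which member is primary
    rs.stepDown(60)   // hand the primary role off to a secondary
    // then restart that mongod process; per the note above, this clears any
    // stale writebacks it had queued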