Moxi management channel

Guille -bisho-

Jul 7, 2010, 12:57:11 PM
to moxi
I want to be able to manage Moxi remotely with libconflate. I read in
the documentation:

------------------------
* MCP (or libconflate) configuration of moxi

The following is the code snippet you'd see in the Master Control
Program (mcp)'s configs.py file. Notice that this is just python...

configs['storefront324'] = ServerConfig([
    ServerList('poolx', 11221,
        [Server("1",
            host='localhost',
            port=11211,
            protocol='binary',
            # bucket='b1',
            # weight=2,
            # usr='test1',
            # pwd='password'
            )],
        protocol='binary',
        # front_cache_lifespan=15000,
        # front_cache_spec='sess:|page:',
        # downstream_timeout=2000
        ),
    ServerList('pooly', 11331,
        [Server("2",
            host='localhost',
            port=11211,
            protocol='binary',
            # usr='test1',
            # pwd='password'
            ),
         Server("3",
            host='localhost',
            port=11311,
            protocol='binary',
            # usr='test1',
            # pwd='password'
            )],
        protocol=default_protocol)
    ])
------------------------

But I can't find any reference to that MCP python program. Where can I
find it?

The moxi proxy already connects to the XMPP server, but I'm unable to
configure it, and if moxi fails to parse the commands it just exits.

Matt Ingenthron

Jul 8, 2010, 11:31:40 PM
to mo...@googlegroups.com
Hi Guille,

Sorry for the delay in replying.

MCP was something we had at one point as a "control plane" for moxi
configurations. It plugged into the libconflate configuration
management of moxi, which in turn used libstrophe, an XMPP client
library, to talk to something called mcp.

We've actually moved away from that approach to a more HTTP-oriented
approach, using REST or using a file. You can actually see a lot of that
in the current moxi codebase.

The configurations are served up by a component called ns_server in the
membase beta NorthScale has shipped (moxi is a key component of membase
and has been under very active development recently), but for small
deployments it may be simpler to just have moxi talk to a small server
that sends a REST configuration.
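
To give a rough idea, a minimal sketch of such a server (just an
illustration in Python; the port, the path handling, and the config file
name are placeholders, not what ns_server actually does):

#!/usr/bin/env python
# Minimal sketch: serve a static moxi cluster config over HTTP.
# CONFIG_FILE, the port, and the path handling are illustrative only.
import BaseHTTPServer

CONFIG_FILE = "cluster.cfg"  # the same config body you'd pass to moxi -z

class ConfigHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the current config for any GET; a real server would
        # dispatch on self.path (per-pool / per-bucket URLs).
        body = open(CONFIG_FILE).read()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    BaseHTTPServer.HTTPServer(("127.0.0.1", 8080), ConfigHandler).serve_forever()

Pointing moxi's url= option at something like that would play the same
role ns_server plays in membase.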

I can probably help with this a bit more if I know the config you're
trying to achieve.

- Matt

Guille -bisho-

Jul 9, 2010, 4:48:09 AM
to moxi
Very interesting. I really like the idea of REST. I was worried about the
deployment and management problems of configuring moxis through XMPP;
I much prefer to push and retrieve config over REST/HTTP or similar,
so you can always query status and proactively track whether everything
is in a sane state.

I will take a look at the membase server and ns_server.

Is there any documentation on the configuration format? Does Moxi 0.10
support REST, or do I need to look at trunk / the membase repositories?

Thanks for your support. Moxi is a really interesting application.

Guille -bisho-

Jul 9, 2010, 12:19:25 PM
to moxi
I'm having some problems compiling the latest moxi 1.6 beta 1.1:

make[4]: Leaving directory `/tmp/moxi/libconflate/libstrophe'
Making all in .
make[4]: Entering directory `/tmp/moxi/libconflate'
make[4]: Leaving directory `/tmp/moxi/libconflate'
Making all in tests
make[4]: Entering directory `/tmp/moxi/libconflate/tests'
make[4]: Nothing to be done for `all'.
make[4]: Leaving directory `/tmp/moxi/libconflate/tests'
make[3]: Leaving directory `/tmp/moxi/libconflate'
make[2]: Leaving directory `/tmp/moxi/libconflate'
make[2]: Entering directory `/tmp/moxi'
gcc -std=gnu99 -DHAVE_CONFIG_H -I. -Ilibconflate -DNDEBUG -DCONFLATE_DB_PATH=\"/usr/local/var/lib/moxi\" -g -O2 -pthread -Wall -Werror -pedantic -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wredundant-decls -fno-strict-aliasing -MT moxi-memcached.o -MD -MP -MF .deps/moxi-memcached.Tpo -c -o moxi-memcached.o `test -f 'memcached.c' || echo './'`memcached.c
In file included from cproxy.h:9,
from memcached.c:50:
mcs.h:10:32: error: libvbucket/vbucket.h: No such file or directory
In file included from cproxy.h:9,
from memcached.c:50:
mcs.h:31: error: expected specifier-qualifier-list before ‘VBUCKET_CONFIG_HANDLE’
cc1: warnings being treated as errors
mcs.h:33: error: struct has no members
make[2]: *** [moxi-memcached.o] Error 1
make[2]: Leaving directory `/tmp/moxi'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/tmp/moxi'
make: *** [all] Error 2

I also needed to make some changes to be able to compile; for example,
version.sh was creating version.m4 in the root directory instead of in
the "m4" folder:

diff --git a/autogen.sh b/autogen.sh
index 5dd991f..4fa8060 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -20,7 +20,7 @@ libtoolize --automake
 fi
 
 echo "aclocal..."
-ACLOCAL=`which aclocal-1.10 || which aclocal-1.9 || which aclocal19 || which aclocal-1.7 || which aclocal17 || which aclocal-1.5 || which aclocal15`
+ACLOCAL=`which aclocal-1.11 || which aclocal-1.10 || which aclocal-1.9 || which aclocal19 || which aclocal-1.7 || which aclocal17 || which aclocal-1.5 || which aclocal15`
 ${ACLOCAL:-aclocal} $ACLOCALFLAGS || exit 1
 
 echo "autoheader..."
@@ -28,7 +28,7 @@ AUTOHEADER=${AUTOHEADER:-autoheader}
 $AUTOHEADER || exit 1
 
 echo "automake..."
-AUTOMAKE=`which automake-1.10 || which automake-1.9 || which automake-1.7`
+AUTOMAKE=`which automake-1.11 || which automake-1.10 || which automake-1.9 || which automake-1.7`
 $AUTOMAKE --foreign --add-missing || automake --foreign --add-missing || exit 1
 
 echo "autoconf..."
diff --git a/version.sh b/version.sh
index 6df80a0..8a97ce0 100755
--- a/version.sh
+++ b/version.sh
@@ -4,7 +4,7 @@ if git describe | sed s/-/./g > version.num.tmp
 then
     mv version.num.tmp version.num
     echo "m4_define([VERSION_NUMBER], [`tr -d '\n' < version.num`])" \
-        > version.m4
+        > m4/version.m4
     sed s/@VERSION@/`cat version.num`/ < memcached.spec.in > memcached.spec
 else
     rm version.num.tmp

Matt Ingenthron

Jul 9, 2010, 7:26:10 PM
to mo...@googlegroups.com
Hi Guille,

Sorry about that. It's a bug: there's a dependency on libvbucket in all
cases. That's a result of some of the work we've been doing on membase
and moxi's role in the membase architecture.

For now, you could probably just grab the missing header from libvbucket
or remove the -Werror from the Makefile.

libvbucket's code and that header is here:
http://github.com/northscale/libvbucket/tree/master/include/libvbucket/

By the way, in case you're wondering what vbuckets are and why moxi
cares, here's Dustin's post on the matter:
http://dustin.github.com/2010/06/29/memcached-vbuckets.html

vbuckets, and moxi getting data to the right vbucket, are part of how
membase works its magic of rebalancing a cluster while online.
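
The short version of the idea, as I understand it (a sketch only, not
libvbucket's exact code): a key hashes to a vbucket, and the vBucketMap
says which server currently owns that vbucket, so a rebalance just moves
vbuckets between servers:

import zlib

def server_for_key(key, server_list, vbucket_map):
    # Hash the key to a vbucket, then look up that vbucket's current master.
    # (Sketch: libvbucket's actual CRC/masking details may differ.)
    vbucket = zlib.crc32(key) % len(vbucket_map)
    master_index = vbucket_map[vbucket][0]   # inner list is [master, replica, ...]
    return server_list[master_index]

print server_for_key("some_key", ["localhost:11211", "localhost:11311"], [[0], [1]])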

I'm planning to work on cleanup next week. Thanks for the pointer on
version.m4.

- Matt


Guille -bisho-

Jul 12, 2010, 4:16:13 AM
to moxi
I tried that, and then it complains while linking:

/usr/bin/ld: cannot find -lvbucket

I'm trying to compile libvbucket, but I haven't been able to yet. I'm
going to try to get the lib from the latest membase binary packages...

Guille -bisho-

Jul 13, 2010, 4:38:22 AM
to moxi
I managed to build moxi with the libvbucket from the membase source
distribution, which was missing from moxi 1.6 beta.

Now I have another issue. I launch moxi with REST support, but it
keeps requesting the configuration forever, without any throttling, at
top speed.

I requested the file by hand, and it's fine:
$ lynx -source http://localhost/pools/default/bucketsStreaming/default
11311 = {
"hashAlgorithm": "CRC",
"numReplicas": 0,
"serverList": ["localhost:11211"],
}

Launching moxi with this config file also works fine:
$ ./moxi -vvv -z /var/www/pools/default/bucketsStreaming/default
[...]
worker_libevent thread_id 140150538585872
<38 server listening (auto-negotiate)
<38 initialized conn_funcs to default
<39 server listening (auto-negotiate)
<39 initialized conn_funcs to default
init_string cycle: 0
init_string downstream_max: 4
init_string downstream_weight: 0
init_string downstream_retry: 1
init_string downstream_protocol: 8
init_string downstream_timeout: 0
init_string wait_queue_timeout: 0
init_string front_cache_max: 200
init_string front_cache_lifespan: 0
init_string front_cache_spec:
init_string front_cache_unspec:
init_string key_stats_max: 4000
init_string key_stats_lifespan: 0
init_string key_stats_spec:
init_string key_stats_unspec:
init_string optimize_set:
init_string usr:
init_string host:
init_string port: 0
init_string bucket:
init_string port_listen: 0
cproxy_create on port 11311, config {
"hashAlgorithm": "CRC",
"serverList": ["localhost:11211"]
}
cproxy_listen on port 11311, downstream {
"hashAlgorithm": "CRC",
"serverList": ["localhost:11211"]
}
<40 server listening (auto-negotiate)
<40 initialized conn_funcs to default
<41 server listening (auto-negotiate)
<41 initialized conn_funcs to default
<41 cproxy listening on port 11311
<40 cproxy listening on port 11311
moxi listening on 11311 with 2 conns

But when launching moxi with the remote configuration, nothing happens,
and nothing appears on the console despite the verbosity level:

$ ./moxi -vvv -z auth=,url=http://localhost:80/pools/default/bucketsStreaming/default,#@ -p 11311
slab class 1: chunk size 96 perslab 10922
slab class 2: chunk size 120 perslab 8738
[...]
slab class 38: chunk size 394840 perslab 2
slab class 39: chunk size 493552 perslab 2
worker_libevent thread_id 140715919709968
worker_libevent thread_id 140715911317264
worker_libevent thread_id 140715902924560
worker_libevent thread_id 140715894531856
<38 server listening (auto-negotiate)
<38 initialized conn_funcs to default
<39 server listening (auto-negotiate)
<39 initialized conn_funcs to default
cproxy_init jid: host: http://localhost:80/pools/default/bucketsStreaming/default
dbpath: /usr/local/var/lib/moxi/conflate-default.cfg
cproxy_init_agent_start
cproxy_init done

The webserver is being hammered with thousands of requests:
::1 - - [13/Jul/2010:10:06:37 +0200] "GET /pools/default/bucketsStreaming/default HTTP/1.1" 200 326 "-" ""

I have ensured /usr/local/var/lib/moxi/conflate-default.cfg is writable,
but it doesn't get overwritten...

Any clue?

steve.yen

Jul 13, 2010, 12:46:02 PM
to moxi

Hi, you've found two things...

One is a bug -- moxi should stop (or at least back off) if it sees a
bad config file. I've entered this into the internal bug tracking
system.

The other is that your config is wrong: it's missing the vBucketMap
section...

11311 = {
  "hashAlgorithm": "CRC",
  "numReplicas": 0,
  "serverList": ["localhost:11211"],
  "vBucketMap":
    [
      [0],
      [0]
    ]
}
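
A quick way to catch that kind of omission before pointing moxi at the
URL is a small check script; a rough sketch in Python (assuming the
"PORT = { ...json... }" layout above, with the key names shown in this
thread):

import json
import sys

REQUIRED_KEYS = ("hashAlgorithm", "serverList", "vBucketMap")

def check_moxi_config(text):
    # The body looks like "11311 = { ...json... }", so split off the
    # port prefix and parse the remainder as JSON.
    port, _, body = text.partition("=")
    cfg = json.loads(body)
    for key in REQUIRED_KEYS:
        if key not in cfg:
            raise ValueError("config for port %s is missing %r" % (port.strip(), key))
    return cfg

if __name__ == "__main__":
    check_moxi_config(open(sys.argv[1]).read())
    print "config looks complete"

(It would also have flagged the trailing comma in the file you fetched
with lynx, since that isn't valid JSON.)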

Actually, this brings up an important clarifying question: are you
trying to use moxi to proxy to a membase server or to a memcached
server?

Steve

On Jul 13, 1:38 am, Guille -bisho- <bishi...@gmail.com> wrote:
> I managed to build moxi with the vbucket from membase source
> distribution, that was missing from moxi 1.6 beta.
>
> Now I have another issue. I launch the moxi with REST support, but
> keeps requesting the configuration forever, without any throttle, at
> top speed.
>
> I requested the file by hand, and it's fine:
> $ lynx -source http://localhost/pools/default/bucketsStreaming/default

Guille -bisho-

Jul 14, 2010, 5:09:24 AM
to moxi, stev...@gmail.com
On Jul 13, 6:46 pm, "steve.yen" <steve....@gmail.com> wrote:
> Hi, you've found two things...
>
> One is a bug -- moxi should stop (or at least backoff) if it sees a
> bad config file.  I've entered this into the internal bug tracking
> system.
>
> Another is your config is wrong, and is missing the vBucketMap
> section...
>
> 11311 = {
>   "hashAlgorithm": "CRC",
>   "numReplicas": 0,
>   "serverList": ["localhost:11211"],
>   "vBucketMap":
>     [
>       [0],
>       [0]
>     ]
>
> }
>
> Actually, this brings up an important clarifying question: are you
> trying to use moxi to proxy to a membase server or to memcached
> server?

A regular memcached server for now. We need to decrease the number of
established connections to the memcached servers. I have plans to try
out membase as well, but not right now.

Yeah, I noticed that too. I got it to work with:
11311 = {
  "hashAlgorithm": "CRC",
  "numReplicas": 0,
  "serverList": ["127.0.0.1:11214","127.0.0.1:11213","127.0.0.1:11212","127.0.0.1:11211"],
  "vBucketMap":
    [
      [0]
    ]
}

With that config, moxi works fine when launched with the config file:
$ ./moxi -vvv -z /var/www/pools/default/bucketsStreaming/default

I have a test PHP client that hits the moxi proxy, and it works.
Even better, I managed to configure it so the hash is compatible with
the hash PHP currently uses.

But again, when run in REST mode, the config is requested hundreds of
times per second, and moxi crashes when I run the same test PHP
client against it. Is the config fetched over REST in a different
format from the file format?
$ ./moxi -vvv -z auth=,url=http://localhost:80/pools/default/bucketsStreaming/default,#@ -p 11311

slab class 1: chunk size 96 perslab 10922
slab class 2: chunk size 120 perslab 8738
slab class 3: chunk size 152 perslab 6898
slab class 4: chunk size 192 perslab 5461
slab class 5: chunk size 240 perslab 4369
slab class 6: chunk size 304 perslab 3449
slab class 7: chunk size 384 perslab 2730
slab class 8: chunk size 480 perslab 2184
slab class 9: chunk size 600 perslab 1747
slab class 10: chunk size 752 perslab 1394
slab class 11: chunk size 944 perslab 1110
slab class 12: chunk size 1184 perslab 885
slab class 13: chunk size 1480 perslab 708
slab class 14: chunk size 1856 perslab 564
slab class 15: chunk size 2320 perslab 451
slab class 16: chunk size 2904 perslab 361
slab class 17: chunk size 3632 perslab 288
slab class 18: chunk size 4544 perslab 230
slab class 19: chunk size 5680 perslab 184
slab class 20: chunk size 7104 perslab 147
slab class 21: chunk size 8880 perslab 118
slab class 22: chunk size 11104 perslab 94
slab class 23: chunk size 13880 perslab 75
slab class 24: chunk size 17352 perslab 60
slab class 25: chunk size 21696 perslab 48
slab class 26: chunk size 27120 perslab 38
slab class 27: chunk size 33904 perslab 30
slab class 28: chunk size 42384 perslab 24
slab class 29: chunk size 52984 perslab 19
slab class 30: chunk size 66232 perslab 15
slab class 31: chunk size 82792 perslab 12
slab class 32: chunk size 103496 perslab 10
slab class 33: chunk size 129376 perslab 8
slab class 34: chunk size 161720 perslab 6
slab class 35: chunk size 202152 perslab 5
slab class 36: chunk size 252696 perslab 4
slab class 37: chunk size 315872 perslab 3
slab class 38: chunk size 394840 perslab 2
slab class 39: chunk size 493552 perslab 2
worker_libevent thread_id 140693837612816
worker_libevent thread_id 140693854398224
worker_libevent thread_id 140693862790928
worker_libevent thread_id 140693846005520
<38 server listening (auto-negotiate)
<38 initialized conn_funcs to default
<39 server listening (auto-negotiate)
<39 initialized conn_funcs to default
cproxy_init jid: host: http://localhost:80/pools/default/bucketsStreaming/default
dbpath: /usr/local/var/lib/moxi/conflate-default.cfg
cproxy_init_agent_start
cproxy_init done
38: drive_machine conn_listening
<41 new auto-negotiating client connection
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
41: Client using the ascii protocol
<41 set test_moxi_0 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_0
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_1 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_1
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_2 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_2
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_3 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_3
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_4 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_4
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_5 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_5e 0 5
moxi5
efault
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_6 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_6
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_7 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_7e 0 5
moxi7
efault
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_8 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_8: 0 5
moxi8
Host: l�
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_9 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_9: 0 5
moxi9
Host: l�
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_0
Segmentation fault
bisho@pluma:/tmp/membase-server-ubuntu-1.6.0beta1/bin/moxi$ ./moxi -vvv -z auth=,url=http://localhost:80/pools/default/bucketsStreaming/default,#@ -p 11311
slab class 1: chunk size 96 perslab 10922
slab class 2: chunk size 120 perslab 8738
slab class 3: chunk size 152 perslab 6898
slab class 4: chunk size 192 perslab 5461
slab class 5: chunk size 240 perslab 4369
slab class 6: chunk size 304 perslab 3449
slab class 7: chunk size 384 perslab 2730
slab class 8: chunk size 480 perslab 2184
slab class 9: chunk size 600 perslab 1747
slab class 10: chunk size 752 perslab 1394
slab class 11: chunk size 944 perslab 1110
slab class 12: chunk size 1184 perslab 885
slab class 13: chunk size 1480 perslab 708
slab class 14: chunk size 1856 perslab 564
slab class 15: chunk size 2320 perslab 451
slab class 16: chunk size 2904 perslab 361
slab class 17: chunk size 3632 perslab 288
slab class 18: chunk size 4544 perslab 230
slab class 19: chunk size 5680 perslab 184
slab class 20: chunk size 7104 perslab 147
slab class 21: chunk size 8880 perslab 118
slab class 22: chunk size 11104 perslab 94
slab class 23: chunk size 13880 perslab 75
slab class 24: chunk size 17352 perslab 60
slab class 25: chunk size 21696 perslab 48
slab class 26: chunk size 27120 perslab 38
slab class 27: chunk size 33904 perslab 30
slab class 28: chunk size 42384 perslab 24
slab class 29: chunk size 52984 perslab 19
slab class 30: chunk size 66232 perslab 15
slab class 31: chunk size 82792 perslab 12
slab class 32: chunk size 103496 perslab 10
slab class 33: chunk size 129376 perslab 8
slab class 34: chunk size 161720 perslab 6
slab class 35: chunk size 202152 perslab 5
slab class 36: chunk size 252696 perslab 4
slab class 37: chunk size 315872 perslab 3
slab class 38: chunk size 394840 perslab 2
slab class 39: chunk size 493552 perslab 2
worker_libevent thread_id 140671905761040
worker_libevent thread_id 140671922546448
worker_libevent thread_id 140671897368336
worker_libevent thread_id 140671914153744
<38 server listening (auto-negotiate)
<38 initialized conn_funcs to default
<39 server listening (auto-negotiate)
<39 initialized conn_funcs to default
cproxy_init jid: host: http://localhost:80/pools/default/bucketsStreaming/default
dbpath: /usr/local/var/lib/moxi/conflate-default.cfg
cproxy_init_agent_start
cproxy_init done
38: drive_machine conn_listening
<41 new auto-negotiating client connection
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
41: Client using the ascii protocol
<41 set test_moxi_0 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_0
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_1 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_1
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_2 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_2
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_3 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_3
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_4 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_4
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_5 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_5
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_6 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_6
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_7 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_7
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_8 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_8
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 set test_moxi_9 0 0 5
41: going from conn_parse_cmd to conn_nread
41: drive_machine conn_nread
41: drive_machine conn_nread
> NOT FOUND test_moxi_9
>41 STORED
41: going from conn_nread to conn_write
41: drive_machine conn_write
41: drive_machine conn_write
41: going from conn_write to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_0
> NOT FOUND test_moxi_0
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_1
> NOT FOUND test_moxi_1
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_2
> NOT FOUND test_moxi_2
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_3
> NOT FOUND test_moxi_3
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_4
> NOT FOUND test_moxi_4
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_5
> NOT FOUND test_moxi_5
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_6
> NOT FOUND test_moxi_6
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_7
> NOT FOUND test_moxi_7
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_8
> NOT FOUND test_moxi_8
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
41: going from conn_mwrite to conn_new_cmd
41: drive_machine conn_new_cmd
41: going from conn_new_cmd to conn_waiting
41: drive_machine conn_waiting
41: going from conn_waiting to conn_read
41: drive_machine conn_read
41: going from conn_read to conn_parse_cmd
41: drive_machine conn_parse_cmd
<41 get test_moxi_9
> FOUND KEY test_moxi_9
>41 sending key test_moxi_9
>41 END
41: going from conn_parse_cmd to conn_mwrite
41: drive_machine conn_mwrite
41: drive_machine conn_mwrite
*** glibc detected *** ./moxi: double free or corruption (fasttop): 0x0000000000b01280 ***
======= Backtrace: =========
/lib/libc.so.6(+0x775b6)[0x7ff0bc20e5b6]
/lib/libc.so.6(cfree+0x73)[0x7ff0bc214e53]
./moxi[0x415713]
./moxi[0x415b6f]
./moxi[0x4178bd]
./moxi[0x4110d1]
./moxi[0x41143e]
./moxi[0x46bb14]
./moxi[0x4174b0]
/lib/libpthread.so.0(+0x69ca)[0x7ff0bcdde9ca]
/lib/libc.so.6(clone+0x6d)[0x7ff0bc27d6fd]
======= Memory map: ========
00400000-004a6000 r-xp 00000000 08:02
45769759 /tmp/membase-server-
ubuntu-1.6.0beta1/bin/moxi/moxi
006a5000-006a7000 r--p 000a5000 08:02
45769759 /tmp/membase-server-
ubuntu-1.6.0beta1/bin/moxi/moxi
006a7000-006aa000 rw-p 000a7000 08:02
45769759 /tmp/membase-server-
ubuntu-1.6.0beta1/bin/moxi/moxi
006aa000-006b0000 rw-p 00000000 00:00 0
00af1000-023f4000 rw-p 00000000 00:00
0 [heap]
7ff0b4000000-7ff0b4021000 rw-p 00000000 00:00 0
7ff0b4021000-7ff0b8000000 ---p 00000000 00:00 0
7ff0b8368000-7ff0b837e000 r-xp 00000000 08:02
16778428 /lib/libgcc_s.so.1
7ff0b837e000-7ff0b857d000 ---p 00016000 08:02
16778428 /lib/libgcc_s.so.1
7ff0b857d000-7ff0b857e000 r--p 00015000 08:02
16778428 /lib/libgcc_s.so.1
7ff0b857e000-7ff0b857f000 rw-p 00016000 08:02
16778428 /lib/libgcc_s.so.1
7ff0b857f000-7ff0b858b000 r-xp 00000000 08:02
17820950 /lib/libnss_files-2.11.1.so
7ff0b858b000-7ff0b878a000 ---p 0000c000 08:02
17820950 /lib/libnss_files-2.11.1.so
7ff0b878a000-7ff0b878b000 r--p 0000b000 08:02
17820950 /lib/libnss_files-2.11.1.so
7ff0b878b000-7ff0b878c000 rw-p 0000c000 08:02
17820950 /lib/libnss_files-2.11.1.so
7ff0b878c000-7ff0b878d000 ---p 00000000 00:00 0
7ff0b878d000-7ff0b8f8d000 rw-p 00000000 00:00 0
7ff0b8f8d000-7ff0b8f8e000 ---p 00000000 00:00 0
7ff0b8f8e000-7ff0b978e000 rw-p 00000000 00:00 0
7ff0b978e000-7ff0b978f000 ---p 00000000 00:00 0
7ff0b978f000-7ff0b9f8f000 rw-p 00000000 00:00 0
7ff0b9f8f000-7ff0b9f90000 ---p 00000000 00:00 0
7ff0b9f90000-7ff0ba790000 rw-p 00000000 00:00 0
7ff0ba790000-7ff0ba791000 ---p 00000000 00:00 0
7ff0ba791000-7ff0baf91000 rw-p 00000000 00:00 0
7ff0baf91000-7ff0baf92000 ---p 00000000 00:00 0
7ff0baf92000-7ff0bb792000 rw-p 00000000 00:00 0
7ff0bb792000-7ff0bb793000 ---p 00000000 00:00 0
7ff0bb793000-7ff0bbf93000 rw-p 00000000 00:00 0
7ff0bbf93000-7ff0bbf95000 r-xp 00000000 08:02
16780139 /lib/libdl-2.11.1.so
7ff0bbf95000-7ff0bc195000 ---p 00002000 08:02
16780139 /lib/libdl-2.11.1.so
7ff0bc195000-7ff0bc196000 r--p 00002000 08:02
16780139 /lib/libdl-2.11.1.so
7ff0bc196000-7ff0bc197000 rw-p 00003000 08:02
16780139 /lib/libdl-2.11.1.so
7ff0bc197000-7ff0bc311000 r-xp 00000000 08:02
16780136 /lib/libc-2.11.1.so
7ff0bc311000-7ff0bc510000 ---p 0017a000 08:02
16780136 /lib/libc-2.11.1.so
7ff0bc510000-7ff0bc514000 r--p 00179000 08:02
16780136 /lib/libc-2.11.1.so
7ff0bc514000-7ff0bc515000 rw-p 0017d000 08:02
16780136 /lib/libc-2.11.1.so
7ff0bc515000-7ff0bc51a000 rw-p 00000000 00:00 0
7ff0bc51a000-7ff0bc59c000 r-xp 00000000 08:02
17820945 /lib/libm-2.11.1.so
7ff0bc59c000-7ff0bc79b000 ---p 00082000 08:02
17820945 /lib/libm-2.11.1.so
7ff0bc79b000-7ff0bc79c000 r--p 00081000 08:02
17820945 /lib/libm-2.11.1.so
7ff0bc79c000-7ff0bc79d000 rw-p 00082000 08:02
17820945 /lib/libm-2.11.1.so
7ff0bc79d000-7ff0bc7b3000 r-xp 00000000 08:02
17820956 /lib/libresolv-2.11.1.so
7ff0bc7b3000-7ff0bc9b2000 ---p 00016000 08:02
17820956 /lib/libresolv-2.11.1.so
7ff0bc9b2000-7ff0bc9b3000 r--p 00015000 08:02
17820956 /lib/libresolv-2.11.1.so
7ff0bc9b3000-7ff0bc9b4000 rw-p 00016000 08:02
17820956 /lib/libresolv-2.11.1.so
7ff0bc9b4000-7ff0bc9b6000 rw-p 00000000 00:00 0
7ff0bc9b6000-7ff0bc9bd000 r-xp 00000000 08:02
17820957 /lib/librt-2.11.1.so
7ff0bc9bd000-7ff0bcbbc000 ---p 00007000 08:02
17820957 /lib/librt-2.11.1.so
7ff0bcbbc000-7ff0bcbbd000 r--p 00006000 08:02
17820957 /lib/librt-2.11.1.so
7ff0bcbbd000-7ff0bcbbe000 rw-p 00007000 08:02
17820957 /lib/librt-2.11.1.so
7ff0bcbbe000-7ff0bcbd5000 r-xp 00000000 08:02
17820947 /lib/libnsl-2.11.1.so
7ff0bcbd5000-7ff0bcdd4000 ---p 00017000 08:02
17820947 /lib/libnsl-2.11.1.so
7ff0bcdd4000-7ff0bcdd5000 r--p 00016000 08:02
17820947 /lib/libnsl-2.11.1.so
7ff0bcdd5000-7ff0bcdd6000 rw-p 00017000 08:02
17820947 /lib/libnsl-2.11.1.so
7ff0bcdd6000-7ff0bcdd8000 rw-p 00000000 00:00 0
7ff0bcdd8000-7ff0bcdf0000 r-xp 00000000 08:02
17820955 /lib/libpthread-2.11.1.so
7ff0bcdf0000-7ff0bcfef000 ---p 00018000 08:02
17820955 /lib/libpthread-2.11.1.so
7ff0bcfef000-7ff0bcff0000 r--p 00017000 08:02
17820955 /lib/libpthread-2.11.1.so
7ff0bcff0000-7ff0bcff1000 rw-p 00018000 08:02
17820955 /lib/libpthread-2.11.1.so
7ff0bcff1000-7ff0bcff5000 rw-p 00000000 00:00 0
7ff0bcff5000-7ff0bd00b000 r-xp 00000000 08:02
16778576 /lib/libz.so.1.2.3.3
7ff0bd00b000-7ff0bd20a000 ---p 00016000 08:02
16778576 /lib/libz.so.1.2.3.3
7ff0bd20a000-7ff0bd20b000 r--p 00015000 08:02
16778576 /lib/libz.so.1.2.3.3
7ff0bd20b000-7ff0bd20c000 rw-p 00016000 08:02
16778576 /lib/libz.so.1.2.3.3
7ff0bd20c000-7ff0bd374000 r-xp 00000000 08:02
16778404 /lib/libcrypto.so.0.9.8
7ff0bd374000-7ff0bd573000 ---p 00168000 08:02
16778404 /lib/libcrypto.so.0.9.8
7ff0bd573000-7ff0bd580000 r--p 00167000 08:02
16778404 /lib/libcrypto.so.0.9.8
7ff0bd580000-7ff0bd598000 rw-p 00174000 08:02
16778404 /lib/libcrypto.so.0.9.8
7ff0bd598000-7ff0bd59c000 rw-p 00000000 00:00 0
7ff0bd59c000-7ff0bd5e6000 r-xp 00000000 08:02
16778549 /lib/libssl.so.0.9.8
7ff0bd5e6000-7ff0bd7e5000 ---p 0004a000 08:02
16778549 /lib/libssl.so.0.9.8
7ff0bd7e5000-7ff0bd7e7000 r--p 00049000 08:02
16778549 /lib/libssl.so.0.9.8
7ff0bd7e7000-7ff0bd7ec000 rw-p 0004b000 08:02
16778549 /lib/libssl.so.0.9.8
7ff0bd7ec000-7ff0bd80c000 r-xp 00000000 08:02
17820930 /lib/ld-2.11.1.so
7ff0bd964000-7ff0bd9eb000 rw-p 00000000 00:00 0
7ff0bda09000-7ff0bda0c000 rw-p 00000000 00:00 0 Aborted

steve.yen

Jul 14, 2010, 11:57:18 AM
to moxi
Hi,
Yes, you've hit a manifestation of the same bug, or at least it's in
the same part of the code that interacts with libcurl.

The latest libconflate has a quick fix that should make REST requests
happen less often (once a second, rather than as fast as possible), and
I hope that'll be helpful. Please see:
http://github.com/northscale/libconflate/commit/420faf47fa6c1ee3564f21dc8cde57056670aa06

A better fix would be to do something more intelligent (backoff, etc).
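
For illustration, the kind of backoff I mean looks roughly like this (a
sketch in Python only; the real change belongs in libconflate's C polling
code, and the interval numbers are made up):

import time
import urllib2

def poll_config(url, min_interval=1.0, max_interval=60.0):
    interval = min_interval
    last_body = None
    while True:
        try:
            body = urllib2.urlopen(url).read()
            if body != last_body:
                last_body = body
                # ...hand the new config to the proxy here...
                interval = min_interval                     # config changed: poll eagerly
            else:
                interval = min(interval * 2, max_interval)  # unchanged: back off
        except Exception:
            interval = min(interval * 2, max_interval)      # fetch failed: back off too
        time.sleep(interval)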

The end-all solution, by the way, is the way membase does it.
The REST/webserver component in membase keeps the HTTP/REST connection
open, in so-called "streaming" fashion. So, when the membase
cluster management component sees a cluster configuration change, it
can actively notify clients like moxi. That has the benefit that
clients like moxi do not need to keep polling the REST server.

Steve

steve.yen

Jul 14, 2010, 12:30:54 PM
to moxi
Also, on the moxi crash, I'll try to replicate what you did, but the
most useful things are stack backtraces, etc, if you have them.
Thanks!
Steve

steve.yen

Jul 14, 2010, 12:40:30 PM
to moxi
Apologies, I seem to be reading and re-reading your message in
sentence-sized parts (timesliced into my morning routine and
commute)... :-)

> Even more, I managed to configure it so the hash is compatible with
> the current hash php used.

Great! Would love to hear more about how you did this, if you can
spare some of the details?

Thx,
Steve

Guille -bisho-

Jul 14, 2010, 1:02:44 PM
to moxi
On Jul 14, 5:57 pm, "steve.yen" <steve....@gmail.com> wrote:
> Hi,
> Yes, you've hit a manifestation of the same bug, or at least it's in
> the same part of the code that interacts with libcurl.
>
> The latest libconflate has a quick fix that should make REST requests
> less often (once a second, rather than as fast-as-possible), and I
> hope that'll be helpful.  Please see: http://github.com/northscale/libconflate/commit/420faf47fa6c1ee3564f2...

I will try that!

BTW, I think I'm getting the vbuckets idea wrong. I wanted to
configure just moxi's serverList. It looks like moxi listens
on one port and configures another with the vbucket configuration. Is
there any way I can send moxi just a serverList, not the vbucket
configuration? I need to be able to change it on the fly, in case a
memcached instance dies and needs to be replaced by another.

Also, when I launch moxi like:

$ ./moxi -vvv -u nobody -l 127.0.0.1 -z ./config/port_11302

It opens two ports: the expected 11301, and 11210, which is not mentioned
on the command line or in the config. I also tried -P XXX and -Z port_listen=XXX,
but 11210 is always used.
tcp        0      0 127.0.0.1:11301         0.0.0.0:*               LISTEN      12805/moxi
tcp        0      0 127.0.0.1:11210         0.0.0.0:*               LISTEN      12805/moxi

I need to be able to launch more than one moxi at once, one for each
memcached farm, with different lists of servers.

> A better fix would be to do something more intelligent (backoff, etc).
>
> The end-all solution, by the way, is with the way membase does it.
> The REST/webserver component in membase keeps the HTTP/REST connection
> open, or in so-called "streaming" fashion.  So, when the membase
> cluster management component sees a cluster configuration change, it
> can actively notify clients like moxi.  That has the benefit so that
> clients like moxi do not need to do continual polling against the REST
> server.

Hum... Sounds interesting, but for now I would prefer to stick
to well-known technologies, like a standard webserver.

About ns_server: could it issue alerts if it notices that a particular
moxi client is not connecting? I was thinking of implementing this
kind of alert in the webserver.

steve.yen

Jul 14, 2010, 5:00:26 PM
to moxi
Hi,
There was a bug in the last libconflate commit. I was able to
reproduce what you were seeing with my own, simple (not ns_server)
webserver, and just checked in another libconflate fix that should
actually work right. With this fix, moxi/libconflate still polls the
REST webserver, but only every second.

And, more below...

On Jul 14, 10:02 am, Guille -bisho- <bishi...@gmail.com> wrote:
> On Jul 14, 5:57 pm, "steve.yen" <steve....@gmail.com> wrote:
>
> > Hi,
> > Yes, you've hit a manifestation of the same bug, or at least it's in
> > the same part of the code that interacts with libcurl.
>
> > The latest libconflate has a quick fix that should make REST requests
> > less often (once a second, rather than as fast-as-possible), and I
> > hope that'll be helpful.  Please see: http://github.com/northscale/libconflate/commit/420faf47fa6c1ee3564f2...
>
> I will try that!
>
> BTW, I think I'm getting the vbuckets idea wrong. I wanted to
> configure just the serverList of the moxi. It looks like moxi listens
> in a port, configures other with the vbucket configuration. Is there
> any way I can send moxi just a serverList, not the vbucket
> configuration? I need to be able to change it on the fly, in case a
> memcached instance dies and needs to be replaced by other.

As long as the server has a vBucketMap of [ [0] ], it should work.
But, you'd be getting libvbucket hashing instead of libmemcached/
ketama hashing.

At this point, the REST/HTTP codepaths only work with the libvbucket
library.
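
For example, if you do want keys spread across all four of those
memcached servers under that scheme, a config along these lines should
do it (my sketch: one vbucket per server, no replicas, same layout as
the configs above):

11311 = {
  "hashAlgorithm": "CRC",
  "numReplicas": 0,
  "serverList": ["127.0.0.1:11211","127.0.0.1:11212","127.0.0.1:11213","127.0.0.1:11214"],
  "vBucketMap":
    [
      [0],
      [1],
      [2],
      [3]
    ]
}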

>
> Also, when I launch moxi like:
>
> $ ./moxi -vvv -u nobody -l 127.0.0.1 -z ./config/port_11302
>
> It opens two ports: the expected 11301 and 11210 that is not mentioned
> in command line or config. I also tried -P XXX and -Z port_listen=XXX
> but the 11210 is always used.
> tcp        0      0 127.0.0.1:11301         0.0.0.0:*
> LISTEN      12805/moxi
> tcp        0      0 127.0.0.1:11210         0.0.0.0:*
> LISTEN      12805/moxi

There's an extra "-p 0" flag you should use for this case...

$ ./moxi -vvv -u nobody -l 127.0.0.1 -p 0 -z ./config/port_11302

Or...

$ ./moxi -vvv -u nobody -l 127.0.0.1 -p 0 -z http://HOST:PORT/url/to/json

The "-p 0" forces moxi to never activate its memcached codepaths --
to never listen on (by default) port 11210 as a memcached server.
Instead, with an explicit "-p 0", moxi will just run as a proxy only.

With that, in combination with -Z port_listen=XXXX and multiple URLs,
you should be able to launch multiple moxi's:

$ ./moxi -vvv -u nobody -l 127.0.0.1 -p 0 -Z port_listen=11511 -z http://HOST:PORT/url/to/json1
$ ./moxi -vvv -u nobody -l 127.0.0.1 -p 0 -Z port_listen=11611 -z http://HOST:PORT/url/to/json2

> I need to be able to launch more than one moxi at once, one for each
> memcached farm, with different lists of servers.
>
> > A better fix would be to do something more intelligent (backoff, etc).
>
> > The end-all solution, by the way, is with the way membase does it.
> > The REST/webserver component in membase keeps the HTTP/REST connection
> > open, or in so-called "streaming" fashion.  So, when the membase
> > cluster management component sees a cluster configuration change, it
> > can actively notify clients like moxi.  That has the benefit so that
> > clients like moxi do not need to do continual polling against the REST
> > server.
>
> Hum... Sounds interesting but for now I would prefer to stick for now
> to well known technologies, like a standard webserver.
>
> About the ns_server, could issue alerts if notices that a particular
> moxi client is not connecting? I was thinking in implementing this
> kind of alerts in the webserver.

There's nothing like that in ns_server right now. ns_server also
doesn't really know who's making requests to those streaming REST/HTTP
URLs, whether it's moxi or somebody trying curl (for example).

Cheers!
Steve

Guille -bisho-

Jul 16, 2010, 11:42:23 AM
to moxi
Great! This seems to work. Thanks a lot for your patience!

steve.yen

Jul 16, 2010, 11:49:30 AM
to moxi
Glad to hear it -- and thank you for helping make moxi better!