Hello,
I'd like to share these Scalaris beginner quick-start snippets.
Once you have built Scalaris (an easy configure + make), remember that "For simple tests, you do not need to install Scalaris. You can run it directly from the source directory".
Let's experiment with a Scalaris DBMS cluster on a single computer: Erlang lets us run several nodes side by side as if they were separate machines (virtualisation would work too).
Start from the stock scalaris.cfg and modify it slightly:
{listen_ip, {127,0,0,1}}.
{mgmt_server, {{127,0,0,1},14195,mgmt_server}}.
{known_hosts, [{{127,0,0,1},14195,service_per_vm},
               {{127,0,0,1},14196,service_per_vm},
               {{127,0,0,1},14197,service_per_vm},
               {{127,0,0,1},14198,service_per_vm}
               % n5 (port 14199) is deliberately not listed here
              ]}.
In shell S1, start the first node ("premier"):
./bin/scalarisctl -m -n premier@127.0.0.1 -p 14195 -y 8000 -s -f start
(note the -m and -f options: management server and first node)
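By the way, once a node is running you can check which configuration values it actually loaded from its Erlang shell (hit <return> in shell S1 to get a prompt; we will use this shell again later). A minimal sketch, assuming the config:read/1 helper found in the Scalaris sources:

%% typed at premier's Erlang shell; config:read/1 is assumed from
%% the Scalaris source tree and should echo values from scalaris.cfg
config:read(mgmt_server).   % expected: {{127,0,0,1},14195,mgmt_server}
config:read(known_hosts).   % expected: the known_hosts list above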
In a separate monitoring shell MS, check that the "premier" node is registered:
./bin/scalarisctl list
epmd: up and running on port 4369 with data:
name premier at port 47235
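(scalarisctl list is apparently a thin wrapper around Erlang's port mapper daemon; you can query epmd directly and get the same listing:)

# lists all Erlang VMs registered with the local port mapper daemon
epmd -names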
In a web browser pointed at 127.0.0.1:8000 you can see the first node ("premier") and its management role.
Let's add 4 more nodes to the cluster (use new shells S2 to S5, each simulating a separate physical computer):
./bin/scalarisctl -n second@127.0.0.1 -p 14196 -y 8001 -s start
./bin/scalarisctl -n n3@127.0.0.1 -p 14197 -y 8002 -s start
./bin/scalarisctl -n n4@127.0.0.1 -p 14198 -y 8003 -s start
./bin/scalarisctl -n n5@127.0.0.1 -p 14199 -y 8004 -s start
See that they have all joined (use shell MS):
./bin/scalarisctl list
epmd: up and running on port 4369 with data:
name n5 at port 47801
name n4 at port 54614
name n3 at port 41710
name second at port 44329
name premier at port 44862
See each node's web console at 127.0.0.1:8001 through 127.0.0.1:8004 (they all look alike, and differ from 8000, which was started with the management role).
Every console reports a ring of 5 nodes.
Now let's use the client interface of the web console:
Go to 8000 and look up some key -> {fail,not_found} (as expected).
Add k1/v1 and k2/v2 -> {ok} and {ok}.
Look up k1 & k2 -> {ok,"v1"} & {ok,"v2"}.
Look them up from the other nodes -> same values everywhere.
Update k1 to v1updated from second (8001) -> {ok}.
Update k2 to v2updated from n5 (8004) -> {ok}.
Look them up from the other nodes -> updated everywhere.
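Under the hood these forms presumably issue the same api_tx calls that we will type by hand in a moment; a minimal sketch of the equivalents (only functions demonstrated later in this guide):

%% Erlang-API equivalents of the web-form actions above
api_tx:read("k1").               % -> {fail,not_found} before any write
api_tx:write("k1", "v1").        % add k1/v1 -> {ok}
api_tx:read("k1").               % -> {ok,"v1"}
api_tx:write("k1", "v1updated"). % an update is just another write -> {ok}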
Now let's stop/kill n4 (use shell MS):
./bin/scalarisctl -n n4@127.0.0.1 stop
The other nodes notice the crash.
See the new 4-node list in shell MS with ./bin/scalarisctl list
Look up k1 & k2 from all remaining nodes -> still available.
Now let's restart n4 (in shell S4):
./bin/scalarisctl -n n4@127.0.0.1 -p 14198 -y 8003 -s start
See the restored node list (in shell MS) with ./bin/scalarisctl list
Look up k1 & k2 from n4 or any other node -> available.
We have a 5-node cluster again and no data loss: Scalaris keeps several replicas of each key (four by default) and commits over majority quorums, so a single failed node does not lose committed data.
Let's now use the Erlang shell of a server node to experiment with the Erlang API.
Hit <return> in shell S1 to get the prompt of premier@127.0.0.1:
(premier@127.0.0.1)1> api_tx:read("k0").
{fail,not_found}
(premier@127.0.0.1)2> api_tx:read("k1").
{ok,"v1updated"}
(premier@127.0.0.1)3> api_tx:read("k2").
{ok,"v2updated"}
(premier@127.0.0.1)4> api_tx:read(<<"k1">>).
{ok,"v1updated"}
(premier@127.0.0.1)5> api_tx:read(<<"k2">>).
{ok,"v2updated"}
(premier@127.0.0.1)6> api_tx:write(<<"k3">>,<<"v3">>).
{ok}
(premier@127.0.0.1)7> api_tx:read(<<"k3">>).
{ok,<<"v3">>}
(premier@127.0.0.1)8> api_tx:read("k3").
{ok,<<"v3">>}
(premier@127.0.0.1)9> api_tx:write(<<"k4">>,{1,2,3,four}).
{ok}
(premier@127.0.0.1)10> api_tx:read("k4").
{ok,{1,2,3,four}}
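Several requests can also be bundled into one atomic transaction; a minimal sketch, assuming api_tx:req_list/1 and the request tuples {read,K}, {write,K,V} and {commit} described in the Scalaris API docs:

%% read k1 and write k5 in a single transaction, then commit;
%% req_list/1 should return {TLog, [OneResultPerRequest]}
api_tx:req_list([{read, "k1"}, {write, "k5", "v5"}, {commit}]).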
Let's now connect a true client to our 5-node Scalaris DBMS cluster.
We use yet another shell to run an Erlang VM that makes remote API calls to the server nodes.
This is a quick-and-dirty first contact using rpc:call/4; a production system would use a more sophisticated client-side module that, for example, dispatches requests across the server nodes automatically (see the sketch after this transcript).
erl -name cli...@127.0.0.1 -hidden -setcookie 'chocolate chip cookie'
(cli...@127.0.0.1)1> net_adm:ping('n...@127.0.0.1').
pong
(cli...@127.0.0.1)2> rpc:call('n...@127.0.0.1', api_tx, read, [<<"k0">>]).
{fail,not_found}
(cli...@127.0.0.1)3> rpc:call('n...@127.0.0.1', api_tx, read, [<<"k4">>]).
{ok,{1,2,3,four}}
(cli...@127.0.0.1)4> rpc:call('n...@127.0.0.1', api_tx, read, [<<"k4">>]).
{ok,{1,2,3,four}}
(cli...@127.0.0.1)5> rpc:call('n...@127.0.0.1', api_tx, write, [<<"num5">>,55]).
{ok}
(cli...@127.0.0.1)6> rpc:call('n...@127.0.0.1', api_tx, read, [<<"num5">>]).
{ok,55}
cli...@127.0.0.1)7> rpc:call('
n...@127.0.0.1', api_tx, add_on_nr, [<<"num5">>,2]).
{badrpc,nodedown}
(
cli...@127.0.0.1)8> rpc:call('
sec...@127.0.0.1', api_tx, add_on_nr, [<<"num5">>,2]).
{ok}
(cli...@127.0.0.1)9> rpc:call('n...@127.0.0.1', api_tx, read, [<<"num5">>]).
{ok,57}
(cli...@127.0.0.1)10> rpc:call('n...@127.0.0.1', api_tx, test_and_set, [<<"num5">>,57,59]).
{ok}
(cli...@127.0.0.1)11> rpc:call('n...@127.0.0.1', api_tx, read, [<<"num5">>]).
{ok,59}
(cli...@127.0.0.1)12> rpc:call('n...@127.0.0.1', api_tx, test_and_set, [<<"num5">>,57,55]).
{fail,{key_changed,59}}
(cli...@127.0.0.1)13> rpc:call('n...@127.0.0.1', api_tx, read, [<<"num5">>]).
{ok,59}
(cli...@127.0.0.1)14> rpc:call('n...@127.0.0.1', api_tx, test_and_set, [<<"k2">>,"v2updated",<<"v2updatedTWICE">>]).
{ok}
(cli...@127.0.0.1)15> rpc:call('n...@127.0.0.1', api_tx, read, [<<"k2">>]).
{ok,<<"v2updatedTWICE">>}
(cli...@127.0.0.1)16> rpc:call('n...@127.0.0.1', api_tx, add_on_nr, [<<"num5">>,-4]).
{ok}
(cli...@127.0.0.1)17> rpc:call('n...@127.0.0.1', api_tx, read, [<<"num5">>]).
{ok,55}
(cli...@127.0.0.1)18> q().
ok
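As promised, here is a sketch of the smarter client module mentioned above. Everything in it is hypothetical (module name, node list, fallback policy; none of it is part of Scalaris): it simply tries the known server nodes in turn and skips any that are down, which would have absorbed the {badrpc,nodedown} above.

%% scalaris_client.erl -- hypothetical client-side dispatcher sketch
-module(scalaris_client).
-export([read/1, write/2]).

%% Server nodes we know about; a real system would discover and
%% refresh this list dynamically.
server_nodes() ->
    ['premier@127.0.0.1', 'second@127.0.0.1', 'n3@127.0.0.1'].

read(Key)       -> call(api_tx, read,  [Key]).
write(Key, Val) -> call(api_tx, write, [Key, Val]).

%% Try each node in turn; on {badrpc,_} (e.g. nodedown) fall back to
%% the next node instead of failing the whole call.
call(M, F, A) -> try_nodes(server_nodes(), M, F, A).

try_nodes([Node | Rest], M, F, A) ->
    case rpc:call(Node, M, F, A) of
        {badrpc, _Reason} -> try_nodes(Rest, M, F, A);
        Result            -> Result
    end;
try_nodes([], _M, _F, _A) ->
    {fail, no_node_available}.

Starting the scan at a random offset instead of always at the head would additionally spread the load over the nodes.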
Just for fun, a second client computer now connects to the cluster and reads the updates made by the first:
erl -name clien...@127.0.0.1 -hidden -setcookie 'chocolate chip cookie'
(clien...@127.0.0.1)1> net_adm:ping('n...@127.0.0.1').
pong
(clien...@127.0.0.1)2> rpc:call('n...@127.0.0.1', api_tx, read, [<<"k0">>]).
{fail,not_found}
(clien...@127.0.0.1)3> rpc:call('n...@127.0.0.1', api_tx, read, [<<"k1">>]).
{ok,"v1updated"}
(clien...@127.0.0.1)4> rpc:call('n...@127.0.0.1', api_tx, read, [<<"k2">>]).
{ok,<<"v2updatedTWICE">>}
(clien...@127.0.0.1)5> rpc:call('second@127.0.0.1', api_tx, read, [<<"num5">>]).
{ok,55}
In shell MS we now list and stop all the nodes:
./bin/scalarisctl list
epmd: up and running on port 4369 with data:
name n4 at port 52504
name n5 at port 47801
name n3 at port 41710
name second at port 44329
name premier at port 44862
./bin/scalarisctl -n second@127.0.0.1 stop
'second@127.0.0.1'
./bin/scalarisctl -n n...@127.0.0.1 stop
'n...@127.0.0.1'
./bin/scalarisctl -n n...@127.0.0.1 stop
'n...@127.0.0.1'
./bin/scalarisctl -n n...@127.0.0.1 stop
'n...@127.0.0.1'
./bin/scalarisctl list
epmd: up and running on port 4369 with data:
name premier at port 44862
./bin/scalarisctl -n premier@127.0.0.1 stop
'premier@127.0.0.1'
./bin/scalarisctl list
epmd: up and running on port 4369 with data:
(nothing)
To go further, see also (a small sketch follows this list):
- api_tx:add_del_on_list
- api_tx bulk operations
- transaction TLogs (new tlog, then commit)
- PubSub
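A minimal sketch of the first three items, assuming the api_tx signatures from the Scalaris API docs (add_del_on_list/3, new_tlog/0 and req_list/2); PubSub is left as an exercise:

%% lists: add "b" to the list stored under <<"mylist">>, remove nothing
api_tx:write(<<"mylist">>, ["a"]).               % -> {ok}
api_tx:add_del_on_list(<<"mylist">>, ["b"], []). % -> {ok}
api_tx:read(<<"mylist">>).                       % -> {ok, List} holding "a" and "b"

%% explicit transaction log: collect requests, then commit atomically
TLog0 = api_tx:new_tlog(),
{TLog1, [_ReadResult]} = api_tx:req_list(TLog0, [{read, <<"k1">>}]),
{_TLog2, [_CommitResult]} = api_tx:req_list(TLog1, [{commit}]).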
I hope this little guide has demonstrated how to set up a Scalaris cluster of 5 (or 50) nodes, how to recover from a failing node, and how to make remote client requests.
Maybe this could become a page on the Scalaris wiki?
Have a nice read.
Pierre M.