I start Baratine with:

baratine start conf

resin.cf spins up all the JVMs; this seems OK:
starting *:8085 (cluster-8085)
starting *:8086 (cluster-8086)
starting *:8087 (cluster-8087)
starting *:8088 (cluster-8088)
starting *:8089 (cluster-8089)
starting *:8090 (cluster-8090)
I am deploying my jar with:
deploy my.jar
I see that the pod is using the first three servers:
baratine> cat /proc/pods/mypod
{ "pod" : "pod",
  "type" : "solo",
  "sequence" : 0,
  "servers" : [
  ],
  "nodes" : [
    [0, 1, 2]
  ]
}
From the source code and documentation I see different pod types like "solo", "triad", or "cluster". How can I set the type, and what is behind each type? Are there any limitations when using multiple JVMs on the same machine? Do I need to provide different hosts for Baratine to switch to a different type?
Can I modify the servers that were selected for my pod?
I can see in the logs that my normal services start up on each node, but when I use them, everything runs on the first JVM. So does "solo" mean that there is one active node doing all the work and the other nodes are just standby nodes? I would like the services to be spread over all available nodes. Is this a configuration problem, or do I need to put each service in its own pod?
Besides my normal Services I have one ResourceService as well. I am accessing/creating 100 resources with:
ClientHamp client = new ClientHamp(url);

for (int i = 0; i < 100; i++) {
  DatasetService myService = client
      .lookup("remote:///datasetservice/" + i)
      .as(DatasetService.class);

  System.out.println("loading " + i + " " + myService.loadData());
}
I can see in the logs that everything is created only on the first JVM and nothing gets partitioned. Is this happening because the type is "solo"? Or are 100 resources and/or my URLs not sufficient for the hashing to pick another node?
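For context, what I expected is roughly URL-hash partitioning across the three nodes of the pod. This is just a generic sketch of that idea, not Baratine's actual code; the modulo hash, the node count of 3, and the class/method names are my assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class HashPartitionSketch {
  // Hypothetical: map a resource URL to one of `nodes` partitions by hashing it.
  static int nodeFor(String url, int nodes) {
    // Math.floorMod keeps the result non-negative even when hashCode() is negative.
    return Math.floorMod(url.hashCode(), nodes);
  }

  public static void main(String[] args) {
    // Count how many of the 100 resource URLs land on each of 3 nodes.
    Map<Integer, Integer> counts = new HashMap<>();
    for (int i = 0; i < 100; i++) {
      int node = nodeFor("remote:///datasetservice/" + i, 3);
      counts.merge(node, 1, Integer::sum);
    }
    System.out.println(counts);
  }
}
```

Under a scheme like this, 100 distinct URLs would spread over all three nodes, which is why I am surprised that everything ends up on the first JVM.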
Thomas