faban start and stop over a URL


Ulf

Jul 27, 2011, 12:21:17 PM
to faban-users
Hi,

we want to embed the Faban suite in a kind of master script that controls the whole test scenario. This script should start the Faban test via a URL. I think this is possible, because there is an "ok" button on the web page and I am able to post the same request to the Faban master with curl.

But we also want to stop the Faban test from our script, so that we can send the Faban test a signal and have it ramp down normally.

Is it possible to stop a Faban test cleanly via a URL?

Thanks for the answer,
Ulf

Shanti Subramanyam (gmail)

Jul 31, 2011, 1:02:19 PM
to faban...@googlegroups.com
The Faban model is time-based, so a run will always execute for the initially configured ramp-up, steady-state, and ramp-down times.
You can kill a run, but that will just abort it and you won't get any reports.

If you want to start tests without the UI, look at fabancli in 'bin'. You don't need to jump through hoops using curl :-)
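
As a rough, illustrative sketch (the master URL, benchmark name, profile, and config file below are placeholders; check the exact arguments against the usage fabancli prints when run with no arguments):

$ ./bin/fabancli -M http://fabanmaster:9980/ submit mybenchmark myprofile run.xml    # placeholder names; submits a run
$ ./bin/fabancli -M http://fabanmaster:9980/ status mybenchmark.1A                   # check the state of a run id
$ ./bin/fabancli -M http://fabanmaster:9980/ kill mybenchmark.1A                     # aborts the run (no reports, as noted above)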

Shanti

Rod Treweek

Oct 3, 2017, 12:28:04 PM
to faban-users
I realize that this thread is quite old, but after scouring the web it's not clear to me how I might meaningfully interact with the Faban API. I've been using the Docker containers provided by Cloudsuite's web search component (http://cloudsuite.ch//pages/benchmarks/websearch/) to explore what's possible with their combination of Apache Solr, Nutch, and Faban, and thought that using curl against the Faban component would be a reasonable way to test what's available through the API.

The documentation available here: http://faban.org/1.3/docs/api/index.html is a bit terse, so I was hoping that posting here might yield a better explanation of how (as a start) I might reproduce the same .json output I currently get with: docker run -it --name client --net search_network cloudsuite/web-search:client server_address 50 90 60 60 . Although this may seem like a trivial task, I have not been able to accomplish it. I'm hopeful someone may be gracious enough to assist me in minimally interacting with this API :)

Thanks!

Rod Treweek

Oct 3, 2017, 6:13:13 PM
to faban-users

Jeremy Whiting

Nov 13, 2017, 12:16:44 PM
to faban-users
Rod,
 Try using the fabancli tool. There are a number of commands you can send with it; calling the tool without arguments prints the usage instructions.

$ ./bin/fabancli
./bin/fabancli [-M masterURL] [-U user [-P password]] pending
./bin/fabancli [-M masterURL] [-U user [-P password]] status runId
./bin/fabancli [-M masterURL] [-U user [-P password]] submit benchmark profile configfile
./bin/fabancli [-M masterURL] [-U user [-P password]] kill runId
./bin/fabancli [-M masterURL] [-U user [-P password]] wait runId
./bin/fabancli [-M masterURL] [-U user [-P password]] showlogs runId [-ft]
$
./fabancli  -M http://server_address:9980/ kill acmebenchmark.1A
KILLED

$

 In the above I killed a run. You'll need to use the correct host address with the '-M' flag. Also make sure port 9980 is open.
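
As a rough sketch of scripting a complete run with the same tool (the benchmark name, profile, config file, and run id below are placeholders based on the usage shown above):

$ ./bin/fabancli -M http://server_address:9980/ submit acmebenchmark myprofile run.xml   # submits a run; assume it reports the id acmebenchmark.1B
$ ./bin/fabancli -M http://server_address:9980/ wait acmebenchmark.1B                    # block until that run completes
$ ./bin/fabancli -M http://server_address:9980/ showlogs acmebenchmark.1B                # dump the logs for the run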

Regards,
Jeremy

Jeremy Whiting

Nov 13, 2017, 12:20:53 PM
to faban-users
Here are the docs for the fabancli tool.

Vincenzo Ferme

Nov 15, 2017, 11:51:16 AM
to faban-users
Hi,

if you want to use a Java client to interact with Faban, you can give https://github.com/benchflow/benchflow/tree/devel/benchflow-faban-client a try. It is released on https://bintray.com/benchflow/benchflow-maven/benchflow-faban-client (you can download the latest jar by clicking on benchflow-faban-client_devel.jar). We currently do not have documentation, but the source code has Javadoc for all of the provided functionality.

Rod Treweek

Nov 22, 2017, 2:39:42 PM
to faban...@googlegroups.com
Thank you very much!!! This is all *extremely* helpful. One thing I was hoping to get, and I know this is likely an arbitrary thing to base any assumptions on, is at least some anecdotal idea of how much variation I should generally expect to see in the report summaries generated at the end of a run. Again, I'm currently running container-to-container using the web search benchmark setup featured as part of Cloudsuite's suite of containerized benchmark tools offered here: http://cloudsuite.ch//pages/benchmarks/websearch/

Again, I'm just looking for general/anecdotal observations on what sort of pass/fail fluctuations I might expect to see, from anyone familiar with the Cloudsuite implementation of Faban or with a similarly configured container-to-container Faban deployment using a similarly generated "web search" index.

I've unfortunately been unable to vary my testing much across different hardware up to this point, and what I'm seeing seems to fluctuate pretty wildly, so I'd like to understand whether I'm wielding this tool properly, whether I need to adjust my expectations considerably, or whether there may be some other issue, such as a problem with my Docker version (17.09) or my implementation, that is causing such unexpected variation in pass/fail along with some intermittent Java fatal exception errors that seem timing-related.

Some details on my setup:

I currently have a crontab that spins up the client container every 10 minutes (terminating it after each run), starting at around 2900 concurrent connections (a level at which, on my infrastructure, I've had a pretty consistent "pass" rate) and incrementing by 1 on each run (every ten minutes, which I've confirmed is more than enough time to guarantee completion, and for which I also have some checks in place). This has all been working fine, and I also have some logging set up to confirm that things are executing and terminating as they should according to this schedule.

I'm also using the default values suggested by Cloudsuite of 90 seconds for ramp-up, 60 seconds for steady-state, and 60 seconds for ramp-down, which has seemed sufficient, or at least has not triggered any explicit errors from Faban, e.g. "Ramp-up time is too short, consider increasing this by <X> seconds", etc.
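
For reference, a rough sketch of the kind of wrapper script my crontab calls (the counter file, log path, and server_address are placeholders, and I'm assuming the client arguments are server, connections, ramp-up, steady-state, ramp-down, matching the docker run command quoted earlier):

#!/usr/bin/env bash
# Hypothetical cron wrapper: read the last connection count, run one
# CloudSuite web-search client pass, then bump the count for the next run.
COUNT_FILE=/var/tmp/websearch_connections        # placeholder path
COUNT=$(cat "$COUNT_FILE" 2>/dev/null || echo 2900)

docker rm -f client 2>/dev/null                  # drop any leftover client container
docker run --rm --name client --net search_network \
    cloudsuite/web-search:client server_address "$COUNT" 90 60 60 \
    >> /var/tmp/websearch_runs.log 2>&1          # no -it since cron has no TTY; placeholder log path

echo $((COUNT + 1)) > "$COUNT_FILE"              # next run uses one more connection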

What I'm seeing is pretty random... I might see a fairly consistent pass rate from 3000 to 3070 connections one day, followed by a string of failures that continues up to an oddly successful run at something like 3200 concurrent connections. On other occasions, I might see 1-2 aborted runs (Java fatal exceptions thrown related to timing), followed by a consistent string of failures starting as low as 2900 connections and failing continuously until I manually intervene the following day, reset the number of connections back to 2900, and start seeing "pass" results again. At other times I might see alternating pass/fail results that seem to arbitrarily span the entire range of 2900 to 3200 concurrent connections (I have yet to see any passing results beyond 3200 concurrent connections, however).

Am I doing something incorrectly? Is my expectation of fairly consistent results from one identically configured run to the next reasonable? Again, this would seem to be a simple container-to-container Docker implementation, following the instructions on Cloudsuite's site: starting from an established "passing" baseline and scaling the number of connections up by one until I start hitting the failure mark, which would seem like the correct use case for this benchmark, if not the very reason for its existence.

Let me know if I may be missing something, and again, thank you all for the help!


Shanti Subramanyam (gmail)

Jan 8, 2018, 12:15:45 PM
to faban...@googlegroups.com
If you're running this in an environment where other instances/containers could be running on the same hardware, that is usually the culprit for variable results. Other reasons could be memory or GC issues, insufficient ramp-up, etc.


Jeremy Whiting

Jan 10, 2018, 6:16:45 AM
to faban-users
 I agree. Using a cloud environment will introduce some degree of variance.
 You will see this in performance-sensitive benchmarking, especially when you are at the tipping point of PASS/FAIL.

 If you want to do benchmarking and be certain your results are consistent (and keep your sanity), you'll need dedicated machines and a dedicated network switch. Otherwise you'll be forever chasing ghosts.

Jeremy

Rod Treweek

Jan 10, 2018, 4:37:11 PM
to faban-users
Thank you :)