Like: python multi-mechanize.py project1
I suggest the following directory structure to support this:
multi-mechanize/
    lib/
    tools/
    multi-mechanize.py
    projects/
        project1/
            config.cfg
            test-scripts/
            results/
        project2/
            config.cfg
            test-scripts/
            results/
So every project has its own config.cfg file.
Maybe the actual projects directory could be made a configurable
element for multi-mechanize?
Or you could simply refer to the project directory with the start-up
parameter.
Regards,
Amol
I agree with the need for better structure and a way to have multiple
active projects without a lot of copying config files around.
I will look over the proposed directory structure and implement
something that makes sense.
more updates soon....
-Corey
Amol,
thanks for the input.
I wasn't planning on writing a "results viewer", but I can see how it
might be useful for browsing and comparing results.
I'm gonna push this off for a while and hopefully get back to working
on something like this in a few weeks... unless you wanted to
contribute it yourself, in which case I can help sooner.
-Corey
I just implemented the new directory structure pretty much exactly as
Roland described.
The new code is in SVN if anyone wants to try it. I checked in a
"default_project" with the code base.... so to run the new code, do:
>python multi-mechanize.py default_project
I will be updating the documentation and doing a release soon.
-Corey
I would love to work on this and many other things. I have also done a
comparison of two results from a different tool using Matplotlib. I
have attached a sample graph of the work; check it out and see if you like it.
I'd also have to understand the current report format and design
accordingly. Please provide any inputs on your expectations.
Thanks,
Amol
great! i'm very open to ideas and collaboration in any areas of the project.
> have attached a sample graph of the work, check out if you like it.
> I'd also have to understand the current report format and design
> accordingly. Please provide any inputs on your expectations.
your graph looks great... can you explain more about what you are proposing?
let's say you have completed several test runs and you have a pile of
results directories... how would you go about viewing/comparing
results?
-Corey
also, check this thread: http://groups.google.com/group/multi-mechanize/browse_thread/thread/df525ad6fdb8bf85
we are discussing adding a db layer as an optional output method. so
basically, a database gets populated with your results as a test
runs. I think we would want to build any results comparison or
results viewing features on top of the database. thoughts?
-Corey
yeah, a tarball of your copy would be great.
you can email it to me: http://www.goldb.org has my contact info
i will probably try it with either mysql or sql server. along with
the code, let me know if you have any additional tips or guidance for
setting it up.
thanks!
-Corey
The same is the case with database-stored results: we can defer this
decision by abstracting away the results reader (whether it reads from
directories/files or from a db).
~ Amol
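The reader abstraction Amol describes could be sketched roughly like this. All class and function names here are made up for illustration (they are not part of multi-mechanize), and the column layout is only assumed to mirror results.csv (trans_count, elapsed, epoch, user_group, scriptrun_time, error):

```python
import csv
import io

class FileResultsReader:
    """Reads timer values from a results.csv-style file object."""
    def __init__(self, fileobj):
        self.fileobj = fileobj

    def response_times(self):
        # assumes column 5 holds the per-transaction timer value
        return [float(row[4]) for row in csv.reader(self.fileobj)]

class DbResultsReader:
    """Stand-in for a database-backed reader with the same interface."""
    def __init__(self, times):
        self.times = times

    def response_times(self):
        return list(self.times)

def average_response_time(reader):
    # report/comparison code depends only on the reader interface,
    # not on where the results actually live
    times = reader.response_times()
    return sum(times) / len(times)

# two readers backed by different sources yield interchangeable results
csv_data = io.StringIO(
    '1,0.5,1262304000.0,user_group-1,0.42,\n'
    '2,1.1,1262304001.0,user_group-1,0.38,\n')
file_avg = average_response_time(FileResultsReader(csv_data))
db_avg = average_response_time(DbResultsReader([0.42, 0.38]))
```

With this split, a future comparison tool only ever calls the reader interface, so swapping files for a database touches one class, not the report code.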
Amol,
that sounds good. my advice is to start small. just get some basic
comparisons going, so we can review your code and see how it fits with
the overall structure of the project.
-Corey
I'm almost done with comparing two results that have the same number of
user groups. However, I'm facing some problems with parsing results.
I'm using your 'results' module to parse the results. The module is
perfectly fine, but it requires run_time, which comes from the config.
This config may not stay the same across multiple test runs, and one
cannot create a Result object without the correct run_time.
One option for getting the correct run_time is to parse results.html. But
HTML parsing is brittle and will make the code hard to understand.
I couldn't get the database approach working because the MySQLdb module
wouldn't install on my machine. I'm working on that, but can you
suggest any other way of getting run_time and the other config-specific
options?
Thanks,
Amol
I'm trying to get it done using ElementTree.
That gives me the test start time (or, more precisely, the result
directory's creation time), not the duration (run_time) the test ran for.
Right now I'm thinking of this:
>>> import xml.etree.ElementTree as ET
>>> tree = ET.parse('results.html')
>>> div = tree.find('.//{http://www.w3.org/1999/xhtml}div')
>>> div[4].tail  # this gives me the run time
Now I want to get the total number of threads for the test. I will use
it to differentiate between two tests. Can anyone think of a better
way to differentiate?
:-)
> I think if there's additional data beyond what's stored in the results.csv
> that's needed for reporting, we might want
> to take a look at getting that data into the results.csv file itself (or
> some additional csv files in the results directory).
Maybe we can copy the config into the results directory. It would
also serve as a reference point for a user who wants to find out
what config produced the corresponding results.
> I personally think using reporting html as a source of data is probably the
> wrong road to go down as far as a permanent solution,
agreed!
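Copying the config next to the results is only a few lines; the paths and config contents below are invented for illustration:

```python
import os
import shutil
import tempfile

# all paths below are made up for illustration
project_dir = tempfile.mkdtemp()
results_dir = os.path.join(project_dir, 'results',
                           'results_2010.01.01_12.00.00')
os.makedirs(results_dir)

# a stand-in config.cfg in the project directory
config_path = os.path.join(project_dir, 'config.cfg')
with open(config_path, 'w') as f:
    f.write('[global]\nrun_time: 60\n')

# copy the config alongside the results so later tooling (and the user)
# can see exactly what settings produced this run
shutil.copy(config_path, results_dir)
```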
+1 i think it's a good idea to have config.cfg in the results
directory... as a reference point to look back on, and also as a data
source for whatever post-processing/comparison happens.
so you can assume that config.cfg will be in the results directory,
and develop from there.
(brian already sent me a patch for this so i will add this
functionality in soon).
thanks guys!
-Corey
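With config.cfg guaranteed to sit in the results directory, post-processing can pull run_time and the thread counts straight from it. The section and option names below are guesses at the config layout, not necessarily multi-mechanize's exact format:

```python
import configparser

# hypothetical config.cfg contents; section/option names are assumptions
cfg_text = """\
[global]
run_time: 60
results_ts_interval: 10

[user_group-1]
threads: 5

[user_group-2]
threads: 5
"""

config = configparser.ConfigParser()
config.read_string(cfg_text)

run_time = config.getint('global', 'run_time')
# total threads: sum the thread counts of every user-group section
total_threads = sum(
    config.getint(section, 'threads')
    for section in config.sections()
    if section != 'global')
```

The pair (run_time, total_threads) could then serve as the key for deciding whether two result sets are comparable.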
just checked in the code for this. you can grab the latest SVN trunk.
-Corey
i've been testing with SQLite. no setup and it works great. give
that a try if you have any mysql headaches.
-Corey
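For anyone wanting to try the database route without MySQL, the standard-library sqlite3 module needs no server at all. The schema below just mirrors the results.csv columns and is an assumption, not the project's actual one:

```python
import sqlite3

# in-memory database for illustration; a real run would use a file path
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE results (
        trans_count INTEGER,
        elapsed REAL,
        epoch REAL,
        user_group TEXT,
        scriptrun_time REAL,
        error TEXT
    )
""")

# sample rows shaped like results.csv lines (invented values)
rows = [
    (1, 0.5, 1262304000.0, 'user_group-1', 0.42, ''),
    (2, 1.1, 1262304001.0, 'user_group-1', 0.39, ''),
]
conn.executemany('INSERT INTO results VALUES (?, ?, ?, ?, ?, ?)', rows)

# comparisons then become plain SQL queries
avg = conn.execute('SELECT AVG(scriptrun_time) FROM results').fetchone()[0]
```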