Francois and I spent a fair bit of time cleaning up grid over the past
week. I caught some blockers when deploying to a staging environment;
those should now be fixed (I'll verify when I finally get home).
Undoubtedly there are others. If you're able to test grid out in a
staging environment, that'd be much appreciated.
Build Instructions
==================
$ ./go release
This will produce the following JAR that you'll want to run:
build/java/server/src/org/openqa/grid/selenium/selenium-standalone.jar
Basically, we want that to replace selenium-server-standalone.jar, but
there's some circular dependency nuttiness going on there. Help here
would be appreciated.
Running Grid 2
==============
Hub:
java -jar
build/java/server/src/org/openqa/grid/selenium/selenium-standalone.jar
-role hub [-port <port for hub>]
Se 1 RC Server:
java -jar
build/java/server/src/org/openqa/grid/selenium/selenium-standalone.jar
-role <remotecontrol | remote-control | rc> [-port <port for RC server>]
Se 2 WebDriver:
java -jar
build/java/server/src/org/openqa/grid/selenium/selenium-standalone.jar
-role <webdriver | wd>
There is currently virtually no control over how the browsers are
specified, so that's something that obviously needs to be fixed. Once
that's in place, you'll be able to supply the appropriate args.
Backwards Compatibility
=======================
Grid 2 is backwards-compatible with Grid 1 where it makes sense. Here,
"makes sense" is defined as:
- The Grid 2 hub can load a Grid 1 config file off the classpath. You
will most likely have to change your launch command to something like:
java -cp
/Users/nirvdrum/dev/workspaces-java/selenium:/Users/nirvdrum/dev/workspaces-java/selenium/build/java/server/src/org/openqa/grid/selenium/selenium-standalone.jar
org.openqa.grid.selenium.GridLauncher -role hub
- Grid 1 nodes can connect to a Grid 2 hub and use the old environment strings
- Clients can connect to Grid 1 nodes through a Grid 2 hub using the old
environment strings (see the sketch below)
We will not be supporting connecting Grid 2 nodes to a Grid 1 hub.
The level of support we provide basically allows you to upgrade your
cluster piecemeal. You start with the hub, which will provide you the
greatest set of benefits over Grid 1. From there you can start to
replace your Grid 1 nodes and swap out your client code.
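
To make the "old environment strings" bullets above concrete, here's a
rough sketch of an unchanged Selenium 1 (RC) client pointed at a Grid 2
hub. The host name, port, and environment string are made-up values for
illustration; the point is just that your existing RC client code and
your existing environment names keep working:

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class Grid1StyleClient {
        public static void main(String[] args) {
            // "Firefox on Linux" stands in for whatever environment string
            // your Grid 1 config already defines; the hub maps it to a
            // matching node.
            Selenium selenium = new DefaultSelenium(
                "hub.example.com", 4444, "Firefox on Linux",
                "http://www.example.com/");
            selenium.start();
            selenium.open("/");
            selenium.stop();
        }
    }

Nothing grid-specific changes on the client side; only the host and port
you point it at.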
Conclusion
==========
If you made it this far you're a better person than me and you're
precisely the kind of person I hope will provide valuable feedback.
I'll be on IRC (handle: nirvdrum) if you run into problems getting going
-- or use the dev list for now. We're at a state where legitimate bugs
should go into the issue tracker. I'll make it a point to suck it up
and expose myself to that atrocity at least once a day.
Thanks,
Kevin
*firefox => [host_a:5000, host_a:5001, host_b:5000]
Assuming that's the queue of scheduled work, when no work is being
performed and two requests come in at the same time, they'll both go to
host_a, even though splitting them across host_a and host_b would
likely be the better outcome. Worse, if three requests come in, each
spaced a minute apart, they'll all go to host_a:5000 (assuming the work
finishes in under a minute). With Se RC servers leaking memory, host_a
will almost certainly degrade in performance much more rapidly than
host_b, yet host_a will still be given precedence in work allocation.
The Grid 2 hub will distribute the workload over nodes regardless of
queue insertion order, and it will try to spread work across different
hosts.
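
To illustrate the difference, here's a rough sketch -- purely
illustrative, not the actual Grid 2 scheduler -- of host-aware
selection: instead of handing work to the first free slot in insertion
order, pick a free slot on the host that currently has the fewest busy
slots.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative only -- not the real Grid 2 scheduler. Each Slot is
    // one RC/WebDriver instance (e.g. host_a:5000); a host may expose
    // several slots.
    class Slot {
        final String host;
        final int port;
        boolean busy;

        Slot(String host, int port) {
            this.host = host;
            this.port = port;
        }
    }

    class HostAwareScheduler {
        // Pick a free slot on the host with the fewest busy slots,
        // rather than the first free slot in insertion order.
        static Slot pickSlot(List<Slot> slots) {
            Map<String, Integer> busyPerHost = new HashMap<String, Integer>();
            for (Slot s : slots) {
                if (s.busy) {
                    Integer count = busyPerHost.get(s.host);
                    busyPerHost.put(s.host, count == null ? 1 : count + 1);
                }
            }
            Slot best = null;
            int bestLoad = Integer.MAX_VALUE;
            for (Slot s : slots) {
                if (s.busy) {
                    continue;
                }
                Integer load = busyPerHost.get(s.host);
                int current = load == null ? 0 : load;
                if (current < bestLoad) {
                    best = s;
                    bestLoad = current;
                }
            }
            return best; // null if everything is busy
        }

        public static void main(String[] args) {
            List<Slot> slots = new ArrayList<Slot>();
            slots.add(new Slot("host_a", 5000));
            slots.add(new Slot("host_a", 5001));
            slots.add(new Slot("host_b", 5000));

            Slot first = pickSlot(slots);
            first.busy = true;
            Slot second = pickSlot(slots);
            // Insertion-order allocation would put both requests on
            // host_a; here the second request is steered to host_b.
            System.out.println(first.host + ":" + first.port + " then "
                + second.host + ":" + second.port);
        }
    }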
The other benefits all come if you decide to customize the hub via its
set of extensibility interfaces. Most of the interesting things you can
do with Grid 2 happen at the hub level.
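
As a purely hypothetical illustration (the interface name and signature
below are invented for this email, not the actual Grid 2 extension
points), hub-level customization amounts to plugging your own policy
into the spot where the hub matches a pending request to a node:

    import java.util.List;
    import java.util.Map;

    // Hypothetical example only: "NodePicker" and this signature are
    // made up for illustration and are NOT the real Grid 2
    // extensibility interfaces.
    interface NodePicker {
        // Given the capabilities a pending request asked for and the
        // nodes the hub currently knows about, return the node that
        // should run it (or null to leave the request queued).
        Map<String, Object> pick(Map<String, Object> requestedCapabilities,
                                 List<Map<String, Object>> registeredNodes);
    }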
The hub can serve both Grid 1 and Grid 2 nodes. This is by design, to
facilitate precisely the situation you raised: staging upgrades for
clusters with large numbers of nodes.
--
Kevin