Another one joins the flock


MisterDNA

Sep 16, 2009, 8:48:46 PM
to Linux Networx Users Group
Hello, everyone!

I bought a six-node Evolocity cluster from Utah State University when
they retired it. It's the Dual 2.4 Xeon variety. I didn't get an Ice
Box or anything else with it aside from the frame and four blanks, but
I have one on the way to me.

I was able to get a temperature/reset cable, but no RS-232 cables and
no software.

I intend to turn it into a renderfarm for Blender projects, with an
upgrade to 20 nodes, eventually.

What I'd like to know:

1. Pinout for the RS-232 cable. Is it standard in any way? I don't
have one to reverse-engineer from.
2. Clusterworx... how do I get it? I'm sure it's closed-source.

Thank you all and a special thank you to Josh Parry for telling me
about this group.

Derek Peavey
Electronics Technician - Inovar - Logan, Utah

jhatch

Sep 17, 2009, 10:14:23 AM
to Linux Networx Users Group
On Sep 16, 6:48 pm, MisterDNA <dere...@gmail.com> wrote:
> Hello, everyone!
>
> I bought a six-node Evolocity cluster from Utah State University when
> they retired it.  It's the Dual 2.4 Xeon variety.  I didn't get an Ice
> Box or anything else with it aside from the frame and four blanks, but
> I have one on the way to me.

Sounds fun. ;)

> What I'd like to know:
>
> 1.  Pinout for the RS-232 cable.  Is it standard in any way?  I don't
> have one to reverse-engineer from.

I assume you are talking about the RS-232 cable for the Icebox that you
are getting. The management port is a standard 9-pin connector, and as
far as I know the RJ45 serial ports use standard wiring as well.
I don't know which standard that would be, though; if an ex-Icebox
engineer is watching these threads, they would be able to answer that
question better.

> 2.  Clusterworx... how do I get it?  I'm sure it's closed-source.

It is closed source and owned by SGI. Probably expensive too. I
suggest you go with an open source cluster management suite if you
can.

Jarom Hatch

JMcD...@issisolutions.com

Sep 17, 2009, 10:20:39 AM
to lnx...@googlegroups.com
Perceus is a pretty viable replacement for CWX. There is no GUI, but it's easy, and FAST.
www.infiscale.org

-----Original Message-----
From: jhatch <jsh...@gmail.com>

Date: Thu, 17 Sep 2009 07:14:23
To: Linux Networx Users Group<lnx...@googlegroups.com>
Subject: [lnxiug] Re: Another one joins the flock

John Leidel

Sep 17, 2009, 10:14:17 AM
to lnx...@googlegroups.com
Agreed. My top two open source management stacks are Rocks and Perceus.
Rocks has quite a bit more "stuff" included, not all of which you may
necessarily use. Perceus is lean and mean.

... my 2 cents.

Ed Wahl

Sep 17, 2009, 11:30:56 AM
to lnx...@googlegroups.com
I'll agree on Rocks being easy; however, just understand that not all LNXI nodes are easily bootable via anything but CWX, depending on your underlying motherboard and the version of LinuxBIOS you have. Older Evolocity 1 nodes and motherboards seem to have better booting support from projects like coreboot (formerly LinuxBIOS), but most of that coding has ended and moved on to gPXE, so support is pretty spotty and a great deal of the later code just plain does not compile or work.

If possible, I HIGHLY recommend re-flashing all your nodes with the manufacturer's BIOS and running modern cluster software such as Rocks, Perceus, xCAT, etc. I've been down this route, and I've talked to several other sites that have done the same and are happy. A couple even wrote web services to interface with the Icebox.

I recommend xCAT; it requires a lot more Linux knowledge than Rocks to get running well, but it has advantages over Rocks. I use my Iceboxes with some things I've quickly hacked up to work with xCAT.


Ed

MisterDNA

Sep 17, 2009, 12:23:28 PM
to Linux Networx Users Group
Thank you all for the responses.

Regarding SGI's ownership of Clusterworx (Is that what they now call
"ISLE"?), I'm not surprised they guard it so closely. I have twice as
many O2s as I do LNXI nodes, actually, so I'm familiar with their
approach concerning software and outdated hardware. They guard their
wares like Gollum guards his "Precious".

My motherboards are the Supermicro P4DPR-IGMQ.

I do have one node I just bought, sight unseen, from eBay that has
dual Opteron 248s, 16GB RAM (I hope that's eight 2GB modules) and a
couple of RAIDed SCSI disks. I'm due to receive that next Tuesday.
Others I see with the same configuration aren't even blades. Would
those be the management nodes? Or can basically anything be used as a
management node?

Regarding the Ice Box, I'm wondering if it's basically a glorified
remote relay box with a bunch of RS-232 ports and temperature sensor
lines. I've read it has an embedded Intel 386 CPU inside. Is it
going to be hard to talk dirty to it without special software?

My overall idea is this:

I have the 20-node LNXI renderfarm sitting outside in my converted
toolshed workshop while I do most of my work indoors. I only turn on
what I need when I need it. The cluster will pull almost as much
wattage as a clothes dryer when I'm finished with it so I don't want
it running all the time.

Being able to go diskless with a large amount of RAM will help since I
remember the greatest wear a hard disk experiences is during start-up
and shutdown of the drive. The only compute node moving parts I want
to deal with are the fans.

The queue server and management server will stay running 24/7 (maybe I could combine the two?). When I prepare a .blend project for rendering, I send it to the queue with a message stating the urgency level. The queue server screams at the management server, which screams at one or both Ice Boxes to start nodes based on how many the project requires: maybe five if it's not that big of a project, or all twenty if it's something important. The nodes check in, load the OS image from the management server's RAM into their own RAM, and boot from that RAMdisk; when loaded and ready, they check in at the queue server to receive their work (which could be as large as 4GB, maybe more). The queue server keeps track of work received, and the management server notes any problems and alerts the queue server accordingly. Once the remaining work has shrunk to the point that there are fewer frames left than there are good nodes to render them, the management server tells the Ice Boxes to shut down the idle nodes, saving energy and money.
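
Just to make that concrete for myself, the scheduling piece I'm imagining looks roughly like this in Python (only a sketch: every name and number here is a placeholder I made up, and power_on()/power_off() would wrap whatever commands the Ice Boxes actually accept):

    # Sketch of the queue/management logic described above. Placeholder code:
    # power_on()/power_off() just print what they would tell the Icebox to do.
    NODES = ["node%02d" % n for n in range(1, 21)]   # the eventual 20 nodes

    def power_on(node):
        print("Icebox: power on", node)     # placeholder for the real call

    def power_off(node):
        print("Icebox: power off", node)    # placeholder for the real call

    def nodes_needed(frames_left, frames_per_node=5, max_nodes=len(NODES)):
        """How many nodes are worth keeping powered for the work remaining."""
        wanted = -(-frames_left // frames_per_node)   # ceiling division
        return min(wanted, max_nodes)

    def reconcile(frames_left, powered):
        """Start or stop nodes so the powered count matches the work left."""
        target = nodes_needed(frames_left)
        for node in NODES:
            if len(powered) >= target:
                break
            if node not in powered:
                power_on(node)
                powered.add(node)
        while len(powered) > target:
            power_off(powered.pop())
        return powered

The queue server would just call reconcile() every time a node reports a finished frame, and the number of running nodes would follow the remaining work automatically.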

If I need them to turn on again for another project, it's only a wait
of a couple minutes.

If that sounded sloppy, I apologize. I'm kind of (okay, very) new to
this.

I'm supposed to receive my first icebox today when the UPS man
arrives. Maybe I'll be lucky enough to have some cables with it?

Justin Wood

Sep 17, 2009, 12:36:19 PM
to lnx...@googlegroups.com
ISLE Cluster Manager is what was formerly known as Clusterworx.  It is still under development, but I'm not sure of the details of how they're selling it, as I'm no longer involved with the company.

As for the Icebox, you don't need CWX to talk to it.  You can telnet directly to it and issue commands interactively, use powerman and/or conman (both open source) to interface with it, or even use SNMP, though I've only tried that once, a number of years ago.
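
Powerman in particular is easy to drive from a script once its config knows about your Icebox. A minimal sketch of what that looks like from Python (the -1/-0/-q flags are powerman's usual on/off/query options, but double-check the man page for your version, and the node names here are placeholders):

    # Thin wrapper around the powerman CLI. Assumes powermand is running and
    # already configured with your Icebox and node names; nothing here is
    # Icebox-specific beyond that.
    import subprocess

    def pm(flag, nodes=None):
        cmd = ["powerman", flag]
        if nodes:
            cmd.append(nodes)
        subprocess.run(cmd, check=True)

    pm("-q")                # report power state of everything powerman knows
    pm("-1", "node1")       # power node1 on
    pm("-0", "node[2-6]")   # power nodes 2 through 6 off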

-Justin.

MisterDNA

Sep 17, 2009, 2:23:12 PM
to Linux Networx Users Group
I just took delivery of my Icebox. It looks like control is very
straightforward. I'll have to hook it up to a few cables and sniff
around.

I do wonder about the AC input, though. Is it set up so that I can
split the two sides of a 230V circuit to feed each input, or are they
just wired in parallel internally, using standard computer power cords
to avoid the more unusual high-current power cords?

Does the Icebox display a menu over the terminal when started or do I
need a manual?

On Sep 17, 10:36 am, Justin Wood <justingw...@gmail.com> wrote:
> ISLE Cluster Manager is what was formerly known as Clusterworx.  It is still
> under development, but I'm not sure of the details of how they're selling it,
> as I'm no longer involved with the company.
>
> As for the Icebox, you don't need CWX to talk to it.  You can telnet
> directly to it and issue commands interactively, use powerman and/or
> conman (both open source) to interface with it, or even use SNMP, though
> I've only tried that once, a number of years ago.
>
> -Justin.
>

Justin Wood

Sep 17, 2009, 2:44:13 PM
to lnx...@googlegroups.com
There's no menu, but you can type 'help' to get a list of available commands.  Default username and password should be admin:icebox.
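
If you ever want to script it rather than type by hand, something along these lines is a starting point (just a sketch: the prompt strings and the trailing ">" are guesses, so poke at the box interactively first and adjust to whatever your firmware actually prints):

    # Rough sketch of scripted access to the Icebox over telnet. The login
    # prompts and the ">" prompt below are guesses; 'help' lists the real
    # command set for your firmware.
    import telnetlib   # Python's standard-library telnet client

    HOST = "192.168.1.100"   # whatever address your Icebox answers on

    tn = telnetlib.Telnet(HOST, 23, timeout=10)
    tn.read_until(b"username:", timeout=5)
    tn.write(b"admin\r\n")
    tn.read_until(b"password:", timeout=5)
    tn.write(b"icebox\r\n")

    tn.write(b"help\r\n")    # see what commands this firmware supports
    print(tn.read_until(b">", timeout=5).decode("ascii", "replace"))
    tn.close()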

MisterDNA

Sep 17, 2009, 2:56:29 PM
to Linux Networx Users Group
I noticed there's a momentary pushbutton switch visible at the top
left of the unit, behind one of the vent slots. I'm guessing that's a
reset button to go back to factory defaults, similar to what I see on
routers and switches.

Once again, thanks for the info.

On Sep 17, 12:44 pm, Justin Wood <justingw...@gmail.com> wrote:
> There's no menu, but you can type 'help' to get a list of available
> commands.  Default username and password should be admin:icebox.
>

Josh Parry/ Hardware Technician

Sep 18, 2009, 2:38:44 PM
to Linux Networx Users Group
I'm not sure which revision/version of the Icebox you have, but this
old article might be useful for you.
Here's the URL: http://www.linuxfordevices.com/c/a/Linux-For-Devices-Articles/Whats-an-ICE-Box-and-whats-inside-one/

When it comes to the AC power input for the Icebox, as long as the
male/female layout and design are the same, I don't see why you
couldn't use a standard computer power cord. However, that's not
something I've ever seen done, since our standard builds used the more
"unusual" power cords running from the PDU at the bottom of the rack
to supply power to the Icebox(es).

Joshua McDowell

Sep 18, 2009, 2:44:10 PM
to lnx...@googlegroups.com

When I used to install Linux NetworX clusters, the installs would get
delayed all the time because the customer didn't understand the power
requirements.

The way it was always explained to me was that each Icebox 4 had to
have 30 amps on each side, and Icebox 3s had to have 25 amps on each side.

When I do the math for twelve 400-watt power supplies at 110 volts
(that's 10 nodes and 2 AUX ports), I get 25 amps total, so I am not
sure why they required 30 amps on each side. I can say I saw thrown
breakers any time they tried 20 amps on each side.

Joshua McDowell

Matthew Hatch

Sep 18, 2009, 2:53:48 PM
to lnx...@googlegroups.com
If I remember correctly, the Icebox 4 used a different power cable than
the earlier models. On Icebox <= 3, a standard PC power cable would
work. Of course, it's been three years since I've seen one...

Josh Parry/ Hardware Technician wrote:

JMcD...@issisolutions.com

Sep 18, 2009, 2:59:58 PM
to lnx...@googlegroups.com
That is correct; I am looking at both now.

-----Original Message-----
From: Matthew Hatch <hat...@gmail.com>

Date: Fri, 18 Sep 2009 12:53:48
To: <lnx...@googlegroups.com>
Subject: [lnxiug] Re: Another one joins the flock



MisterDNA

Sep 18, 2009, 3:06:49 PM
to Linux Networx Users Group
I have an Icebox 3000. I'm going to read that article in the link
right now.

From what I've been reading on Wikipedia, the standard for RS-232 over
RJ-45 is called RS-232D and combines the DSR and ring lines onto one
pin (I'd hate to connect a modem to that...).

Based on what I'm seeing regarding power, I wonder if my AC mains will
be adequate. I was expecting each blade to draw maybe 200 W under
load, figuring the CPUs at 65 W TDP each, plus chipset, RAM, and PSU
losses. I figured two or even three banks of compute nodes would
happily coexist on one 230 VAC, 30 A circuit, much like an electric
clothes dryer.