Parallel garbage collection worthwhile?


Seth

Jul 28, 2009, 11:16:49 PM
to Clojure
I found a simple, worthwhile improvement for a CPU bound
implementation of the Sieve of Eratosthenes in Clojure and thought I'd
share it. Also attached are a few metrics and code for the curious.

So I'm using Clojure to solve some of the problems on Project Euler.
I'd solved some of them before with other languages, but not in the new
lazy/immutable way Clojure is encouraging me to think.

Anyways, my Sieve of Eratosthenes was too slow. It took about three
minutes on a 2GHz Intel Core Duo MacBook Pro to find the primes less
than 100,000. The blame's on me here -- I'm trying to do it the
immutable/lazy/??? way rather than the big mutable array way. Patience
is appreciated as I am still low on the learning curve.

I enabled local jmx monitoring and attached JConsole to the Clojure
1.0 process. Almost all the time was spent in garbage collection. JMap
showed scary things like this:

================================================================

New Generation (Eden + 1 Survivor Space):
capacity = 589824 (0.5625MB)
used = 524288 (0.5MB)
free = 65536 (0.0625MB)
88.88888888888889% used
Eden Space:
capacity = 524288 (0.5MB)
used = 524288 (0.5MB)
free = 0 (0.0MB)
100.0% used
From Space:
capacity = 65536 (0.0625MB)
used = 0 (0.0MB)
free = 65536 (0.0625MB)
0.0% used
To Space:
capacity = 65536 (0.0625MB)
used = 65536 (0.0625MB)
free = 0 (0.0MB)
100.0% used

[...]

================================================================
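(For anyone wanting to reproduce this: the heap snapshot above is the kind of
output you get from something like "jmap -heap <pid>", and local JMX monitoring
can be enabled explicitly by starting the JVM with
-Dcom.sun.management.jmxremote before attaching JConsole.)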

So the young spaces are both (a) small and (b) full. Much to my
surprise, enabling parallel garbage collection of the young space was
the biggest win:

1.0 = official release
1.1a = compiled from git 2009-07-27

* clojure version, jvm flags, seconds to find primes <= 100,000

1.0, (default), 230
1.1a, (default), 190
1.1a, -XX:NewRatio=2 -Xmx128m, 148
1.1a, -XX:+UseParallelGC, 51
1.1a, -XX:NewRatio=2 -Xmx128m -XX:+UseParallelGC, 49
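
For the record, these are just flags on the java command line that launches
Clojure, along these lines (script name made up):

java -XX:NewRatio=2 -Xmx128m -XX:+UseParallelGC -cp clojure-1.0.0.jar clojure.main sieve.clj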

So it looks like my code makes Clojure generate a lot of young
objects... I tried to stick to the immutable/lazy approach... all
suggestions would be very welcome -- don't spare the corrections
please!

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(defn subsequent-multiples [val]
  (fn [candidate]
    (or (not= 0 (mod candidate val))
        (= val candidate))))

(defn build-sieve-slow [ceiling]
  (loop [primes ()
         field (range 2 (+ 1 ceiling))]
    (let [cur (first field)
          remnants (filter (subsequent-multiples cur) field)]
      (if (> cur (/ ceiling 2))
        (concat primes remnants)
        (recur
          (concat primes (list (first remnants)))
          (rest remnants))))))

(time (apply + (build-sieve-slow 100000)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

Daniel Lyons

Jul 29, 2009, 2:59:24 AM
to clo...@googlegroups.com
Hi Seth,

On Jul 28, 2009, at 9:16 PM, Seth wrote:

>
> I found a simple, worthwhile improvement for a CPU bound
> implementation of the Sieve of Eratosthenes in Clojure and thought I'd
> share it. Also attached are a few metrics and code for the curious.
>
> So I'm using Clojure to solve some of the problems on Project Euler.
> I'd solved some of them before with other languages, but not in new
> lazy/immutable way Clojure is encouraging me to think.
>
> Anyways, my Sieve of Eratosthenes was too slow. It took about three
> minutes on a 2GHz Intel Core Duo MacBook Pro to find the primes lesser
> than 100,000. The blame's on me here -- I'm trying to do it the
> immutable/lazy/??? way rather than the big mutable array way. Patience
> is appreciated as I am still low on the learning curve.

A common problem is trying to do old things the new way. Functional
programming isn't just about immutability, it's a holistic approach,
and that often means new algorithms are in order. I ran into this
exact problem a couple years ago when I was learning Haskell and was
equally at a loss. In your particular case I think you might want to
take a look at the source code in the contrib project which implements
a lazy list of primes:

http://code.google.com/p/clojure-contrib/source/browse/trunk/src/clojure/contrib/lazy_seqs.clj

This is a simplified variation on the complex algorithm discussed in
"Lazy Wheel Sieves and Spirals of Primes":

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.55.7096&rep=rep1&type=pdf

The short answer is that the way to make algorithms spend less time in
the garbage collector is to spend less time creating garbage. :)
Unfortunately this advice (which I received a lot learning Haskell) is
never helpful. There's no substitute for a better algorithm, but it
was a long time before I ran into this algorithm. I'm sorry I don't
have great concrete advice for your particular implementation. It
doesn't look like a bad attempt to me, just not yet very functional. I
bet if you look at the lazy-seqs code above things will come together
a little better. Probably it would help to try and implement a lazy
list of the Fibonacci sequence before looking at that code, and then
maybe try some other mathematical sequences that are a little easier
to construct too.
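
For example, a minimal lazy Fibonacci seq (untested, just a sketch) could look
like:

(defn fib-seq
  ([] (fib-seq 0 1))
  ([a b] (lazy-seq (cons a (fib-seq b (+ a b))))))

(take 10 (fib-seq)) ; => (0 1 1 2 3 5 8 13 21 34)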

Hope that helps a little,


Daniel Lyons

Tassilo Horn

Jul 29, 2009, 3:58:58 AM
to Clojure
Daniel Lyons <fus...@storytotell.org> writes:

Hi Daniel,

> In your particular case I think you might want to take a look at the
> source code in the contrib project which implements a lazy list of
> primes:
>
> http://code.google.com/p/clojure-contrib/source/browse/trunk/src/clojure/contrib/lazy_seqs.clj

I came up with another approach to lazily calculating primes using the
Miller-Rabin algorithm. Calculating the primes < 100,000 takes about 12
seconds on my 2.2 GHz dual core. Here it is:

--8<---------------cut here---------------start------------->8---
(ns de.tsdh.math.primes
  (:use [clojure.contrib [math :only [expt]]
                         [test-is :only [deftest is]]
                         [def :only [defvar defvar-]]]))

(defvar *pseudo-accuracy* 10
  "The chances that prime? returns true for a composite is
  (expt 4 (* -1 *pseudo-accuracy*)).")

(defn factorize-out
  "Factorizes out all x factors from n.
  Examples:
    (factorize-out 10 2) ==> {:exponent 1, :rest 5}, because 2^1 * 5 = 10
    (factorize-out 90 3) ==> {:exponent 2, :rest 10}, because 3^2 * 10 = 90"
  [n x]
  (loop [acc n exp 0]
    (if (= 0 (mod acc x))
      (recur (/ acc x) (inc exp))
      (hash-map :exponent exp :rest acc))))

(defn expt-mod
  "Equivalent to (mod (expt n e) m), but faster.
  http://en.wikipedia.org/wiki/Modular_exponentiation#An_efficient_method:_the_right-to-left_binary_algorithm"
  [n e m]
  (loop [r 1, b n, e e]
    (if (= e 0)
      r
      (recur (if (odd? e)
               (mod (* r b) m)
               r)
             (mod (expt b 2) m)
             (bit-shift-right e 1)))))

(defn prime?
  "Checks if n is a prime using the Miller-Rabin pseudo-primality test. Also
  see *pseudo-accuracy*."
  [n]
  (cond
    (< n 2) false
    (= n 2) true
    (even? n) false
    :else (let [m (factorize-out (dec n) 2)
                d (:rest m)
                s (:exponent m)]
            (loop [k 1]
              (if (> k *pseudo-accuracy*)
                true
                (let [a (+ 2 (rand-int (- n 4)))
                      x (expt-mod a d n)]
                  (if (or (= x 1) (= x (dec n)))
                    (recur (inc k))
                    (if (loop [r 1
                               x (expt-mod x 2 n)]
                          (cond
                            (or (= x 1) (> r (dec s))) false
                            (= x (dec n)) true
                            :else (recur (inc r) (mod (* x x) n))))
                      (recur (inc k))
                      false))))))))

(defn next-prime
  "Returns the next prime greater than n."
  [n]
  (cond
    (< n 2) 2
    :else (loop [n (inc n)]
            (if (prime? n)
              n
              (recur (inc n))))))

(defn primes
  "Returns a lazy sequence of prime numbers"
  []
  (iterate next-prime 2))
--8<---------------cut here---------------end--------------->8---
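
Usage is then something like:

(take 10 (primes))
;; => (2 3 5 7 11 13 17 19 23 29)

(apply + (take-while #(< % 100000) (primes)))
;; sums the primes below 100,000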

Bye,
Tassilo

ataggart

Jul 29, 2009, 4:07:52 AM
to Clojure
I threw this together; how does it measure up?


(defn prime? [x primes]
  (not (first (filter #(zero? (rem x %)) primes))))

(defn next-prime [xs primes]
  (lazy-seq
    (let [xs (drop-while #(not (prime? % primes)) xs)
          prime (first xs)
          primes (conj primes prime)]
      (cons prime
            (next-prime (rest xs) primes)))))

(defn primes []
  (next-prime (iterate inc 2) []))

(time (first (drop-while #(< % 100000) (primes))))
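
The (time ...) call above just forces the seq up to the first prime >= 100000;
for Seth's original sum of the primes below 100,000 it would be something like:

(apply + (take-while #(< % 100000) (primes)))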

Seth

Jul 29, 2009, 7:41:51 AM
to Clojure
That worked pretty well! It completed in about 9 seconds with the
default JVM flags. Better yet, I switched to the 64-bit server 1.6 VM
and it completed in 2.9 seconds -- unlike my code, which threw a stack
overflow exception. Sigh.

I *really* appreciate everyone's friendly and helpful feedback. Time
permitting I'll dig back into this tonight.

John Harrop

Jul 29, 2009, 3:49:48 AM
to clo...@googlegroups.com
On Wed, Jul 29, 2009 at 2:59 AM, Daniel Lyons <fus...@storytotell.org> wrote:
Probably it would help to try and implement a lazy
list of the Fibonacci sequence before looking at that code, and then
maybe try some other mathematical sequences that are a little easier
to construct too.

Using the super-lazy-seq macro Wrexsoul posted a month or so ago, that's dead easy:

(super-lazy-seq [a 0 b 1] (next-item b b (+ a b)))

On the other hand,

(defn fibs [] (cons 1 (cons 1 (lazy-seq (map + (fibs) (rest (fibs)))))))
(def fibs (memoize fibs))

works and is done with map rather than a looping-like construct. (Without the memoize, the latter is exponential in time; with it, though, all generated terms are retained in memory. This could be scoped by using (binding [fibs (memoize fibs)] (take n fibs)) or whatever whenever you needed to do something with (part of) fibs, and returning something. Downside: binding doesn't work well with lazy seqs.) It's actually probably easier to use the first version efficiently in a pure-functional way, even if it's less pure-functional "inside".

On the other hand, I'm a bit suspicious of that macro. It seems to work for common cases, but I suspect it won't work properly if you shadow one of its loop bindings with a let in the loop body, as I think it doesn't care whether it's the same var so long as it has the same name when it wraps the loop vars in a deref.

(map second (iterate (fn [[a b]] [b (+ a b)]) [0 1]))

seems to work though and avoids the memory problems of the memoized function, the exponential tendencies of the un-memoized function, AND the super-lazy-seq macro (and even the lazy-seq macro). Interestingly, it works identically to the super-lazy-seq macro in that both start with a pair of [a = 0, b = 1] and transform it according to [a -> b, b -> (+ a b)].
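
For instance, a quick REPL check:

(take 10 (map second (iterate (fn [[a b]] [b (+ a b)]) [0 1])))
;; => (1 1 2 3 5 8 13 21 34 55)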

In fact, there's a general correspondence between
(map #(nth % n) (iterate (fn [[a b c ... z]] [exprs]) [a0 b0 c0 ... z0]))
and
(super-lazy-seq [a a0 b b0 c c0 ... z z0] (next-item an exprs))
and I suspect the map/iterate versions are more efficient since they avoid atoms. (On the other hand, if the sequence is generated in a non-functional way, such as from network I/O, super-lazy-seq seems to me to be a better fit as long as you avoid shadowing any of its loop bindings internally.)

Niels Mayer

Jul 29, 2009, 4:00:19 AM
to clo...@googlegroups.com
FYI, some potentially useful info here:

Does JDK6u14 "Garbage-First garbage collector (G1)" work and/or improve Xwiki performance/size/mem-locality?


Date: Feb 17, 2009; 05:13pm
Author: Niels Mayer
XWiki- Dev (http://n2.nabble.com/XWiki--Dev-f475773.html)
I usually have found that the garbage collector is the hardest-working thing 
in a java-web app (unfortunately) so when I noticed this new GC option in *Java 
HotSpot 14*, I figured it might help (potentially a lot w/ high-volume 
sites, which are the real GC churners). 

Has anybody tried the "Garbage-First garbage collector (G1)"  with Xwiki? 

http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Performances suggests 

> CATALINA_OPTS="-server -Xms128m -Xmx1024m -XX:MaxPermSize=128m -Dfile.encoding=utf-8 -Djava.awt.headless=true -XX:+UseParallelGC -XX:MaxGCPauseMillis=100" 

> However these instructions don't work, as-is, since UseParallelGC is the default nowadays: 

https://jdk6.dev.java.net/6uNea.html says: 

> The parallel collector is still the default GC and is the most efficient GC 
> for common household usage. G1 is meant to be an alternative for the 
> concurrent collector. It is designed to be more predictable and enable fast 
> allocation with memory regions design. 

Here's more info 

Java SE 6 Update 14 Milestone Schedule: b01 (1st available build) 02/11/09; FCS Q2, 2009

> *Feature List:*

>    - *Service Tag creation for Windows JRE (CR 6676849 <http://bugs.sun.com/view_bug.do?bug_id=6676840>)*: For more information about Service Tag, check out this technical article <http://java.sun.com/developer/technicalArticles/productregistration/>.

>    - *Java HotSpot 14*: including the new garbage collector, G1, Big 
>    Decimal enhancements, TreeMap and HashMap enhancements, optimized compressed 
>    OOP code generation. 

> The Garbage-First garbage collector (G1) is currently in beta. It is not 
> enabled by default. The parallel collector is still the default GC and is 
> the most efficient GC for common household usage. G1 is meant to be an 
> alternative for the concurrent collector. It is designed to be more 
> predictable and enable fast allocation with memory regions design. 

> To use it: -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC 

> For more information about GC & G1, please see: 

>    - G1 Technical Session in JavaOne 2008 <http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-5419&yr=2008&track=javase>
>    - Java SE 6 GC tuning <http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html>
>    - Analysis of the Garbage-First Garbage Collection <http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf>

>
Niels 
http://nielsmayer.com

PS: some things that cause big growth, like importing and exporting, might
not grow as large with a "generation scavenging" style GC as provided by
"Garbage-First Collection." Sometimes, just GCing a large structure being
iterated over uses a lot more memory than it needs to, because the GC is
letting objects that should get collected early, like the incremental
results of an iteration, "build up" and increase overall memory size while
decreasing locality and cache hits. This seems to cause a nearly exponential
performance dropoff when very little memory churn might occur if only things
got collected "at the right time."

http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf suggests this new GC 
will help: 

> 2.4 Generational GarbageFirst
> Generational garbage collection [34, 26] has several advantages, which a 
> collection strategy ignores at its peril. Newly allocated objects are 
> usually more likely to become garbage than older objects, and newly 
> allocated objects are also more likely to be the target of pointer 
> modifications, if only because of initialization. We can take advantage of
> both of these properties in Garbage-First in a flexible way. We can 
> heuristically designate a region as young when it is chosen as a mutator 
> allocation region. This commits the region to be a member of the next 
> collection set. In return for this loss of heuristic flexibility, we gain an 
> important benefit: remembered set processing is not required to consider 
> modifications in young regions. Reachable young objects will be scanned 
> after they are evacuated as a normal part of the next evacuation pause. 

> Note that a collection set can contain a mix of young and non-young 
> regions. Other than the special treatment for remembered sets described 
> above, both kinds of regions are treated uniformly. 
> ... 
> 2.5 Concurrent Marking 
> Concurrent marking is an important component of the system. It provides 
> collector completeness without imposing any order on region choice for 
> collection sets (as, for example, the Train algorithm of Hudson and Moss 
> [22] does). Further, it provides the live data information that allows 
> regions to be collected in "garbage-first" order. This section describes our
> concurrent marking algorithm. 

To counter my excitement at the new GC, the "early access" version consistently segfaulted; I haven't tried again on the official release: http://nielsmayer.com/xwiki/bin/view/fedora10/64bitJavaAppletIn64BitFirefox#Comments

Niels Mayer | Feb 19, 2009 05:52:38 GMT-08:00
For info on 6u14's new GC and potential usefulness for use with java web apps, see: http://n2.nabble.com/Does-JDK6u14-%22Garbage-First-garbage-collector-%28G1%29%22-work-and-or-improve-Xwiki-performance-size-mem-locality--tp2344358p2344358.html http://n2.nabble.com/forum/PrintPost.jtp?post=2344358 (cancel the print dialog: this gives you the complete message w/o extra crap).

Unfortunately, when running with

VM Arguments:
jvm_args: -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -Xms256m -Xmx768m -XX:PermSize=256m -XX:MaxPermSize=768m -Djava.awt.headless=true …
java_command: org.apache.catalina.startup.Bootstrap start

You get a SIGSEGV (which obviates the whole point of garbage-collection):

A fatal error has been detected by the Java Runtime Environment:

SIGSEGV (0xb) at pc=0x0000000000e9c621, pid=13441, tid=114968912

JRE version: 6.0_14-b01
Java VM: OpenJDK 64-Bit Server VM (14.0-b10 mixed mode linux-amd64)
Problematic frame: V [libjvm.so+0x296621]

The SIGSEGV happens because of new Garbage Collector, as invoked by "-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC". It dies after running for less than an hour:

Current thread (0x000000004149f000):  ConcurrentGCThread [stack: 0x0000000000000000,0x0000000000000000] [id=13448]

siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000001600000050 … elapsed time: 2916 seconds

This issue doesn't occur with the default GC invoked with

jvm_args: -Xms256m -Xmx768m -XX:PermSize=256m -XX:MaxPermSize=768m -Djava.awt.headless=true"

I guess that's why this is an "Early Access" release.

Christophe Grand

Jul 29, 2009, 9:08:09 AM
to clo...@googlegroups.com
Hi all,

I thought about this naive (it considers even numbers!) implementation of the sieve:

(defn primes [max]
  (let [enqueue (fn [sieve n factor]
                  (let [m (+ n factor)]
                    (assoc sieve m
                      (conj (sieve m) factor))))
        next-sieve (fn [sieve candidate]
                     (if-let [factors (sieve candidate)]
                       (reduce #(enqueue %1 candidate %2)
                         (dissoc sieve candidate)
                         factors)
                       (enqueue sieve candidate candidate)))]
    (apply concat (vals (reduce next-sieve {} (range 2 max))))))

Here the sieve is a map where keys are next known non-primes and values a list of their prime factors.
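
To make that concrete, here is the sieve after folding in the candidates 2
through 6 (hand-traced, so double-check me):

;; 2 -> {4 (2)}
;; 3 -> {4 (2), 6 (3)}
;; 4 -> {6 (2 3)}              ; 4 is composite: dropped, 2 re-queued at 6
;; 5 -> {6 (2 3), 10 (5)}
;; 6 -> {8 (2), 9 (3), 10 (5)} ; 6 dropped, 2 and 3 re-queued

Every prime ends up in exactly one value, which is why (apply concat (vals ...))
returns all the primes.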

It's not lazy and doesn't return a sorted seq but, despite being naive, it surprisingly doesn't perform that badly:
(time (apply + (primes 100000)))
"Elapsed time: 285.23304 msecs"
454396537

lazy version:
(defn lazy-primes []
  (letfn [(enqueue [sieve n factor]
            (let [m (+ n factor)]
              (assoc sieve m
                (conj (sieve m) factor))))
          (next-sieve [sieve candidate]
            (if-let [factors (sieve candidate)]
              (reduce #(enqueue %1 candidate %2)
                (dissoc sieve candidate)
                factors)
              (enqueue sieve candidate candidate)))
          (next-primes [sieve candidate]
            (if (sieve candidate)
              (recur (next-sieve sieve candidate) (inc candidate))
              (cons candidate
                (lazy-seq (next-primes (next-sieve sieve candidate)
                            (inc candidate))))))]
   (lazy-seq (next-primes {} 2))))
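
Quick check at the REPL (same sum as above):

(take 10 (lazy-primes))
;; => (2 3 5 7 11 13 17 19 23 29)

(apply + (take-while #(< % 100000) (lazy-primes)))
;; => 454396537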


Christophe
--
Professional: http://cgrand.net/ (fr)
On Clojure: http://clj-me.blogspot.com/ (en)

Christophe Grand

Jul 29, 2009, 9:17:53 AM
to clo...@googlegroups.com
On Wed, Jul 29, 2009 at 9:49 AM, John Harrop <jharr...@gmail.com> wrote:
On Wed, Jul 29, 2009 at 2:59 AM, Daniel Lyons <fus...@storytotell.org> wrote:
Probably it would help to try and implement a lazy
list of the Fibonacci sequence before looking at that code, and then
maybe try some other mathematical sequences that are a little easier
to construct too.

Using the super-lazy-seq macro Wrexsoul posted a month or so ago, that's dead easy:

(super-lazy-seq [a 0 b 1] (next-item b b (+ a b)))


Or, using rec-seq/rec-cat from seq-utils:
 (rec-cat fibs [1 1] (map + fibs (rest fibs)))

http://github.com/richhickey/clojure-contrib/blob/e20e8effe977640592b1f285d6c666492d74df00/src/clojure/contrib/seq_utils.clj#L97
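
(If I'm reading seq-utils right, usage is e.g.
(take 8 (rec-cat fibs [1 1] (map + fibs (rest fibs)))) => (1 1 2 3 5 8 13 21).)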
 
On the other hand,

(defn fibs [] (cons 1 (cons 1 (lazy-seq (map + (fibs) (rest (fibs)))))))
(def fibs (memoize fibs))

That's roughly what rec-seq does but using an atom and a local instead of a var to keep the mutable state local.
 
Christophe

Niels Mayer

Aug 15, 2009, 7:15:48 PM
to clo...@googlegroups.com
Although the "Garbage-First garbage collector" would be quite useful for Clojure, here I confirming that the new "G1" garbabe collector is "still in beta" at least for a "comparative use case" (using velocity/groovy/jPython in Xwiki, but not Clojure). Is anybody having success with theis GC in Clojure in a more standalone environment?

> The Garbage-First garbage collector (G1) is currently in beta. It is not enabled by default. The parallel collector is still the default GC and is the most efficient GC for common household usage. G1 is meant to be an alternative for the concurrent collector. It is designed to be more predictable and enable fast allocation with memory regions design. To use it: -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC 
> For more information about GC & G1, please see: 
>    - G1 Technical Session in JavaOne 2008 <http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-5419&yr=2008&track=javase>
>    - Java SE 6 GC tuning <http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html>
>    - Analysis of the Garbage-First Garbage Collection <http://research.sun.com/jtech/pubs/04-g1-paper-ismm.pdf>

In my tests with Xwiki (which loads velocity, groovy, and jPython, but not yet clojure) and Roller running out of Tomcat6, you unfortunately get a Segmentation Fault at the end of the garbage collection (a regression, the "early access release" ran successfully for 50 minutes under active 5-way-vhosting webload in Tomcat5):

# A fatal error has been detected by the Java Runtime Environment:
# SIGSEGV (0xb) at pc=0x00007f15ca60653a, pid=23777, tid=139729915439440
# JRE version: 6.0_14-b08
# Java VM: Java HotSpot(TM) 64-Bit Server VM (14.0-b16 mixed mode linux-amd64 )
# Problematic frame:
# V [libjvm.so+0x19c53a]

Caused by adding the "-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC" flags to the tomcat process below

F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN    RSS PSR STIME TTY        TIME CMD
0 S tomcat    8804     1  0  80   0 - 462172 futex_ 469116 1 Aug13 ?          9:04 /usr/java/default/bin/java -server -Xms160m -Xmx1024m -XX:PermSize=160m -XX:MaxPermSize=320m -XX:+DoEscapeAnalysis -XX:+UseBiasedLocking -XX:+AggressiveOpts -Djavax.servlet.request.encoding=UTF-8 -Dfile.encoding=UTF-8 -DjavaEncoding=UTF-8 -Djava.awt.headless=true -Djava.library.path=/usr/lib64 -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start
Using: XWiki 2.0M2, Sun's jdk-1.6.0_14-fcs.x86_64, Fedora's tomcat6-6.0.18-6.2.fc10.noarch, tomcat-native-1.1.16-1.fc10.x86_64, tomcat6-lib-6.0.18-6.2.fc10.noarch, tomcat6-servlet-2.5-api-6.0.18-6.2.fc10.noarch, on "Linux 2.6.27.29-170.2.78.fc10.x86_64 #1 SMP Fri Jul 31 04:16:20 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux"  
via $500 AMD Opteron 1220 2.80GHz Dual Core "server" http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=4312771&CatId=1991

-- Niels
 
