I concur with Chandler and Ingersoll. I generally use an early-generation
2.14 GHz Core 2 Duo processor with 3.2 GB RAM, and experimented with
fitting a pcount() model on other, much more powerful machines (e.g., a
computer with 16 GB RAM and a high-end Xeon processor) and on a variety
of virtual machines built on the Amazon cloud. Gains in speed were
relatively modest. Using the fastest computers benchmarked, I was able
to shave about 10 minutes off the 45-minute processing time required to
fit the model on my everyday computer. Granted, a time savings of about
20% is more meaningful when processing requires days, weeks, or
months... Processing on older Pentium IV machines was not substantially
slower than on the Core 2 Duo. My tests indicated that CPU power was
more limiting than RAM (7 GB vs. 16 GB made little difference with
comparable processors).
If you ultimately need to estimate dispersion from the full model, or
want to examine goodness of fit (GOF) using the parboot() simulator,
the 3 days of processing required to fit your model will likely be an
issue. For example, my full pcount() model required about 10 hrs for
each parboot() iteration. Ideally I wanted to run a very large number
of bootstrap simulations, but even just 100 simulations would take
100 x 10 hrs = 1000 hrs (about 41 days). The parboot() simulations are
an 'embarrassingly parallel' task, though: each iteration is independent
of the others, which means R can be used to take advantage of multiple
CPU cores via parallel processing.
On that note, Chandler has provided some very helpful details about
how to spread the parboot() simulations across multiple processor cores
(see
http://groups.google.com/group/unmarked/browse_thread/thread/b71a7551ea6d8e03/99e70d253ed3816d?lnk=gst&q=parallel+processing).
Following Chandler's guidance, I succeeded in spreading 100 parboot()
simulations across 20 processor cores on a virtual machine I created
on the Amazon cloud, which cut the estimated 41 days of processing to
about 1.5 days.
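For anyone who doesn't want to dig through the thread, here is a minimal
sketch of the general idea using the parallel package that ships with
recent R (Chandler's original post used snow; the names 'fm' for your
fitted pcount() model and 'fitstats' for the GOF statistic function are
placeholders for your own objects):

```r
library(unmarked)
library(parallel)

## Example GOF statistics to hand to parboot(); substitute whatever
## statistic function you are actually using.
fitstats <- function(fm) {
  observed <- getY(fm@data)
  expected <- fitted(fm)
  resids   <- residuals(fm)
  c(SSE   = sum(resids^2, na.rm = TRUE),
    Chisq = sum((observed - expected)^2 / expected, na.rm = TRUE))
}

ncores <- 20                       # one worker per available core
cl <- makeCluster(ncores)
clusterEvalQ(cl, library(unmarked))
clusterExport(cl, c("fm", "fitstats"))

## Run 5 parboot() iterations on each of the 20 workers (100 total),
## then pool the simulated statistics from each worker's @t.star slot.
sims <- parLapply(cl, seq_len(ncores), function(i)
  parboot(fm, statistic = fitstats, nsim = 5)@t.star)
stopCluster(cl)
t.star <- do.call(rbind, sims)     # 100 rows of simulated statistics
```

Since each worker draws its own simulations independently, pooling the
t.star matrices afterward gives the same null distribution as one long
serial run, in roughly 1/20th of the wall-clock time.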
Good luck and please let us know if you learn any new tricks!
-Ted