Exceptionally UNstable Server Instance


Matt Quackenbush

Feb 8, 2012, 11:44:39 PM
to ra...@googlegroups.com
I've been meaning to post this question for a while now, but have been too busy taking care of things to stop and write it up.

Server: Rackspace Cloud Instance
OS: Ubuntu 10.04 LTS
RAM: 512mb
Railo: 3.3.1.006 final
Tomcat: 6 - $ sudo apt-get install tomcat6
Java: 1.6.0_26 (Sun Microsystems Inc.) 64bit
Apache2 - $ sudo apt-get install apache2

/etc/default/tomcat6
JAVA_OPTS="-Djava.awt.headless=true -Xms384m -Xmx384m"

This is a "global config" or "single web" type of install. (Same principle as installed by the installer, but done by hand.)

While Tomcat is running nicely (fast, smooth, wonderful), I see the following:

$ free -m
             total       used       free     shared    buffers     cached
Mem:           492        381        110          0         16         74
-/+ buffers/cache:        290        201
Swap:         1019          9       1009

When Tomcat crashes all to hell, I instead see something like this:

$ free -m
             total       used       free     shared    buffers     cached
Mem:           492        191        300          0         39         95
-/+ buffers/cache:         56        435
Swap:         1019         10       1008

The catalina.out doesn't contain anything that even seems related to the shutdowns; just a few "file not found" type errors here and there, and rarely, if ever, at shutdown times.

By the way, when Tomcat does crash, I see the following:

$ sudo service tomcat6 status
 * Tomcat servlet engine is not running, but pid file exists.
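One thing worth ruling out when a JVM dies without a word in its own logs is the Linux OOM killer, which records its kills only in the kernel log. A minimal sketch for scanning that log follows; the Ubuntu log location mentioned in the comments is an assumption:

```shell
# Sketch: scan kernel log text for Linux OOM-killer activity.
# The log paths in the usage comments are typical for Ubuntu (assumptions).
find_oom_kills() {
    # Reads log lines on stdin, prints any that look like OOM kills.
    grep -iE 'killed process|out of memory'
}

# Typical usage (run as root):
#   dmesg | find_oom_kills
#   find_oom_kills < /var/log/kern.log
```

If lines show up there at the crash times, the instance simply ran out of physical memory plus swap.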

The server might run for 8-10 days with no hiccups. It might shut down 10 times in an hour (this is one of _those_ hours). There is no rhyme or reason that I can find to this madness. Jamie Krug has also looked at it several times and he has found nothing of any significance, either.

What's running on the server? Extremely little: 1 Mura instance (2 sites), 1 BlogCFC instance (one site). That's it. No mail server. No database server.

If you need more information, I'll be happy to provide it.  Any and all help would be appreciated!

Thanks in advance.

Matt Quackenbush

Feb 9, 2012, 5:16:40 PM
to ra...@googlegroups.com
Bueller?

Bruce Kirkpatrick

Feb 9, 2012, 8:07:07 PM
to Railo
Matt, I'm a Railo user. I have found Railo 3.3.1.000 (and Tomcat 7) to be extremely stable until it runs out of memory on CentOS 6.x. Often, before Railo crashes, I'd get strange Railo errors where certain variables had become "null" (which would otherwise be impossible), and sometimes weird Java thread-state errors too. The Java process tends to consume 100% of one CPU core until you restart it. It can take weeks for this to happen, so what I've done is set up cron scripts that make an HTTP request every so often; if it fails for an entire minute, they automatically restart Railo. This limits the downtime to a very small window. I was equally frustrated figuring out what memory settings I needed, because I have a fairly small box (6gb) with a lot of it needed for MySQL caching. However, it hasn't crashed lately, so I think I've got a good configuration now.
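Bruce's cron-plus-HTTP-probe approach can be sketched like this; the URL, probe, and restart commands are placeholders, not his actual script:

```shell
#!/bin/sh
# Hypothetical sketch of a cron watchdog: probe a health URL several times,
# and restart the servlet engine only if every probe fails.

# probe_and_restart URL TRIES SLEEP_SECS PROBE_CMD RESTART_CMD
# Prints "ok" when a probe succeeds, "restarted" after a restart.
probe_and_restart() {
    url=$1; tries=$2; pause=$3; probe=$4; restart=$5
    i=0
    while [ "$i" -lt "$tries" ]; do
        if $probe "$url" >/dev/null 2>&1; then
            echo "ok"
            return 0            # server answered; nothing to do
        fi
        i=$((i + 1))
        [ "$i" -lt "$tries" ] && sleep "$pause"
    done
    $restart                    # every probe failed; bounce the service
    echo "restarted"
    return 1
}

# A cron entry (every minute) might call, for example:
#   probe_and_restart "http://localhost:8080/" 6 10 "curl -fsS -m 5" "service tomcat6 restart"
```

With a one-minute cron interval, the outage window stays close to the probe interval, which matches the "extremely small amount of time" Bruce describes.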

It can easily be a memory problem, due to using a lot of shared-memory scopes or having too many classes. The permgen size and max heap size arguments for the JVM may need tweaking, and your 512mb cloud instance may be too small even though Railo can start OK with that much.

I found that in a production environment the Java garbage collection can lag behind (which is why I have the parallel GC enabled), or that session memory accumulates out of control when your server is under heavier load, perhaps due to abusive robots. Preventing abusive behavior (clearing the session scope when there are too many hits close together?) helps reduce the memory spikes.

If you have multiple Tomcat virtual hosts, it can take even more memory, because Railo duplicates your files between each one to some extent and can use hundreds of MB more RAM compared to Adobe CF's magic JRun connector. I saw a large memory saving by using only one host for all web sites, but it took a lot of work to redo my rewrites and paths, and it is not a quick change.

My test machine needs this much memory allocated to the JVM just to run a few of my sites at a time. The virtual machine has 1.5gb of RAM for Linux. I have it this high because it crashes with anything smaller:

JAVA_OPTS="-Xms128m -Xmx768m -XX:MaxPermSize=128m ";

My production machine needs these settings to be stable:

JAVA_OPTS="-Xms256m -Xmx1224m -XX:MaxPermSize=450m -XX:MaxNewSize=64m
-XX:+UseParallelGC";

This info goes in the /opt/railo/tomcat/bin/setenv.sh file

You may want to get a 1gb or 1.5gb cloud instance instead. If you are
in the USA, I have found hivelocity.com to be extremely affordable
compared to cloud servers.

These memory settings are actually lower than they used to be, because I rewrote my app to be more memory efficient.

Shared memory is a great CFML feature, so I use it heavily. You could also try using the database for session/client storage to see if it still crashes, but it will run with quite a bit more I/O wait if you do. I am not using the database for session/client storage, but I have tested it as a possible solution.

good luck!



Lyle Karstensen

Feb 9, 2012, 8:22:29 PM
to ra...@googlegroups.com
I want to put in my .02, since I have been dealing with this also. Make sure you are not getting a large number of errors. I have been dealing with out-of-memory errors again recently, and found that I had a page that I had no idea was generating errors. I wrote a simple chunk of code in my Application.cfc onError function that emails me the details of any error that occurs on my sites. I had NO IDEA that this page was generating an error, along with quite a few others. Now my out-of-memory errors are gone.

I have been running my sites on Railo for about a year now, and I can tell you without a doubt it is stable. Our servers process more than 2 million requests a day across 4 servers and had no issues for more than 6 months. Then, after a recent upgrade, we started getting memory leaks. I have found that if your site generates errors, Java will not clean up after the erroring code and you will run out of memory. Especially when you are working with so little memory, you just have no room for error.

Lyle Karstensen Jr.
Chief Executive Officer  (CEO)

Phone: 702.940.4200
Fax: 702.940.4201
Direct: 702.932.8200
Cell: 702.683.3666
Email: ly...@realistiq.com
Website: http://www.realistiq.com

Bruce Kirkpatrick

Feb 9, 2012, 9:25:54 PM
to Railo
Lyle and I work on the same kind of projects, it seems (real estate search), so perhaps my advice is more relevant for Lyle, since I have built exactly the same stuff.

The permgen error is the easier one to run into, because that is the space that seems to grow worse over time. It eventually stabilizes if you give it enough. You could also schedule an intentional Railo restart once a week, just to avoid downtime in the middle of the day, if you are desperate.

If you are overriding the error page and doing custom logging, make
sure you occasionally check the context's logs for errors too since if
your custom error script fails, it is possible for the error to go
into the context's log instead.

I'm obsessed with performance optimization and with validating input to eliminate errors, even the less important ones generated by robots. I have seen other companies with hundreds of errors throughout the day because they don't have much validation to protect against form spam. That makes it a lot harder to determine which errors are serious problems.

Making sure you don't have long-running requests that spike memory usage is important, as this used to crash Adobe ColdFusion too. I usually break requests into multiple requests if they are going to run for more than a few minutes. Make sure high-memory-usage scripts aren't crawled by robots, by adding extra security to check that the IP or referer is Railo itself, or by using passwords. Make sure intense scripts don't run at the same time as each other, if possible.

Lyle and I both have to deal with importing large amounts of data from many remote sources throughout the day, so I'd encourage you to look at this import process and see if the database or CFML is causing locks that last too long and cause other requests to queue up. I use the database bulk-insert features and only update about 30 listings on each loop, to reduce the amount of time spent in a locked state. I also run most of my queries on a memory table (MySQL MEMORY engine) instead of a disk-based table. Memory tables update more quickly, and it doesn't matter that the disk table is slower. It's really impressive for real estate search, because you can't put an index on all of the search fields; some queries need to do distance calculations or full-text search, so this gets a lot more complicated than just a memory table, but I have a number of optimizations that combine CFML shared memory, disk and memory tables, and tons of pregenerated information.

Also, large deletes are massively slow unless you configure the database and write your queries differently. A query wouldn't generate the memory error on its own, but your other scripts waiting to run could cause a crash when the query finishes, because you'd have unusually high load for a few seconds after the blocking query.

Another thing I built was a script to measure all the other pages
running and to store this in the server scope so I can monitor it from
another script. All of my web sites run on the same CFML app so I can
add global features like this. I can see the last 3000 requests
during an incident. This can occasionally help to discover what was
running prior to the errors and to understand where performance
problems are. I also keep track of the JVM memory state with the
error notifications using this code:

<cfset runtime = CreateObject("java", "java.lang.Runtime").getRuntime()>
<cfset freeMemory = runtime.freeMemory() / 1024 / 1024>
<cfset totalMemory = runtime.totalMemory() / 1024 / 1024>
<cfset maxMemory = runtime.maxMemory() / 1024 / 1024>

<cfoutput>
Free Allocated Memory: #Round(freeMemory)#mb<br>
Total Memory Allocated: #Round(totalMemory)#mb<br>
Max Memory Available to JVM: #Round(maxMemory)#mb<br>
</cfoutput>

I have achieved some impressive performance with a single 7200 rpm
hard drive on a sandy bridge 3.4ghz 6gb server with railo and mysql
5.5.x that costs $185/month.
http://www.sarasotaluxuryproperty.net/

I am able to use the slow hard drive because I eliminated nearly 100% of the disk access on all requests. I do serve a high volume of hits across 100 domains. There are 250 CFML requests per minute, with most of them finishing under 100ms, plus the static requests. I used to have 15k SAS drives and even Intel SSDs, but I found them to be a waste of money after optimizing my code. Imagine if your 4 servers could be reduced to 1 or 2 with lower-spec hardware. It could save you $500 to $1500 per month.

I believe I have one of the fastest real estate map searches on the market, which is largely achieved with indexing, grouping, and memory tables. You should also read about "database denormalization", because performance requires you to have more redundant data in your table design in order to have no joins in your performance-sensitive queries. If you rewrote the map search to query one table only, and that table was narrow (no long text fields) and resided in memory, you'd see a tenfold performance improvement on your maps. The listing lookups are fast because they rely on the primary key index, but the map search needs RAM caching to go fast. Even fully cached InnoDB performance isn't fast enough. The server is in Tampa, Florida.

example search:
http://www.sarasotaluxuryproperty.net/z/_a/listing/search-form

You may want to look into the things I have done as a way to reduce
costs / improve response time on your sites.

If you have any need for additional assistance, or want to see if there is any interest in trading our services, you can contact me through my web site. It's not that easy to find affordable ColdFusion experts, and I've been doing this for about 8 years now.

Bruce Kirkpatrick
http://www.realtyontop.com/



Matt Quackenbush

Feb 9, 2012, 9:41:23 PM
to ra...@googlegroups.com
Thanks for the replies, guys. They are appreciated!

A couple of notes:

1) I apologize if it was somehow taken that I was suggesting Railo is unstable. That was not what I said, nor meant, nor intended to imply. We have *lots* of Railo sites running happily and smoothly. Just this one particular instance (a personal instance) is unstable.

2) I am definitely *not* dealing with much traffic on this server instance. Extremely little, in fact.

3) I do not intend to keep the server running at 512mb, as I plan to move a number of other sites over to it. I just don't want to throw more RAM at it simply for the sake of throwing more RAM at it.

4) Most of the time, there are no errors whatsoever that appear in catalina.out between Tomcat restart and crash. Generally speaking, the catalina.out entries are back-to-back startup details.

5) The only two errors that ever show up in catalina.out are a "file not found" where I cannot figure out why anything is trying to hit the URL it's hitting, and a "file not found" on a CAPTCHA image that apparently is not being written to disk for some reason (likely a permissions issue, but that's kinda weird since Tomcat has ownership of the directory).

The seriously odd thing is #4. No errors. I mean, sometimes I can start Tomcat up, browse to one of the sites, load the page, and then hit refresh and Tomcat has stopped. Other times it will run for a week or so with no hiccups.

/wrists

James Holmes

Feb 9, 2012, 9:56:52 PM
to ra...@googlegroups.com
Memory monitoring will help. If you don't have FusionReactor, maybe http://cftracker.riaforge.org/ will do.
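Lacking either product, the JDK's bundled jstat gives a rough live view of heap and permgen utilization from the command line; the pgrep pattern below is an assumption about a stock Tomcat startup class:

```shell
# Poll GC/heap/permgen utilization of the Tomcat JVM every 10 seconds.
# jstat ships with the Sun JDK; the pgrep pattern assumes Tomcat was
# started via the usual Catalina bootstrap class.
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n 1)
jstat -gcutil "$PID" 10000
```

The "P" column (permgen percentage) creeping toward 100 between restarts would point at the permgen ceiling discussed later in this thread.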

--
Shu Ha Ri: Agile and .NET blog
http://www.bifrost.com.au/

Bruce Kirkpatrick

Feb 9, 2012, 10:30:54 PM
to Railo
catalina.out is not the only error log. I don't think that one helped
me much.

The application error logs are in one of these locations:
/opt/railo/tomcat/webapps/WEB-INF/railo/logs/exception.log
/opt/railo/tomcat/webapps/ROOT/WEB-INF/railo/logs/exception.log

Or if you have a custom context, then it would be like this:
/path/to/your/domain/WEB-INF/railo/logs/exception.log

I built my production machine from a netinstall image and also built an identical copy in VirtualBox to test with on my computer. If you have a local test environment, you can test different RAM values without incurring costs. I love my virtual machine test environment, because I need Windows for Adobe software and don't want to waste resources on extra hardware. It is far too scary working in the dark on production servers; you want to know it will work before applying changes. You could run load-test tools like ApacheBench to simulate a load condition. Keep in mind that even if your traffic is low, spam robots and scrapers will still occasionally try to attack you. Having protection against these attacks takes extra work, and having cron jobs for the disaster scenario handles the worst of them.

I thought I could start on small cloud instances and grow dynamically, but then I found hivelocity.net, and they had 3.4ghz Sandy Bridge servers for $139 or $185 per month with 4gb or 8gb of RAM, which blew my mind. The equivalent Rackspace cloud hosting costs $400 per month, with higher-latency SAN storage. Even SoftLayer and LiquidWeb aren't cheap enough to make cloud worth it. You can even install the open-source XenServer yourself and manage your own private cloud for free (which is what I do). I needed a small Windows machine for one app, and the rest is Linux. Private cloud instance restart time is very fast, and you get BIOS-level access to your virtual machines.



Matt Quackenbush

Feb 9, 2012, 11:53:11 PM
to ra...@googlegroups.com
Thanks for that reminder, Bruce. I thought we had already checked there, but apparently not. There was one intermittent error that was appearing in the logs, the cause of which I just fixed. We shall see how things go moving forward.

My guess is that it's just simply too little RAM, but I was hoping that it'd be stable with this little traffic at 512, then I'd move more sites over and up the ante. :-)

By the way, thanks for the tips on server setups. I don't pretend to be a real server admin, so I might be pinging you off-list one of these days. ;-)

Matt Quackenbush

Feb 9, 2012, 11:54:34 PM
to ra...@googlegroups.com
Thanks for the reminder about FusionReactor, James. I might have to give their free trial a whirl. (I think they still have one of those? :-)

Matt Quackenbush

Feb 12, 2012, 4:31:03 PM
to ra...@googlegroups.com
Just a bit of an update...

From Thursday evening through early Sunday morning, the server ran perfectly fine and dandy. When it went down again this morning, I decided to go ahead and give resizing the instance a shot, doubling the RAM from 512mb to 1024mb. Sadly, it still took just 3 hours to go down again.

As I was restarting it that time I remembered that I had not adjusted the Tomcat memory settings after the resize, so I went ahead and did so. The new settings are:

JAVA_OPTS="-Djava.awt.headless=true -Xms768m -Xmx768m -XX:MaxPermSize=256m"

This time, it took an agonizingly long 35 minutes to crash. :/

Oh, and no errors show up in _any_ logs. None. Zip. Nada. Zilch.

I am beginning to think that the instance is cursed.

Gert Franz

Feb 13, 2012, 2:51:29 AM
to ra...@googlegroups.com

Matt,

Are you at OpenCFS? Then we can have a look at the server. One thing to check is whether, in your railo-server.xml file, the output is set to null (somewhere in the first few lines); if it is, set it to "default" so that you get some log info from the box. Prior to the crash you may see some messages there…

Greetings from the southern hemisphere

Gert Franz


Railo Technologies      Professional Open Source

skype: gert.franz         ge...@getrailo.com

+41 76 5680 231           www.getrailo.com

Matt Quackenbush

Feb 13, 2012, 8:53:51 AM
to ra...@googlegroups.com
Gert,

Thanks for chiming in. I have made the change in the out level (it was indeed null) as suggested. We shall see what happens.

And yes, I will definitely be seeing you at OpenCF Summit! Quite looking forward to it. :-)

Matt Quackenbush

Feb 18, 2012, 11:50:42 AM
to ra...@googlegroups.com
Just wanted to give a quick update.

Changing the log settings resulted in a few more things showing up in the logs, but still no errors are being reported. And the instance keeps shutting down.

So, I decided to create another instance, with certain differences, and do some testing to see what would happen.

New Instance:
   Ubuntu 11.10 Server
   Apache 2
   Tomcat 7
   OpenJDK 7
   WAR configuration, rather than "global" Railo setup (using Railo 3.3.1.000 WAR, upgraded to 3.3.2.002 via BER provider)

I then moved the 2 non-Mura sites running on the unstable instance over to this new instance. It ran wonderfully well for ~5 days, making me very happy.

I then moved the Mura sites (2 sites, single Mura instance) over to the new server instance. Within ~3 hours, it crashed. Clearly Mura cannot run on a server with 1024MB or less of RAM. That's disheartening. :-(

So I've now upgraded the instance to 2048MB of RAM, and set -Xms1536m and -Xmx1536m. We'll see how that goes.

Matthew Woodward

Feb 18, 2012, 12:12:35 PM
to ra...@googlegroups.com
On Sat, Feb 18, 2012 at 8:50 AM, Matt Quackenbush <quack...@gmail.com> wrote:
> New Instance:
>    OpenJDK 7

Have you tried Sun/Oracle JDK 6? Unless this has changed recently the Tomcat folks don't recommend running Tomcat on Java 7, and I'm not sure what the official recommendation is on OpenJDK but if you haven't yet tried it, I'd try Oracle JDK 6.

 
> I then moved the Mura sites (2 sites, single Mura instance) over to the new server instance. Within ~3 hours, it crashed. Clearly Mura cannot run on a server with 1024MB or less of RAM. That's disheartening. :-(

We run Mura for OpenCF Summit on a VPS at Viviotech (CentOS) that has 1GB total, giving Tomcat 512m, and it works extremely well even with MySQL running on this same machine. The only time it's ever crashed is when we got slammed when Tim O'Reilly tweeted about the event. We're running on OpenBD so it's not apples to apples; just commenting that Mura seems to run fine with this much RAM and I think Viviotech (Jordan will have to confirm) has good success on servers with less RAM than that for Mura.

Again, I don't know your full situation and I'll have to admit to only skimming this thread as it's been unfolding, so I'm just offering that up as a point of comparison.


> So I've now upgraded the instance to 2048MB of RAM, and set -Xms1536m and -Xmx1536m. We'll see how that goes.

Sorry if this has been answered in previous messages in the thread, but do you even know for a fact that it's an OOM problem? That's of course always the first assumption I make as well, but given the trouble you've had seeing anything in your logs, I was wondering if you knew for sure this was the issue.

Take all this for what it's worth. ;-) Sounds like you'll have some hands on help at OpenCF Summit next week!

--
Matthew Woodward
ma...@mattwoodward.com
http://blog.mattwoodward.com
identi.ca / Twitter: @mpwoodward

Please do not send me proprietary file formats such as Word, PowerPoint, etc. as attachments.
http://www.gnu.org/philosophy/no-word-attachments.html

Jordan Michaels

Feb 18, 2012, 1:17:26 PM
to ra...@googlegroups.com
In my experience, the most common problem with Mura instances is that they typically require more than the default amount of permgen space. Permgen is separate from the Java heap and, IIRC, defaults to 64MB. I don't see where you mention modifying your permgen, but you can increase the permgen space with something like: -XX:MaxPermSize=128m.

Also, it may be worthwhile to experiment a little with your GC settings. I'm no expert when it comes to GC, but a couple of settings I found helpful in the past are:

-XX:+CMSClassUnloadingEnabled - which tells the GC process to also clean out Permgen of classes no longer in use; helps stop permgen errors but also slows performance.
-XX:+UseConcMarkSweepGC - this causes the GC process to run more frequently and lowers the amount of pause associated with the GC process.
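Put together with the heap flags mentioned earlier in the thread, a hypothetical JAVA_OPTS line for /etc/default/tomcat6 incorporating these suggestions might read (the sizes here are illustrative, not recommendations):

```shell
# Illustrative only: heap and permgen sizes must be tuned per machine.
JAVA_OPTS="-Djava.awt.headless=true -Xms768m -Xmx768m \
  -XX:MaxPermSize=192m \
  -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
```

Note that CMSClassUnloadingEnabled only has an effect alongside the CMS collector, which is why the two flags travel together.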

If you're not already modifying your permgen, then you may be just fine with a simple permgen space increase. Worth a shot anyway!

HTH!

-Jordan

Matt Quackenbush

Feb 18, 2012, 4:25:03 PM
to ra...@googlegroups.com
Matt,

Thanks for the tip on OpenJDK 7. I had read recently that it was going to be the default in the upcoming 12.04 LTS release, and was seeing good support moving forward. I was opting to give it a try just to see how things went. (I was using Oracle JDK 6 on the "old" instance.) Not that I've seen any issues with OpenJDK 7, but I've just changed the configuration to use OpenJDK 6 instead; we'll see how it goes.

As for Mura, I've been told by more than one Blue River employee that 1024 was the minimum recommended RAM, but that they had seen people (you, perhaps? ;-) running on 512 or 768. Those comments from them were part of my coming to the conclusion that 1024 was the minimum.

I do not know for 100% certain that it's an OOM issue, as the logs say absolutely nothing; the Tomcat instance just stops, leaving only the PID file. It's just a (somewhat educated?) guess.

YAY! for OpenCF Summit! See you in a few days! ;-)

Matt Quackenbush

Feb 18, 2012, 4:44:54 PM
to ra...@googlegroups.com
Jordan,

Good call on the MaxPermSize. I was _not_ changing that previously. I've given 192m a shot. I'm going to guess that will make a HUGE difference.

In testing on our AWS instances at work, we found that +UseConcMarkSweepGC actually caused significant slowdowns in performance, so we've turned it off on all servers.

By the way, going up to 2048m alone (with Xms/Xmx at 1536m) had no change in the stability. It still crashed within an hour or two. Heh. We shall see what takes place with all the new configuration. :-)

Brad Wood

Feb 18, 2012, 6:09:07 PM
to ra...@googlegroups.com
Memory doesn't have to be a black box. Enable verbose GC logs and you can see exactly how much of your perm space (as well as your heap) is being used. If you were filling either one up, though, I would expect to see an OOM error. The Eclipse-based IBM workbench has a nice plug-in that will analyze GC logs from most JVMs.
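On a Sun JVM 6 setup like the ones in this thread, verbose GC logging can be enabled with flags along these lines (the log path is an assumption; adjust for your layout):

```shell
# Append GC logging flags to the existing options, e.g. in /etc/default/tomcat6
# or setenv.sh. The log path here is illustrative.
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:/var/log/tomcat6/gc.log"
```

The resulting gc.log records each collection with heap and permgen occupancy before and after, which is the data the analysis tools mentioned above consume.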
 
Also, sorry if this has been answered, but do you have a product such as SeeFusion or FusionReactor on the machine? Sometimes that makes a bad query or a locked request a piece of cake to identify. SeeFusion can be configured to e-mail you if certain memory/request thresholds are hit, and to log long-running requests to a database.
 
Thanks!
 
~Brad

Michael Offner

Feb 20, 2012, 3:29:34 AM
to ra...@googlegroups.com
Just some general info:

Railo 3.3 is not tested on Java 7 (Railo Appollo will be).

We have improved permgen handling a lot with version 3.3.2: we replaced the existing classloading completely, swapping the single classloader per mapping for a collection of classloaders (which makes "class unloading" work much better). We also make sure that the Page objects (the source of the "problem") hold no references to anything, but the GC still has trouble getting rid of them.

Can you send me your system.err log? Railo logs when it runs out of permgen and tries to flush it.

We are working on a Memory plugin that also shows you the usage of the permgen space; this can help a lot in seeing what is happening on the system.

/micha







Matt Quackenbush

Feb 25, 2012, 6:17:51 PM
to ra...@googlegroups.com
Just wanted to give a quickie update for those that might have been following along here (or might read later).

First things first. I made a really poor choice when selecting the title of this thread. I incorrectly thought that using "Server Instance" would clearly indicate that it was a Tomcat/JVM issue, but I was wrong. I knew from the get go that Railo *was not* the cause of the instability of the server. We run Railo (typically BER, although some instances are not upgraded as regularly as others) on _many_ server instances, and could not possibly be more happy with Railo as a technology choice! All of my professional work has been Railo-based for over a year now, which is what has driven me to migrate my personal projects away from ACF after being on various CF (Allaire, Macromedia, Adobe) versions for the last ~15 years.

At issue is the fact that I am new to doing all of the setup and what have you for a production Linux/Tomcat/Railo server. Therefore, I knew that I had simply overlooked _something_ in the setup process, and was searching for help in finding that something. I apologize profusely for even the possibility that my topic title choice may mislead anyone into thinking that I was referring to Railo as being unstable. Nothing could be further from the truth!

Now then, as for an update that can hopefully put this thread to bed, a week or so ago I made a couple of changes, and the instance has been happily zinging along like I've always known it would, once I found the missing ingredients to my server setup. Hopefully this will help users in the future!
  • HUGE thank you to Jordan for pointing me to one of those items. Changing the MaxPermSize was absolutely one of the most important keys, if not the most important one. I had completely forgotten about the tiny default setting there.
  • I had also forgotten that my database server instance, while separate, was really small. So I increased the RAM available on that instance.

Thank you to everyone who provided input, either publicly or privately. You guys (and gals) all frequin ROCK!

And YES, I totally LOVE Railo! Thank you, Gert, Tanja, and Micha, and everyone else who has contributed in any way to the development and availability of Railo. :-)
