
Hardware Recommendations for new Oracle Server


W. Scott Moore

Jul 31, 2001, 5:58:16 PM
The lease on my current server is coming up, and I am working on trying to
spec out a new machine.

Our current server is a simple dual 400 Xeon on a Compaq Proliant with 2GB
of RAM. We have 2 Oracle Instances on it, Production and Development. We
are currently running Oracle 8.1.7 on windows 2000 server.

The total size of the associated datafiles for each instance is about 7.3
GB; however, the amount of data actually in the datafiles is significantly less.

The data is fairly static with a minimal amount of inserts, deletes, and
updates. Maybe 10,000 records total changed/inserted/deleted a day. What
we do is GIS (geographic information systems) and we don't use Oracle
Spatial. We use a third-party software package (ESRI's SDE). Most of the
work performed is select statements that return approximately 100,000 rows
to a client to display a map, and it needs to happen quickly. We also have
some pretty complex views because our data is highly normalized.

I want some insight on the direction I head in buying a new server based on
the following factors:

Money:
I have about $250,000 to spend on a server (or servers).
Users:
Needs to support 150 users fairly easily.
Uptime:
Needs to be up 24/7 (or as close as possible).

Based on our usage (low number of inserts/deletes/updates) and large result
sets, I am guessing our most efficient use of resources would be to stock up
on RAM. Our organization is an HP shop, and I was looking at an N Class
server with 4 550MHz processors and 16 GB of RAM. Any comments? Does
anyone recommend not using HP? Due to the recordsets, I believe that network
performance is important. Should I get a gigabit network card and have it
hooked straight into our switch?
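As a rough sanity check on whether gigabit matters here, a back-of-envelope sketch in Python; the 200-byte average row width and the 60% usable-throughput figure are assumptions for illustration, not measurements from the actual system:

```python
# Rough wire time for a 100,000-row result set at two link speeds.
# The 200 bytes/row average is an assumption for illustration only.
ROWS = 100_000
BYTES_PER_ROW = 200
payload_bits = ROWS * BYTES_PER_ROW * 8  # 160 million bits

def transfer_seconds(link_bits_per_sec, efficiency=0.6):
    """Approximate transfer time, assuming ~60% usable throughput
    after protocol and driver overhead."""
    return payload_bits / (link_bits_per_sec * efficiency)

print(f"100 Mbit/s: {transfer_seconds(100e6):.2f} s")  # about 2.7 s per map draw
print(f"1 Gbit/s:   {transfer_seconds(1e9):.2f} s")    # about 0.27 s
```

If those assumptions are anywhere near reality, the jump to gigabit is the difference between a sluggish and a snappy map refresh, so a dedicated gigabit link into the switch seems worth pricing out.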

What about any clustering solutions? Would anyone recommend sticking with
Win2k?

Any insight would be much appreciated.

Sincerely,
W. Scott Moore


Billy Verreynne

Aug 1, 2001, 2:22:01 AM
"W. Scott Moore" <si...@hotmail.com> wrote

> The lease on my current server is coming up, and I am working on trying to
> spec out a new machine.

<snipped>


> Our organization is an HP shop, and I was looking at an N Class
> server with 4 550MHz processors and 16 GB of RAM. Any comments? Does
> anyone recommend not using HP?

Ignoring the price, the HP machines are fine work horses. We have both N and L
class boxes and the new L class one I'm playing with is pretty fast.

But when it comes to hardware, it is a bit of a jungle. Not an easy decision to
make. And the sales people come up with all kinds of marketing BS about
performance and benchmarks and stats - none of which I would trust.

IMO, performance should not be the sole criterion when choosing your next
platform. You need to look at cost of maintenance, upgradability, scalability,
administration costs (skills & resources) and how much _that_ will cost. It's
hard to swallow that something like a new 9GB disk for such a box costs more
than five times that of a standard SCSI 9GB drive... it's at times like that
when Intel platforms look a lot more inviting.

> Do to the recordsets, I believe that network performance is important.
> Should I get a gigabit network card and have it hooked straight into
> our switch?

Sounds like a solution. In the case of something like a warehouse being hooked
into production machines, you may even want to consider running a dedicated
fibre link between them. What is important though is that the platform you
choose allows this type of scalability - without costing you an arm and a leg.
:-)

> What about any clustering solutions?

Love them myself - techie paradise. :-) But you need to look closely at the
reasons for wanting a cluster and make sure that it does address your needs.

Clusters are a bit more complex, with a learning curve to go through. After that,
cluster maintenance should not be _significantly_ more difficult (or expensive
in terms of resources) than a normal bunch of SMP boxes running separate
databases. (And don't let them tell you otherwise.)

What is a new ball game though is performance. You are dealing with multiple
database engines for a single physical database. You need to consider
application partitioning when running OLTP, as resolving locks across database
engines is expensive. Which could mean having to adapt/re-develop application
software.

However, clusters provide you IMO with unmatched scalability and performance -
something which single-unit SMPs cannot beat (though I'm sure the SMP crowd
will disagree with me on that).

> Would anyone recommend sticking with Win2k?

Yes. I don't like saying this, as I believe that Unix based platforms are
_technically_ a better solution. But what you are trying to do is satisfy
business requirements, not build an ideal techie environment for DBAs,
developers and other assorted freaks. :-)

So if a Win2K based platform can address the business needs that a Unix platform
can not (given budget constraints, available skills and resources, etc.), then
use it. And don't let anyone tell you any different - unless there is a damn
good reason why using Win2K will not be able to address the business
requirements due to some serious technical issue or flaw.


--
Billy


Dusan Bolek

Aug 1, 2001, 6:58:12 AM
"W. Scott Moore" <si...@hotmail.com> wrote in message news:<eSF97.1368$YJ3.2...@news.uswest.net>...

Just a small comment on your questions:
Do you think that you really need more physical memory than the actual
size of both databases? It seems to me like overkill. The size of your
database is something between small and medium; there is no need for
16 gigs of RAM.
If you do not know what to do with the money, then buy some nice UNIX box
(HP-UX is fine) and give the rest to charity. :-)
With $250,000 you can have a pretty powerful Unix box or an NT/2000
server with a golden case. :-)


--
_________________________________________

Dusan Bolek, Ing.
Oracle team leader

Scott Moore

Aug 2, 2001, 5:08:25 PM
> Just small comment to your questions:
> Do you think that you really need more physical memory than actual
> size of both databases ? It seems to me like overkill. Size of your
> database is something between small and medium, there is no need for
> 16 gigs of RAM.
> If you do not know what to do with money, then buy some nice UNIX box
> (HP-UX is fine) and the rest give to charity. :-)
> With 250.000 $ you can have a pretty powerful Unix box or NT/2000
> server with a golden case. :-)

I have 2 instances, both about 7GB (and growing), so if you factor in
memory for the system, the databases and 150 Oracle users, 16GB isn't too
unreasonable.

Colin McKinnon

Aug 2, 2001, 9:09:19 AM

Billy Verreynne <vsl...@onwe.co.za> wrote in message
news:9k87c4$shf$1...@ctb-nnrp2.saix.net...

> "W. Scott Moore" <si...@hotmail.com> wrote
>
> > The lease on my current server is coming up, and I am working on trying to
> > spec out a new machine.
> <snipped>
> > Our organization is an HP shop, and I was looking at an N Class
> > server with 4 550MHz processors and 16 GB of RAM. Any comments? Does
> > anyone recommend not using HP?
>
> Ignoring the price, the HP machines are fine work horses. We have both N and L
> class boxes and the new L class one I'm playing with is pretty fast.

Me too. I'd definitely consider a fully decked L class as well.

<snip>


> So if a Win2K based platform can address the business needs that a Unix
> platform can not (given budget constraints, available skills and resources,
> etc.), then use it. And don't let anyone tell you any different - unless
> there is a damn good reason why using Win2K will not be able to address the
> business requirements due to some serious technical issue or flaw.

Still not convinced about the reliability thing. And as for automating the
administration (e.g. backups etc) I don't think NT can hold a candle to
Unix.

Certainly an HP PA box of the spec you mention will be many times more
expensive than a PC based architecture of comparable spec / performance.

One way to get the best of both worlds is to use one of the Unixen available
for PC based systems. BSD and Linux are at least as good as most commercial
Unixen, at the level you seem to be operating, in terms of stability and
reliability. There are still some issues with scalability at the high end,
and there's the hysteria regarding support - as if Bill Gates is going to
rewrite Windows NT because you found a bug in it? Plus, of course, there are
Caldera's various flavours of Unix.

HTH

Colin


Billy Verreynne

Aug 3, 2001, 1:45:22 AM
"Colin McKinnon" <co...@EditMeOutUnlessYoureABot.wew.co.uk> wrote

> > So if a Win2K based platform can address the business needs that a Unix
> > platform can not (given budget constraints, available skills and
> > resources, etc.), then use it.

> Still not convinced about the reliability thing. And as for automating the
> administration (e.g. backups etc) I don't think NT can hold a candle to
> Unix.

I tend to agree with you. There are many areas where Unix is far superior from a
practical administration perspective. However, the reliability issue has very
little to do with the operating system.

I do not intend for this thread to deteriorate into another "which is the best
o/s?" discussion, but I have used various Unix flavours and NT in production
environments, running what the business people call corporate critical
applications.

I have seen my share of problems with both Unix and NT, without one being worse
in terms of reliability than the other. To me the bottom line is administration.
Good sys admins for NT are hard to find, as most of these sys admins are young
kids with MCP qualifications who never walked the hard road of experience that
Unix sys admins did. I attribute many of the so-called reliability problems
with NT to this. Simply clueless sods doing the driving.

> Certainly an HP PA box of the spec you mention will be many times more
> expensive than a PC based architecture of comparable spec / performance.

Yep. But then I do not see an Intel-based solution delivering the same
performance, scalability and power. :-)

> There are still some issues with scalability at the high end,
> and the hysteria regarding support - like Bill Gates is going to
> re-write Windows NT because you found a bug in it?

There is a lot of FUD around NT, most of which I attribute to Unix supporters
and to ignorance about using NT as a production platform. One should not sell
NT short, nor Microsoft's ability to assimilate new technologies into their
products.

I agree that NT as a high-end server platform is somewhat dubious. IMO NT and
Intel still need to prove themselves in this regard. But by the same token, I
have seen NT being used as a corporate platform and deliver. At the end of the
day, I think it comes down to us - the people responsible for development,
implementation and administration - and how well we can wield these tools.

One never blames the paintbrush for poor artwork. It is the artist who does the
painting. IMO it is the same when using NT or Unix, or Oracle or Informix... the
reliability of the system, the ability to meet the business demands, depends on
how well we wield these tools and very seldom on the tools themselves. Granted,
a sculptor will not use a paintbrush to chip away at a granite block. So you
need to use the right tools. But that does not mean that Unix is the only tool
that can deliver... or that Oracle is the only database that can do the job.

--
Billy

Dusan Bolek

Aug 3, 2001, 2:28:58 AM
si...@hotmail.com (Scott Moore) wrote in message news:<6b4abc8f.01080...@posting.google.com>...

> I have 2 instances, both about 7GB (and growing), so if you factor in
> memory for the system, databases and 150 oracle users, 16GB isn't too
> unreasonable.

Sizing database HW is a difficult task and I don't know much about
your databases, but I still think that this is overkill. One of my
databases also has 7GB+ and runs perfectly on an HP-UX box with 256MB!
You have 14 gigs in datafiles, but datafiles are supposed to be on
disk, not in memory. :-) So you just need some space for the SGA, the
system, ORA processes etc.; even with 150 users running on dedicated
connections, it can still run perfectly well on 2GB of RAM. Maybe your
system is really special and all 150 concurrent users need a lot of
memory to complete their tasks and I'm wrong - as I said, I don't know
your system - but my experience is different.
If you know that your system will grow significantly in the future,
to 50 GB or more, then you must be prepared, and buying more memory
can be reasonable. Another way is to buy less memory now and
upgrade your server in the future, but this is more a business and
management decision than a technical one.

P.S. One big insurance company is running an AIX system; they have
SAP on it, 800 users and 4 GB of memory, and everything runs fine. The
database has 60+ GB in datafiles.
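The back-and-forth over 2GB vs 16GB really comes down to assumptions about SGA size and per-session memory. A sketch of both positions in Python; every per-component figure below is assumed purely for illustration, not taken from either poster's actual system:

```python
# Two back-of-envelope RAM budgets for the same 2-instance, 150-user box.
# All component sizes here are illustrative assumptions.
GB = 1024 ** 3
MB = 1024 ** 2

def ram_budget(sga_per_instance, instances, users, per_session, os_overhead):
    """Total RAM = SGAs + memory per dedicated-server session + OS headroom."""
    return instances * sga_per_instance + users * per_session + os_overhead

# Lean sizing (the 2GB position): datafiles live on disk, so the SGA
# only caches the hot working set.
lean = ram_budget(256 * MB, 2, 150, 4 * MB, 512 * MB)

# Generous sizing (the 16GB position): cache nearly all of each 7GB
# instance in the buffer cache.
generous = ram_budget(6 * GB, 2, 150, 10 * MB, 1 * GB)

print(f"lean:     {lean / GB:.1f} GB")      # well under 2 GB
print(f"generous: {generous / GB:.1f} GB")  # most of the way to 16 GB
```

Neither number is "right"; the point is that both figures in the thread fall out of defensible assumptions, and the real question is how much of the 7GB per instance is actually hot.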

Keith Boulton

Aug 3, 2001, 5:48:36 AM

"Billy Verreynne" <vsl...@onwe.co.za> wrote in message
news:9kde2a$8tm$1...@ctb-nnrp2.saix.net...
> "Colin McKinnon" <co...@EditMeOutUnlessYoureABot.wew.co.uk> wrote

>
> One never blame the paintbrush for poor artwork. It is the artist who does the
> painting. IMO it is the same when using NT or Unix, or Oracle or Informix.. the
> reliability of the system, the ability to meet the business demands, depends on
> how well we wield these tools and very seldom on the tools itself. Granted, a
> sculptor will not use a paintbrush to chip away at a granite block. So you need
> to use the right tools. But that does not mean that Unix is the only tool that
> can deliver.. or Oracle is the only database that can do the job.
>

However, UNIX boxes are easy to configure. There is generally a set of text
files, one (or more) per component. Application software is largely separate
from the OS. Applications do not overwrite each other on install.
Applications do not overwrite OS files. The problem with NT is that there is
no configuration control. Applications you do not want on a server (e.g.
Internet Explorer) are mandatory for other software installs, for no good
reason. Given a set of applications to install, the sequence of installs has
to be exactly right, and this isn't documented. The result is a very high
risk of spontaneous lock-ups and crashes which are very hard to isolate to
any particular component, or even to decide between hardware and software
faults.

Partly, of course, it's because traditionally you had to manually configure
a UNIX server when setting it up, so you got to know where everything was.

Billy Verreynne

Aug 3, 2001, 6:46:46 AM
"Keith Boulton" <kbou...@ntlunspam-world.com> wrote

> However, UNIX boxes are easy to configure.

Hehehe.. this is something that the NT crowd will very much disagree with, given
their WIMP way of doing things. :-)

But I agree with you - Unix and Oracle are very much alike. You can roll up your
sleeves, fire up the command line, and do it "properly". Unlike NT and
SQL Server with their GUIs, which fail to provide the power and flexibility
that a command-line interface (with text-based config files) gives.

> The problem with NT is that there is no configuration control.

The Registry is a case of all the configuration eggs in one basket - the single
massive point of failure of Windows. At times I wonder wtf the kids at Redmond
were thinking when they decided on the Registry concept for Windows. The
problems with INI config files (with Windows 3.1 and WFW) could have been far
better solved than by throwing everything into a binary Registry "database".

> Partly, of course, it's because traditionally you had to manually configure
> a UNIX server when setting it up, so you got to know where everything was.

I agree. But again, a good sys admin knows most of these issues all too well,
and can work around them, or use special procedures to ensure that the risk
these issues pose is minimal.

Give me Unix over NT any day. But from a business perspective where cost is a
major issue, something like NT often satisfies the business requirements far
better. And I believe that is our prime job - addressing business needs. By
keeping them in business, we keep ourselves in business. IT/IS is a service
environment. We need to look after our customers' needs first and foremost.
Selecting technology just for the sake of technology is stupid. Even if
technology A is "better" (from a certain technical perspective) than
technology B.

IOW, if the customer wants a plain cheeseburger, give him that, and do not insist
that he has to have a Big Mac with chips and coke on the side. :-)

--
Billy
