This is very system-dependent.
> - would you configure SGA around 50 - 70% of physical RAM?
IMO, that is just a wild-ass-guess starting point from the days of
smaller systems. Really, you need to determine what-all is going on
on a fully loaded running system. Then iterate. With a mixed system,
you may want to allow some big PGA at times, to avoid spilling sorts
to disk, but that can have funny optimizer effects. 9i is different
from 10g, too; there's just no usable rule of thumb.
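If you want a cheap look at whether sorts are actually spilling
before you touch anything, something like this works (instance-wide
counters since startup; the v$pgastat names are 10g-flavored):

select name, value
  from v$sysstat
 where name in ('sorts (memory)', 'sorts (disk)');

-- optimal / onepass / multipass work area counts:
select name, value
  from v$pgastat
 where name like 'workarea executions%';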
> - would you use 8K block size of bigger blocks?
There have been some very interesting discussions of this (like this
classic: http://www.oraclealchemist.com/oracle/hey-guys-does-size-matter/),
which have convinced me to stay with 8K. But my systems are more
limited than yours.
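FWIW, if you ever do want to experiment, a non-default block size
needs its own buffer cache plus a tablespace created at that size. A
sketch (the datafile path and the sizes are made-up placeholders):

alter system set db_16k_cache_size = 256M;

create tablespace ts_16k
  datafile '/u01/oradata/ts_16k_01.dbf' size 1G
  blocksize 16K;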
> - would you allow automatic resizing of SGA components in 10g (ASMM)
> or use fixed sizes?
My gut feel is to use it, then turn it off if it starts being too
stupid, where "too stupid" means things like getting the tuning
exactly backwards because it is working with what-was. Trust your
testing more than my gut. There are stories floating about... my
general view is that all new features need some amount of time to
work out the bugs, and the greater the spread of your feature usage
in production, the more likely you are to hit something obscure,
which can mean "difficult to replicate and fix."
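For reference, flipping ASMM on and off is just sga_target in 10g; a
zeroed pool floats, a non-zero one becomes a floor. A sketch (the 1G
floor is only an example, and sga_target has to fit under
sga_max_size):

alter system set sga_target = 16G;
alter system set shared_pool_size = 1G;  -- a floor under ASMM, not fixed
alter system set db_cache_size = 0;      -- let ASMM drive the cache

-- and back off if it gets too stupid:
alter system set sga_target = 0;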
> - would you use some OS features like "large pages" on AIX in
> combination with LOCK_SGA?
I haven't been on AIX recently enough to comment. But it has always
seemed idiosyncratic among the Unixes, to me.
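The LOCK_SGA half of that is at least generic: a static parameter,
with the OS side of pinning to be checked per platform (memlock
limits on Linux; I won't guess at the AIX knobs):

alter system set lock_sga = true scope = spfile;
-- static: the SGA gets pinned at the next startup, if the OS allows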
Are you RAC?
jg
--
@home.com is bogus.
SAP and IBM say... what?
http://www.informationweek.com/news/software/database_apps/showArticle.jhtml?articleID=222600700&subSection=News
Let's say there is no RAC, only standalone instances.
Of course "start from something and then iterate" is the best approach
available. However, it is not very practical, because:
1. Changing SGA_MAX_SIZE requires an outage.
2. We found that on a heavily loaded system, dynamic changes of
DB_CACHE_SIZE don't work very well (see the resize sketch after this
list). For example, when we tried to resize DB_CACHE_SIZE from 20 GB
to 30 GB, it took almost an hour. During this time the database
experienced heavy waits on 'log file sync', but there were no log
switches and few transactions. When we tried to resize back to 20 GB,
the instance crashed. This was 9.2.0.8.
3. We would rather avoid significant one-shot changes (e.g. increasing
DB_CACHE_SIZE from 20 GB straight to 40 GB), which means we need to go
in small steps (20, 24, 28, 32, etc.). But this is time-consuming.
Many systems are on a weekly/monthly cycle (i.e. certain batch jobs
run once per month), so we would have to wait a month to see the
results. By that time the data may have changed.
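In 10g we can at least watch a resize in flight instead of flying
blind (9.2, where the crash happened, has no such views). A sketch,
with 24G as the next small step from the example above:

-- grow the cache one small step (dynamic, as long as sga_max_size
-- leaves headroom):
alter system set db_cache_size = 24G;

-- 10g only: the resize while it is still running...
select component, oper_type, initial_size, target_size, current_size
  from v$sga_current_resize_ops;

-- ...and the history of completed operations:
select component, oper_type, initial_size, final_size, status,
       start_time, end_time
  from v$sga_resize_ops
 order by start_time;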
I have 64-bit Linux, and here is one of my instances:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> show sga
Total System Global Area 1.7180E+10 bytes
Fixed Size                  2310752 bytes
Variable Size            8578973088 bytes
Database Buffers         8589934592 bytes
Redo Buffers                8650752 bytes
SQL>
$>grep -i huge /proc/meminfo
HugePages_Total: 8192
HugePages_Free: 961
Hugepagesize: 2048 kB
The answer to your question is: yes, I do use huge pages, because I
don't want to waste memory on page tables. Also, huge pages are never
swapped or paged out, so handling them is much simpler. You're
reducing your OS overhead by using huge pages.
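The sizing arithmetic behind the numbers above is simple: SGA size
divided by huge page size, rounded up, plus permission for oracle to
lock that much memory. A sketch (the memlock value is just an
example, comfortably above the SGA):

# ~17 GB SGA / 2 MB pages -> 8192 pages:
echo "vm.nr_hugepages = 8192" >> /etc/sysctl.conf
sysctl -p

# /etc/security/limits.conf, values in KB (20 GB here):
#   oracle  soft  memlock  20971520
#   oracle  hard  memlock  20971520

# bounce the instance, then verify the pages are actually used:
grep -i huge /proc/meminfo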
On an IBM server that has between 100 and 200 GB of RAM, I would urge
management to license the VLM ("Very Large Memory") option, which
would allow me to have 32K blocks and allocate at least half of the
memory to the SGA. I would then call Connor McDonald to try getting a
bad BCHR on that machine.
You're welcome. Just noticed this:
http://hoopercharles.wordpress.com/2010/01/31/faulty-quotes-5-block-sizes/
Thanks for the real data points. Practicality usually wins.
jg
--
@home.com is bogus.
http://www.cleveland.com/living/index.ssf/2010/02/bill_watterson_creator_of_belo.html
You have a 17 GB SGA, and db_cache_size = 8.5 GB. This leaves 8.5 GB
for the shared pool, large pool, and Java pool. Seems like too much?
Is ASMM enabled? I think 1 GB should be enough for the shared pool in
most cases.
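A quick way to check that instead of guessing (standard view; 'free
memory' only shows what is unused right now, not how much the pool
churns):

select pool, name, round(bytes/1024/1024) mb
  from v$sgastat
 where pool = 'shared pool'
   and name = 'free memory';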
> Linux is a bit different as there is no direct I/O (as far as I know),
> unless you use raw devices or ASM.
Huh? There is direct I/O; I have been using it for years, and I have
even written about it.
> So memory that doesn't get allocated to SGA will be used for filesystem
> buffer cache.
Yup. That's why I use direct I/O.
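On Linux it is a single static parameter, assuming the filesystem
plays along; SETALL asks for direct and asynchronous I/O together:

alter system set filesystemio_options = 'SETALL' scope = spfile;
-- static: takes effect after the next bounce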
snip
On a 32 GB Linux system I would use huge pages and probably set up a
16 GB SGA for a single instance.
There are lots of rumours of problems with huge pages and ASMM in
10g/11g, so I would stay away from that combination.
Probably go for a 16 GB SGA and 8 to 10 GB of PGA (aggregate target),
and leave some memory left over, since the aggregate target is not a
hard limit.
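In parameter terms, that layout is roughly the sketch below (the
values just restate the suggestion above; sga_max_size is static, so
it needs a bounce):

alter system set sga_max_size = 16G scope = spfile;  -- static
alter system set sga_target   = 16G scope = spfile;
alter system set pga_aggregate_target = 9G;  -- dynamic; a target, not a cap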
Throw most of your SGA memory at the buffer cache. How big does your
shared pool really need to be? Got any monster apps with a ton of
dynamic SQL that is not using bind variables?
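A rough way to spot that from the shared pool itself is to look for
piles of statements that are identical up to their leading characters
(the 60-character stub and the >100 cutoff are arbitrary choices):

select substr(sql_text, 1, 60) stub, count(*) copies
  from v$sqlarea
 group by substr(sql_text, 1, 60)
having count(*) > 100
 order by copies desc;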
Except for one system running Solaris, all my stuff is Linux these
days (OEL 5.4), so I'm not sure exactly on your AIX question, but if
it makes sense to reserve memory permanently for Oracle (that's what
huge pages are), then probably go for it.
Here's a "must read" for Oracle configuration & tuning on AIX:
http://www.ibm.com/developerworks/wikis/download/attachments/104533513/Oracle_AIX+Tuning+1.pdf
--
Jeroen
> On a 32 gig linux system I would use huge pages and probably setup 16
> gig for a one instance SGA.
Yup. That's precisely what I did.