We have an application built around an Oracle database that processes
hundreds of millions of records per day. A number of tables in the
database are hit very hard, with reads and writes to a set of tables
occurring on a near-random basis (dictated by the data received). I'm
trying to develop a benchmarking exercise to understand the I/O
requirements of the system, since I/O is the bottleneck.
- Can anyone suggest how I might measure the I/O requirements for the
different tablespaces and/or tables in the database per unit time?
- What kind of information do storage engineers need in order to
configure their array to meet application I/O requirements?
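
For context, this is a sketch of the sort of query I've been considering as a starting point, based on the cumulative per-datafile counters in v$filestat (these count since instance startup, so I assume I'd need to sample twice and take the delta to get a rate per unit time):

```sql
-- Cumulative physical read/write counts per tablespace since instance
-- startup; sample twice over a known interval and subtract to get
-- reads/writes per second.
SELECT ts.name        AS tablespace,
       SUM(fs.phyrds)  AS physical_reads,
       SUM(fs.phywrts) AS physical_writes
FROM   v$filestat  fs
       JOIN v$datafile   df ON df.file# = fs.file#
       JOIN v$tablespace ts ON ts.ts#   = df.ts#
GROUP  BY ts.name
ORDER  BY physical_reads + physical_writes DESC;
```

I'm not sure whether datafile-level counters are granular enough, or whether I should be looking at segment-level statistics instead to get per-table numbers.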