Performance/Bottleneck Tuning

chet justice

Apr 2, 2010, 4:28:58 PM
to obiee-enterpri...@googlegroups.com
I'm curious as to where people start when a user reports that a report is slow, as it relates to OBIEE and disparate sources (not necessarily Oracle).

This is my methodology:
Review physical SQL being submitted (which is sometimes extremely tedious)
Review physical mappings in OBIEE
If Oracle, run explain plans on SQL

From there I can (quickly) see if something is amiss.  Maybe a missing index or something.  Perhaps the report is passing the keys.  Etc.
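For the explain-plan step, a minimal sketch on Oracle (the table and column names here are hypothetical, standing in for whatever physical SQL the BI Server logged):

```sql
-- Paste the physical SQL captured from the BI Server query log,
-- then display the optimizer's plan for it.
EXPLAIN PLAN FOR
SELECT d.calendar_year, SUM(f.revenue)
FROM   sales_fact f
JOIN   time_dim d ON d.time_key = f.time_key
GROUP  BY d.calendar_year;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

A full table scan on the fact table, or a missing index on the join key, usually shows up immediately in the plan output.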

After that it seems to get more into database tuning which is probably outside the scope of this group.

What steps do you take?

chet

http://oraclenerd.com
http://www.linkedin.com/in/chetjustice

je...@brewpalace.com

Apr 2, 2010, 6:23:05 PM
to obiee-enterpri...@googlegroups.com


I guess I'm pretty similar:

Look at the physical SQL
** Compare it to what I know it should be **
Fix what is causing the differences (report or BI Server)
Reports: looking primarily for extra columns causing a lowered grain
BI Server/Physical SQL:
* proper # of queries
* extra joins
* incorrect grain/group bys
* not function shipping
Then tune the model to match the SQL statement in the database as needed
Build aggregates as needed, and as a last resort.
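As an illustration of the "lowered grain" symptom Jeff mentions (schema hypothetical): one extra column dragged into a report drops the GROUP BY to a finer grain than the measure needs, and the extra rows then have to be re-aggregated by the BI Server or a pivot view.

```sql
-- Intended grain: one row per year
SELECT d.calendar_year, SUM(f.revenue)
FROM   sales_fact f JOIN time_dim d ON d.time_key = f.time_key
GROUP  BY d.calendar_year;

-- Lowered grain: the extra column forces one row per day, and the
-- BI Server must summarise the surplus rows in its own memory
SELECT d.calendar_year, d.calendar_date, SUM(f.revenue)
FROM   sales_fact f JOIN time_dim d ON d.time_key = f.time_key
GROUP  BY d.calendar_year, d.calendar_date;
```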

Jeff M.

Mark Rittman

Apr 3, 2010, 9:37:49 AM
to obiee-enterprise-methodology
This is an interesting question. As it happens, I'm off to a client site next week where my task is to resolve a number of performance issues with their system, including

- reports that run too slow
- aggregates in the database not getting used
- general sluggishness of the system

As such, I'll be proposing a methodology and approach at the start of the engagement to work through these issues. I'll update this thread with what works and the approach I took.

Stepping back from it for a moment, my typical approach is to try and (a) find out what response time is required, (b) work out what's causing the issue, and then (c) propose a solution that resolves this particular issue. This is in contrast to the approach that is often used where you end up suggesting a bunch of things that worked for you in the past - add indexes, add materialized views, turn on caching etc - in the hope that one of them might randomly work.

With all of these things though, the first thing you need to do is sit down with the users and find out what the business process is, what is the performance issue that they are hitting, and what to them would be an acceptable response time at the end of the exercise. Then I would try and capture some diagnostics for the particular situation that they are trying to resolve, such as

- physical SQL from the log file
- execution plans, either gathered after the event or ideally, in real-time using the database sql cache
- system-wide diagnostics such as the load on the server, number of users etc on the system
- system-wide configuration data such as whether caching is enabled, how security is set up
- response times for queries across a typical day
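For the second item in that list, one way to pull a plan straight from the shared SQL area on Oracle 10g and later (the SQL text shown is a hypothetical example of BI Server-generated SQL; the SQL_ID would come from matching on your own physical SQL):

```sql
-- Find the cursor for the physical SQL the BI Server submitted...
SELECT sql_id, child_number, elapsed_time, executions
FROM   v$sql
WHERE  sql_text LIKE 'select sum(T12345.REVENUE)%';

-- ...then show the plan the database actually used for it
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));
```

This gives you the real runtime plan rather than a plan re-estimated after the event.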

That said, nine times out of ten the culprit for a slow-running query is an inefficient SQL statement and/or execution plan, particularly if the BI EE system has been developed without much DBA input and this is the first time anyone has looked at an execution plan for the system's queries. Then you get into classic database tuning and the use of Oracle DW features (bitmap indexes, star transformations, partitioning, MVs etc), which have often been overlooked but can reduce the cost of queries to 5%, 10% etc of their current cost.
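As a sketch of the MV item in that list (names hypothetical), a summary table the optimizer can transparently rewrite dashboard queries against:

```sql
-- Pre-aggregated summary; with query rewrite enabled, queries against
-- sales_fact at month/product grain can be redirected here
CREATE MATERIALIZED VIEW sales_by_month_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT time_month_key, product_key, SUM(revenue) AS revenue
FROM   sales_fact
GROUP  BY time_month_key, product_key;
```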

I'll update this thread later in the week with the approach I took, and what solutions worked in the end.

Mark
--
Mark Rittman
Director
M:+44 (0) 7866 568 246
F: +44 (0) 1273 784 960
E: mark.r...@rittmanmead.com

Voted by UKOUG Members as UKOUG Business Intelligence Partner of the Year, 2008/2009

www.rittmanmead.com

Registered Office : Third Floor South Suite, 1 Jubilee Street Brighton, BN1 1GE, United Kingdom
Company No. : 6032852
VAT No. : 900 3839 48

Please note that this email communication is intended only for the addressee and may contain confidential or privileged information. The contents of this email may be circulated internally within your organisation only and may not be communicated to third parties without the prior written permission of Rittman Mead Consulting.  This email is not intended nor should it be taken to create any legal relations, contractual or otherwise.

Michael Wilcke

Apr 3, 2010, 12:03:15 PM
to OBIEE Enterprise Methodology Group
Our way of analysing performance problems looks much the same. Most often we end up tuning the database.

Recently, though, we hit a performance issue that we could not handle this way, and I think it is worth mentioning here:

Performance was sluggish on some clients while others worked smoothly. The reported physical query time was a second or so, and checking the clients, their network attachment etc. didn't help. In the end we identified the dashboard design as the cause. That dashboard made extensive use of pivot tables: depending on the dashboard prompt settings, a query could return, say, 100 objects that were each displayed in a section of a pivot table. This caused the HTML page to become very large (5-10 MB), and depending on the client's RAM and browser the result was extremely slow rendering of the Presentation Server's responses.

Michael

Jit Dutta

Apr 3, 2010, 2:01:36 PM
to obiee-enterpri...@googlegroups.com
On the topic of DW indexes: in OOTB OBIA, the fact table foreign key columns come with bitmap indexes (for a DW on Oracle). Is there a reason why Oracle applies bitmap rather than regular indexes on the FKs in the fact tables? Is it advisable to apply regular indexes when custom FKs are added to the OOTB fact tables?

Thanks,

Jit



Mark Rittman

Apr 3, 2010, 3:04:06 PM
to OBIEE Enterprise Methodology Group
Hi Jit

I would imagine the reason the BI Apps creates bitmap indexes on the
fact table foreign keys is so that a star transformation can take
place when the fact table is joined to the dimensions. See this blog
post on the background to star transformations, and the role that
fact table bitmap indexes play in them:

http://www.rittmanmead.com/2008/12/27/so-how-to-star-transformations-actually-work/

However, if I remember correctly, the BI Apps doesn't create foreign
key constraints between the fact table and the dimension tables, which
are also a prerequisite for a star transformation. So OOTB I don't
think star transformations would happen anyway, until you add these FK
constraints (and change the STAR_TRANSFORMATION_ENABLED database
parameter to TRUE, amongst other things).
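A sketch of those prerequisites (the table and column names are hypothetical; check the actual OOTB OBIA schema and test on your own system before applying):

```sql
-- Bitmap index on each fact foreign key (the BI Apps creates these OOTB)
CREATE BITMAP INDEX sales_fact_time_bix ON sales_fact (time_key);

-- FK constraint the transformation also needs; RELY DISABLE NOVALIDATE
-- declares the relationship without re-checking existing warehouse rows
ALTER TABLE sales_fact ADD CONSTRAINT sales_fact_time_fk
  FOREIGN KEY (time_key) REFERENCES time_dim (time_key)
  RELY DISABLE NOVALIDATE;

-- Enable the feature (TEMP_DISABLE is a common alternative to TRUE)
ALTER SESSION SET star_transformation_enabled = TRUE;
```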

regards

Mark


EvanWilson

Apr 3, 2010, 11:34:23 PM
to OBIEE Enterprise Methodology Group
One of the most common things I have found is reports with extra
columns causing a lower grain than required (as mentioned by Jeff). A
lot of report writers like using pivot tables for certain
functionality, but this can be "dangerous" because it can hide the
query grain, and the BI Server will use its own resources to summarise
the extra records.

We currently have an issue where a query from the BI Server takes 3
minutes (per the BI Server and DB logs) but the same physical query
takes 30 seconds if run directly on the DB. If I find out why this is
happening I'll report back. Mark, this may be something to look out
for at your client site.

I'm also a fan of seeding the cache with "superset" reports if there
are commonly used reports that take a long time to execute (e.g.
requiring a COUNT DISTINCT on a large fact table using time-series
functions).

Evan.

Venkat

Apr 4, 2010, 9:04:31 AM
to OBIEE Enterprise Methodology Group
I always start my analysis with the following approach:

1. Note the physical SQL and the corresponding time
2. Note the overall report execution time

There are four generic possibilities that can slow down a report:

1. The physical SQL generated is not optimized - this relates directly to repository design.
2. The physical SQL generated is correct, but a lot of calculations are being pushed into BI Server memory. There are multiple reasons why this can happen:
   a. Lots of logical calculations across measures in different logical fact tables - could be a case where a physical conforming SQL join is not enforced.
   b. Pivot tables - possibly a lot of columns pushed to the excluded section of a pivot table. Here the BI Server issues a lower-grain SQL query and then has to generate the pivot report out of it in its own memory.
   c. Reporting across data in different data sources - again, possibly a wrong data model in the RPD, or it could be a case where the data sources really are different.
3. The query response times are fast (both logical and physical), but a bad dashboard design, where multiple views from a single report are included separately inside a dashboard. Or it could just be a case of filters not being applied properly (tune the report itself).
4. Everything is correct, and it's simply a question of tuning the underlying database (indexes etc. in the case of relational sources, or DYNCALCCACHE etc. in the case of Essbase sources).

In most cases it will be one of the above four, in that order. So I
always start by understanding what the report does, and then try to
come up with a SQL query of my own. Then I compare and contrast it with
what BI EE fires (RPD design is very important in this case). If there
is not much of a difference, then I tend to look at ways of tuning the
report itself, like converting pivot tables to normal tables - in many
cases almost all of the functionality can be achieved in a normal
table view. In most cases the cause will be either sub-optimal SQL
queries, BI Server in-memory calculations, or bad report design. Only
after eliminating these three will I look at ways of tuning the
database.

-Venkat

Vlad Valeyev

Apr 4, 2010, 5:05:44 PM
to OBIEE Enterprise Methodology Group
I agree with Mark's general approach but would add one more step to
it. In my experience, slow-running reports nine times out of ten come
from a client's attempt to dump the entire DW into an Excel
spreadsheet for further manipulation. When you try to get hundreds of
thousands of sorted rows onto a report, it's hard to expect any quick
response. So when, at step one, you're trying to set the tuning scope
(i.e. exactly which reports need tuning and what the expected response
threshold is), the expectation needs to be set that a report returning
more than NN pages (my benchmark is usually 10 pages) should not
really be considered for tuning but rather for redesign. 10 pages is
definitely an exaggerated target, and even a bigger result could be
retrieved in under 20 seconds, but it helps set the client's mind in
the right direction - OBIEE should be an analytical tool, not a
download utility.


chet justice

Apr 4, 2010, 9:31:32 PM
to obiee-enterpri...@googlegroups.com
What about the "stitching" thing?  I haven't seen that mentioned yet (or maybe it was implied and I'm still just a newbie).

I've noticed that when data is accessed within the same database but across different connection pools, OBI will send separate queries.  I'm guessing this is logical, as OBI cannot assume that a given connection pool has the same SELECT privileges as another.

Once you get these queries to use the same connection pool, you can see the join being pushed down and performance improving - from two separate queries to one.

I guess this would apply to disparate or federated sources as well.  Avoiding having OBI do the joins is probably a good thing, as joining is a strength of databases.
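To make the symptom concrete (schema names are hypothetical): with two connection pools the BI Server issues two physical queries and stitches the result sets itself, while with one pool it can ship a single joined query to the database.

```sql
-- Two connection pools: two physical queries; the BI Server joins
-- ("stitches") the result sets in its own memory
SELECT order_key, revenue FROM sales.sales_fact;
SELECT order_key, cost    FROM finance.cost_fact;

-- One connection pool: the join is function-shipped to the database
SELECT s.order_key, s.revenue, c.cost
FROM   sales.sales_fact s
JOIN   finance.cost_fact c ON c.order_key = s.order_key;
```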

chet





je...@brewpalace.com

Apr 5, 2010, 3:58:57 PM
to obiee-enterpri...@googlegroups.com

Chet -

The stitching thing really boils down to: in the database or in OBI?  Normally you'll have one connection pool.  However, you will encounter multi-pass SQL - where two record sets are needed from either the same fact table (e.g., time series) or different ones.  When that occurs, you need a way to merge the record sets.

I would try both options with your particular database and see which works better - one big query with a full outer join, or two smaller ones that OBI then merges.

Also, if you end up switching from one technique to the other, REGRESSION TEST everything - I've noticed discrepancies when doing this before.
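For reference, the "one big query" shape Jeff describes might look like this for two measures from different fact tables (schema hypothetical):

```sql
-- Single-pass alternative: full outer join of the two aggregates,
-- so customers present in only one fact table still appear
SELECT COALESCE(a.cust_key, t.cust_key) AS cust_key,
       a.actual_revenue,
       t.target_revenue
FROM   (SELECT cust_key, SUM(revenue) AS actual_revenue
        FROM   sales_fact  GROUP BY cust_key) a
FULL OUTER JOIN
       (SELECT cust_key, SUM(target)  AS target_revenue
        FROM   target_fact GROUP BY cust_key) t
ON     t.cust_key = a.cust_key;
```

Whether this beats two smaller queries merged by OBI depends on the database and data volumes, which is exactly why trying both is worthwhile.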

Jeff M.


Robin Moffatt

Apr 6, 2010, 12:02:40 PM
to OBIEE Enterprise Methodology Group
Hey chet,

> I'm curious as to where people start when a user reports that a report is
> slow

Taking your question at face value, I would suggest one should always
start here: http://carymillsap.blogspot.com/2009/12/my-whole-system-is-slow-now-what.html

All the other stuff on this thread is good, but Cary's post was a bit
of a revelation for me (it probably shouldn't have been). Along the
lines of what Mark said, it's too easy to get lost in the depths of
execution plans etc., tuning for an assumed or mythical goal.

Other things:
- Define what you're testing. Which report? Run with which parameter
values? Along with what Mark mentioned about the scatter-gun approach,
not nailing down the target is a frequent problem.
- Ideally, make sure you've got baselines for "normal" performance;
otherwise you don't know what you're trying to fix, other than a
user's suggestion that it "didn't used to take this long".
- Use Usage Tracking data to look at report response time trends -
when did the run time change? Make sure you've got QUERY_TEXT defined
long enough to capture the full text of what users are up to
(http://rnm1978.wordpress.com/2009/10/06/usage-tracking-only-half-the-story/)
- A good post from Mark that I was reading just yesterday :)
http://www.rittmanmead.com/2008/11/28/thoughts-on-obiee-performance-optimization-diagnostics/
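A sketch of the kind of Usage Tracking query this enables, against the standard S_NQ_ACCT table (column names as in the usual OBIEE 10g usage-tracking schema - verify against your own install):

```sql
-- Response-time trend per dashboard/report, by day
SELECT TRUNC(start_ts)      AS run_day,
       saw_dashboard,
       saw_src_path,
       COUNT(*)             AS runs,
       AVG(total_time_sec)  AS avg_secs,
       MAX(total_time_sec)  AS worst_secs
FROM   s_nq_acct
GROUP  BY TRUNC(start_ts), saw_dashboard, saw_src_path
ORDER  BY run_day;
```

A sudden step change in avg_secs for one report is a far better tuning target than a vague "it's slow".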

cheers,

Robin
http://rnm1978.wordpress.com/
