Recently, we experienced a performance issue that we could not handle
this way. I think it is worth mentioning here:
We had sluggish performance on some clients, while others worked
smoothly. The reported physical query time was a second or so, and
checking the clients, their network attachment etc. didn't help. In
the end we identified the dashboard design as the cause. In that
dashboard we made extensive use of pivot tables. Depending on the
dashboard prompt settings, query results could return, say, 100
objects that were each displayed in a section of a pivot table. This
caused the HTML page to become very large (5-10 MB). Depending on the
client's RAM and browser, the result was extremely slow rendering of
the Presentation Server responses.
Michael
I would imagine the reason that the BI Apps creates bitmap indexes on
the fact table foreign keys is so that a star transformation can take
place when the fact table is joined to the dimensions. See this blog
post on the background to star transformations, and the role that
fact table bitmap indexes play in them:
http://www.rittmanmead.com/2008/12/27/so-how-to-star-transformations-actually-work/
However, if I remember correctly, the BI Apps doesn't create foreign
key constraints between the fact table and the dimension tables, which
is also a prerequisite for a star transformation. So OOTB I don't
think star transformations would happen anyway, until you add these FK
constraints (and set the STAR_TRANSFORMATION_ENABLED database
parameter to TRUE, amongst other things).
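As a rough sketch of what adding those prerequisites would involve (the table and column names here are hypothetical placeholders, not the actual BI Apps ones):

```sql
-- Hypothetical fact/dimension names for illustration only; substitute
-- the real BI Apps table names. A RELY DISABLE NOVALIDATE constraint
-- tells the optimizer the relationship holds, without the cost of
-- enforcing it during ETL loads:
ALTER TABLE w_sales_f
  ADD CONSTRAINT fk_sales_cust
  FOREIGN KEY (customer_wid)
  REFERENCES w_customer_d (row_wid)
  RELY DISABLE NOVALIDATE;

-- Then enable star transformation for the session (or system-wide):
ALTER SESSION SET star_transformation_enabled = TRUE;
```

With the constraints in place and the parameter set, the optimizer can consider the star transformation path for fact-to-dimension joins.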
regards
Mark
On Apr 3, 7:01 pm, Jit Dutta <jdutta...@hotmail.com> wrote:
> On the topic of DW indexes: in OOTB OBIA the fact table foreign key columns come with bitmap indexes (for a DW on Oracle). Is there a reason why Oracle applies bitmap rather than regular indexes to the FKs in the fact tables? Is it advisable to apply regular indexes when custom FKs are added to the OOTB fact tables?
>
> Thanks,
>
> Jit
>
> Date: Sat, 3 Apr 2010 14:37:49 +0100
> Subject: Re: [OBIEE EMG] Performance/Bottleneck Tuning
> From: mark.ritt...@rittmanmead.com
> To: obiee-enterpri...@googlegroups.com
>
> This is an interesting question. As it happens, I'm off to a client site next week where my task is to resolve a number of performance issues with their system, including
>
> - reports that run too slowly
> - aggregates in the database not getting used
> - general sluggishness of the system
>
> As such, I'll be proposing a methodology and approach at the start of the engagement to work through these issues. I'll update this thread with what works and the approach I took.
>
> Stepping back from it for a moment, my typical approach is to try and (a) find out what response time is required, (b) work out what's causing the issue, and then (c) propose a solution that resolves this particular issue. This is in contrast to the approach that is often used where you end up suggesting a bunch of things that worked for you in the past - add indexes, add materialized views, turn on caching etc - in the hope that one of them might randomly work.
>
> With all of these things though, the first thing you need to do is sit down with the users and find out what the business process is, what is the performance issue that they are hitting, and what to them would be an acceptable response time at the end of the exercise. Then I would try and capture some diagnostics for the particular situation that they are trying to resolve, such as
>
> - physical SQL from the log file
> - execution plans, either gathered after the event or, ideally, in real time using the database SQL cache
> - system-wide diagnostics such as the load on the server, number of users etc. on the system
> - system-wide configuration data such as whether caching is enabled, how security is set up
> - response times for queries across a typical day
>
> That said, nine times out of ten the culprit for a slow-running query is an inefficient SQL query and/or execution plan, particularly if the BI EE system has been developed without much DBA input and this is the first time anyone has looked at an execution plan for the system's queries. Then you get into classic database tuning and the use of Oracle DW features (bitmap indexes, star transformations, partitioning, MVs etc.), which have often been overlooked but can reduce the cost of queries to 5%, 10% etc. of their current cost.
>
> I'll update this thread later in the week with the approach I took, and what solutions worked in the end.
>
> Mark
>
> On Fri, Apr 2, 2010 at 11:23 PM, j...@brewpalace.com <j...@brewpalace.com> wrote:
>
> I guess I'm pretty similar:
>
> Look at physical SQL
> ** Compare to what I know it should be **
> Fix what is causing the differences (report or BI Server)
>
> Reports: looking for primarily if there are extra columns causing a lowered grain
> BI Server/Physical SQL:
> * proper # of queries
> * extra joins
> * incorrect grain/group bys
> * not function shipping
> Then tune the model to match the SQL statement in the database as needed
>
> Build aggregates as needed and as a last resort.
>
> Jeff M.
>
> --
> Mark Rittman
> Director
> M:+44 (0) 7866 568 246
> F: +44 (0) 1273 784 960
> E: mark.ritt...@rittmanmead.com
>
> Voted by UKOUG Members as UKOUG Business Intelligence Partner of the Year, 2008/2009
>
> www.rittmanmead.com
>
We currently have an issue where a query from the BI Server takes 3
minutes (as per the BI Server and DB logs) but the same physical query
takes 30 seconds if run directly on the DB. If I find out why this is
happening I'll report back. Mark, this may be something to look out
for on your client site.
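One thing worth checking in that situation (a sketch, assuming an Oracle back-end; the `'%W_SALES_F%'` filter is just a placeholder for some distinctive text from the query) is whether the BI Server's session is getting a different execution plan from your direct run, for example because of different optimizer settings in the connection pool session:

```sql
-- Locate the cursor(s) for the physical query in the shared SQL area:
SELECT sql_id, child_number, plan_hash_value,
       elapsed_time / 1e6 AS elapsed_secs
FROM   v$sql
WHERE  sql_text LIKE '%W_SALES_F%';

-- Then pull the actual runtime plan and statistics for a cursor:
SELECT *
FROM   TABLE(dbms_xplan.display_cursor('&sql_id', NULL, 'ALLSTATS LAST'));
```

A different PLAN_HASH_VALUE between the BI Server run and the direct run would point at session-level differences rather than the SQL itself.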
I'm also a fan of seeding the cache with "superset" reports if there
are commonly used reports that take a long time to execute (e.g.
requiring a count distinct on a large fact table using time-series
functions)
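For anyone who hasn't automated the seeding, one common approach (sketched here; the DSN, credentials, file name, subject area and columns are all hypothetical) is to run the seeding logical SQL through nqcmd from a scheduled job, so the cache is warm before users arrive:

```sql
-- seed_cache.sql : logical SQL for the "superset" request, run via e.g.
--   nqcmd -d AnalyticsWeb -u Administrator -p password -s seed_cache.sql
-- Subject area and column names below are placeholders.
SELECT "Sales"."Customer Name", "Sales"."Revenue"
FROM "Sales"
```

Subsequent user requests that are subsumed by this query can then be answered from the BI Server cache rather than hitting the database.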
Evan.
1. Note the physical SQL and the corresponding time
2. Note the overall report execution time
There are 4 generic possibilities which can slow down a report:
1. The physical SQL generated is not optimized - this relates directly
to the repository design
2. The physical SQL generated is correct, but a lot of calculations
are being pushed into BI Server memory. There are multiple reasons why
this can happen:
     a. A lot of logical calculations across measures in different
logical fact tables - could be a case where the physical conforming
SQL join is not enforced
     b. Pivot tables - a possibility where a lot of columns are pushed
to the excluded section of the pivot table - here the BI Server will
push down a lower-grain SQL query and then has to generate the pivot
report out of it in its own memory
     c. Reporting across data in different data sources - again,
possibly a wrong data model in the RPD, or it could be a case where
the data sources are actually different
3. The query response times are fast (both logical and physical) but
the dashboard design is bad - multiple views from a single report
included separately inside a dashboard. Or it could just be a case of
filters not being applied properly (tune the report itself)
4. Everything is correct, but it's a question of tuning the underlying
database (indexes etc. in the case of relational sources, or
DYNCALCCACHE etc. in the case of Essbase sources)
In most cases it will be one of the above 4, in that order. So I
always start with understanding what the report does, and then try to
come up with a SQL query of my own. Then I compare and contrast it
with what BI EE fires (RPD design is very important in this case). If
there is not much of a difference, then I tend to look at means of
tuning the report itself, like converting pivot tables to normal
tables - in many cases most of the functionality can be achieved in a
normal table view. In most cases the problem will be either
sub-optimal SQL queries, BI Server in-memory calculations, or bad
report design. Only after eliminating these 3 will I look at ways of
tuning the database.
-Venkat
> I'm curious as to where people start when a user reports that a report is
> slow
Taking your question at face value, I would suggest one should always
start here: http://carymillsap.blogspot.com/2009/12/my-whole-system-is-slow-now-what.html
All the other stuff on this thread is good, but Cary's post was a bit
of a revelation for me (it probably shouldn't have been). Along the
lines of what Mark said, it's sometimes too easy to get lost in the
depths of execution plans etc., tuning for an assumed or mythical goal.
Other things :
- Define what you're testing. Which report? Run with which parameter
values? Along with the scatter-gun approach Mark mentioned, not
nailing down the target is a frequent problem.
- Ideally, make sure you've got baselines for "normal" performance;
otherwise you don't know what you're trying to fix, other than a
user's suggestion that it "didn't used to take this long".
- Use Usage Tracking data to look at report response time trends -
when did the run time change? Make sure you've got QUERY_TEXT defined
long enough to capture the full text of what users are up to
(http://rnm1978.wordpress.com/2009/10/06/usage-tracking-only-half-the-story/)
- A good post from Mark that I was reading just yesterday :)
http://www.rittmanmead.com/2008/11/28/thoughts-on-obiee-performance-optimization-diagnostics/
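A quick way into those response time trends (assuming Usage Tracking is writing to the standard S_NQ_ACCT table; the dashboard name in the filter is a placeholder):

```sql
-- Daily response time trend for one dashboard from Usage Tracking:
SELECT TRUNC(start_ts)               AS run_day,
       COUNT(*)                      AS queries,
       ROUND(AVG(total_time_sec), 1) AS avg_secs,
       MAX(total_time_sec)           AS worst_secs
FROM   s_nq_acct
WHERE  saw_dashboard LIKE '%Sales Overview%'  -- placeholder name
GROUP  BY TRUNC(start_ts)
ORDER  BY run_day;
```

A step change in avg_secs on a particular day is a good prompt to ask what was released or changed around that date.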
cheers,