|"Destination: 220.127.116.11" in-place upgrades||Christian Berg||5/24/12 1:44 AM|
Good morning all,
This is a belated follow-up on the whole in-place vs. out-of-place topic (big h/t to Christian Screen here!) since we need to warm up this discussion in light of 11.1.1.6's different behaviour compared to 11.1.1.5.
After in-place upgrades of several environments, Michal Zima and I realised independently of each other that something is quite rotten in the state of EM:
The diagnostics backbone does not work properly anymore after the in-place upgrade since all managed component targets as well as their respective target types are being dropped from the EM-integrated diagnostics.
The instructions in chapter "1.7 Moving from 11.1.1.3 or 11.1.1.5 to 11.1.1.6" of the "Upgrade Guide" -> http://docs.oracle.com/cd/E23943_01/bi.1111/e16452/bi_plan.htm#BABECJJH for doing an in-place upgrade (rather than an out-of-place one) were followed each time, and the setups on which this was validated comprise Linux instances (OEL and SLES) as well as Windows ones. (And yes, upgradenonj2ee was run in every case.)
For example: a non-patched v107 SampleApp contains 29 configured log targets and 7 available log target types. After applying 11.1.1.6 in-place, these numbers drop to 25 log targets and 2 log target types. Since even the log target types are missing, it is impossible to manually re-configure and expose the managed component logs again.
I.e. pre-upgrade the following log target types exist:
Oracle WebLogic Server
Oracle BI Server
Oracle BI Cluster Controller
Oracle BI Presentation Server
Oracle BI JavaHost
Oracle BI Scheduler
whereas after the upgrade only Oracle WebLogic Server and Application Deployment remain.
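For reference, the before/after difference boils down to a simple set comparison (a sketch; the pre-upgrade list above shows six of the seven types named in this thread):

```python
# Log target types visible in EM-integrated diagnostics before the
# in-place upgrade (as listed above) vs. after it.
before = {
    "Oracle WebLogic Server",
    "Oracle BI Server",
    "Oracle BI Cluster Controller",
    "Oracle BI Presentation Server",
    "Oracle BI JavaHost",
    "Oracle BI Scheduler",
}
after = {"Oracle WebLogic Server", "Application Deployment"}

# Everything the upgrade dropped -- exactly the managed component types.
dropped = sorted(before - after)
print(dropped)
```

Every dropped type is a BI managed component; only the WLS-level types survive.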
Sure, the WLS and application deployment logs are useful and highly critical for environment administration, but the actual log messages of interest are stored in the managed component log files!
Oracle has validated this as a bug just this morning, but I am curious whether we're the only ones facing this (as Oracle has claimed so far) and whether anyone knows how to hack around this if push comes to shove and Oracle schedules it as "fixed in 11.1.1.7".
|Re: [OBIEE EMG] "Destination: 22.214.171.124" in-place upgrades||Mark Rittman||5/24/12 10:59 PM|
If there is still a problem with the documentation (and/or the patch upgrade process), then you should raise it directly with Nick Tuson - he mentioned at the Brighton BI Forum (after taking loads of stick from us regarding this topic) that if there were still issues, we should put them in an email to him and he'd follow up. I think they're of the opinion that everything's fine (more or less) with the 11.1.1.5 > 6 in-place upgrade process, but he was concerned to hear that this wasn't the case. Raise it with Nick directly (and copy the list back with the outcome).
|Re: [OBIEE EMG] "Destination: 188.8.131.52" in-place upgrades||Christian Berg||5/25/12 12:08 AM|
Thanks for that. After discussing with Nick at the forum, both Michal and I kept him up to date with all in-place upgrade issues via email - this issue included and especially emphasized. No result so far.
I will definitely keep the list up-to-date with whatever outcome there will be.
Just to re-iterate why I decided to post this despite ongoing SRs: I was hoping for validation or contradiction of our problem by other parties, so as to have a larger base for the statement rather than making a blanket claim when it might just have been me and Michal (no offence, buddy :-)) who screwed something up.
Secondly, I was especially hoping that someone had already done more digging than either of us (or Oracle Support, for that matter) and had an idea about the "why, how and where it goes wrong technically" (control files, instance/component registrations, etc.).
|Re: [OBIEE EMG] "Destination: 184.108.40.206" in-place upgrades||Mark Rittman||5/25/12 12:22 AM|
No problem - just thought it worth raising the other route too.
|Re: [OBIEE EMG] "Destination: 220.127.116.11" in-place upgrades||Christian Berg||6/7/12 11:53 PM|
Good morning everybody,
so Oracle has finally reproduced this by upgrading a vanilla 11.1.1.5 themselves, and the results are clear: in-place upgrades damage Enterprise Manager's log diagnostics capabilities. No workaround, no fix.
Here's the bug: 14111737: DIAGNOSTIC LOGS MISSING AFTER OBIEE INPLACE UPGRADE FROM 11.1.1.5 TO 11.1.1.6
Intermediate patch levels (11.1.1.6.1 and 11.1.1.6.2) have been tested and log diagnostics cannot be re-established. Two separate SRs have been escalated for this as of this week.
|Re: [OBIEE EMG] "Destination: 220.127.116.11" in-place upgrades||Christian Berg||6/10/12 12:49 PM|
No, clean installs are fine from what I have seen so far (only 'nix flavours though).
Re-installation is simply a show-stopper for some clients.
|Performance Testing for OBIEE||Manu||9/12/12 4:16 AM|
I understand this topic has been discussed at length earlier, but I would like to open it up again for discussion.
1. What are the latest tools available for performance testing an OBIEE application?
2. How can one estimate upfront how a change to a repository would impact performance (without testing!)?
3. Are there any industry benchmarks for the performance of OBIEE dashboards, or does each customer usually define their own expectations?
|Re: Performance Testing for OBIEE||Manu||9/17/12 1:52 AM|
|Re: [OBIEE EMG] Re: Performance Testing for OBIEE||Fayaz||9/17/12 10:28 AM|
Some of my inputs on points 2 and 3.
2. A change in the repository certainly impacts report performance. The design itself determines how reports will be executed; for example, using different join types can change the number of rows returned. It is for the client to decide how the data should be displayed, and to judge upfront whether a change will hurt performance one has to analyse the change itself. In my example, if someone changes an inner join to a left outer join, the report will bring back more data.
3. Performance of an OBIEE dashboard can vary from customer to customer, but it has to be analysed based on the data shown in the dashboard. Sometimes the requirement is a huge report, which certainly takes time to render. It is highly advisable to always use filters, or by default to ask for parameters before showing any report. A common expectation is under 2 minutes for large reports, although this can vary between implementations.
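Point 2 is easy to demonstrate: switching an inner join to a left outer join changes the row count the BI Server has to handle. A minimal sketch with an in-memory SQLite database (the table and column names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, cust_id INTEGER)")
cur.execute("CREATE TABLE payments (order_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 10), (2, 11), (3, 12)])   # order 3 has no payment
cur.executemany("INSERT INTO payments VALUES (?, ?)",
                [(1, 99.0), (2, 45.5)])

# Inner join: only orders with a matching payment.
inner = cur.execute(
    "SELECT COUNT(*) FROM orders o JOIN payments p ON p.order_id = o.id"
).fetchone()[0]
# Left outer join: every order, matched or not.
outer = cur.execute(
    "SELECT COUNT(*) FROM orders o LEFT JOIN payments p ON p.order_id = o.id"
).fetchone()[0]

print(inner, outer)  # the outer join also returns the unmatched order
```

Same tables, same filter, different join type - and a different number of rows flowing through the report.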
Thanks and Regards
|Re: [OBIEE EMG] Re: Performance Testing for OBIEE||chet justice||9/17/12 2:55 PM|
To add to Fayaz's #3:
Performance is multi-layered and thus incredibly difficult to monitor with OBIEE. Especially so now that mobile is getting so big.
First step that I would take: test. Test with 1 user and write down the time. Test with 10 users and write down the time. You can use tools like JMeter if you want to capture those timings automatically from OBIEE (after setup, of course). You should also run the physical SQL directly against the database, removing anything to do with OBIEE, and write down those times.
Don't forget to ask all the relevant questions (not an exhaustive list by any means):
It's by no means easy. Baseline it though. Work to get the end-user to accept that baseline so you have something to work from. "It's slow" doesn't tell you very much.
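The "write down the time" steps can be automated even without JMeter. A minimal sketch using only the standard library; `run_report` here is a hypothetical stand-in for whatever actually executes the request (an HTTP call to the dashboard, the physical SQL, etc.):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_report():
    """Stand-in for a real dashboard request or physical SQL execution."""
    time.sleep(0.05)  # simulate ~50 ms of work

def baseline(users):
    """Run the report once per simulated user; return wall-clock seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(run_report)
        # the 'with' block waits for all submitted reports to finish
    return time.perf_counter() - start

# Baseline with 1 user, then 10 concurrent users -- write both numbers down.
for users in (1, 10):
    print(f"{users:>2} user(s): {baseline(users):.3f} s")
```

The point is the method, not the tool: record a single-user number, a concurrent number, and a database-only number, and you have a baseline to compare against after every change.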
|Re: [OBIEE EMG] Re: Performance Testing for OBIEE||Manu||9/21/12 2:22 AM|
Thanks Chet/Fayaz for your valuable inputs.
On point 2, I totally agree that a change in the repository (like the example you gave: changing joins, bringing in more data, etc.) would definitely have a performance impact.
The problem I am struggling with (and I am sure many of you experience the same) is quantifying that impact upfront.
As a non-functional requirement, users expect a report to run in, say, < 30 sec, but how can we ensure and confirm during the requirements stage that it will indeed be within that 30-sec window?
I totally understand that unless we build and test we cannot say how long a report will take to run, but that is a tough message for a business audience.
So I was trying to understand from you all: is there a better way to quantify the run time we hope to arrive at after implementing all the performance tuning, rather than saying "sorry, we will come back to you after testing"?
I know this is tough, but please put in your thoughts.
And thanks very much for pointing to JMeter; it seems to be a promising tool for performance testing.
|Re: Performance Testing for OBIEE||Jeff McQuigg||9/21/12 8:05 AM|
I don't think you should ever sign up to hard performance numbers at the beginning of a project, especially if you have a tight timeline. I don't believe there is any realistic way to predict how long your reports will take until you build them, get the SQL working and have all of your DB setup done correctly. The risk is even greater if you use the BI Apps or some other pre-existing data model - who is to say that will be enough? If not, you will simply need more time to improve performance.
Performance can always be improved with more manual effort, but how much effort are you willing to pay for? Discuss this expectation and reality with them. Mention that the improvement effort will never be completed - it will always change based on usage patterns, new requirements or simply a demand to have a greater % of the application tuned.
I am now making sure all of my projects avoid hard numbers in the SOW.
> 1. How does it run on mobile? 3g? 4g? wifi? (have fun with that one)
> 2. Native iPhone/iPad app or web based?
> 3. iOS or Android?
> 4. VPN?
> 5. Browser?
> 6. Database load?
> 7. Concurrent users?
> 8. etc.
|Re: [OBIEE EMG] Re: Performance Testing for OBIEE||Christian Berg||9/21/12 8:21 AM|
Amen and a hundred times amen to that, Jeff!
The quantification of the "actual upfront impact" is nigh impossible, since the number of factors potentially impacting the execution of "your report" is nigh unlimited and constantly changing. Hence: avoid hard numbers like the plague.
<seriouslybadjoke>If push comes to shove you can always simply turn on caching and all your problems are solved!</seriouslybadjoke>
|RE: [OBIEE EMG] Re: Performance Testing for OBIEE||Greg||9/21/12 8:24 AM|
Obviously without detailed testing and optimization you won't know the true range of performance; however, during requirements gathering and design there are a number of key points to assess that help identify the simplicity or complexity of the future model and reports, and that can provide warning signs related to performance. Assessing these points lets you have proactive conversations with the user base, where you can communicate that there could be performance challenges with a couple of the deliverables they are requesting. This helps set expectations up front, and also helps educate the functional team on real-world examples with their data that may present performance challenges. A lot of these are fairly obvious, but these are the things we look for when architecting a solution, as performance will always be one of the top success criteria, along with data accuracy and refresh timing.
§ If the project scope includes full control of designing and developing a supporting data warehouse, then this typically provides the flexibility and environment to overcome most data and reporting challenges. So when this key component is in play, then we typically believe we have the tools to ensure good performance and success. You obviously still need to ensure that the hardware is sized appropriately, but good data warehousing practices allow us to design for high data volumes with aggregate fact tables as required, as well as leveraging ETL to combine disparate data sources and not RPD logical modeling.
§ Does the solution require leveraging the RPD to combine disparate data sources? This is a powerful feature that OBIEE offers developers, but it does mean that reports will often cause the BI Server to run multiple queries and combine the data sets. This obviously raises a flag that running a single report will trigger a lot of CPU work that is hidden from the user. Detailed analysis of the reasons and deliverables for federating these data sources helps assess how well caching strategies may or may not help in this situation. If the driver behind these requirements is mainly static, high-level dashboard reports that pull data from across the business for management to view, then there may be an opportunity to leverage caching to overcome this performance hit; however, if the intent is to provide users with interactive analysis over the data set, then caching helps only to some extent, and you need to focus on the data volumes within the various sources and the expected query performance from each.
§ The number of tables in the physical model and the join criteria can have an impact. Obviously fewer tables and simple, single column joins, ideally over a numeric surrogate key, will optimize performance. The design team should be able to run some representative ad hoc query examples to test base performance for source data. So the team should be able to quantify performance before data reaches the RPD model.
§ Do the reports require a lot of logical column formulas within the RPD? There are a lot of nice formula capabilities within the RPD, but these will impact performance. Time series formulas for YTD or year-over-year can chug for a while, especially if you have limited options for working with a supporting data warehouse to simplify the approach used. Even simple logical columns, such as concatenations or CASE statements that translate codes into descriptions, can have an impact with high volumes of data. Any time you have the option to perform some ETL at the source layer instead of in the RPD, that will help performance.
§ Does the model include data sources other than typical relational databases, such as Hyperion Essbase, Hyperion Financial Management (HFM), SSAS cubes, SAP cubes, etc.? These sources tend to take a bit of a performance hit for users who are used to hitting them with native tools like Smartview, so discussions with the users about a little added time to push this data into the nice GUI of OBIEE should be included in the design sessions. Understanding these data sources is also important. For example, if an Essbase database has a lot of Attribute dimensions, and the users are requesting reports that use many of them, then this would raise a flag that the given report will not perform well, because Essbase reports that leverage multiple Attribute dimensions are slow even when OBIEE is not in play.
§ Do the reports require presentation formulas, and how complex are these? Do the reports require Union queries to combine reporting sets? Do formulas dynamically interact with page prompts? These are also examples of points that can be identified during the design phase that will negatively impact performance. Obviously data volumes then become a factor that can magnify the extra performance required for this type of dashboard report.
Taking all these points into consideration when assessing data sources and desired reporting requests during the requirements and design phase will help the team understand if and where there could be performance challenges. Obviously experience plays a factor here: the more solutions you build out, the better you can proactively see the challenge areas in advance. It is a good practice to discuss and set expectations with the functional team members during the design phase. Obviously hardware and network bandwidth are factors in performance, but I've focused here on the requirement and design assessment points that we look for on projects. To the point of the previous responses, there is no way to guarantee a fixed performance response time in advance; but you can assess whether there will be known challenges or not.
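Greg's code-to-description point above can be made concrete. In the RPD this kind of translation is a logical column expression evaluated on every query; doing it once in ETL materialises the description instead. A sketch of the expression itself, run here against an in-memory SQLite table (the table, column and code values are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE fact_sales (status_code TEXT)")
cur.executemany("INSERT INTO fact_sales VALUES (?)",
                [("O",), ("C",), ("X",)])

# The CASE translation an RPD logical column would apply per row, per query.
rows = cur.execute("""
    SELECT status_code,
           CASE status_code
                WHEN 'O' THEN 'Open'
                WHEN 'C' THEN 'Closed'
                ELSE 'Unknown'
           END AS status_desc
    FROM fact_sales
""").fetchall()
print(rows)
```

Cheap on three rows; on hundreds of millions of fact rows, evaluating it at query time rather than once during ETL is exactly the kind of hidden cost Greg describes.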
|Re: [OBIEE EMG] Re: Performance Testing for OBIEE||Manu||9/23/12 10:38 AM|