ViewObject memory usage tuning


Wilfred van der Deijl

Sep 11, 2013, 4:59:50 AM
to adf-met...@googlegroups.com
Hi all,

We are facing a challenge and I'm wondering what the experience of other ADF EMG members is in this area. Our ADF application is very memory hungry. We currently run on 4 managed servers with 24GB each to support 600-800 concurrent sessions. The live set is approx. 50% of the max heap size, so 12GB per server.

Analyzing a memory dump shows that 40% of the heap is occupied by T4CPreparedStatement objects. These are the open cursors to the database for ViewObjects with AccessMode="SCROLLABLE" (which is the default). Most VOs are set to FetchSize="40" to ensure the initial records for an af:table can be fetched in a single JDBC roundtrip, and MaxFetchSize="200" to cap the total number of rows and limit memory usage.
The problem is the buffer the JDBC PreparedStatement allocates to fetch the 40 rows. This buffer is sized for the worst case, meaning the maximum length of all attributes. We have several PL/SQL function calls in the SQL statement which JDBC sees as maximum-length VARCHAR2 results (thus 4K). For some ViewObjects this results in a 2MB buffer being allocated. This quickly adds up.
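
For reference, a minimal sketch of where these settings live in a VO's XML definition (the VO name is made up; AccessMode, FetchSize and MaxFetchSize are the attributes discussed above):

<ViewObject
    Name="MyLargeResultVO"
    AccessMode="SCROLLABLE"
    FetchSize="40"
    MaxFetchSize="200">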

We are considering a number of changes and I'm wondering whether ADF EMG members have experience with these and have established best practices:
1) Set AccessMode to RANGE_PAGING or RANGE_PAGING_INCR to prevent the VO from keeping an open PreparedStatement. Downside: additional database queries when the user scrolls an af:table.
2) Reduce the FetchSize (for example to 10). This reduces the memory usage of the JDBC buffer, but the downside is multiple JDBC roundtrips just to fetch the initial set.
3) Simply accept that we need a total 48GB live set in memory, and thus 96GB max heap, to give the garbage collector enough room to do its work.

Let's discuss your experiences/best practices in this area.

Amr Gawish

Sep 11, 2013, 8:08:15 AM
to adf-met...@googlegroups.com
Hi Wilfred,
Did you try to apply the performance tuning procedures in this blog post from Andrejus?

Best Regards,
Amr



Florin Marcus

Sep 11, 2013, 9:01:29 AM
to adf-met...@googlegroups.com
Wilfred,

If I were to bet, I would say you are facing one of the most common ADF performance problems we encounter with all our clients: the LOV fetch size setting.
I see you are tuning your page definition iterators by setting a fetch size, but you will also need to do exactly the same thing for your LOVs.
How to set it: when you map an LOV to a view object, you get a list binding whose ListRangeSize attribute defaults to "-1":

<ListBinding
    Name="LOV_XXX"
    ListVOName="XXXLOV"
    ListRangeSize="-1"
    ComboRowCountHint="10"
    NullValueFlag="none"
    MRUCount="0">

This means that you will get ALL the rows your LOV query returns as soon as the LOV is executed. Most of the time, if the database is fast, this is not an obvious problem, since you can fetch thousands of records in a matter of seconds. But it adds to your memory consumption and, in the end, hurts your application's scalability.
To fix this, you need to set ListRangeSize manually in the source code (we didn't find any way of setting it through the JDeveloper UI) to a positive value, something like 20-30. This way it fetches only the records visible in your LOV viewport, and the rest of the rows are fetched only if the user scrolls the LOV:

<ListBinding
    Name="LOV_XXX"
    ListVOName="XXXLOV"
    ListRangeSize="25"
    ComboRowCountHint="10"
    NullValueFlag="none"
    MRUCount="0">

Please note that this tuning refers to Input List of Values and not to Select One Choice UI elements.

Again, this is just a guess. Over time we understood that simply reviewing ADF code for a couple of days and producing a set of best-practice recommendations doesn't work 100%. That is how our Performance Audit was born: to automatically log such performance problems and monitor them during the entire application life-cycle.


Gawish,

You are absolutely right with your suggestion. As per Andrejus' blog, our scalability tests showed that using the "Disconnect Application Module Upon Release" (jbo.doconnectionpooling=true) setting on AMs provides higher scalability and lower memory consumption, without increasing response times.

Now, there is an inconsistency between the Oracle tuning documents and what we notice at runtime.
Take this as personal speculation, but I would say that Oracle's architects took a bet on the way they decided to design JDBC connection consumption in an ADF application. In any web Java framework I know, JDBC connections are acquired for short periods during a request, and the statements and result sets are closed and the connection released back to the pool when the response is returned to the browser (maybe someone can come up with an example of a different approach to managing JDBC connections in any common web Java framework?). This basically means that closing JDBC PreparedStatements and ResultSet objects at the end of each request is NOT such an overhead after all.
And yet, we do have plans to run more in-depth tests to analyse the heap allocations, and maybe we can come up with an explanation.

Thanks,
Florin Marcus

Wilfred van der Deijl

Sep 11, 2013, 9:10:47 AM
to adf-met...@googlegroups.com
Thanks for the great tips (Amr as well).

LOVs don't appear to be the most memory hungry. That might be because we don't use a lot of LOVs in the application. The main culprit at this time is the long-lived JDBC ResultSet and PreparedStatement. I hadn't thought about jbo.doconnectionpooling=true in combination with jbo.txn.disconnect_level=1. That seems like the ideal solution: we still have long-lived AMs with all their state and without passivation, but we avoid the long-lived JDBC objects. We just need to test what the exact behavior is if the user does scroll an af:table and requires additional rows. ADF is then probably re-executing the query to get a new JDBC ResultSet, which means additional database queries when scrolling an af:table. If we are already prepared to take that performance hit, then why not use RANGE_PAGING_INCR and also limit the number of ViewRowImpls and EntityImpls in memory?
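
For reference, a minimal sketch of how these two properties could be set in the AM's bc4j.xcfg (the configuration name is made up, and the exact element each property lands under depends on how JDeveloper categorises it; they can also be passed as -Djbo.* JVM options):

<AppModuleConfig name="AppModuleLocal" DeployPlatform="LOCAL">
    <AM-Pooling jbo.doconnectionpooling="true" jbo.txn.disconnect_level="1"/>
</AppModuleConfig>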

Florin Marcus

Sep 11, 2013, 10:46:28 AM
to adf-met...@googlegroups.com
If you go the jbo.doconnectionpooling=true way, there will be no re-querying when scrolling an af:table; this setting won't change the querying behaviour of your application. But with each scroll you will get the JDBC ResultSet iterated from the start.

One thing to note is that af:table scrolling is a good way to kill an application if your rowset is large enough. For such cases, we tend to enable what we call "slow scrolling", where we restrict users to scrolling one range length at a time, so the user cannot fire thousands of fetches with a single scroll.

In terms of range-paging VOs, we sometimes use it on LOVs, but last time I checked there were some bugs with editable af:tables. But maybe somebody else knows more about this.

Thanks,

Florin Marcus

Wilfred van der Deijl

Sep 11, 2013, 11:40:21 AM
to adf-met...@googlegroups.com
We already had a number of discussions about limiting the number of rows in a table. We managed to get them down to a maximum of 200 rows. We would prefer even smaller sets, but it is a difficult discussion with end-users.
I know about the issues with range paging and editable VOs. An extra complicating factor is that we still run 11.1.1.4, so getting support on any bugs will be very difficult. This makes me very hesitant to use multi-row editable tables with range paging.
We'll definitely start testing with connection pooling though.

Stephen Johnson

Sep 11, 2013, 3:36:43 PM
to adf-met...@googlegroups.com
Wilfred,
  have you tried changing your SQL query to limit the size of the fields returned by the PL/SQL? For instance, casting the result to a VARCHAR2(100) if you know the actual maximum return length is only 100. If that doesn't work at the VO level, you might be able to create a database view that does it and have your VO query the database view.
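
For example, something along these lines in the VO's expert-mode query (the package, table and column names here are only placeholders):

<SQLQuery><![CDATA[
  SELECT t.id,
         CAST(my_pkg.long_description(t.id) AS VARCHAR2(100)) AS description
  FROM   my_table t
]]></SQLQuery>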

Jan Vervecken

Sep 12, 2013, 9:47:06 AM
to adf-met...@googlegroups.com
fyi

Although Wilfred mentions "we still run 11.1.1.4" ...

Those using JDeveloper 11g Release 2 and considering a "-Djbo.doconnectionpooling=true -Djbo.txn.disconnect_level=1" configuration should be aware of bug 17378310, as discussed in the forum thread "task-flow transaction options and unavailable connections"
at https://forums.oracle.com/thread/2573716

regards
Jan Vervecken

Wilfred van der Deijl

Sep 12, 2013, 1:36:45 PM
to adf-met...@googlegroups.com
This actually worked. Adding a cast to varchar2(10) lets JDBC know the column is at most 10 characters. This reduces the JDBC buffer size, but it would still require a lot of memory for all sessions and VOs combined.
Next, we're going to test connection pooling with the disconnect level set to 1. That should close the JDBC result sets after each request, dramatically reducing the memory footprint. We just have to wait and see what the performance hit of the additional database queries will be.



Florin Marcus

Sep 14, 2013, 6:14:22 PM
to adf-met...@googlegroups.com

We did more testing on the AM "Disconnect Application Module Upon Release" setting and I need to rectify my previous statement:

On Wednesday, September 11, 2013 3:46:28 PM UTC+1, Florin Marcus wrote:
If going  jbo.doconnectionpooling = true way, there will be no re-querying when scrolling an af:table.

Testing showed that scrolling indeed produces re-querying, as follows:

Take the following use case:


Step 1. User opens a page containing an <af:table>.
Step 2. User scrolls down through the results.


This would translate to JDBC API level as follows:

Step 1: User opens a page containing an <af:table>.
java.sql.Connection conn1  = ...........get connection from the pool.....
java.sql.PreparedStatement pstm1 = conn1.prepareStatement(.....)
java.sql.ResultSet rs1 = pstm1.executeQuery(.....)
rs1.close()
pstm1.close()
conn1.close()

Step 2: User scrolls down through the results.

java.sql.Connection conn2  = ...........get connection from the pool.....
java.sql.PreparedStatement pstm2 = conn2.prepareStatement(.....)
java.sql.ResultSet rs2 = pstm2.executeQuery(.....)
rs2.close()
pstm2.close()
conn2.close()

So, indeed, there is a query overhead when scrolling, but how much does it really weigh?

In practice, since scrolling fires the same query a second time, it executes much faster because of the database buffer cache. In our test case, the first query took 77 milliseconds to run and the second one (when scrolling) took a millisecond (these numbers may be specific to my test case, though).

Another consequence worth mentioning: when scrolling from the 26th to the 50th record, the JDBC ResultSet will fetch 50 records, starting from the first. This makes sense, since the ResultSet is re-created and therefore re-iterated.

In conclusion, while "Disconnect Application Module Upon Release" looks worse on paper than the default AM connection management, stress tests on various ADF applications showed that it is faster and more scalable. Still, I wouldn't take the above statements for granted; run a series of stress tests (e.g. JMeter) along with monitoring your server (e.g. JRockit Mission Control), and then see which configuration works better for you.

That being said, we are eagerly waiting for Oracle to ship a release with the jbo.ampool.connection_threshold property, which should hopefully provide us with "the best of both worlds".

Chris Muir

Sep 14, 2013, 11:55:23 PM
to adf-met...@googlegroups.com
Hi Florin

Stepping away from the scroll issues, & considering the broader recommendations on jbo.doconnectionpooling=true & txn_disconnect=1, I'm curious to explore real-life application scenarios. I wonder, in the tests, have you gone beyond the single af:table example & sampled/gathered metrics on a real application, with multiple sessions taking different paths through the application, to gather throughput & avg request times? From a performance perspective, repeating the same test again & again is great for just about every performance & caching feature, as they're caching the same one thing, but real applications with real (chaotic?) user paths imply the caching features have less to cache & can suffer contention at high volumes.

For example, I wonder what would happen if we took the simple af:table example & started injecting random bind variable values (say on an indexed PK, so we can ignore db indexing issues or the lack thereof) into the initial request, then carried that on for a session? I suspect with renewed db connections there will be a small hit at the Oracle db level as the PGA & bind vars need to be re-established for each request's (new) connection (though I make no claims to being a DBA).

Going further, given the default settings are aimed at caching connections, state (bind variables) & ResultSets for user sessions, which can be long running over multiple requests over different data sets, theoretically at some point I would assume some benefit for users in avg request time. I wonder what scenarios we could define to watch those benefits come into play?


Just out of curiosity, could you also state your ADF version, your WLS version, your JDBC version, whether you are using JDBC URLs or data sources, & your database version.

Thanks & regards,

Chris Muir

Florin Marcus

Sep 22, 2013, 2:59:15 AM
to adf-met...@googlegroups.com

Hi Chris, sorry for the delayed response, the last few weeks have been very busy.

I see that the use of "Disconnect Application Module Upon Release" still raises a lot of questions, and rightfully so: while it is a 'tolerated' practice, it is not ideal according to the Oracle documentation: http://docs.oracle.com/cd/E24382_01/web.1112/e16182/bcampool.htm#sm0301. In the lines below, I will try to explain why the above recommendations may raise a few eyebrows for someone who understands JDBC.

If you look at an ADF application from a runtime perspective, you have one servlet (FacesServlet) executing JDBC statements against a database, like many other JEE applications. What is specific to ADF in relation to JDBC is the lifecycle of its prepared statements.

The main idea behind ADF in relation to JDBC is that once a connection is acquired, it stays sticky across requests. This allows Prepared Statements to be kept open, minimising the overhead of re-creating and closing the same statements across subsequent accesses - thereby providing the best performance. On the other hand, when using "Disconnect Application Module Upon Release", at the end of each request the connection is released back to the pool and the statements are closed - therefore this option shouldn't be used unless limiting the number of database connections is a priority. That, in short, is the theory, but in reality there is much more to it.

Statement Caching Levels

An ADF application benefits from three levels of statement caching:

1. BC statement cache
2. JDBC Datasource statement cache
3. Oracle Server statement pool (SGA)


Here is a short description of each pool level:

1. BC Statements Cache (as it works in 11g, both Release 1 and Release 2)

Anyone with access to the ADF sources can observe this implementation in the oracle.jbo.server.ViewObjectImpl source file. With the additional help of the excellent JDeveloper Memory Profiler and JRockit Mission Control, we had a closer look at how this pool works:
Each view object instance caches its Prepared Statements. It is worth mentioning that a view object's query may change at runtime, so there can be more than one PreparedStatement per instance. For example, in the simplest scenario involving an <af:table>, on first rendering you will get one PreparedStatement in the cache. If you re-order your columns, you will already have two PreparedStatements; furthermore, when you filter by a column you will get a third PreparedStatement. The main benefit is that a PreparedStatement is reused as long as the user keeps repeating the same actions.


2. JDBC Statements Cache

Introduced as a standard in JDBC 3.0, the JDBC statement pool works in a similar fashion to connection pooling. To understand this functionality, a distinction must be made between logical and physical prepared statements. When a developer creates a logical prepared statement by calling the java.sql.Connection.prepareStatement() method, the driver will try to find a corresponding physical statement in the pool. Furthermore, when the java.sql.PreparedStatement.close() method is called, the logical statement is closed, but the physical statement goes back into the pool. It is also important to note that calling the java.sql.Connection.close() method does not clear the physical statement cache, since it is associated with the physical connection being returned to the connection pool.
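
On WebLogic this is the data source statement cache; a minimal sketch of the relevant fragment of a data source module descriptor (the data source name and the values are illustrative only):

<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
    <name>MyAppDS</name>
    <jdbc-connection-pool-params>
        <!-- prepared statements cached per physical connection; 0 disables the cache -->
        <statement-cache-size>10</statement-cache-size>
        <statement-cache-type>LRU</statement-cache-type>
    </jdbc-connection-pool-params>
</jdbc-data-source>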

Our tests showed that disabling the JDBC cache didn't bring any notable performance decrease. This is due to the more powerful Oracle server caching mechanism, which spans sessions. However, keeping it will save you a few roundtrips - important when the network connection between WebLogic Server and the Oracle server is not too fast.


3. Oracle Server Statement Pool (SGA)

This cache is called the library cache and is located in the shared pool of the Oracle server's System Global Area (SGA). When a SQL statement is submitted, the server first checks the library cache to see if an identical statement is already present in the cache. If it is, Oracle uses the stored parse tree and execution path for the cached statement, rather than rebuilding these structures from scratch.


BC Statements Cache - memory hungry

With default settings, an ADF application will consume more server memory by holding PreparedStatements open, especially with Oracle's JDBC driver implementation, where T4CPreparedStatement is a fairly large object, as reported by memory profiler tools. Closing a prepared statement programmatically doesn't mean closing the physical statement, but returning the physical statement to the pool for later reuse. Moreover, there is less reuse of physical statements, since the BC pool is limited to view object instances and basically suppresses the use of JDBC statement pooling. On the database side, there will be a higher number of open cursors and consequently a lower cursor reuse rate (session cursor cache hits) - how this affects database server performance we leave to the database specialists.

The BC statement pooling was probably implemented before the JDBC statement cache existed, so it may deserve a revisit. The only benefit it brings is more effective scrolling, by caching ResultSet objects; on the other hand, it consumes more memory on WebLogic and produces a higher number of open cursors on the database.


In conclusion

The real power sits with the Oracle server caching; the other two caching levels are less significant.

What we do hope is that the Oracle best-practice tips can be revised to do more justice to the "Disconnect Application Module Upon Release" alternative. It would make our job easier when recommending this solution to our customers.

Thanks,

Florin Marcus
www.redsamuraiconsulting.com

