No, this is a very common Oracle problem. It is unrelated to how many
times a connection from a pool is used. Oracle doesn't know or care
about client-side connection pools; from Oracle's perspective, they are
all open connections.
What causes the "maximum open cursors exceeded" error is failure to
close result sets.
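As a toy illustration of why leaked result sets trip ORA-01000 (this is not real JDBC; the cursor counter and the limit of 5 are invented to keep the sketch short — Oracle's real per-session limit is the configurable OPEN_CURSORS parameter):

```java
/** Toy model of Oracle's per-session open-cursor limit (values invented). */
public class CursorLimitDemo {
    static final int OPEN_CURSORS = 5;   // real default is much higher, and configurable
    private int open = 0;                // cursors currently open in this "session"

    /** Simulates executing a query; throws like ORA-01000 when over the limit. */
    void openCursor() {
        if (open >= OPEN_CURSORS)
            throw new IllegalStateException("ORA-01000: maximum open cursors exceeded");
        open++;
    }

    /** Simulates closing the ResultSet/Statement. */
    void closeCursor() { open--; }

    /** Runs n queries against one session; returns how many succeeded. */
    static int run(int n, boolean closeEachTime) {
        CursorLimitDemo session = new CursorLimitDemo();
        int done = 0;
        try {
            for (int i = 0; i < n; i++) {
                session.openCursor();
                if (closeEachTime) session.closeCursor();
                done++;
            }
        } catch (IllegalStateException e) {
            // leaked cursors eventually exhaust the session's limit
        }
        return done;
    }

    public static void main(String[] args) {
        System.out.println("leaky client finished " + run(10, false) + " of 10 queries");
        System.out.println("tidy client finished " + run(10, true) + " of 10 queries");
    }
}
```

The pool is invisible here on purpose: the counter lives in the session, so handing the same connection back out repeatedly just keeps accumulating leaked cursors.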
--
Guy Rouillier
In Oracle, not exactly. As Ellis pointed out, if you call PL/SQL
procedures through JDBC and reuse the connections, you will hit that
exception sooner or later, whether or not you close all your result
sets. Some implicit cursors are not closed in PL/SQL, so you end up
reaching the limit no matter what. We also implemented a "number of
usages" limit in our pool a long time ago precisely because of that.
On a related note, there is another problem when calling PL/SQL
procedures on Oracle: the calls are cached at the user/session level,
not the connection level, so even if you physically close a
connection, the PL/SQL calls are kept in the cache to be reused,
consuming memory. That cache is only flushed when you close all the
connections from the pool at the same time, so we also implemented a
timeout: when no connection has been used for a while, we close them
all to reset the cache. That, and making really sure that the strings
sent to the driver use PreparedStatement and are properly reusable,
so they consume the minimum amount of cache memory necessary.
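A minimal sketch of the "number of usages" workaround described above (the class names and the limit of 3 are invented; a real pool would wrap java.sql.Connection and physically close it on retirement):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy pool that retires a connection after MAX_USES check-outs, so the
 *  session's leaked implicit cursors are released before hitting the limit. */
public class UsageCountPool {
    static final int MAX_USES = 3;   // invented limit for the sketch

    /** Stand-in for a physical connection; real code would hold java.sql.Connection. */
    static class PooledConnection {
        final int id;
        int uses = 0;
        PooledConnection(int id) { this.id = id; }
    }

    private final Deque<PooledConnection> idle = new ArrayDeque<>();
    private int nextId = 0;

    PooledConnection checkOut() {
        PooledConnection c = idle.poll();
        if (c == null) c = new PooledConnection(nextId++);   // open a fresh one
        c.uses++;
        return c;
    }

    void checkIn(PooledConnection c) {
        if (c.uses >= MAX_USES) {
            // Retire it instead of reusing: physically closing the connection
            // releases whatever implicit cursors the session accumulated.
            return;
        }
        idle.push(c);
    }

    public static void main(String[] args) {
        UsageCountPool pool = new UsageCountPool();
        for (int i = 0; i < 7; i++) {
            PooledConnection c = pool.checkOut();
            System.out.println("query " + i + " served by connection " + c.id);
            pool.checkIn(c);
        }
        // connection 0 serves queries 0-2, connection 1 serves 3-5, then 2 takes over
    }
}
```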
Isn't it fun playing with Oracle? :).
D.
Larry
--------------------------------------------------------------------
"You don't know the power of the dark side" - Darth Vader
OTOH, the prepared statement cache being shared amongst all open
connections, and not being flushed unless all of them are closed at
the same time, was something I saw myself with the DB guys, recycling
connections and all. Was that also a confusion?
Thanks all for the correction.
D.
PS: Note to self: don't trust others' code without seeing it :).
2010/6/24 Giovanni Cuccu <giovann...@gmail.com>:
The problem in that case was that even though we were using parameters
appropriately for the PL/SQL procedures, the full string contained
some initialization parameters that could be slightly different for
each user. So, as far as the cache was concerned, almost every call
was slightly different from the previous one, and the memory kept
filling up. You can end up killing your DB like that :(.
As soon as we detected it, we made the init parameters part of the
callable statement parameters as well, so there would be just one
version of each call no matter which user was calling it. But what
surprised us was that the first workaround we tried, recycling the
connections more often, did not work, because of that "shared pool of
statements among all open connections".
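A sketch of why that fix collapsed the cache entries (my_pkg.do_work, its parameters, and the NLS_LANG-style init values are all hypothetical): the cache treats two calls as the same statement only if their text is identical, so inlining a per-user init value yields one entry per user, while binding it yields a single entry:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SharedPoolTextDemo {
    // Before the fix: the per-user init value is inlined into the call text,
    // so every distinct value produces a distinct statement for the cache.
    static String inlined(String initValue) {
        return "BEGIN my_pkg.do_work(p_init => '" + initValue + "', p_arg => ?); END;";
    }

    // After the fix: the init value is a bind parameter too, so the text
    // is byte-identical for every user and the cache holds one entry.
    static String bound() {
        return "BEGIN my_pkg.do_work(p_init => ?, p_arg => ?); END;";
    }

    /** Counts distinct statement texts the cache would see. */
    static int distinctTexts(List<String> initValues, boolean useBinds) {
        Set<String> texts = new HashSet<>();
        for (String v : initValues)
            texts.add(useBinds ? bound() : inlined(v));
        return texts.size();
    }

    public static void main(String[] args) {
        List<String> perUserInit = List.of("NLS_LANG=ES", "NLS_LANG=EN", "NLS_LANG=FR");
        System.out.println("inlined: " + distinctTexts(perUserInit, false) + " cache entries");
        System.out.println("bound: " + distinctTexts(perUserInit, true) + " cache entries");
    }
}
```

With the inlined form, cache entries grow with the number of distinct users; with binds, the statement count stays constant no matter who calls.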
S!
D.
2010/6/24 Giovanni Cuccu <giovann...@gmail.com>:
Well, Oracle maintains a "shared SQL area" for all users, caching all
SQL statements. So, I can understand this behaviour.
What I don't know is why it flushes this area when you close all the
connections of your pool. Isn't there any session still active?
Regards
S!
D.
2010/6/25 Chema <demab...@gmail.com>:
It might have been an issue with that specific version of Oracle, as
that was a while ago, when I was working on a very high-traffic site.
Since then, I've had no problems, because none of our current
applications are really accessed 24/7, so the pool auto-shuts down
after 15 minutes of inactivity.
S!
D.
2010/6/25 Chema <demab...@gmail.com>:
> 2010/6/25 Daniel López <d.lo...@gmail.com>:
>> AFAIK, the only thing Oracle flushes when you close all the
>> sessions of a given user is the SQL statements used by that user.
>> So, just at the user level.
>
> Is this what you saw (experimentally), or is this behaviour
> documented by Oracle?
>
> I'm not an expert on Oracle, but I've only found information about
> cached SQL statements for all users (the shared SQL area).
>
> Thanks and regards
>