I'm using PDO as an adapter with Zend Framework (ZF).
I have a batch script that runs several queries against MySQL, with
some processing time needed between one query and the next. During
this time MySQL drops the connection, due to the variable "wait_timeout =
20" set on MySQL (I cannot change this, since I'm on a shared host).
As far as I can see, the problem is actually PDO not reopening the connection!
At the moment I put the ZF PDO adapter object into a registry, in
order to avoid re-creating it every time during the script execution.
In fact, if I remove that object from the registry, that is, if I re-
create it each and every time I need to go to MySQL, the problem does
not occur.
So I thought about the following possible solutions, which I would
like to ask your opinion on.
1) Remove the ZF PDO adapter object from the registry, having it
created every time I need to do something on the database.
The drawback would be a loss of performance when I need to go to the
DB multiple times during the script execution.
2) Evolve from a simple registry to some sort of cache, with a cache
timeout shorter than MySQL's wait_timeout.
This may be good, except for the loss of performance due to the cache
management overhead.
3) Leave the ZF PDO adapter in the registry as it is at the moment for
the whole application, and force it to be reloaded each time (by
passing an option flag) only when I call that specific batch script.
This is maybe the dirtiest solution.
4) Have PDO check the connection and reconnect if it has been dropped
from the MySQL side.
This would be THE solution, if only it were possible... is it?
Thank you for any tip / suggestion / opinion !
I don't use PDO (I have my own classes I developed long ago), but check
the return value from the PDO request. If the returned SQLSTATE
indicates a disconnect error (sorry, I don't know the code off hand),
reconnect and retry the request.
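That check-and-retry idea can be sketched roughly like this. This is a minimal sketch, not a tested solution: the helper names are mine, and I'm assuming the usual MySQL driver code 2006 ("MySQL server has gone away", reported under SQLSTATE HY000) is what you'd see here.

```php
<?php
// Hypothetical helper: detect a "MySQL server has gone away" failure.
// PDOException exposes the SQLSTATE and the driver-specific error code
// in its public $errorInfo array; MySQL uses driver code 2006 for this.
function isGoneAway(PDOException $e)
{
    return isset($e->errorInfo[1]) && (int) $e->errorInfo[1] === 2006;
}

// Run a query, reconnecting and retrying once if the server went away.
// $dbh is taken by reference so the caller keeps the fresh connection.
function queryWithRetry(PDO &$dbh, $dsn, $user, $pass, $sql)
{
    try {
        return $dbh->query($sql);
    } catch (PDOException $e) {
        if (!isGoneAway($e)) {
            throw $e;               // some other error: don't mask it
        }
        $dbh = new PDO($dsn, $user, $pass, array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));
        return $dbh->query($sql);   // retry once on the fresh connection
    }
}
```

Retrying only once is deliberate: if the fresh connection also fails, something worse than a timeout is going on and the exception should propagate.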
But if you're going to be a long time between requests, you are much
better off disconnecting and reconnecting. Each connection takes system
resources, and each connection must be monitored by the server. No one
connection is going to take a lot - but in a very busy system, it can
have an effect. And it's even more important with shared hosting -
where you don't know how many other connections are required.
--
Jerry Stuckle
JDS Computer Training Corp.
This sounds like an interesting possibility, thank you!
> But if you're going to be a long time between requests, you are much
> better off disconnecting and reconnecting.
Actually I do not manage the PDO layer myself; I let it be managed by
the framework. Let's see what can be done.
I experimented a bit anyway, and PDO does indeed have a way to force
the connection closed; nevertheless it is not effective in such a
case: if MySQL times out the connection, PDO still believes the
connection is alive, and even though I force it closed, it does not
reconnect and fails. The only way is to reset and re-instantiate the
PDO object. Maybe this is some buggy behaviour of PDO.
No, I'm saying as soon as you're done with your query, close the
connection. Don't leave it open unused for 20 minutes or whatever.
Then MySQL won't disconnect.
Yeah, actually it is the PDO layer that manages the connection to the
database.
I do not see anything about DB connections, it's abstracted to me by
the PDO layer.
I just ask a factory for a PDO object, and then ask this object to
execute the query.
All low level connection stuff is handled by PDO and it's transparent
to me.
From my experiment, the only way I have to really close the connection
is to annihilate the PDO object and re-instantiate it.
Not sure if this clarifies things, or if indeed I didn't get your point :)
If that were really true, you wouldn't be having a problem with MySQL
going away, neh?
> From my experiment, the only way I have to really close the connection
> is to annihilate the PDO object and re-instantiate it.
And this is different from what the recommendation is ... how?
> Not sure if this clarifies or indeed I didnt get your point :)
Remember, PDO is an abstraction layer. Its job is to (insofar as is
reasonable) make various database engines programmatically similar.
They all still *do* the same kinds of things, and that's what makes
abstraction possible. PDO means that you don't have to remember what
all the details are ("Is it foo_destroy() or foo_release() or foo_free()
that gets rid of a result set? Does it take just a result handle, or
does it need a result *and* a connection handle?"). Instead, you only
need to remember what PDO's details are. But it's not FM, you still
need to actually manage PDO, and to take best advantage of the
portability, you need to design your interactions with the database at
the lowest common denominator. Sometimes that's going to mean doing
things like tearing down connections explicitly, and doing so every
time you might be done.
What I wrote is true indeed... still I'm having the problem with MySQL
going away :)
(see below)
> > From my experiment, the only way I have to really close the connection
> > is to annihilate the PDO object and re-instantiate it.
>
> And this is different from what the recommendation is ... how?
Not sure to understand here. What is the recommendation?
> Instead, you only need to
> remember what PDO's details are. But it's not FM, you still need to
> actually manage PDO, and to take best advantage of the portability, you
> need to design your interactions with the database at the lowest common
> denominator. Sometimes that's going to mean doing things like tearing
> down connections explicitly, and doing so every time you might be done.
OK, perfect, that's correct. Except that PDO is apparently behaving in
a buggy way, or something.
Let's go with a quick example.
Say I have a method in a class which does some work on the database.
Let's make a PDO object (in my case I actually ask the Zend Framework
factory for this, but it doesn't matter):
1. $dbh = new PDO($dsn, $user, $pass);
2. next I do query number one here...
3. then I have some processing here...
4. and then I do query number two here, using the same $dbh I created
above.
Here's the problem: if the processing at step 3 for whatever reason
takes longer than wait_timeout, MySQL will drop the connection.
PDO is not going to realize it, nor does it try to reconnect, and
BOOM: "MySQL server has gone away" error!
So, in conclusion, I can manage PDO myself by closing and reopening
the connection at will. The (only) way to do this is to annihilate the
PDO object and then re-instantiate it:
$dbh = null;
The bad part is that, in theory at least, I should do this after each
and every query, since if I have even a bit of business processing
between two queries I cannot assume that such processing will take
less time than wait_timeout.
And in my case it takes far longer!
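In code, the pattern I mean looks roughly like this. It's only a sketch of the idea: the helper name and the use of a closure (PHP 5.3+) are my own, not part of ZF or PDO.

```php
<?php
// Hypothetical helper: run one unit of database work on a throw-away
// connection, so nothing is held open across slow processing steps.
function withFreshConnection($dsn, $user, $pass, $work)
{
    $dbh = new PDO($dsn, $user, $pass);
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $result = $work($dbh);  // run the queries for this step
    $dbh = null;            // drop the only reference: real disconnect
    return $result;
}
```

Each query (or tight group of queries) then gets its own short-lived connection, and wait_timeout can never expire between steps, because no connection exists during the slow processing.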
Not a bug. It's not designed to automatically reconnect.
> Say I have a method in a class which does some work on the database:
>
> Let's make a PDO object (in my case I actually ask the Zend Framework
> factory for this, but it doesn't matter):
>
> 1. $dbh = new PDO($dsn, $user, $pass);
> 2. next I do query number one here...
> 3. then I have some processing here...
> 4. and then I do query number two here, using the same $dbh I created
> above.
>
> Here's the problem: if the processing at step 3 for whatever reason
> takes longer than wait_timeout, MySQL will drop the connection.
> PDO is not going to realize it, nor does it try to reconnect, and
> BOOM: "MySQL server has gone away" error!
>
That is correct. See above.
> So, in conclusion, I can manage PDO myself by closing and reopening
> connection at my will. The (only) way to do this is to annihilate PDO
> and then re-instantiate it:
>
> $dbh = null;
>
> The bad part is that, in theory at least, I should do this after each
> and every query, since if I have even a bit of business processing
> between two queries I cannot assume that such processing will take
> less time than wait_timeout.
>
> And in my case it takes far longer!
>
Now you're getting the idea. And it is much better than trying to hold
resources you won't be using for long periods of time.
> Now you're getting the idea. And it is much better than trying to hold
> resources you won't be using for long periods of time.
Well great now we're on the same track :)
Let me go a step further.
I agree that it's not a good idea to hold resources not used for a
long time.
But please consider that I have two somewhat different situations here.
A)
I have a web application (PHP + MySQL) whose scripts do queries as
well to load contents to be displayed on the web page (there can be
multiple queries fired by different pieces of scripts for building a
single web page).
Such scripts are usually very fast. I think that in this situation
it's indeed a good idea to HOLD the connection, since in my opinion
the OVERHEAD of creating a connection each and every time during the
scripts' execution would be worse.
(no profiling right now, but this opinion makes sense, doesn't it?)
B)
Packed together with my web application, I also have a batch script
(not rubbish code) which fetches data from the DB and builds search
engine indexes.
This script alone is causing me timeouts with such a strict
wait_timeout setting (20 secs).
This script calls the same business logic of the web application to
fetch data. So the PDO management is shared.
Should I go with your suggestion, then, and implement a different
behaviour AT LEAST for this script (that would be solution number 3 in
my first post)?
Or do you think that I should go all the way with creating and
destroying PDO each and every time also within THE WHOLE web
application? (kind of like solution number 1 in my first post)
This is a completely different scenario, which we have not been
discussing here. However, if you don't close the connection before a
script ends, the system will do it for you when the script terminates
(unless you use persistent connections - which have their own problems).
But within a single script it's generally better to keep the
connection open and process everything under a single connection.
>
> B)
> Packed together with my web application, I also have a batch script
> (not rubbish code) which fetches data from the DB and builds search
> engine indexes.
> This script alone is causing me timeouts with such a strict
> wait_timeout setting (20 secs).
>
> This script calls the same business logic of the web application to
> fetch data. So the PDO management is shared.
> Should I go with your suggestion, then, and implement a different
> behaviour AT LEAST for this script (that would be solution number 3 in
> my first post) ?
>
> Or do you think that I should go all the way with creating and
> destroying PDO each and everytime also within THE WHOLE web
> application? (kind of solution number 1 in my first post)
>
The PDO management is not "shared" - they just happen to use the same
code, which is completely different (sharing would imply they both use
the same object in memory, which they do not). In your case, I would
separate the connection handling from the rest of the code: keep most
of the database-related code in one file, and pass the PDO object as
necessary to those functions. In your web page code, create one PDO
object and use it. In your CLI job, create and destroy PDO objects
as needed.
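One way to get that separation is a small wrapper that owns the connection. The class and method names here are hypothetical - a sketch of the idea, not ZF's actual API:

```php
<?php
// Hypothetical wrapper owning the PDO handle. The web front end keeps
// one instance alive for the whole request; the CLI batch job calls
// close() after every step so wait_timeout never expires mid-script.
class ConnectionManager
{
    private $dsn;
    private $user;
    private $pass;
    private $pdo = null;

    public function __construct($dsn, $user = null, $pass = null)
    {
        $this->dsn  = $dsn;
        $this->user = $user;
        $this->pass = $pass;
    }

    // Lazily (re)connect: after close(), the next call reconnects.
    public function handle()
    {
        if ($this->pdo === null) {
            $this->pdo = new PDO($this->dsn, $this->user, $this->pass);
            $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        return $this->pdo;
    }

    // Drop the only reference so PHP really closes the connection.
    public function close()
    {
        $this->pdo = null;
    }
}
```

The batch job then does handle(), runs its queries, and calls close() around every slow step, while the web code just calls handle() once and never bothers closing.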
> The PDO management is not "shared" - they just happen to use the same
> code, which is completely different (sharing would imply they both use
> the same object in memory, which they do not). In your case, I would
> separate the connection handling from the rest of the code: keep most
> of the database-related code in one file, and pass the PDO object as
> necessary to those functions. In your web page code, create one PDO
> object and use it. In your CLI job, create and destroy PDO objects
> as needed.
Thank you for the comments so far, I'm working on it... let's see :)