Alright, since originally writing this, I found setTimeout() and started using it, and it seems to work, but not how I had expected.
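For reference, here's a minimal sketch of how I'm setting it; the host and bucket name are placeholders for my actual setup, and the 2.5 second value just mirrors what getTimeout() reports in the runs below:
<?php
// Sketch only: host/credentials are placeholders for my real setup.
$cb = new Couchbase('127.0.0.1:8091', '', '', 'timeout');

// setTimeout()/getTimeout() work in microseconds, so 2500000 = 2.5 seconds.
$cb->setTimeout(2500000);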
To begin, this works like I'd expect:
<?php
function po($o) {
    echo date('Y-m-d H:i:s ') . print_r($o, 1) . PHP_EOL;
}

po("Timeout is: " . $cb->getTimeout() / 1000000);
po($cb->get('1'));
sleep(3);
po($cb->get('1'));
And run:
$ php cli/debug/cbisalive.php
2013-01-02 14:53:49 Timeout is: 2.5
2013-01-02 14:53:49 {"data":1,"rand":724992}
Warning: Couchbase::get(): Failed to get a value from server: Operation timed out in /var/www/git-repos/pm/cli/debug/cbisalive.php on line 18
Call Stack:
0.0060 646472 1. {main}() /var/www/git-repos/pm/cli/debug/cbisalive.php:0
3.0153 647808 2. Couchbase->get(???) /var/www/git-repos/pm/cli/debug/cbisalive.php:18
2013-01-02 14:53:52
Next, I wanted to see how timeouts behave with regard to slow commands (i.e., if a server is heavily loaded, how will my application respond?).
To emulate this, I created a view that was guaranteed to be slow (this is on a Couchbase bucket called "timeout"):
function (doc, meta) {
  var dt = new Date();
  dt.setTime(dt.getTime() + 1000);
  // busy-wait for roughly one second per document
  while (new Date().getTime() < dt.getTime());
  // emit one row per document (this is what produces the rows in the output below)
  emit(meta.id, null);
}
Literally, it will take 1 second (plus a little extra) to evaluate each document. Using this and stale=false, I can pretty reliably generate slow-loading views. This is where the "timeout" starts to get "not as expected":
<?php
function po($o) {
    echo date('Y-m-d H:i:s ') . print_r($o, 1) . PHP_EOL;
}

po("Timeout is: " . $cb->getTimeout() / 1000000);

// generate/update 5 documents so the view is forced
// to update itself and incur a 5 second wait.
foreach (range(1, 5) as $id) {
    po(
        $cb->set($id, json_encode(array(
            'data' => $id,
            'rand' => rand(0, 999999)
        )))
    );
}

po('Fetch View');
po($cb->view('slow', 'slow', array(
    'stale' => 'false'
)));
And running it:
$ php cli/debug/cbisalive.php
2013-01-02 14:57:17 Timeout is: 2.5
2013-01-02 14:57:17 965089423903948800
2013-01-02 14:57:17 11322296543019008000
2013-01-02 14:57:17 2148466301088563200
2013-01-02 14:57:17 14195886674886000640
2013-01-02 14:57:17 18279059448368988160
2013-01-02 14:57:17 Fetch View
2013-01-02 14:57:22 Array
(
    [total_rows] => 5
    [rows] => Array
        (
            [0] => Array
                (
                    [id] => 1
                    [key] => 1
                    [value] =>
                )

            [1] => Array
                (
                    [id] => 2
                    [key] => 2
                    [value] =>
                )

            [2] => Array
                (
                    [id] => 3
                    [key] => 3
                    [value] =>
                )

            [3] => Array
                (
                    [id] => 4
                    [key] => 4
                    [value] =>
                )

            [4] => Array
                (
                    [id] => 5
                    [key] => 5
                    [value] =>
                )

        )

)
You can pretty clearly see that the view actually took 5 seconds to come back. I would have expected the operation to fail after 2.5 seconds with an "operation timed out" exception. I can't figure out a way to delay a regular "fetch" command, so that's a bit harder to test.
In my mind, this seems like a bug, or at least a feature request. What I'd expect to see is three timeouts: connection, idle, and command.
connection
- specified in ini or constructor
- when performing the initial Couchbase communication steps, everything should complete within X milliseconds or throw a CouchbaseConnectionTimeoutException
idle
- specified in ini or as a method call
- consider the connection broken if there has been no activity for X milliseconds and throw a CouchbaseIdleTimeoutException
command
- specified in ini or as a method call
- consider the connection broken if a command takes longer than X milliseconds and throw a CouchbaseCommandTimeoutException
Having these three separate timeouts would give application developers full control of the timings and the ability to fail gracefully in situations where speed is of utmost importance. It would also make it easier to write long-running scripts without enduring long waits on connection-check commands (e.g. connect=2s, idle=600s, command=2s), while still ensuring that frontend applications (e.g. connect=1s, idle=10s, command=1s) can stay snappy and catch any issues through exception handling instead. (I build both user-facing PHP pages and background scripts -- the user-facing stuff must be fast, but I don't care if a background script takes ten seconds.)
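To make that concrete, here's a purely hypothetical sketch of how the frontend profile might look. None of these setter methods or exception classes exist today; they only illustrate the shape of what I'm asking for:
<?php
// Hypothetical API sketch -- these setters and exception classes are proposals,
// not anything the extension currently provides.
$cb = new Couchbase('127.0.0.1:8091', '', '', 'timeout');
$cb->setConnectionTimeout(1000); // ms: initial connect must finish within 1 second
$cb->setIdleTimeout(10000);      // ms: treat the connection as broken after 10s of inactivity
$cb->setCommandTimeout(1000);    // ms: any single command must finish within 1 second

try {
    $result = $cb->view('slow', 'slow', array('stale' => 'false'));
} catch (CouchbaseCommandTimeoutException $e) {
    // the view blew the 1 second budget -- fail fast and degrade gracefully
    $result = null;
}
A background script could simply pick much larger values and never have to worry about it.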
In all honesty, I'm not even sure the idle timeout makes sense -- why would you ever want the connection to close simply because the script hasn't used it for a while? I can't think of a good reason, but I included it here for completeness because most systems (MySQL, for example) have this option. I know I'd probably always set it quite high, if not to "never close" (idle=0).
Along with CouchbaseIdleTimeoutException, I'd also expect to see something like CouchbaseConnectionLostException -- the meaning being slightly different. CouchbaseIdleTimeoutException implies that the script took too long, hit its own self-imposed limit, and the connection was closed. CouchbaseConnectionLostException would mean that upon attempting to send a command we found the socket had been closed, whether from network loss or something like a server shutdown.
Finally, I suspect this isn't specifically an issue with the PHP extension, but rather with the libcouchbase code, as (from what I can tell) the PHP extension just passes the timeout value through and doesn't do any fancy footwork itself.
I hope I haven't gone off the rails too much with this!