What causes client(s) to get new message(s)?
function shoutcast() {
    // Random token so the GET request is never served from cache
    var _sRandom = Math.random();
    var send = $("#single").val();
    var url = "getAndSetMessages.php?action=setMessages&value=" +
              encodeURIComponent(send) + "&_r=" + _sRandom;
    $.ajax({
        type: 'GET',
        url: url,
        timeout: 2000,
        success: function (sonuc) {
            // Append the server's response to the message area
            $('.sonuc').append(sonuc);
        }
    });
    return true;
}
<?php
include "memcache.php"; // provides getCache(), setCache() and $memcache

$room     = "roomName";
$username = "Username";

if ($_GET['action'] == "getMessages") {
    $data = getCache($room);
    // var_dump($data);
    if ($data !== false) {
        if ($data[2] == false) {
            // $data = array(username, message, delivered-flag)
            echo htmlspecialchars("$data[0]: $data[1]") . "<br /> <br />";
            $data[2] = true;
            sleep(1);
            $memcache->delete($room);
        }
    }
} else if ($_GET['action'] == "setMessages") {
    $data = array($username, $_GET['value'], false);
    setCache($room, $data);
}
?>
It could be made to work, but this is nowhere near an ideal design: memcached wasn't really built for this use case, and you're going to have to jump through some hoops to make it behave.
With this design you have to store the entire message list for a room under a single key. That means that to add one message you read the (possibly large) value over the wire, deserialize it, append the new message, reserialize it, and send the whole thing back. That's an order of magnitude more traffic than necessary, and it isn't thread-safe either. If you're going to do it this way, at least use memcached's CAS (check-and-set) operations so concurrent writers can't clobber each other. CAS won't reduce the workload, though; in fact it will make things worse under high traffic, because you'll see a fair number of failed sets and retries when multiple clients modify the same room concurrently. A rough sketch of what that looks like is below.
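A minimal sketch of such a CAS retry loop, assuming the php-memcached extension (the Memcached class). The exact way the CAS token is fetched differs between extension versions; this uses the GET_EXTENDED flag from the 3.x API, and the key/function names are just for illustration.

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function appendMessage(Memcached $mc, $room, $message) {
    do {
        // GET_EXTENDED returns array('value' => ..., 'cas' => ..., 'flags' => ...)
        $result = $mc->get($room, null, Memcached::GET_EXTENDED);
        if ($result === false) {
            // Key doesn't exist yet: try to create it atomically.
            if ($mc->add($room, array($message))) {
                return true;
            }
            continue; // someone else created it first, loop and retry
        }
        $messages   = $result['value'];
        $messages[] = $message;
        // cas() fails (and we retry the whole read-modify-write) if another
        // client changed the key since our get().
    } while (!$mc->cas($result['cas'], $room, $messages));
    return true;
}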
Presumably you're going to cap the number of messages kept for any given room at some maximum N. Given that, you could instead create N slots per room (room:0, room:1, ..., room:N-1) and maintain a counter I that tracks the current index, letting you treat the slots as a circular buffer. To add a message, you attempt to update room:(I mod N) with the message and, if that succeeds, incr I. Every client then keeps track of the last I it has seen for each room it cares about: if I' == I there are no new messages; otherwise it only needs a single multiget on the keys between I' mod N and I mod N to pull down everything new. A sketch of this follows.
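For illustration only, here's roughly how that circular buffer could look with php-memcached. N, the "room:<slot>"/"room:ctr" key scheme and the helper names are assumptions, and the set/increment pair is not fully race-free; it's a sketch of the idea, not a drop-in implementation.

<?php
define('N', 100); // max messages kept per room

function addMessage(Memcached $mc, $room, $message) {
    // Current write index I; create the counter if the room is new.
    $i = $mc->get("$room:ctr");
    if ($i === false) {
        $mc->add("$room:ctr", 0);
        $i = 0;
    }
    // Write the message into slot (I mod N), then bump the counter.
    // A production version would need to guard against two clients
    // grabbing the same slot (e.g. by CAS-ing the counter).
    if ($mc->set("$room:" . ($i % N), $message)) {
        $mc->increment("$room:ctr");
    }
}

function getNewMessages(Memcached $mc, $room, $lastSeen) {
    $i = (int) $mc->get("$room:ctr");
    if ($i <= $lastSeen) {
        return array(); // nothing new since the client's last index I'
    }
    // Only the last N messages can still be in the buffer.
    $from = max($lastSeen, $i - N);
    $keys = array();
    for ($k = $from; $k < $i; $k++) {
        $keys[] = "$room:" . ($k % N);
    }
    // One multiget for all new slots instead of one round trip per message.
    return $mc->getMulti($keys);
}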
That said, this is still not really ideal. I'd look at other projects like Redis (each room is a list: to add a message you just do a PUSH and a TRIM, which is basically a formalized, persistent version of the design above) or Kestrel (each room is a queue, and to listen to a room each client just creates its own child queue; Kestrel takes care of persistence, concurrency, etc.).
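With Redis the same thing collapses to a couple of commands per message. A small sketch assuming the phpredis extension; the key name and the cap of 100 messages are made up for the example.

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$room = 'room:roomName';
$maxMessages = 100;

// Append a message and keep only the newest $maxMessages entries.
$redis->rPush($room, 'Username: hello');
$redis->lTrim($room, -$maxMessages, -1);

// Any client can read the whole (bounded) history in one call.
$messages = $redis->lRange($room, 0, -1);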
Any of these designs should work, but I really think the non-memcached ones are your best bet. Why reinvent the wheel for persistence, polling, in-memory data structures, concurrency, etc.? Let the backend do the heavy lifting and spend your time on the logic that's actually unique to your app.
--
awl