Hi,
It's not quite the example you described, but here is a demo I've made using Vert.x to replicate the C10K problem with WebSockets. The server handles 10K connections and publishes the current time every 10 seconds:
package io.vertx.demos;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.sockjs.*;

public class WSServer extends AbstractVerticle {

  public static void main(String[] args) {
    Vertx.vertx().deployVerticle(new WSServer());
  }

  @Override
  public void start() {
    final Router app = Router.router(vertx);

    // the server never reads, it only sends; allow clients up to
    // 5 minutes between pings before the bridge drops the connection
    final BridgeOptions options = new BridgeOptions()
        .addOutboundPermitted(new PermittedOptions().setAddress("time"))
        .setPingTimeout(5 * 60 * 1000);

    app.route("/eventbus/*").handler(SockJSHandler.create(vertx).bridge(options));

    vertx.createHttpServer().requestHandler(app::accept).listen(8080, res -> {
      if (res.failed()) {
        throw new RuntimeException(res.cause());
      }
      // publish a new message every 10 seconds
      vertx.setPeriodic(10000L, t -> {
        vertx.eventBus().publish("time", new JsonObject().put("unixtime", System.currentTimeMillis()));
      });
      System.out.println("Server ready!");
    });
  }
}
And the client app creates 10K WebSockets and listens for the events:
package io.vertx.demos;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.http.WebSocket;
import io.vertx.core.http.WebSocketFrame;
import io.vertx.core.json.JsonObject;

public class WSClient extends AbstractVerticle {

  public static void main(String[] args) {
    Vertx.vertx().deployVerticle(new WSClient());
  }

  private static final String PING = new JsonObject().put("type", "ping").encode();
  private static final String SUBSCRIBE_TIME = new JsonObject().put("type", "register").put("address", "time").encode();

  @Override
  public void start() {
    // 10k problem, no big deal!
    final WebSocket[] sockets = new WebSocket[10 * 1024];

    for (int i = 0; i < sockets.length; i++) {
      final int id = i;
      vertx.createHttpClient().websocket(8080, "localhost", "/eventbus/websocket", ws -> {
        sockets[id] = ws;
        System.out.println("Connected " + id);
        // the bridge sends text frames, so read them as text
        ws.frameHandler(frame -> System.out.println("Received message: " + frame.textData()));
        ws.exceptionHandler(Throwable::printStackTrace);
        ws.endHandler(v -> {
          System.err.println("Connection ended: " + id);
          sockets[id] = null;
        });
        // subscribe to the "time" address
        ws.writeFrame(WebSocketFrame.textFrame(SUBSCRIBE_TIME, true));
      });
    }

    // keep the sockets open by sending a ping every 2.5 minutes;
    // skip slots that have not connected yet or have already closed
    vertx.setPeriodic(5 * 60 * 1000 / 2, t -> {
      for (WebSocket socket : sockets) {
        if (socket != null) {
          socket.writeFrame(WebSocketFrame.textFrame(PING, true));
        }
      }
    });
  }
}
Now, I run this on a Raspberry Pi 2 (the old one) and it works fine. Since both the server and the client run on the same board, it is in fact handling 20K sockets: 10K inbound and 10K outbound. In terms of horizontal scalability, you can imagine that scaling the eventbus should not be limited by the number of sockets. If a small RPi2 can handle 20K with low CPU usage (it never goes above 50% at peak), on a bigger server you should get far better results.
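To put that in perspective, here is a quick back-of-the-envelope sketch of the message rates this demo generates, using the 10-second publish interval and the 2.5-minute ping interval from the code above (the class name is just illustrative):

```java
import java.util.Locale;

public class RateEstimate {
    public static void main(String[] args) {
        int clients = 10 * 1024;            // sockets held open by WSClient
        double publishIntervalSec = 10.0;   // WSServer publishes "time" every 10 s
        double pingIntervalSec = 150.0;     // each client pings every 2.5 min

        double outboundPerSec = clients / publishIntervalSec; // server -> clients
        double inboundPerSec = clients / pingIntervalSec;     // client pings -> server

        System.out.printf(Locale.ROOT, "outbound=%.1f msg/s, inbound pings=%.1f msg/s%n",
                outboundPerSec, inboundPerSec);
    }
}
```

That works out to roughly 1,100 small messages per second in total, which is consistent with the modest CPU usage observed.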
Of course, this example is too simple and there is no real computation happening. In a real-world scenario you need to consider that the sockets may require some CPU, but your services on the eventbus might become the CPU bottleneck!
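In Vert.x the usual answer is to run such CPU-heavy services in worker verticles (or via executeBlocking) so the event loop stays free for socket I/O. The general pattern, sketched here with plain java.util.concurrent rather than the Vert.x API, and with a made-up workload standing in for a real service:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OffloadSketch {
    public static void main(String[] args) throws Exception {
        // a small worker pool standing in for Vert.x worker verticles
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // the CPU-heavy "service" runs off the event loop thread
        Future<Long> result = workers.submit(() -> {
            long sum = 0;
            for (long i = 0; i < 1_000_000; i++) {
                sum += i;
            }
            return sum;
        });

        // the event loop would keep serving sockets while the work runs;
        // once done, the result would be sent back over the eventbus
        System.out.println("result=" + result.get()); // prints result=499999500000
        workers.shutdown();
    }
}
```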
Cheers,
Paulo
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 24792 34.1 2.8 1409560 227024 pts/4 Sl+ 12:01 0:05 node index.js
root 24335 17.6 4.4 5312604 348860 pts/0 Sl+ 11:59 0:12 java -cp target/reactive-1-fat.jar io.vertx.demos.WSServer
root 24379 33.4 3.5 5246116 278532 pts/1 Sl+ 11:59 0:18 java -cp target/reactive-1-fat.jar io.vertx.demos.WSClient
Hi Scott,
I don’t want to start a flamewar here, but you’re making an assumption; memory usage varies from case to case. For example, look at the demo I posted, assuming the server is the same (the Vert.x code above).
If you run the client code (source above) against an equivalent Node.js implementation:
const WebSocket = require('ws');

const PING = {type: 'ping'};
const SUBSCRIBE_TIME = {type: 'register', address: 'time'};

const len = 10 * 1024;
const sockets = [];

for (let i = 0; i < len; i++) {
  const id = i;
  const ws = new WebSocket('ws://localhost:8080/eventbus/websocket');
  sockets[id] = ws;

  ws.on('open', function () {
    console.log('Connected ' + id);
    // subscribe to the "time" address
    ws.send(JSON.stringify(SUBSCRIBE_TIME));
  });

  ws.on('message', function (frame) {
    console.log(JSON.stringify(frame));
  });

  ws.on('error', function (error) {
    console.log(error);
  });

  ws.on('close', function () {
    console.log('Connection ended: ' + id);
    sockets[id] = null;
  });
}

// keep the sockets open by sending a ping every 2.5 minutes
setInterval(function () {
  for (let i = 0; i < len; i++) {
    if (sockets[i]) {
      sockets[i].send(JSON.stringify(PING));
    }
  }
}, 5 * 60 * 1000 / 2);
Please note that I didn’t tune either the JVM or Node; it’s all defaults. If I run both you can see:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 24792 34.1 2.8 1409560 227024 pts/4 Sl+ 12:01 0:05 node index.js
root 24379 33.4 3.5 5246116 278532 pts/1 Sl+ 11:59 0:18 java -cp target/reactive-1-fat.jar io.vertx.demos.WSClient
As you can see they are pretty much on par: Node uses slightly less memory (227,024 KB vs 278,532 KB RSS) while Vert.x uses slightly less CPU (33.4% vs 34.1%).
My 2cts: don’t trust everything you read on the internet, and always measure things yourself, since you know the exact problem you’re trying to solve!
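For instance, besides ps you can ask the JVM itself what it is using; a minimal sketch (the class name is illustrative, and the numbers will of course differ per run):

```java
public class MeasureHeap {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // heap actually in use right now, in megabytes
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("heap used: " + usedMb + " MB of max " + maxMb + " MB");
    }
}
```

Keep in mind that the RSS column from ps also counts metaspace, thread stacks, and direct buffers, which is part of why the Java numbers in ps look bigger than the heap alone.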
Cheers,
Paulo