Note that the 38k figure is the peak (i.e. reading off the throughput graph over time) rather than the mean; the means were in the 27k-29k range.
Yeah here's my info:
MacBook Pro (Retina), 2.7 GHz Core i7 (4 cores, 8 virtual cores), 16 GB memory, SSD, OS X 10.9.4.
You've seen my simulation. I use the default gatling config. I don't think I've really adjusted much on OSX beyond what you guys suggest for open file descriptors and whatnot.
The service itself is actually really boring: Jetty 8 plus a servlet, not built on any framework, with no SSL. Unfortunately it's tightly bound to our internal service infrastructure, so I can't usefully put up the Jetty part of the extraction, but here's the servlet:
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import java.io.IOException;
import java.io.PrintWriter;

/**
 * An HTTP servlet which outputs a {@code text/plain} {@code "pong"} response.
 */
public class PingServlet extends HttpServlet {
    private static final String CONTENT_TYPE = "text/plain";
    private static final String CONTENT = "pong";

    @Override
    protected void doGet(HttpServletRequest req,
                         HttpServletResponse resp) throws ServletException, IOException {
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.setHeader("Cache-Control", "must-revalidate,no-cache,no-store");
        resp.setContentType(CONTENT_TYPE);
        final PrintWriter writer = resp.getWriter();
        try {
            writer.println(CONTENT);
        } finally {
            writer.close();
        }
    }
}
Here are the settings on the embedded Jetty 8 we use:
val acceptors = Some(1)
val acceptQueueSize = None
val minThreads = Some(3)
val maxThreads = Some(10)
with no real access logging going on. If you are using any sort of access or per-request logging, make sure to turn on async flushing / async appenders, or you'll probably spend a bunch of time waiting on disk syncs.
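The async-appender idea above can be sketched with stdlib Java: request threads enqueue log lines without blocking, and a single background thread drains the queue. The class and method names here are my own invention for illustration; in practice you'd reach for logback's AsyncAppender or log4j2's async loggers rather than rolling your own.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

/** Toy async appender: callers never wait on the disk. */
public class AsyncLogSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(8192);
    private final StringBuilder sink = new StringBuilder(); // stands in for a buffered file writer
    private final Thread writer;
    private volatile boolean running = true;

    public AsyncLogSketch() {
        writer = new Thread(() -> {
            // Drain until shut down, then flush whatever is left in the queue.
            while (running || !queue.isEmpty()) {
                String line;
                try {
                    line = queue.poll(10, TimeUnit.MILLISECONDS);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                if (line != null) {
                    sink.append(line).append('\n'); // real impl: buffered write to the log file
                }
            }
        });
        writer.start();
    }

    /** Non-blocking from the request thread's point of view; drops the line on overflow. */
    public boolean log(String line) {
        return queue.offer(line);
    }

    /** Stops the writer thread and returns everything it flushed. */
    public String close() {
        running = false;
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sink.toString();
    }

    public static void main(String[] args) {
        AsyncLogSketch log = new AsyncLogSketch();
        log.log("GET /ping 200");
        System.out.print(log.close());
    }
}
```

The bounded queue matters: an unbounded one just hides the disk problem until you run out of heap.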
We've also had some issues with back pressure for async servlets, where the server lets a lot of work into the system and then proceeds to thrash on execution contexts. You'll notice my minThreads and maxThreads above are pretty low; whatever mechanism Spray (or whichever framework you use) provides for back pressure, make sure you use it. I tweaked minThreads/maxThreads up to 8/16 and the throughput stayed about the same.
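One way to get that back pressure outside any framework is a small, bounded thread pool. This is a sketch under my own naming, not our actual setup: the bounded queue caps the backlog, and CallerRunsPolicy pushes overflow work back onto the submitting thread, slowing intake instead of thrashing.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/** Sketch: a bounded executor that applies back pressure when saturated. */
public class BoundedPoolSketch {
    public static ThreadPoolExecutor newBoundedPool(int min, int max, int queueSize) {
        return new ThreadPoolExecutor(
                min, max,
                60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(queueSize),        // bounded: no unbounded backlog
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the caller, throttling it
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newBoundedPool(3, 10, 16); // mirrors the min/max above
        pool.shutdown();
    }
}
```

The default policy (AbortPolicy) throws RejectedExecutionException instead, which is also a legitimate form of back pressure if you'd rather shed load than slow it.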