What version of Vert.x are you running?
Also, can you please post a log of the exceptions you're getting?
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:186)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:59)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
1) What have you set max file handles to on the server?
2) Take a look at http://vertx.io/manual.html#performance-tuning
3) What OS are you running?
--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To view this discussion on the web, visit https://groups.google.com/d/msg/vertx/-/TSyqUtBoQMoJ.
Check the setting of ulimit if your project runs on Linux.
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1589248
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1589248
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
On 29/11/12 14:47, 赵普明 wrote:
> Update: I've configured the server with setTCPKeepAlive(false),
TCP keep alive is not related to HTTP keep alive
I've changed the server settings to:
server.setTCPKeepAlive(true);
server.setReuseAddress(true);
And now it seems good. After a night, the ESTABLISHED connections (file descriptors) are consistently around 55,000 now.
But I don't know what this number will become when the traffic goes up (we are currently running at about 8000 requests/minute, with 2 ad placements).
As I suppressed the "Connection reset by peer" exceptions by modifying the vert.x core code, we don't currently know whether those exceptions are still occurring. I'll report on that later.
But I still don't know what is actually going on. My guess is that there should only be several hundred connections actually active, but the number is more than 50,000.
Do you have any suggestion?
Best Regards
Puming
On Friday, November 30, 2012 at 12:19:05 UTC+8, Tim Fox wrote:
Are most of the connections in TIME_WAIT state?
If so, it would be normal to see a lot of them.
The way TCP works is that even after you've closed TCP connections the OS will keep them open a while longer (default is 2 minutes) to catch any stray packets that might arrive.
Setting TCP reuse address allows the server to reuse one of these addresses - which is why it probably helps you.
Also you can reduce the timeout at the OS level.
So... maths time. If you are getting 8000 connections / minute, then you should expect to see an average of 16000 connections at steady state.
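That estimate is just Little's law: connections sitting in a state = arrival rate x time spent in that state. A quick sketch using the figures from this thread (8000 connections/minute, and the 2-minute TIME_WAIT linger mentioned above, which is OS-dependent):

```java
// Sketch of the steady-state arithmetic above (Little's law).
// The 8000/min rate and 2-minute TIME_WAIT default come from this thread.
class TimeWaitEstimate {
    // average connections lingering in TIME_WAIT = arrival rate * linger time
    static long estimate(long connectionsPerMinute, long timeWaitMinutes) {
        return connectionsPerMinute * timeWaitMinutes;
    }

    public static void main(String[] args) {
        System.out.println(estimate(8000, 2)); // prints 16000
    }
}
```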
The first thing I would do is add some logging to make sure that Vert.x is _actually_ closing the connection - just log out the call to channel.close in Vert.x and keep a count of connections in an AtomicLong (or whatever).
Once you've verified that Vert.x is actually closing connections properly, then it's probably just a matter of configuring your OS appropriately.
Also... as I mentioned before, you should increase your accept backlog (SYN queue) as specified in the performance chapter, if you haven't done so already, or you might get refused connections at peak.
Regarding the "connection reset by peer" exceptions on the server - this is normal - you get this when the other side of the connection (in this case the browser) closes it.
On 30/11/12 09:47, 赵普明 wrote:
> On Friday, November 30, 2012 at 4:50:57 UTC+8, Tim Fox wrote:
>> Are most of the connections in TIME_WAIT state?
> No, most of them are ESTABLISHED.
That's odd, if they are ESTABLISHED then setting reuse address shouldn't
help you at all.
We could consider adding a timeout to the Vert.x API, but this kind of
thing would be easy to add in your own code.
I suggest putting the checks in your own code for now, then open a
github issue to have a feature added once you've fixed your issue.
For now, I recommend keeping a counter in your code that gets
incremented when a request arrives in the request handler, and gets
decremented every time response.end gets called (search in your code for
this).
Then have a vert.x periodic timer that logs out the number of
connections every few seconds.
You can wrap this up easily in a small utility class.
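A minimal sketch of the utility class Tim describes, using only java.util.concurrent. The wiring is assumed: call onRequest() at the top of your request handler, onResponseEnd() wherever response.end is called, and log inFlight() from a vert.x periodic timer (e.g. vertx.setPeriodic):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the in-flight request counter suggested above.
// Increment on request arrival, decrement when the response is ended,
// and log the difference periodically to spot leaked connections.
class RequestCounter {
    private final AtomicLong inFlight = new AtomicLong();

    void onRequest()     { inFlight.incrementAndGet(); }
    void onResponseEnd() { inFlight.decrementAndGet(); }
    long inFlight()      { return inFlight.get(); }
}
```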
On Friday, November 30, 2012 at 6:25:18 PM UTC+8, Tim Fox wrote:
> We could consider adding a timeout to the Vert.x API, but this kind of thing would be easy to add in your own code.
req.response.written and req.response.closed are not public.
Just done that. But it seems my code covers nearly all requests. The count difference is nearly zero (about -10, and constant).
And I counted the number of "reset by peer" exceptions, and that number seems to concur with the number of excess open ESTABLISHED connections.
I read the vert.x code and it seems to call conn.close() when that exception is raised.
Roughly 25 exceptions per 1000 requests - and that number is close to how we pile up several hundred connections per minute.
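A quick sanity check on those figures (the ~8000 requests/minute rate and the 25-in-1000 reset ratio both come from the posts above):

```java
// Sketch: leaked connections per minute = request rate * reset ratio.
// Figures are taken from this thread and are approximate.
class LeakRate {
    static long leakedPerMinute(long requestsPerMinute, double resetsPerRequest) {
        return Math.round(requestsPerMinute * resetsPerRequest);
    }

    public static void main(String[] args) {
        System.out.println(leakedPerMinute(8000, 25.0 / 1000)); // prints 200
    }
}
```

That is indeed on the order of "several hundred connections per minute", consistent with the observation above.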
Can you also check with master?
I just want to be sure you're not running into something that's already been fixed.
vertx.setTimer(2000L, new Handler<Long>() {
    @Override
    public void handle(Long event) {
        try {
            req.response.end();
        } catch (Exception e) {
            // ignore - the response may already have been ended or closed
        }
    }
});
Also can I request again an example of the logs containing the "connection reset by peer" exceptions?
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:186)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:59)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
On 30/11/2012 11:34, 赵普明 wrote:
I've finally managed to reproduce the "connection reset by peer" exception.
To do this the browser needs to be Internet Explorer - it seems that if IE creates a keep-alive connection to the server and the user then shuts down IE without closing tabs first, IE does not properly terminate the connections - resulting in the exceptions on the server.
I will look into this more later on today.
private static AtomicLong reqCount = new AtomicLong();
private static AtomicLong s24Count = new AtomicLong();
private static AtomicLong s25Count = new AtomicLong();
private static AtomicLong badCount = new AtomicLong();

@Override
public void handle(final HttpServerRequest req) {
    if (reqCount.incrementAndGet() % 100 == 0) {
        logger.info("ReqCount:" + reqCount.get());
    }
    String s24 = "...s24code....";
    String s25 = "...s25code....";
    String pid = req.params().get("pid");
    req.response.headers().put("Connection", "close");
    if ("24".equals(pid)) {
        if (s24Count.incrementAndGet() % 100 == 0) {
            logger.info("S24Count:" + s24Count.get());
        }
        req.response.end(s24);
    } else if ("25".equals(pid)) {
        if (s25Count.incrementAndGet() % 100 == 0) {
            logger.info("S25Count:" + s25Count.get());
        }
        req.response.end(s25);
    } else {
        if (badCount.incrementAndGet() % 100 == 0) {
            logger.info("badCount:" + badCount.get());
        }
        req.response.statusCode = 400;
        req.response.end();
    }
}
On 30/11/12 12:49, 赵普明 wrote:
> java.io.IOException: Connection reset by peer
What's very odd is you appear to have connections still established
*after* the client has closed them abruptly. This doesn't make a lot of
sense to me - if the connection is closed then you won't see it on the
list of established connections.
I've verified this locally - if I cause connection reset by peer on a
simple Vert.x Http Server example by browsing to it using IE then
closing IE with a tab open, I do get "connection reset by peer", but the
connection is closed (as expected).
Can you verify on netstat that the established connections are really to
Vert.x, not to some other server?
Can you try a simple test:
1. Run your server
2. Start IE, and point it at your advert page
3. Use netstat to verify you have an established connection
4. Close IE (or terminate it using task manager)
5. Verify you get a "connection reset by peer" on the server.
6. Use netstat again to see if you still have an established connection
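For steps 3 and 6, a small helper to tally netstat output by TCP state can make the before/after comparison quicker. This is a sketch: it assumes `netstat -ant`-style lines like the one quoted elsewhere in this thread, with the state in the last column:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: count connections per TCP state from `netstat -ant`-style output,
// where the state (ESTABLISHED, TIME_WAIT, ...) is the last whitespace-
// separated column. Feed it the command's output line by line.
class NetstatTally {
    static Map<String, Integer> tally(String[] lines) {
        Map<String, Integer> byState = new TreeMap<>();
        for (String line : lines) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length < 6 || !cols[0].startsWith("tcp")) continue; // skip headers
            byState.merge(cols[cols.length - 1], 1, Integer::sum);
        }
        return byState;
    }

    public static void main(String[] args) {
        String[] sample = {
            "tcp 0 0 ::ffff:10.0.0.1:80 ::ffff:171.120.97.95:35669 ESTABLISHED",
            "tcp 0 0 ::ffff:10.0.0.1:80 ::ffff:171.120.97.96:41001 TIME_WAIT",
            "tcp 0 0 ::ffff:10.0.0.1:80 ::ffff:171.120.97.97:41002 TIME_WAIT"
        };
        System.out.println(tally(sample)); // {ESTABLISHED=1, TIME_WAIT=2}
    }
}
```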
Hi:
We have a web service written in vert.x and it's now in the beta test phase. Today we got our first real-world data flowing into the server, but unfortunately it began to show a lot of "Connection reset by peer" exceptions,
and later was hogged into a nearly frozen state, complaining about "Too many open files". After checking netstat, we found that the server seems to be holding many connections like this (the IP of the server here is fake):
> tcp 0 0 ::ffff:118.78.182.38:80 ::ffff:171.120.97.95:35669 ESTABLISHED
and the number of open connections has reached the max-file limit.
I don't understand where those "Connection reset by peer" exceptions come from.
Our server is serving ads on a website, and all ad requests come from browsers over the Internet. Today we saw roughly 10000 page views per minute, which should be reasonably low. But there still seem to be a LOT of "Connection reset by peer" exceptions.
I was under the impression that vert.x/NIO can serve 10000 requests per SECOND without any problem. Maybe the problem here is that every request coming from a browser needs a different connection? Is NIO OK with that?
Our typical response time is 100~200ms.
So here are three questions I'd like to ask:
1. Does the server automatically close a connection when it hits a "Connection reset by peer"? If not, how do we close it in the code?
2. Can anyone shed some light on why we've got so many peer resets? How can we avoid that? How can we scale up?
3. Does this have anything to do with keep-alive? Is a connection in vert.x keep-alive by default?
If it is, does that mean we need to manually close a connection after sending a response?
On Thursday, November 29, 2012 1:26:26 PM UTC, 赵普明 wrote:
> 3. Does this have anything to do with keep-alive? Is a connection in vert.x keep-alive by default?
Whether a connection is keep alive or not is determined by what the browser sends. If the browser sends an HTTP 1.0 request and there is a header Connection: Keep-Alive then it's keep alive; if it's an HTTP 1.1 request it's keep alive by default.
> If it is, does that mean we need to manually close a connection after sending a response?
A keep alive connection doesn't automatically get closed after sending a response - otherwise it would negate the point of keep alive (which is browsers reusing the connection to send further requests)! The connection will remain open until the client closes it (e.g. they close their tab or browser, or maybe the browser decides to close it for some other reason), or you close it.
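The rule described here can be captured in a small predicate. This is a sketch for illustration only - the class and method names are made up, and header matching is simplified to an exact case-insensitive comparison:

```java
// Sketch of the keep-alive rule described above: HTTP/1.1 defaults to
// keep-alive unless "Connection: close" is sent; HTTP/1.0 is keep-alive
// only with an explicit "Connection: Keep-Alive" header.
class KeepAlive {
    static boolean isKeepAlive(String httpVersion, String connectionHeader) {
        String conn = connectionHeader == null ? "" : connectionHeader.trim();
        if ("HTTP/1.1".equals(httpVersion)) {
            return !"close".equalsIgnoreCase(conn);
        }
        return "keep-alive".equalsIgnoreCase(conn); // HTTP/1.0 case
    }

    public static void main(String[] args) {
        System.out.println(isKeepAlive("HTTP/1.1", null));         // true
        System.out.println(isKeepAlive("HTTP/1.1", "close"));      // false
        System.out.println(isKeepAlive("HTTP/1.0", "Keep-Alive")); // true
        System.out.println(isKeepAlive("HTTP/1.0", null));         // false
    }
}
```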
vertx.setTimer(2000L, new Handler<Long>() {
    @Override
    public void handle(Long event) {
        req.response.close();
    }
});
On Friday, November 30, 2012 at 11:47:45 UTC+8, Tim Fox wrote:
Now I understand what you mean. Here is a recap of our situation:
1. First scenario:
We don't send Connection: close,
I can't explain the second thing....
Which scenario do you think is good for our situation?
If you don't want the connection to remain open you should close it immediately after sending your response, or perhaps set a timer to close it after a timeout.
(All our requests are short connections coming from browsers - they are ads - so keep-alive is meaningless to us.)
I really hope this problem can be solved, or else our service cannot go online and the whole project would fail...
Thank you very much.
Best Regards
Puming.
Or can we configure the server not to use keep-alive?
I wouldn't bother with that. Just make sure you call response.close() everywhere in the code where you call response.end().
public static ChannelFuture close(Channel channel) {
    ChannelFuture future = channel.getCloseFuture();
    channel.getPipeline().sendDownstream(new DownstreamChannelStateEvent(
            channel, future, ChannelState.OPEN, Boolean.FALSE));
    return future;
}
On Saturday, December 1, 2012 at 8:40:15 PM UTC+8, Tim Fox wrote:
I suspect there is something else going on in your code or environment, but without seeing your code it's very hard to tell.
Have you considered the possibility that 3000 connections might be normal? Even if you are quickly closing each connection you're always going to have a certain number open at any one time.
I don't suggest using the vert.x HttpServer directly as a public web server. We use nginx as the web server and load balancer; nginx connects to several Java web servers such as Jetty or Tomcat - there you can use the vert.x HttpServer.
Yes, we have this as a backup plan. But we thought vert.x could handle these situations well even without nginx in front. Theoretically connections should not be a hard problem to solve; otherwise we might not have chosen vert.x/netty in the first place. Our company has been using lighttpd as a server, and earlier this year it was decided that we should invest more in JVM technologies; using vert.x is one of those moves. Unfortunately I was not very experienced with highly concurrent servers (I was a website dev), and, thinking that vert.x/netty would handle everything as well as lighttpd, I did not foresee this problem. As you may have noticed, I didn't know many details of the HTTP/TCP connection process.
> --
> Tim Fox
> Vert.x - effortless polyglot asynchronous application development
> http://vertx.io
> twitter: @timfox
One other thing that springs to mind:
Are you using a RouteMatcher in your code?
If so, can you check that you are setting the noMatch handler and ending + closing the response in there too?
@Override
public void handle(HttpServerRequest req) {
    String path = req.path;
    if ('/' == path.charAt(0)) {
        path = path.substring(1);
    }
    String allpath = Paths.concat(this.rootDir, path);
    HttpServerResponse response = req.response;
    response.putHeader("Expires", "Thu Jan 01 2099 00:00:00 GMT");
    response.sendFile(allpath);
    response.close();
}
If not, then any keep alive connections to urls that you don't handle will leave their connection open.
On Monday, December 3, 2012 at 7:05:06 PM UTC+8, Tim Fox wrote:
> One other thing that springs to mind:
> Are you using a RouteMatcher in your code?
> If so, can you check that you are setting the noMatch handler and ending + closing the response in there too?
I'm using a RouteMatcher, and noMatch is not set. We have a sendFile handler that matches "/.*" at the end of the routes.
Hi Tim:
Sorry, I was not able to connect to the proxy server I was using in the last two days, and so could not log into Google Groups, which is blocked by the Chinese government :-(.
We've found a workaround to get rid of this problem by using netty's IdleStateHandler, closing any connection that has not been reading/writing for 30 seconds. Combined with
response.close(), the connections are not leaking anymore.
I've tested with a simple netty program and it seems to leak connections as well. I'd like to test with your patch, but
unfortunately, with the deadline coming up quickly, we have to deal with other problems at this time.
The code looks like:
@Override
public void handle(HttpServerRequest req) {
    String path = req.path;
    if ('/' == path.charAt(0)) {
        path = path.substring(1);
    }
    String allpath = Paths.concat(this.rootDir, path);
    HttpServerResponse response = req.response;
    response.putHeader("Expires", "Thu Jan 01 2099 00:00:00 GMT");
    response.sendFile(allpath);
    response.close();
}
where Paths.concat is a utility function that concatenates paths.
I don't know how response.sendFile() treats paths that have no matching file.
That might really be the cause of our problem. I'll test that later.
> If not, then any keep alive connections to urls that you don't handle will leave their connection open.
So if there is no matching route for a connection, wouldn't it be better to just close it? Keep-alive does not seem useful here.