Hi, I've submitted a Jersey-related question on Stack Overflow. I'm posting it here hoping it will get answered by some of the experts on this mailing list. :) Will appreciate any hint on this. --- I have an asynchronous JAX-RS API for long-polling clients, put together with Jersey Container Servlet 2.22 and hosted on Tomcat 7. It looks similar to the snippet shown below. It works well in production. On average 150 long-polling requests are being executed at the same time, which results in almost the same number of live Tomcat HTTP connections (according to JMX metrics). For this low-traffic scenario the plain old HTTP BIO connector has been used without problems. No runtime connection leak can be detected, provided you use only managed threads :)
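A simplified sketch of the resource (not the exact production code; the path, the 30-second timeout and the static WAITING list standing in for the real event source are illustrative):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.TimeUnit;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.container.TimeoutHandler;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("events")
public class LongPollingResource {

    // Suspended responses waiting for an event; a managed worker thread
    // elsewhere picks these up and resumes them when new data arrives.
    private static final List<AsyncResponse> WAITING =
            new CopyOnWriteArrayList<AsyncResponse>();

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public void poll(@Suspended final AsyncResponse asyncResponse) {
        // Servlet-side timeout; the client-side timeout is longer than this.
        asyncResponse.setTimeout(30, TimeUnit.SECONDS);
        asyncResponse.setTimeoutHandler(new TimeoutHandler() {
            @Override
            public void handleTimeout(AsyncResponse response) {
                // No event within the window: answer 204 so the client re-polls.
                WAITING.remove(response);
                response.resume(Response.noContent().build());
            }
        });
        WAITING.add(asyncResponse);
    }
}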
The problem I'm facing is that after a successful Tomcat redeploy the number of live connections apparently increases to about 300, then to 450, and after some further redeploys it eventually hits the connector's connection limit. The clients of the API handle the redeploy by waiting for a client-side timeout (which is of course bigger than the one set on the servlet side) and then start polling the API again, but they are guaranteed to send only one request at a time. The shape of the monitoring graph of the connection count gives a hint: the connection count remains constant after undeployment, i.e. the connections are not released back to the pool. After some digging around, it's not difficult to find out by analyzing heap dumps made after a few redeployments that unreleased, suspended async responses are still being held. I started to look around the undeployment-related source code of the Jersey container, hoping that some graceful shutdown process is implemented for async requests, with cleanup actions executed at undeploy time. I had an optimistic guess that each still-suspended response would be resumed or cancelled as part of that shutdown, releasing its connection, but that does not seem to happen (a rough sketch of the kind of cleanup I had in mind is below). Any ideas on what my async setup is missing, or how Jersey can be configured to release the servlet container's connections that are still suspended at redeploy time?
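The sketch — just an idea, not something I have verified to fix the leak; the class name is made up, and Jersey may already be partly shut down by the time contextDestroyed runs:

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.ws.rs.ServiceUnavailableException;
import javax.ws.rs.container.AsyncResponse;

@WebListener
public class SuspendedResponseCleanupListener implements ServletContextListener {

    // All currently suspended responses; the resource registers each
    // AsyncResponse when it suspends and unregisters it when it resumes.
    private static final Set<AsyncResponse> SUSPENDED =
            Collections.newSetFromMap(new ConcurrentHashMap<AsyncResponse, Boolean>());

    public static void register(AsyncResponse response) {
        SUSPENDED.add(response);
    }

    public static void unregister(AsyncResponse response) {
        SUSPENDED.remove(response);
    }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Resume every still-suspended request so the container can complete
        // the async cycle and close the underlying connection before the
        // webapp is unloaded.
        for (AsyncResponse response : SUSPENDED) {
            if (response.isSuspended()) {
                response.resume(new ServiceUnavailableException());
            }
        }
        SUSPENDED.clear();
    }
}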
Hi Bela,
It seems to me that Tomcat should be responsible for releasing any lingering connections after an application is undeployed, but please, feel free to file a bug against Jersey; we can try to have a look and see if there is something more that Jersey can do in this case. Cheers, Marek
Hi, thanks for the input. Tomcat would have been a good guess, but in the meantime I've managed to reproduce the connection leak issue on Jetty 9.3.9 as well, so the async cleanup problem doesn't seem to be related to the servlet container but rather to the JAX-RS container. The simple setup I used was the following (I tried it with both Jetty and Tomcat, and both leak connections at redeploy): a simple JAX-RS resource responding to async requests after 30 seconds.
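Something along these lines (class name, path and response body are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("poll")
public class AsyncPollResource {

    private static final ScheduledExecutorService SCHEDULER =
            Executors.newScheduledThreadPool(1);

    @GET
    public void poll(@Suspended final AsyncResponse asyncResponse) {
        // Hold the request for 30 seconds, then resume it with a plain text body.
        SCHEDULER.schedule(new Runnable() {
            @Override
            public void run() {
                asyncResponse.resume("ok");
            }
        }, 30, TimeUnit.SECONDS);
    }
}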
And a simple long-polling client starting 5 instances. It keeps polling the API even after it detects that a request timed out (client-side timeout of 40 seconds).
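Roughly this, assuming the repro webapp is reachable at http://localhost:8080/asynctest/poll (the URL and the plain HttpURLConnection usage are just how I sketched it here):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class LongPollClient {

    // Base URL of the deployed repro webapp; adjust host/port/context as needed.
    private static final String POLL_URL = "http://localhost:8080/asynctest/poll";

    public static void main(String[] args) {
        // Start 5 independent pollers, matching the 5 leaked connections
        // observed per redeploy.
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    pollForever(id);
                }
            }).start();
        }
    }

    private static void pollForever(int id) {
        while (true) {
            try {
                HttpURLConnection connection =
                        (HttpURLConnection) new URL(POLL_URL).openConnection();
                // Client-side timeout, longer than the 30 s server-side delay.
                connection.setReadTimeout(40000);
                connection.setConnectTimeout(5000);
                int status = connection.getResponseCode();
                // Drain and close the body before the next poll.
                try (InputStream in = connection.getInputStream()) {
                    while (in.read() != -1) {
                        // discard
                    }
                }
                System.out.println("poller " + id + " got " + status);
            } catch (SocketTimeoutException timeout) {
                System.out.println("poller " + id + " timed out, re-polling");
            } catch (Exception e) {
                System.out.println("poller " + id + " error: " + e + ", re-polling");
            }
        }
    }
}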
When the client starts, the number of active HTTP connections reaches a certain number (container-dependent), which can be watched through the connector's connection-count metrics in JMX.
After each redeploy (i.e. touching the war file manually) the number of connections increases by 5. Another hint: the connection metrics in JMX correlate with the number shown by netstat for the webapp port, and the number of connections in CLOSE_WAIT state grows by 5 at every redeploy. So should I submit a bug report in the Jersey JIRA? Cheers, Bela On 29 June 2016 at 15:22, Marek Potociar <[hidden email]> wrote:
I see. Can you please file a new bug against Jersey with this description?
Thank you, Marek
Hi Marek,
do you know if a bug report was ever created for this? I couldn't find one on https://java.net/jira/. I'm asking because I'm facing exactly the same symptoms, in our case on Payara 4.1.1.171.0.1, i.e. a Grizzly container. Best regards, Joachim Kanbach