
GlassFish v4 sometimes sends an empty response to curl -o testin.txt "my.server.com" under load / access.log shows HTTP OK

iceandfire

Hi,
Normally the server sends the standard index.html as a response, but sometimes curl says "curl: (52) Empty reply from server".

Where can these empty responses come from?

I'm using GlassFish v4 as of today.
I have 2000 threads in the corresponding thread pool.
I have 4 acceptor threads.
My connection queue is unlimited.

=> The empty responses are marked in the access log as HTTP 200 (OK).

The load peaks at around 5% (so it's not heavy load).

Which component does this, and how can I debug it further?

Thanks
Stephan

iceandfire

The bug is fixed with the current version of the Dropbox jar.

iceandfire

We're coming from GlassFish 2, where I could already see those empty responses under load.
So we switched to v4, and it's the same issue here.

Just out of curiosity we switched to Jetty; luckily this app only uses servlets.

Surprise, surprise... the problem is gone there.

Is Grizzly broken? Or is it my configuration?

iceandfire

I might have found the server.log entry:

[2013-06-25T15:46:44.405+0200] [glassfish 4.0] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=1708 _ThreadName=http-listener-1(55)] [timeMillis: 1372168004405] [levelValue: 900] [[
StandardWrapperValve[default]: Servlet.service() for servlet default threw exception
java.io.IOException: Connection closed
at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:298)
at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:613)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(TCPNIOTransport.java:611)
at org.glassfish.grizzly.nio.NIOConnection$2.run(NIOConnection.java:432)
at org.glassfish.grizzly.nio.DefaultSelectorHandler$RunnableTask.run(DefaultSelectorHandler.java:487)
at org.glassfish.grizzly.nio.DefaultSelectorHandler.processPendingTaskQueue(DefaultSelectorHandler.java:293)
at org.glassfish.grizzly.nio.DefaultSelectorHandler.processPendingTasks(DefaultSelectorHandler.java:284)
at org.glassfish.grizzly.nio.DefaultSelectorHandler.preSelect(DefaultSelectorHandler.java:100)
at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:329)
at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:274)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
at java.lang.Thread.run(Thread.java:722)
]]

iceandfire

Found more evidence:
it looks like it is NOT HTTP 200, it's HTTP 500!

"85.115.15.50" "NULL-AUTH-USER" "25/Jun/2013:14:48:46 +0100" "GET /(...)/index.html?20130625144816463687567 HTTP/1.0" 500 1384

The requested file is static content.

But how can I investigate this further?

oleksiys

Hi,

it looks like the connection was closed for some reason (according to the exception you posted).
Is it easy to reproduce this error? Can you please share the test case?

Thank you.

WBR,
Alexey.

iceandfire

I couldn't produce a simple test case.

Currently this would be a load test for a static resource, but I don't think the GlassFish team would release something with such a basic problem.

oleksiys

According to the log you send HTTP/1.0 requests, is that right?
Can you please share how exactly you run the load test? Some simple servlet and a script? I would appreciate it if you could share the details.

Thanks.

WBR,
Alexey.

iceandfire

I'm seeing those 500s in my production monitoring.
The test there just sends
"GET / HTTP/1.0 \nUser-Agent:LogicMonitor\nConnection: close\n\n"
After that I implemented a small bash test:

#!/bin/bash
# Fetch the page once per second and stop as soon as the response
# no longer contains the expected "copyright" marker (i.e. an empty reply).
while true; do
    d=$(date +'%Y-%m-%d-%H:%M:%S-%N')
    echo "$d"
    curl -o testin.txt --connect-timeout 65 --max-time 60 "myhost?$d"
    if grep -q copyright "testin.txt"; then
        date
        echo "Found"
        rm testin.txt
    else
        date
        echo "Not Found"
        exit
    fi
    sleep 1
done

This exits every couple of minutes with "Not Found".
With higher load this happens faster.

On my testing platform I don't see this issue.
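
In case it helps to rule curl out, here is a rough Java equivalent of the probe (plain socket, HTTP/1.0, Connection: close); the host, port and the "copyright" marker are just placeholders for my setup:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Rough Java equivalent of the curl/LogicMonitor probe: send one HTTP/1.0
// request with "Connection: close" and read the response until EOF.
public class ProbeClient {
    public static void main(String[] args) throws IOException {
        String host = "my.server.com";   // placeholder
        int port = 8080;                 // placeholder

        Socket socket = new Socket(host, port);
        try {
            OutputStream out = socket.getOutputStream();
            out.write(("GET / HTTP/1.0\r\n"
                    + "Host: " + host + "\r\n"
                    + "User-Agent: LogicMonitor\r\n"
                    + "Connection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.ISO_8859_1));
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line).append('\n');
            }
            // An "empty reply" shows up here as a zero-length response.
            System.out.println(response.length() == 0 ? "EMPTY REPLY"
                    : (response.indexOf("copyright") >= 0 ? "Found" : "Not Found"));
        } finally {
            socket.close();
        }
    }
}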

Btw.:
I gave Jetty a thread pool of 2000, but it peaks at only 100 concurrent threads, so load should not be the issue.

I think both tests use HTTP 1.0.
I have checked other requests in the server.log; there are more 500 results for requests which should work, but the simple test should always work no matter what the server is doing.

Might this be a concurrency bug in Grizzly? I'm using a pool of java.util.concurrent.locks.ReentrantLock locks to synchronize the concurrent requests from one client, roughly as in the sketch below. As said, in Jetty it works as expected.
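
Roughly, the locking looks like this (a simplified sketch; the real class and field names differ):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of the per-client locking (not the real code): one
// ReentrantLock per client id, so concurrent requests from the same client
// are serialized while requests from different clients don't block each other.
public class ClientLockPool {
    private final ConcurrentMap<String, ReentrantLock> locks =
            new ConcurrentHashMap<String, ReentrantLock>();

    public void withClientLock(String clientId, Runnable work) {
        ReentrantLock lock = locks.get(clientId);
        if (lock == null) {
            ReentrantLock fresh = new ReentrantLock();
            ReentrantLock previous = locks.putIfAbsent(clientId, fresh);
            lock = (previous != null) ? previous : fresh;
        }
        lock.lock();
        try {
            work.run();
        } finally {
            lock.unlock();
        }
    }
}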

Is there a fundamental difference between Jetty 9 and Grizzly in GF4?

Stephan

oleksiys

Hi Stephan,
I've added more logging to investigate the issue you observe. Can you please do the following:

1) apply the patch [1] to GF4 (copy this file to the glassfishv4/glassfish/modules folder)
2) set the org.glassfish.grizzly.nio.NIOConnection logging level to FINEST, either via the web GUI or by adding the line "org.glassfish.grizzly.nio.NIOConnection.level=FINEST" to the glassfishv4/glassfish/domains/domain1/config/logging.properties file (a programmatic alternative is sketched below)
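
If it's easier, the same logger can also be raised from code; just a sketch, assuming the standard java.util.logging wiring applies:

import java.util.logging.Level;
import java.util.logging.Logger;

// Same intent as the logging.properties entry above, done programmatically
// (for example from a startup hook): raise the NIOConnection logger to FINEST
// so the connection-close stack traces get recorded. Keep a strong reference
// to the logger, otherwise the level setting can be lost to garbage collection.
public class EnableGrizzlyCloseLogging {
    private static final Logger NIO_CONNECTION_LOGGER =
            Logger.getLogger("org.glassfish.grizzly.nio.NIOConnection");

    public static void enable() {
        NIO_CONNECTION_LOGGER.setLevel(Level.FINEST);
    }
}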

Once that's done, in addition to the exception you posted here you will also see the stack trace of the code that closed the connection. This way we'll figure out why the connection was closed.

Thank you in advance.

WBR,
Alexey.

[1] https://dl.dropboxusercontent.com/u/7319744/nucleus-grizzly-all.jar

iceandfire

Thank you for the effort; I've put everything in place.
I think we might be looking for this (it has some more Grizzly stack frames):

[2013-06-26T09:52:07.724+0200] [glassfish 4.0] [WARNING] [] [javax.enterprise.web] [tid: _ThreadID=107 _ThreadName=http-listener-1(88)] [timeMillis: 1372233127724] [levelValue: 900] [[
  StandardWrapperValve[show]: Servlet.service() for servlet show threw exception
java.io.IOException: Connection closed
        at org.glassfish.grizzly.asyncqueue.TaskQueue.onClose(TaskQueue.java:302)
        at org.glassfish.grizzly.nio.AbstractNIOAsyncQueueWriter.onClose(AbstractNIOAsyncQueueWriter.java:591)
        at org.glassfish.grizzly.nio.transport.TCPNIOTransport.closeConnection(TCPNIOTransport.java:611)
        at org.glassfish.grizzly.nio.NIOConnection$2.run(NIOConnection.java:438)
        at org.glassfish.grizzly.nio.DefaultSelectorHandler$RunnableTask.run(DefaultSelectorHandler.java:487)
        at org.glassfish.grizzly.nio.DefaultSelectorHandler.processPendingTaskQueue(DefaultSelectorHandler.java:293)
        at org.glassfish.grizzly.nio.DefaultSelectorHandler.processPendingTasks(DefaultSelectorHandler.java:284)
        at org.glassfish.grizzly.nio.DefaultSelectorHandler.preSelect(DefaultSelectorHandler.java:100)
        at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:329)
        at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:274)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
        at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
        at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.Exception: Close stack trace
        at org.glassfish.grizzly.nio.NIOConnection.close0(NIOConnection.java:425)
        at org.glassfish.grizzly.nio.transport.TCPNIOConnection.close0(TCPNIOConnection.java:291)
        at org.glassfish.grizzly.nio.transport.TCPNIOTransport.read(TCPNIOTransport.java:938)
        at org.glassfish.grizzly.nio.transport.TCPNIOTransportFilter.handleRead(TCPNIOTransportFilter.java:75)
        at org.glassfish.grizzly.filterchain.TransportFilter.handleRead(TransportFilter.java:173)
        at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
        at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
        at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
        at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838)
        at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
        at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
        at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
        at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
        ... 3 more
]]

oleksiys

Thank you, now it looks much clearer.
I think the client closes the socket output (making the socket half-closed) and the server, in response, closes the socket completely while there is still data to be written.
I will try to fix that behavior ASAP and let you know.
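
To illustrate the scenario I mean (just a rough sketch, not your actual client): a client like this half-closes the socket after sending its request, and the server then tears the whole connection down while the response is still queued:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Sketch of the suspected scenario: the client sends an HTTP/1.0 request and
// then half-closes the socket (shutdownOutput). The server sees EOF on read
// and, per the stack trace above, closes the whole connection, so the
// response that is still queued is never written.
public class HalfCloseClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("my.server.com", 8080); // placeholder host/port
        try {
            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.0\r\nConnection: close\r\n\r\n".getBytes("US-ASCII"));
            out.flush();
            socket.shutdownOutput(); // half-close: no more data from the client

            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[8192];
            int read, total = 0;
            while ((read = in.read(buffer)) != -1) {
                total += read;
            }
            System.out.println("read " + total + " bytes"); // 0 bytes == empty reply
        } finally {
            socket.close();
        }
    }
}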

BTW, can I ask you to file an issue?
https://java.net/jira/browse/GLASSFISH

Thank you!

WBR,
Alexey.

ravindra.gullapalli

Any update on this issue?

iceandfire

Hi Alexey,
I opened https://java.net/jira/browse/GLASSFISH-20665

Best wishes,
Stephan

oleksiys

Unfortunately I can't reproduce the problem using your script, even after putting GF under significant load. Can you please attach your domain.xml file?

Thank you.

WBR,
Alexey.