
Java performs well under higher load

vaghelarajesh
Joined: 2009-07-09

We have a client-server application.
The server is running on a Sun box with 8 CPUs.
We fire requests from the client to the server in various batch sizes.
We notice that the server performs well when we fire 10,000 requests in a tight for loop,
while it performs badly when we fire 1,000 requests in a tight for loop.

Here:
Performance: the mean time between a request being received, processed, and a response being sent back.
Tight for loop: means we fire continuous requests without any waiting or sleeping in between.

When we fire 10k requests, each request is processed in 0.1 milliseconds, while when we fire 1,000 requests each request takes 1 to 2 milliseconds.
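A minimal sketch of the kind of tight-loop client described above (the batch size is passed on the command line). The sendRequest() method is a hypothetical stand-in for the real blocking request/response call, and the client-side timing here is only illustrative; as the thread clarifies later, the actual measurement is done on the server side.

[code]
// Sketch of a tight-loop benchmark client: fire a whole batch of requests
// with no sleep between them, then report the mean time per request.
public class TightLoopClient {

    static void sendRequest(String payload) {
        // placeholder: the real client writes to an already-open socket
        // and reads the response back
    }

    public static void main(String[] args) {
        int batchSize = Integer.parseInt(args[0]); // e.g. 1000 or 10000
        long start = System.nanoTime();
        for (int i = 0; i < batchSize; i++) {      // tight loop: no waiting between requests
            sendRequest("request-" + i);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("mean time per request: %.3f ms%n",
                elapsed / 1_000_000.0 / batchSize);
    }
}
[/code]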

vaghelarajesh
Joined: 2009-07-09

Members have given very good feedback,
and they finally guided me to a solution.

To perform well under any load, I am using the -XX:CompileThreshold=10 JVM option.
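For reference, the flag just goes on the JVM command line; a launch might look something like this (the main class name is a placeholder):

[code]
java -server -XX:CompileThreshold=10 com.example.Server
[/code]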

peter__lawrey
Joined: 2005-11-01

> To perform well under any load, I am using the -XX:CompileThreshold=10 JVM option
Which tells me you didn't do what I suggested; this just makes the JVM warm up faster.

The downside is that it won't optimise the code as well, because the JVM uses the information it collects on how the code is called before it is optimised.

vaghelarajesh
Joined: 2009-07-09

I followed your suggestion, but I have some concerns, listed below.

1> The warm-up idea you suggested sometimes cannot be applied in some applications for certain execution paths.

2> Sometimes we also don't know what the execution paths for the various types of requests are or will be, or what the test cases for them should be.

3> Even if we know them, it is a very time-consuming task and extra maintenance.

So kindly advise.
I use a warm-up when I need some small benchmarking of algorithms.

peter__lawrey
Joined: 2005-11-01

> 1> The warm-up idea you suggested sometimes cannot be applied in some applications for certain execution paths.
You don't need to cover all possible paths, just the critical ones.

> 2> Sometimes we also don't know what the execution paths for the various types of requests are or will be, or what the test cases for them should be.
This does require you to have a view on which types of requests you want to perform best after a restart. If you don't know, then you can just let the JVM warm itself up, as you suggest.

> 3> Even if we know them, it is a very time-consuming task and extra maintenance.
From the numbers you gave, it would take about 10 seconds (10K * 1 ms). By tuning the compile threshold down to 1,000 invocations you could have the critical requests warm up in about 1 second, as illustrated below.
Your application could still handle requests during that time, but they would be slower than usual.
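As a rough illustration of that arithmetic (the main class name is a placeholder):

[code]
# default HotSpot server threshold is about 10,000 invocations,
# so at ~1 ms per request the critical path warms up in roughly 10 s
java -server com.example.Server

# lowering the threshold compiles the critical path after ~1,000 requests, i.e. roughly 1 s
java -server -XX:CompileThreshold=1000 com.example.Server
[/code]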

vaghelarajesh
Joined: 2009-07-09

Hi Peter,

1> Your idea is sound, no doubt.
2> By "time-consuming" I meant that we need extra attention and resources to create and maintain such test cases. (We already have test cases, but they are not delivered with the deployment.)

Given our limitations, it is better for us to accept the VM-parameter approach.
We also need to look at other VM parameters (related to GC).

Thanks for your support.

Rajesh Vaghela

peter__lawrey
Joined: 2005-11-01

This sounds like a bug or a tuning problem.
Do you have Nagle's algorithm turned off?
What happens when you decrease the load to 100/s, 10/s or 1/s?

rajeshvaghela
Joined: 2007-02-01

Hey Peter,
thanks for your interest.

1> There is no bug.
2> There could be a tuning problem.
3> We are not worried about the network, so there is no need to think about Nagle.
4> At 100/s, 10/s and 1/s we are getting 1 millisecond to process each request.

Also:
The client stays connected to the server, so there is no need to create a socket on every request.
We are estimating 100k hits per day on the server, and we are planning for the server to respond within 0.5 milliseconds.

peter__lawrey
Joined: 2005-11-01

> 3> We are not worried about the network, so there is no need to think about Nagle.
Not sure what you mean here, but I suspect Nagle is the main cause of the delay. Sending data at a higher rate forces the buffers to flush, reducing the impact of Nagle.
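For reference, Nagle's algorithm is turned off per socket in Java via setTcpNoDelay; a minimal sketch (the host and port are placeholders):

[code]
import java.net.Socket;

// Minimal sketch: disabling Nagle's algorithm on the client's long-lived
// connection so small messages are sent immediately instead of being
// buffered by the TCP stack while it waits to batch them.
public class NoNagleClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server-host", 9000)) {
            socket.setTcpNoDelay(true); // turn off Nagle's algorithm
            // ... write requests to socket.getOutputStream() and
            //     read responses from socket.getInputStream() ...
        }
    }
}
[/code]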

rajeshvaghela
Joined: 2007-02-01

When we receive a request, we attach a timestamp to it.
Then we process the request.
Before sending the response, we calculate the elapsed time.

We are just measuring processing time.
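A minimal sketch of that measurement, with process() as a hypothetical placeholder for the real request handling; System.nanoTime is used because millisecond granularity is coarse at these durations:

[code]
// Sketch of the server-side measurement described above: take a timestamp
// when the request is received, process it, then compute the elapsed time
// just before the response is sent.
public class ProcessingTimer {

    static String process(String request) {
        return request.toUpperCase(); // placeholder for the real work
    }

    public static void main(String[] args) {
        String request = "sample-request";
        long receivedAt = System.nanoTime();           // timestamp on receipt
        String response = process(request);            // the work being measured
        long elapsedNanos = System.nanoTime() - receivedAt;
        System.out.printf("processed %s in %.3f ms%n",
                response, elapsedNanos / 1_000_000.0);
    }
}
[/code]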

peter__lawrey
Joined: 2005-11-01

I see, you are only counting the processing time.
The problem may be that you are not giving the JVM a chance to warm up.
I suggest performing at least 10K requests before measuring how long a request takes, i.e. do 10K requests as fast as possible and then do, say, 1/ms.
It could be that you are measuring the time a request takes before (1 ms) and after (0.1 ms) the code has been optimised by the JIT.
A simple way to ensure a service always responds with the optimised code is to do a self-test on start-up which exercises the key code with some realistic data (but data that your system treats as test data). It sounds like 10K requests would take about 10 seconds to warm up the JVM.
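A minimal sketch of such a start-up self-test, with hypothetical names (RequestHandler, warmUp) standing in for the application's real request path:

[code]
// Sketch of a start-up self-test: exercise the critical request path with
// realistic data (flagged as test data) before taking real traffic, so the
// JIT has compiled the hot methods by the time the first real request arrives.
public class StartupWarmUp {

    interface RequestHandler {
        String handle(String request);
    }

    static void warmUp(RequestHandler handler, int iterations) {
        for (int i = 0; i < iterations; i++) {
            // Realistic but clearly-test payload; the handler should treat it
            // as test data (e.g. a reserved client id) and not act on it.
            handler.handle("TEST-REQUEST-" + i);
        }
    }

    public static void main(String[] args) {
        RequestHandler handler = request -> request.toUpperCase(); // placeholder work
        warmUp(handler, 10_000);   // ~10 s at 1 ms/request, per the numbers in the thread
        System.out.println("warm-up done, ready for real traffic");
    }
}
[/code]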

rajeshvaghela
Joined: 2007-02-01

I fired 50k requests first, then waited for 5 minutes and then started to fire 1 req/s.
That also does not work.

[b]I have tested almost all the options.[/b]
I have disabled class unloading.
I have also tried tuning the GC ratio.
Xmx is 1 GB.
GC is in concurrent mode.

But there is no uniform result.

It gives the same result: under high load it processes a request in 0.1 ms, and under normal load (1 req/s) it takes 1 to 2 ms.
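The exact options aren't shown in the thread; one plausible reading of the settings listed above would be something like (the main class name is a placeholder):

[code]
# guesses at the flags described above: 1 GB heap, class unloading disabled,
# concurrent (CMS) collector
java -server -Xmx1g -Xnoclassgc -XX:+UseConcMarkSweepGC com.example.Server
[/code]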

vaghelarajesh
Joined: 2009-07-09

I can see a few possible causes.
1> When any Java code is called repeatedly, the JVM marks it as a hot spot and caches that code on the stack. So whenever those methods are called again, they can be fetched very fast, because they are on the stack.

2> Now if, for a long time (probably around 1 second), the JVM does not see the stacked procedure being used, it moves that code to the heap.

So, due to case 2, every method call after 1 second is fetched from the heap (which is slow), which makes it slow, while methods called again within 1 millisecond are fetched from the stack and are fast.

linuxhippy
Joined: 2004-01-07

@vaghelarajesh:

Code is never cached on the stack; it is stored in a place called the "code cache", which, at least as far as I know, is part of the permanent generation.
So once the code is compiled (which happens e.g. after 10,000 invocations for HotSpot server on x86), the same compiled code is executed.

The JVM is, however, sometimes able to allocate temporary objects on the stack, but this has nothing to do with how frequently the method is called after it has been compiled.
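One way to observe this is the -XX:+PrintCompilation flag, which logs each method as the JIT compiles it (the main class name is a placeholder):

[code]
# prints a line for each method the JIT compiles, so you can see when the
# hot request-handling code is actually compiled
java -server -XX:+PrintCompilation com.example.Server
[/code]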

- Clemens

vaghelarajesh
Joined: 2009-07-09

You are correct.

I tested with the -XX:CompileThreshold=10 option.

It gives me excellent results.

Thanks very much to all of you.

Rajesh Vaghela