
JMS Performance Degraded With Cluster

smithbr4

Hello,
I am seeing some strange performance problems with JMS on GlassFish 3.x. I have a process that submits 100,000+ messages to the JMS broker. On a single instance it processes about 4,500 messages per minute. However, when I run the same process against a 5-node cluster (five machines, each equivalent to my single instance), it processes only about 1,000 per minute! I was expecting roughly five times the throughput of my single instance, but that does not seem to be the case at all. Is this expected?
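
For reference, the submitting process is essentially a loop like the following (a simplified sketch; the connection factory and queue names are placeholders, not my real resources):

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class Submitter {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // Look up the JNDI-registered factory and queue (names are made up here)
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyQueue");

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            // Push the whole batch through a single producer
            for (int i = 0; i < 100000; i++) {
                producer.send(session.createTextMessage("payload " + i));
            }
            conn.close();
        }
    }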

I have tried several different broker settings (Local-Conventional vs. Local-HA) but saw no difference. Since the cluster uses a master broker (one of the nodes acts as the main broker), I am wondering whether having 5 nodes is causing heavy contention on that one broker resource, or something like that. Let me know about any tuning you think I could try, or your thoughts on the issue in general.

Reply viewing options

Select your preferred way to display the comments and click "Save settings" to activate your changes.
nigeldeakin

This is what the MQ admin guide says:

"As the number of clients connected to a broker increases, and as the number of messages being
delivered increases, a broker will eventually exceed resource limitations such as file descriptor,
thread, and memory limits. One way to accommodate increasing loads is to add more broker
instances to a Message Queue message service, distributing client connections and message
routing and delivery across multiple brokers.

"In general, this scaling works best if clients are evenly distributed across the cluster, specially message producing clients. Because of the overhead involved in delivering messages between the brokers in a cluster, clusters with limited numbers of connections or limited message delivery rates, might exhibit lower performance than a single broker."

So it depends on how many connections you were using to send/consume messages, and exactly what was limiting the throughput in the single-broker case.
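
For example, if all your producers share a single connection, they will all be attached to the same broker; giving each producer its own connection should let the client runtime spread the load across the cluster's brokers. A rough sketch of what I mean, using only the standard JMS API (the factory and queue names are just for illustration):

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class DistributedProducers {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyQueue");

            int connections = 5;            // e.g. one connection per broker instance
            final int perConnection = 20000;
            for (int c = 0; c < connections; c++) {
                // Each connection gets its own session, producer and thread,
                // so the connections (and the load) can be balanced across brokers.
                final Connection conn = cf.createConnection();
                final Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                final MessageProducer producer = session.createProducer(queue);
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            for (int i = 0; i < perConnection; i++) {
                                producer.send(session.createTextMessage("payload"));
                            }
                            conn.close();
                        } catch (JMSException e) {
                            e.printStackTrace();
                        }
                    }
                }).start();
            }
        }
    }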

Nigel

smithbr4

Nigel,

I went back, did some research and a lot more testing, and here are my results:

Single instance, conventional master broker, local datastore: ~27 messages/second
2-node cluster, conventional master broker, local datastore: ~52 messages/second
2-node cluster, conventional master broker, JDBC datastore: ~20 messages/second
2-node cluster, HA, JDBC datastore: ~20 messages/second

I was expecting the JDBC brokers to be slower than a local store, but not 61% slower (20 vs. 52 messages/second)! Can you explain why a JDBC store is so much slower? I know it is not bottlenecked on the database: it runs on enterprise SSDs and can handle thousands of inserts per second.

Thanks,
Brody

nigeldeakin

Yes, JDBC is recognised to be slower than MQ's own file store. I can't give you a simple explanation for the difference (others may be able to say more), other than to make the glib observation that the file store is optimised for the particular patterns of access and update required by messaging.
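
If you want to confirm the store really is the limiting factor, one quick test is to resend your batch with non-persistent messages, which never touch the file or JDBC store at all. A minimal sketch, assuming you already have a Session and Queue in hand (standard JMS API only):

    import javax.jms.*;

    public class NonPersistentTest {
        // With NON_PERSISTENT delivery the broker does not write messages to
        // its store. If throughput jumps, the store is the bottleneck; if not,
        // look at connections, consumers or the network instead.
        static void sendBatch(Session session, Queue queue, int count) throws JMSException {
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            for (int i = 0; i < count; i++) {
                producer.send(session.createTextMessage("payload " + i));
            }
            producer.close();
        }
    }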

Nigel

smithbr4

Thanks Nigel,

I am going to run several different tests and configurations based on your post and see whether I can increase the throughput of my cluster. I will let you know what I find to be the optimal configuration.

Thanks,
Brody