
Excessive logging from ShoalLogger, up to 1GB per day

jayv

We've recently installed 2 GlassFish 3.1 clusters across 2 physical machines: one machine runs the DAS plus 2 nodes (one for each cluster), and the other machine runs another 2 nodes (one for each cluster). At a random time after starting the clusters, we notice the server log filling up quite rapidly, at about 1GB of logs per day, always containing the same message: "unable to find message to resend broadcast event with masterViewId". The same messages are repeated over and over again. We've restarted the clusters, and after that these messages stopped, but they reappeared some time later. Does anyone have a clue what's going on? The clusters seem to work just fine, and none of the nodes failed to my knowledge.

Thanks!

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 48 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=62;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 60 to member: g1 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 49 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 50 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=62;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 61 to member: g1 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 51 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 52 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=62;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 62 to member: g1 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 53 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 54 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=62;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 63 to member: g1 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 55 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 56 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=62;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 64 to member: g1 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 57 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 58 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 59 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 60 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 61 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 62 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.562+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 63 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:45.563+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=64;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 64 to member: g3 of group: parleys1|#]

[#|2011-05-17T12:56:46.324+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=38;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 37 to member: devoxx1 of group: devoxx|#]

[#|2011-05-17T12:56:46.324+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=38;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 38 to member: devoxx1 of group: devoxx|#]

[#|2011-05-17T12:56:46.325+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=38;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 39 to member: devoxx1 of group: devoxx|#]

[#|2011-05-17T12:56:46.325+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=38;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 40 to member: devoxx1 of group: devoxx|#]

[#|2011-05-17T12:56:46.325+0100|INFO|glassfish3.1|ShoalLogger.mcast|_ThreadID=38;_ThreadName=Thread-1;|GMS1112: unable to find message to resend broadcast event with masterViewId: 41 to member: devoxx1 of group: devoxx|#]

jfialli

This reply consists of multiple parts.

First, a workaround for the log files growing too big. Then, a request for you to submit a GlassFish JIRA issue with some of the log files attached, so we can investigate the issue you are observing. Lastly, some system tuning of the UDP buffer size may be necessary, depending on your OS and how heavy a load your system is placing on the machines running the application servers.

**************

1. Workaround.

First of all, if you would like to stop these messages from being recorded, you can run the following commands.

// for each cluster, run the following command (substituting the cluster name):
% asadmin set-log-levels --target <cluster-name> ShoalLogger.mcast=WARNING

// for the DAS, run the following command:
% asadmin set-log-levels ShoalLogger.mcast=WARNING
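For example, assuming the cluster names match the GMS group names in the log excerpts above (parleys1 and devoxx), the commands would look like the following sketch; list-log-levels can be used to confirm the change took effect:

% asadmin set-log-levels --target parleys1 ShoalLogger.mcast=WARNING
% asadmin set-log-levels --target devoxx ShoalLogger.mcast=WARNING
% asadmin set-log-levels ShoalLogger.mcast=WARNING

// confirm the new level on a given target:
% asadmin list-log-levels --target parleys1 | grep ShoalLogger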

This is merely a workaround for the logs getting too large too quickly; there is still an underlying issue requiring investigation.

*************

2. Submitting a GlassFish JIRA issue and attaching log files.

There is a problem here that I would like to investigate via your server logs. It would be helpful if you could file a bug at http://java.net/jira/secure/CreateIssue.jspa?pid=10231&issuetype=1 with a Component of group-management-service and a subject line of "Too many failures to rebroadcast GMS broadcast notifications". Attaching all the server logs from the DAS and one of the clusters (with the logs all covering the same time period) would assist us in diagnosing what is going wrong. There is an asadmin command to collect log files:

// for the DAS
% asadmin collect-log-files

// for a cluster
% asadmin collect-log-files --target <cluster-name>

Attach these zip files to the newly created issue. (If the files are too big, you can selectively choose files from the DAS and clustered-instance logs; just be sure to pick the same time periods across the DAS and the clustered instances. It is best to select from the start of the DAS and cluster up to when the issues start occurring.)
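For example, assuming the same cluster names as above, collecting the relevant logs might look like:

// DAS logs
% asadmin collect-log-files

// logs for one of the clusters
% asadmin collect-log-files --target parleys1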

Each server log entry corresponds to an instance in the cluster detecting that it has missed a GMS notification and requesting that it be rebroadcast. However, by the time the rebroadcast request arrives, the GMS notification has already expired in the GMS master. That this is occurring so often is an issue requiring investigation.

************

3. Tuning UDP buffer sizing.

There is a chance that this is occurring due to an insufficient UDP buffer size. Even if that is the case, this many errors would not be expected: a rebroadcast fails only when the request arrives more than 20 seconds after the initial broadcast, so the DAS would need to be running on a highly overloaded machine for processing to fall that far behind. When a system falls behind processing UDP messages, the messages are simply dropped. The GlassFish GMS code is written to compensate for the drops, and typically rebroadcasts are needed only rarely. Based on what you are reporting, there are far too many in your system.

Increasing the UDP buffer size is OS specific. We have found that systems such as Linux are configured with too small a default UDP buffer size (131071) for server-like processing, and in our testing configurations we have had to increase it from the default to 500K to see no UDP drops. Verifying whether UDP messages have been dropped is likewise OS specific. You may want to check your system to see whether UDP drops are excessive.
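On Linux, for example, the kernel's UDP counters give a quick check (a sketch; counter names can vary slightly by kernel version):

// show protocol statistics; look under "Udp:" for "packet receive errors"
# netstat -su

// or read the raw counters; a nonzero, growing RcvbufErrors value means
// datagrams were dropped because the socket receive buffer was full
# cat /proc/net/snmp | grep Udp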

Additionally, running multiple instances on a single machine can result in UDP drops if the machine is too underpowered to process the UDP broadcast messages in a timely fashion. A dropped UDP message in GMS results in a dropped notification, such as the JOINING of a GlassFish instance to the cluster. However, the very next notification carries the correct cluster view, so things work sufficiently despite the drop. Even so, code relying on GMS notifications may not work correctly if it misses the notification of a clustered instance joining or leaving. So it would still be helpful to get the logs and figure out what is causing so many log messages.

Thanks for using GlassFish, and thanks in advance for any assistance you can provide us in investigating this issue.

-Joe Fialli


jayv

Hi Joe,

I've implemented the logging tweaks; however, the messages only stopped after I restarted both clusters and the DAS, but this is probably a known issue.

I'd like to send you some log files, but the question is whether they will be helpful, as I can't find the log file containing the start of the problem; we've been deleting them from time to time, so we only have about 2GB of logs from the past 2 days. I could re-enable logging, but if this requires me to bring down the cluster it will take some time, as we're not fond of bringing down a live system during business hours.

Regarding your other suggestions: yes, we are on Linux (Ubuntu), but these are beefy machines with 8 CPUs and 24GB of RAM, so they are hardly used at the moment (load average 0.40).

# cat /proc/sys/net/core/rmem_max
131071
# cat /proc/sys/net/core/rmem_default
124928

Our UDP buffers might need tuning for heavier loads, which I expect in the future with other apps deployed, but I would not expect the current load to require larger buffer sizes. I'm no expert in this area, though, so I could be wrong.

Thanks for your assistance.

jfialli

Jayv,

One additional thing to check is whether there is clock skew between the machines involved. The window for rebroadcast of a dropped message is 20 seconds, so a clock skew of 15 seconds or more, coupled with the small UDP buffer, could be causing the messages to come up so frequently.
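A quick way to compare the clocks (a sketch; the host name is a placeholder, and it assumes ssh access between the machines):

// print both machines' clocks in epoch seconds back-to-back;
// a difference of more than a second or two is skew worth fixing (e.g. via NTP)
% ssh machine2 date +%s; date +%s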

-Joe


jfialli

On 5/19/11 5:50 AM, forums@java.net wrote:
> Hi Joe,
>
> I've implemented the logging tweaks; however, the messages only stopped after I restarted both clusters and the DAS, but this is probably a known issue.
>
> I'd like to send you some log files, but the question is whether they will be helpful, as I can't find the log file containing the start of the problem; we've been deleting them from time to time, so we only have about 2GB of logs from the past 2 days.
Please file an issue and attach the log files that you do have.
Please be sure to include the DAS server logs.

> I could re-enable logging, but if this requires me to bring down the cluster it will take some time, as we're not fond of bringing down a live system during business hours.
>
> Regarding your other suggestions: yes, we are on Linux (Ubuntu), but these are beefy machines with 8 CPUs and 24GB of RAM, so they are hardly used at the moment (load average 0.40).
>
> # cat /proc/sys/net/core/rmem_max
>
> 131071

The above value is too small for a server, particularly given that you have 3 instances running on one machine, one of them being the DAS.

I would recommend the following values:

net.core.rmem_max=1024000
net.core.wmem_max=1024000
net.core.rmem_default=102400
net.core.wmem_default=102400
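On Linux, one way to apply these values (a sketch, using the standard sysctl interface):

// apply immediately (as root)
# sysctl -w net.core.rmem_max=1024000
# sysctl -w net.core.wmem_max=1024000
# sysctl -w net.core.rmem_default=102400
# sysctl -w net.core.wmem_default=102400

// persist across reboots by adding the same key=value lines to
// /etc/sysctl.conf, then reload:
# sysctl -p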

>
> # cat /proc/sys/net/core/rmem_default
>
> 124928
>
> Our UDP buffers might need tuning for heavier loads, which I expect in the future with other apps deployed, but I would not expect the current load to require larger buffer sizes. I'm no expert in this area, though, so I could be wrong.
>
There is a flurry of UDP traffic when starting and stopping a cluster, so the larger values are recommended to keep UDP loss during startup low. There is a reliable rebroadcast mechanism for dropped UDP messages that require delivery, but based on your reported error messages there was a high level of drops, which I would expect given the small UDP buffer size AND the DAS plus two clustered instances all running on one machine.

Running the DAS on its own machine with a 500K UDP buffer would be sufficient; to keep your current configuration, I would follow the recommendation above.

-Joe Fialli
