
behavior of interface javax.microedition.io.Datagram

2 replies [Last post]
mzabel
Joined: 2009-01-13

Hello,

I am using javax.microedition.io.Datagram to receive UDP packets. Now I want to extract the full data buffer using the interface method getData(), but in my opinion the behavior of this method is not stated clearly in Sun's javadoc.

Several behaviors are possible:
1.) The returned byte[] contains only user data, starting from index 0 up to the length of the byte[].

2.) In the returned byte[] the user data starts from index 0 up to the length (exclusive) returned by getLength(). All trailing bytes are garbage.

3.) The user data is stored from getOffset() up to getOffset()+getLength() (exclusive). All remaining bytes are garbage.
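Interpretation 3) subsumes the other two, since 1) and 2) are just the special case getOffset() == 0. So a defensive extraction that is correct under any of the three readings can always copy from getOffset() for getLength() bytes. A plain-Java sketch (the `extract` helper is hypothetical; in ME code you would feed it getData(), getOffset(), and getLength()):

```java
public class PayloadExtract {
    // Copy exactly the user data out of a datagram buffer, honouring
    // both offset and length (behavior 3). For implementations that
    // follow behavior 1) or 2), offset is simply 0.
    public static byte[] extract(byte[] data, int offset, int length) {
        byte[] payload = new byte[length];
        System.arraycopy(data, offset, payload, 0, length);
        return payload;
    }

    public static void main(String[] args) {
        // Simulated internal buffer: 2 leading bytes, 3 payload bytes, garbage tail.
        byte[] buf = { 0, 0, 'f', 'o', 'o', 99, 99 };
        System.out.println(new String(extract(buf, 2, 3))); // prints "foo"
    }
}
```

System.arraycopy is available in CLDC as well, so the same helper works unchanged on ME.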

Which behavior is intended?

I have already taken a look at the phoneME implementation, but I don't know whether it is correct. On the web I have found sample programs that rely on behavior 1) or on behavior 2), but in my opinion behavior 3) is correct.

Could someone clarify this point?

Thanks,
Martin

sfitzjava
Joined: 2003-06-15

I read the Datagram javadoc and felt that this statement summed it up well:

"It should be noted the length above is returned from getLength and can have different meanings at different times. When sending length is the number of bytes to send. Before receiving length is the maximum number of bytes to receive. After receiving length is the number of bytes that were received. So when reusing a datagram to receive after sending or receiving, length must be set back to the maximum using setLength. "

So when you set up the datagram you need to tell it how big a buffer ('length') you want your system to work with. When you call getData() it will return a buffer of no more than 'length' size; you then need to call getLength() to find out how much of the buffer is usable. I believe the buffer you get back can be as big as the initial 'length' you specified, meaning if you initialized the datagram to 1k you would get a 1k buffer back even though you only read 23 bytes. Bytes [23]-[1023] could be garbage.
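The "length must be set back to the maximum" rule from the quoted javadoc has an exact counterpart in Java SE's java.net.DatagramPacket, which makes the pitfall easy to demonstrate outside of ME. A loopback sketch (SE analogy, not ME code; per the javadoc above, a reused ME Datagram behaves the same way):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class LengthPitfall {
    // Returns getLength() after two consecutive receives into the same
    // packet, without resetting the length in between.
    public static int[] demo() throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            InetAddress self = InetAddress.getLoopbackAddress();
            int port = sock.getLocalPort();

            // Send a 5-byte and then a 9-byte datagram to ourselves.
            sock.send(new DatagramPacket("hello".getBytes(), 5, self, port));
            sock.send(new DatagramPacket("long data".getBytes(), 9, self, port));

            byte[] buf = new byte[1024];
            DatagramPacket p = new DatagramPacket(buf, buf.length);

            sock.receive(p);
            int first = p.getLength();  // 5: bytes actually received

            // Reusing the packet without p.setLength(buf.length) caps the
            // next receive at 5 bytes -- the 9-byte datagram is truncated.
            sock.receive(p);
            int second = p.getLength(); // still 5, not 9

            return new int[] { first, second };
        }
    }

    public static void main(String[] args) throws Exception {
        int[] lengths = demo();
        System.out.println(lengths[0] + " then " + lengths[1]); // prints "5 then 5"
    }
}
```

Calling setLength(buf.length) between receives restores the full capacity, which is exactly what the ME javadoc prescribes for reuse.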

Remember that different phone makers might interpret the spec differently, so the size of the buffer you get back, and whether it is a copy of the system buffer, may be up to interpretation as well. Again from the Datagram javadoc, getData() says:
"Depending on the implementation, this operation may return the internal buffer or a copy of it. However, the user must not assume that the contents of the internal data buffer can be manipulated by modifying the data returned by this operation. Rather, the setData operation should be used for changing the contents of the internal buffer. "

The getOffset() is there so that readUTF() and the other DataInput / DataOutput methods can track where they are currently reading/writing in the buffer. I believe that if you use getData() you can read from the buffer multiple times while the system may still be filling it in. So initially you might ask for 1k of data and get only 23 bytes because the server blocks for a millisecond or two before sending another 500 bytes; if you read before the remainder arrives, getLength() returns 23. Once you have read those bytes, you might check again and find that getLength() now returns 500, but that getOffset(), instead of returning 0, returns 23.

This is just going off the spec and javadoc; I have not used UDP/datagrams on ME, but based on Java SE and C experience I believe that correctly describes the behavior, and why you may be seeing multiple kinds of behavior in examples. Those examples reflect their authors' understanding, design, and style of writing.

-Shawn

mzabel
Joined: 2009-01-13

Hello Shawn,

Regarding 'length' and the copying of the data buffer, I agree with your argument, but it does not solve the 'offset' problem (more below).

> The getOffset() is there so that the readUTF and
> other DataInput / DataOutput methods can track where
> they currently are reading/writing into the buffer.

I think this point is wrong. According to the documentation of javax.microedition.io.Datagram, a so-called "read/write pointer" tracks the read/write position. The doc also states that an offset other than 0 is not supported if the read*() / write*() methods are used. (The phoneME implementation, however, supports this too.)

Keeping this in mind, I reread the documentation of Datagram and DatagramConnection. The default value for 'offset' is zero, which can also be enforced by Datagram.reset(). The only way to change the 'offset' state variable is Datagram.setData(). Thus, if someone reuses a Datagram with a modified 'offset', then DatagramConnection.receive() places the received data into the internal data buffer starting at 'offset' instead of zero. Moreover, Datagram.getData() then returns the buffer with the received data starting at array index 'offset' as well.
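Java SE's java.net.DatagramPacket behaves exactly as described: construct it with a non-zero offset and receive() stores the incoming bytes starting at that offset, while getOffset() stays unchanged. A loopback sketch (an SE analogy offered only to illustrate the reading above, not ME code):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class OffsetPlacement {
    // Receive into a packet whose offset is 10 and report where the
    // data actually landed: { getOffset(), getLength(), buf[10] }.
    public static int[] demo() throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            InetAddress self = InetAddress.getLoopbackAddress();
            sock.send(new DatagramPacket("abc".getBytes(), 3, self, sock.getLocalPort()));

            byte[] buf = new byte[64];
            DatagramPacket p = new DatagramPacket(buf, 10, 50); // offset 10, length 50
            sock.receive(p);

            // The received bytes start at buf[10], not buf[0].
            return new int[] { p.getOffset(), p.getLength(), buf[10] };
        }
    }

    public static void main(String[] args) throws Exception {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + (char) r[2]); // prints "10 3 a"
    }
}
```

Whether every ME stack mirrors this is, as Shawn notes, implementation-dependent, but it matches the phoneME behavior described above.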

Martin
