HttpServer and big content length
The new HttpServer functionality seems very nice and is a well-done addition, IMHO.
For those who have missed it, for now(?) it lives under com.sun.* :
Having played around with it some now (b74, Win IA32), I have one issue/question/RFE, and that is the content length. It can only be specified as an int (i.e. a signed 32-bit integer), which means we can only serve files up to around 2 GB. Could this simply be changed to a long?
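To illustrate the limit: a file length comfortably held in a long silently overflows to a negative number when narrowed to an int. A minimal sketch (the class and variable names are mine, not from the API):

```java
public class ContentLengthOverflow {
    public static void main(String[] args) {
        // A 3 GB file: fits comfortably in a long (as File.length() returns)...
        long fileSize = 3L * 1024 * 1024 * 1024;   // 3221225472 bytes

        // ...but an int content length can only hold up to 2^31 - 1 (~2 GB),
        // so the narrowing conversion silently wraps around.
        int asInt = (int) fileSize;

        System.out.println("long size: " + fileSize); // 3221225472
        System.out.println("int  size: " + asInt);    // -1073741824
    }
}
```

The cast produces -1073741824, so any int-typed content length simply cannot represent the file's real size.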
If I remember correctly from having implemented simple HTTP servers before, files of at least 4 GB (corresponding to an unsigned 32-bit integer) work fine with Internet Explorer. Though, again if I remember correctly, files larger than 2 GB were considered to be of negative size. They still worked fine up to the next 4 GB limit, where the reported size would of course just cycle around to size % 4 GB.
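That wraparound behavior can be sketched in a couple of lines; the 5 GB figure below is just an illustrative value, not from the original post:

```java
public class UnsignedWraparound {
    public static void main(String[] args) {
        final long FOUR_GB = 1L << 32;             // 2^32 = 4294967296 bytes

        // A 5 GB file announced through a 32-bit length field
        // is seen by the client as size % 4 GB, i.e. 1 GB.
        long fileSize = 5L * 1024 * 1024 * 1024;   // 5368709120 bytes
        long seenByClient = fileSize % FOUR_GB;

        System.out.println(seenByClient);          // 1073741824 (1 GB)
    }
}
```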
Clients that use 64-bit integers, for example, would work perfectly.
File.length() returns a long as well, so we could skip a cast there too :)
The HTTP 1.1 RFC does not limit this field in any way other than requiring it to be >= 0.
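In other words, nothing on the wire stops a server from sending a value above 2^32; a receiver just needs a wide enough type. A hedged sketch of parsing such a header into a long (the header string and helper are mine, not from the HttpServer API):

```java
public class ParseContentLength {
    public static void main(String[] args) {
        // A 5 GB Content-Length is perfectly legal per HTTP/1.1:
        // the field is just one or more digits, constrained only to be >= 0.
        String header = "Content-Length: 5368709120";

        String value = header.substring("Content-Length:".length()).trim();
        long length = Long.parseLong(value);       // a long holds it easily
        if (length < 0) {
            throw new IllegalArgumentException("negative content length");
        }

        System.out.println(length);                // 5368709120
    }
}
```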
If it's easy to avoid, why not? It's certainly not that uncommon to have files bigger than 2 GB nowadays; consider popular download sites, Linux DVD images, etc.
One could debate the wisdom of using an HTTP server, especially one based on this, to serve such files, but it still seems like an unnecessary limit.
I don't see how it could make things worse. It will not work for all HTTP clients or browsers, due to the exact implementation (how the content length is parsed and stored in those clients), but serving such files would not work for them anyway!