Posted by gkbrown
on October 29, 2008 at 1:05 PM PDT
Accepting Richard Monson-Haefel's "One Million Records" challenge on behalf of the Pivot platform.
Earlier this week I came across this article on Inside RIA:
I decided to see how Pivot would handle this challenge. The results can be seen here:
Like the author of the Flex version, I omitted the 1,000,000 row dataset from the online example due to file size. However, I did run the test a number of times, and the numbers are as follows:
Nice and linear. Unfortunately, the Pivot version runs about half as fast as the Flex version when running locally. I had expected the performance of a Java app to exceed that of the Flash player, so this was a bit disappointing.
I did a little research to try to identify the bottleneck. A significant part of it appears to be Pivot's use of hash maps, rather than arrays, to store the deserialized CSV data; the Flex version appears to use arrays. So, some optimization may be in order, both in Pivot's map handling and in whatever else is contributing to the additional processing time, which I haven't yet had time to pin down.
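To illustrate why per-row maps cost more than arrays, here is a rough sketch. This is not Pivot's actual CSV serializer; the class, column names, and row counts below are made up for the comparison. The point is that a hash-map row pays for a map allocation plus per-column hashing, while an array-backed row is a single allocation with positional access.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical comparison of two ways to hold a deserialized CSV row.
// Not Pivot's actual implementation; for illustration only.
public class RowStorage {
    static final String[] COLUMNS = {"id", "firstName", "lastName", "amount"};

    // One HashMap per row, keyed by column name. Flexible, but each row
    // costs a map allocation, and each field access costs a hash lookup.
    static List<Map<String, String>> asMaps(List<String[]> records) {
        List<Map<String, String>> rows = new ArrayList<>(records.size());
        for (String[] record : records) {
            Map<String, String> row = new HashMap<>(COLUMNS.length * 2);
            for (int i = 0; i < COLUMNS.length; i++) {
                row.put(COLUMNS[i], record[i]);
            }
            rows.add(row);
        }
        return rows;
    }

    // One String[] per row: a single allocation with O(1) positional
    // access, roughly how the Flex version appears to store its data.
    static List<String[]> asArrays(List<String[]> records) {
        return new ArrayList<>(records); // fields are already positional
    }

    public static void main(String[] args) {
        List<String[]> records = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            records.add(new String[] {String.valueOf(i), "John", "Doe", "9.99"});
        }

        long t0 = System.nanoTime();
        List<Map<String, String>> maps = asMaps(records);
        long mapNanos = System.nanoTime() - t0;

        t0 = System.nanoTime();
        List<String[]> arrays = asArrays(records);
        long arrayNanos = System.nanoTime() - t0;

        System.out.printf("maps: %d ms, arrays: %d ms%n",
            mapNanos / 1_000_000, arrayNanos / 1_000_000);

        // Both representations hold the same data.
        assert maps.get(42).get("id").equals(arrays.get(42)[0]);
    }
}
```

On my machine the map-based representation is consistently the slower of the two to build, which matches what the profiling suggested, though the exact ratio will vary by JVM and heap settings.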
In any case, while the numbers aren't ideal, I was pleased to discover that Pivot was up to the "million record challenge" and fared pretty well, even if it didn't take first place.