This blog entry describes how WebSphere eXtreme Scale uses memory. It should help customers size how much memory they need when storing a large number of key/value pairs in a grid.
The text is in my personal blog at this link.
Yesterday, I said the most important aspect of an XTP platform is its management: automatic placement of data on a grid of computers, automatic scale-out as new boxes are added, and automatic replica count maintenance as boxes fail or are taken out of the grid.
I was explaining XTP in a meeting this morning and covered the usual aspects. It (in the form employed by ObjectGrid and its GigaSpaces and Coherence competitors) uses replicated, memory-based storage for persistent state AND it uses a self-healing, self-scaling grid on which to deploy that storage fabric.
This is a great question. Depending on who you ask, you get a different answer. Vendors will pitch what they think and why their competitors are wrong. I guess I just want to put out what I think it is, so here goes.

Topology
Topology-wise, there are three styles. The first is a fixed number of partitions, where each record hashes to exactly one partition.
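A minimal sketch of that first style, assuming a plain hash-modulo mapping from key to partition; the class and method names here are illustrative, not the ObjectGrid API:

```java
// Fixed-partition placement sketch: the partition count is set once at
// configuration time, and every key deterministically hashes to one of
// those partitions.
public class PartitionRouter {
    private final int numPartitions;

    public PartitionRouter(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // Math.floorMod keeps the index non-negative even when hashCode()
    // returns a negative value.
    public int partitionFor(Object key) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        PartitionRouter router = new PartitionRouter(200);
        // The same key always lands on the same partition, which is what
        // lets the grid place (and later find) each record.
        System.out.println(router.partitionFor("customer:1234"));
    }
}
```

Because the mapping is deterministic, any client can compute a record's home partition locally, without asking a central directory.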
We have scenarios where a customer may want, say, 200 partitions and to preload the data into the grid when the partition primaries are initially placed. The customer might want to load 100GB of data, planning on 500MB of primary data and 500MB of replica data per JVM.
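The arithmetic behind that plan can be sketched as below, using the numbers from the scenario (100GB of data, 500MB primary plus 500MB replica per JVM, one replica); the class and helper names are illustrative:

```java
// Back-of-envelope grid sizing for the preload scenario: how many JVMs
// do we need so that all primary shards fit, given a fixed memory
// budget per JVM?
public class GridSizing {
    // JVMs needed so that primary data fits, rounding up.
    static long jvmsNeeded(long totalDataMb, long primaryPerJvmMb) {
        return (totalDataMb + primaryPerJvmMb - 1) / primaryPerJvmMb;
    }

    public static void main(String[] args) {
        long totalDataMb = 100_000;  // 100 GB (decimal) of data to preload
        long primaryPerJvmMb = 500;  // primary budget per JVM
        long replicaPerJvmMb = 500;  // replica budget per JVM

        long jvms = jvmsNeeded(totalDataMb, primaryPerJvmMb);
        System.out.println(jvms + " JVMs");  // prints "200 JVMs"

        // Sanity check: those same 200 JVMs also hold 200 * 500 MB =
        // 100 GB of replica shards, exactly enough for one full replica.
        System.out.println(jvms * replicaPerJvmMb + " MB of replica capacity");
    }
}
```

So with 200 partitions and those per-JVM budgets, each JVM ends up carrying one primary shard and one replica shard.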
We started building http://www.trackpedia.com at the beginning of January this year. We initially knocked it together using MediaWiki and vBulletin. It was very easy to do this.
I think OSGi has a big future on the server side.
The 4th International Conference on Information Technology in Financial Services is organized annually by State Street and Zhejiang University. It's my second time at the conference and once again, it's been a great experience. The sessions were lively and very interactive; that's my favorite kind of session.
We just shipped WebSphere 5.1 XD a couple of weeks ago. XD has many features that should appeal to customers.