Double vs Float performance
We have a trading application that does all its floating point calculations using doubles. While reading "Programming Pearls", it occurred to me that we don't actually need the extra precision of a double, and that switching to floats would halve the memory those values occupy and might improve performance.
We aren't doing huge amounts of matrix math; processing a typical message involves only a couple hundred floating point operations.
Anyone care to speculate on the performance impacts of this change?