We've started compiling both 32- and 64-bit versions of some of our applications. One of the guys on my project is encouraging us to switch all of our 32-bit integers to their 64-bit equivalents, even if the values are guaranteed to fit in a 32-bit space. For example, I've got a value that is guaranteed to never exceed 10,000, which I'm storing in an unsigned int. His recommendation is to switch this to a size_t so that it expands to 64 bits in a 64-bit environment, even though we'll never need the extra space. He says that using 64-bit variables will speed up the application regardless of the values stored in each variable. Is he right? It's turning out to be a lot of work, and I'm not anxious to put in the effort if it doesn't actually make a difference.
We're using Microsoft Visual C++ 2008. I'm kinda hoping for a more general, platform-independent answer though.
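For concreteness, here's a minimal sketch of the change he's suggesting (the variable names count32 and count64 are made up for illustration). On our target, unsigned int stays 32 bits in both builds, while size_t is 32 bits in a 32-bit build and widens to 64 bits in a 64-bit build:

    #include <cstddef>   // for std::size_t
    #include <iostream>

    int main() {
        // Current code: the value is guaranteed never to exceed 10,000,
        // so 32 bits is far more than enough range.
        unsigned int count32 = 10000;

        // Proposed change: size_t matches the pointer width, so the same
        // source widens automatically in a 64-bit build.
        std::size_t count64 = 10000;

        std::cout << "sizeof(unsigned int) = " << sizeof(count32) << '\n'
                  << "sizeof(std::size_t)  = " << sizeof(count64) << '\n';
        return 0;
    }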
So what do you think? Are we right to spend time changing our data types for performance reasons rather than range reasons?