Just a quick question; I may have a couple more later, but for now just one.
What's the difference between a Single and a Double variable? The definition I find is...
Single ≡ 32-bit (4-byte) floating-point number between
-3.402823E38 and -1.401298E-45, and
1.401298E-45 and 3.402823E38

Double ≡ 64-bit (8-byte) floating-point number between
-1.79769313486231E308 and -4.94065645841247E-324, and
4.94065645841247E-324 and 1.79769313486232E308

Which brings up one question: what about zero and all the values close to it that seem to be outside the range of both of these variable types?
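I poked at this in Python (sketched there rather than VB, but Single and Double are the same IEEE-754 32-bit and 64-bit formats underneath); `as_single` is just a helper I wrote that round-trips a value through 4-byte Single precision:

```python
import struct

def as_single(x: float) -> float:
    # Round-trip a Python float (a Double) through 4-byte Single precision
    return struct.unpack('f', struct.pack('f', x))[0]

# The listed minimum (1.401298E-45) is the smallest *nonzero* magnitude a
# Single can hold. Zero is not outside the range -- it is its own exactly
# representable value, and anything smaller than the minimum underflows to it.
tiny = as_single(1.401298e-45)   # smallest positive Single value
print(tiny > 0)                  # True -- representable
print(as_single(tiny / 2))       # 0.0  -- anything smaller underflows to zero
print(as_single(0.0))            # 0.0  -- zero itself is stored exactly
```

So the quoted ranges only describe the nonzero values; zero sits between the two halves as its own value, and results smaller than the minimum just become zero.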
For the function Z = X/(Y^2), if we put in x = 70 and y = 1.71 and treat Z as a Double, we get the answer 23.93898964, whereas the correct answer is actually 23.93898977. How does this discrepancy come about? It makes me question when we should use Double as opposed to Single. I just thought Double has more decimal places and would be more accurate in some calculations, but clearly I'm wrong.
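I can reproduce exactly the number I'm seeing by letting the result pass through Single precision at some point (again sketched in Python; `as_single` is a helper that rounds a value through the 4-byte Single format, and my guess is that one of the variables in my actual calculation is a Single even though Z is declared Double):

```python
import struct

def as_single(x: float) -> float:
    # Round a Python float (a Double) to 4-byte Single precision and back
    return struct.unpack('f', struct.pack('f', x))[0]

x, y = 70.0, 1.71

# Everything in Double precision gives the correct answer
z_double = x / y ** 2
print(f"{z_double:.8f}")         # 23.93898977

# The same result stored into a Single is the value from my test
z_stored_single = as_single(z_double)
print(f"{z_stored_single:.8f}")  # 23.93898964

# Doing every step in Single precision drifts even further off
z_all_single = as_single(as_single(x) / as_single(as_single(y) * as_single(y)))
print(f"{z_all_single:.8f}")
```

So the discrepancy isn't Double being inaccurate: a Single only carries about 7 significant decimal digits, so any Single variable anywhere in the chain caps the precision of the whole result, no matter what type Z itself is.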