What is the best way to find out the maximum precision of a data type on my machine? Specifically double and float.
I mean, how many decimal places can each one actually represent? I thought double was pretty precise, but a program that outputs doubles only gives 6 digits to the right of the decimal point, and I need more like 8 or 9.
Of course I will ask the program's maintainer about this, but I wanted to know how to find this information myself if I ever need it. I'm using FC2, kernel 2.6.6-1.424, and glibc 2.3.3-27.1.
Thanks for the help
