Wednesday, April 20, 2011

What's the first double that deviates from its corresponding long by delta?

I want to know the first double, counting up from 0d, that deviates from the long of the "same value" by some delta, say 1e-8. I'm failing at it, though. I'm trying to do this in C, although I usually use managed languages, just in case. Please help.


#include <stdio.h>
#include <limits.h>
#define DELTA 1e-8

int main() {
    double d = 0; // checked, the literal is fine
    long i;
    for (i = 0L; i < LONG_MAX; i++) {
         d=i; // gcc does the cast right, i checked
         if (d-i > DELTA || d-i < -DELTA) {
              printf("%f", d);
              break;
         }
    }
}

I'm guessing that the issue is that d-i converts i to double, and therefore d==i, so the difference is always 0. How else can I detect this properly? I'd prefer fun C casting over comparing strings, which would take forever.

ANSWER: it is exactly as we expected. 2^53 + 1 = 9007199254740993 is the first point of difference, according to standard C/UNIX/POSIX tools. Thanks much to pax for his program. And I guess mathematics wins again.
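
For the record, here's a minimal sketch that confirms it without bc, assuming IEEE 754 doubles and a 64-bit long long: convert each value to double and back, then compare in the integer domain, which sidesteps the d-i promotion problem in my loop above.

#include <stdio.h>

int main() {
    long long n = 1LL << 53;  // 2^53 = 9007199254740992 fits in a 53-bit significand
    printf("%lld -> %lld\n", n, (long long)(double)n);            // exact round trip
    printf("%lld -> %lld\n", n + 1, (long long)(double)(n + 1));  // 2^53 + 1 rounds back down to 2^53
    return 0;
}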

From stackoverflow
  • Off hand, I thought that doubles could represent all integers (within their bounds) exactly.

    If that is not the case, then you're going to want to cast both i and d to something with MORE precision than either of them. Perhaps a long double will work.

    mataap : I guess you mean "integers representable as int" will be exactly representable as doubles. This is true when the number of mantissa digits in a double is greater than the number of digits in the int. It's worth remembering that at high exponent values, the distance between representable floating point numbers can exceed 1, so that not all integers are exactly representable in floating point.
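
    (Rather than guessing the counts, you can ask the compiler. A minimal sketch using the standard <float.h> width macros; the commented values assume IEEE 754 hardware:)

    #include <stdio.h>
    #include <float.h>

    int main() {
        // Significand width in bits, counting the implicit leading 1.
        printf("double:      %d bits -> all integers up to 2^%d are exact\n",
               DBL_MANT_DIG, DBL_MANT_DIG);        // typically 53
        printf("long double: %d bits\n", LDBL_MANT_DIG);  // 64 on x86, more elsewhere
        return 0;
    }
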
  • The first long to be 'wrong' when cast to a double will not be off by 1e-8, it will be off by 1. As long as the double can fit the long in its significand, it will represent it accurately.

    I forget exactly how many bits a double has for the significand versus the exponent, but that would tell you the max size it could represent. The first long to be wrong should have the binary form 10000..., so you can find it much quicker by starting at 1 and left-shifting.

    Wikipedia says 52 bits in the significand, not counting the implicit leading 1. That should mean the first long to be cast to a different value is 2^53 + 1 (2^53 itself is still exact).
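
    (A sketch of that shift-based search, assuming IEEE 754 doubles and a 64-bit long long: rather than walking every value, test just 2^k + 1, the smallest integer that needs k+1 significand bits:)

    #include <stdio.h>

    int main() {
        for (int k = 1; k < 63; k++) {
            // 2^k + 1 is the smallest integer needing k+1 significand bits.
            long long n = (1LL << k) + 1;
            if ((long long)(double)n != n) {  // round-trip through double
                printf("first mismatch: 2^%d + 1 = %lld\n", k, n);
                break;
            }
        }
        return 0;
    }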

    Overflown : I like the math idea from wikipedia, I was just trying to use evidence.
Doubles in IEEE 754 have a precision of 52 bits, which means they can store numbers accurately up to (at least) 2^51.

    If your longs are 32-bit, they will only have the (positive) range 0 to 2^31, so there is no 32-bit long that cannot be represented exactly as a double. For a 64-bit long, it will be (roughly) 2^52, so I'd be starting around there, not at zero.
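
    (If you're not sure which case applies, a two-line sketch that reports the platform's actual long width, using only <limits.h>:)

    #include <stdio.h>
    #include <limits.h>

    int main() {
        // CHAR_BIT is almost always 8; LONG_MAX pins down the real range.
        printf("long: %zu bits, LONG_MAX = %ld\n",
               sizeof(long) * CHAR_BIT, LONG_MAX);
        return 0;
    }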

    You can use the following program to detect where the failures start to occur. The original version relied on the fact that the last digit of a number that continuously doubles follows the sequence {2,4,8,6}. However, I eventually opted to use a known trusted tool (bc) to check the whole number, not just the last digit.

    Keep in mind that this may be affected by the actions of sprintf() rather than the real accuracy of doubles (I don't think so personally, since it had no troubles with certain numbers up to 2^143).

    This is the program:

    #include <stdio.h>
    #include <stdlib.h>   // for system()
    #include <string.h>

    int main() {
        FILE *fin;
        double d = 1.0; // 2^n - 1 to avoid exact powers of 2.
        int i = 1;
        char ds[1000];
        char tst[1000];

        // Loop forever, rely on break to finish.
        while (1) {
            // Get C's version of the double.
            sprintf (ds, "%.0f", d);

            // Get bc's version of the same number.
            sprintf (tst, "echo '2^%d - 1' | bc >tmpfile", i);
            system (tst);
            fin = fopen ("tmpfile", "r");
            fgets (tst, sizeof (tst), fin);
            fclose (fin);
            tst[strlen (tst) - 1] = '\0';  // Strip bc's trailing newline.

            // Check them against each other.
            if (strcmp (ds, tst) != 0) {
                printf ("2^%d - 1 <-- bc failure\n", i);
                printf ("   got       [%s]\n", ds);
                printf ("   expected  [%s]\n", tst);
                break;
            }

            // Output for status then move to next.
            printf ("2^%d - 1 = %s\n", i, ds);
            d = (d + 1) * 2 - 1;  // Again, 2^n - 1.
            i++;
        }

        return 0;
    }
    

    This keeps going until:

    2^51 - 1 = 2251799813685247
    2^52 - 1 = 4503599627370495
    2^53 - 1 = 9007199254740991
    2^54 - 1 <-- bc failure
       got       [18014398509481984]
       expected  [18014398509481983]
    

    which is about where I expected it to fail.

    As an aside, I originally used numbers of the form 2^n, but that got me up to:

    2^136 = 87112285931760246646623899502532662132736
    2^137 = 174224571863520493293247799005065324265472
    2^138 = 348449143727040986586495598010130648530944
    2^139 = 696898287454081973172991196020261297061888
    2^140 = 1393796574908163946345982392040522594123776
    2^141 = 2787593149816327892691964784081045188247552
    2^142 = 5575186299632655785383929568162090376495104
    2^143 <-- bc failure
       got       [11150372599265311570767859136324180752990210]
       expected  [11150372599265311570767859136324180752990208]
    

    with the size of a double being 8 bytes (checked with sizeof). It turned out these numbers were of the binary form "1000..." which can be represented for far longer with doubles. That's when I switched to using 2^n - 1 to get a better bit pattern (all ones).
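
    (You can see that effect directly: a power of two is a single set bit, so it remains exactly representable right up to the exponent limit, and its floating-point neighbours are a huge distance away. A sketch, assuming IEEE 754:)

    #include <stdio.h>
    #include <math.h>

    int main() {
        double p = ldexp(1.0, 143);  // exactly 2^143: one set bit
        // The neighbouring doubles are 2^(143-52) = 2^91 apart, so
        // subtracting 1 cannot change the value: 2^143 - 1 rounds back up.
        printf("2^143 - 1.0 == 2^143? %s\n", (p - 1.0 == p) ? "yes" : "no");
        return 0;
    }
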

    Overflown : Concise, and you also figured out why my program would never possibly work. Not just the casting, but rather the fact that long is only 32-bit over here. Maybe C truly is the stone age, and I'm not going back.
    Overflown : Thanks a lot for adding to this problem. I figured you'd have to use strings; there's no other way to test at a precision greater than what you are working with.
  • Although I'm hesitant to mention Fortran 95 and its successors in this discussion, I'll note that Fortran since the 1990 standard has offered a SPACING intrinsic function, which tells you what the difference between representable REALs is around a given REAL. You could do a binary search on this, stopping when SPACING(X) > DELTA. For compilers that use the same floating-point model as the one you are interested in (likely the IEEE 754 standard), you should get the same results.
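
    (For C, a rough analogue of SPACING is the gap to the next representable double, nextafter(x, INFINITY) - x from C99's <math.h>. A sketch under that assumption, walking upward until the spacing exceeds 1.0, i.e. until integers start being skipped:)

    #include <stdio.h>
    #include <math.h>

    int main() {
        double x = 1.0;
        // The spacing doubles with each binade; walk up until it exceeds 1.0.
        while (nextafter(x, INFINITY) - x <= 1.0)
            x *= 2.0;
        printf("spacing first exceeds 1.0 at x = %.0f\n", x);  // 2^53
        return 0;
    }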
