Sujith Gunawardhane - 1 month ago
C++ Question

Using std::bitset for double representation

In my application I'm trying to display the bit representation of double variables.
It works for smaller double values, but not for values around 10^30.


#include <iostream>
#include <bitset>
#include <limits>
#include <string.h>

using namespace std;

void Display(double doubleValue)
{
    bitset<sizeof(double) * 8> b(doubleValue);
    cout << "Value : " << doubleValue << endl;
    cout << "BitSet : " << b.to_string() << endl;
}

int main()
{
    Display(1e9);
    Display(2e9);
    Display(3e9);
    Display(1e30);
    Display(2e30);
    Display(3e30);
    return 0;
}

/home/sujith% ./a.out
Value : 1e+09
BitSet : 0000000000000000000000000000000000111011100110101100101000000000
Value : 2e+09
BitSet : 0000000000000000000000000000000001110111001101011001010000000000
Value : 3e+09
BitSet : 0000000000000000000000000000000010110010110100000101111000000000
Value : 1e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
Value : 2e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
Value : 3e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000

My worry is why bitset always gives 64 zeros for the last three values. Interestingly, "cout" for the actual values works as expected.


If you look at the std::bitset constructors you will see that a bitset can be built either from a string of '0'/'1' characters or from an unsigned long long.

That means your double value is first converted to an integer. 10^30 is far outside the range of unsigned long long, and a floating-to-integer conversion whose result cannot be represented in the destination type is undefined behavior, which is why you see all zeros for the large values.

If you want to get the actual bits of the double you need to do some casting tricks to make it work:

unsigned long long bits = *reinterpret_cast<unsigned long long*>(&doubleValue);

Note that type-punning through reinterpret_cast like this violates the strict aliasing rules, so it is technically undefined behavior, even though in practice it works as long as sizeof(double) == sizeof(unsigned long long). If you want the behavior to be well-defined you have to copy the bytes instead, either with std::memcpy or by going through unsigned char*.