C# Question

Best way to GetHashCode() for 44-bit number stored as Int64

I have around 5,000,000 objects stored in a Dictionary<MyKey, MyValue>.

MyKey is a struct that packs each component of my key (5 different numbers) into the right-most 44 bits of an Int64 (ulong).
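
Roughly, the struct looks something like this (the field widths shown here are made up purely for illustration; only the 12 + 10 + 10 + 8 + 4 = 44-bit total matters):

   using System;

   struct MyKey : IEquatable<MyKey>
   {
      private readonly ulong packed;   // only the low 44 bits are ever set

      // Hypothetical field widths: 12 + 10 + 10 + 8 + 4 = 44 bits.
      public MyKey(uint a, uint b, uint c, uint d, uint e)
      {
         packed = ((ulong)(a & 0xFFF) << 32)   // 12 bits at positions 32-43
                | ((ulong)(b & 0x3FF) << 22)   // 10 bits at positions 22-31
                | ((ulong)(c & 0x3FF) << 12)   // 10 bits at positions 12-21
                | ((ulong)(d & 0xFF) << 4)     //  8 bits at positions 4-11
                | (e & 0xF);                   //  4 bits at positions 0-3
      }

      public bool Equals(MyKey other) => packed == other.packed;
      public override bool Equals(object obj) => obj is MyKey other && Equals(other);

      // The question: is delegating to the native hash good enough here?
      public override int GetHashCode() => packed.GetHashCode();
   }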

Since the ulong will always start with 20 zero-bits, my gut feeling is that returning the native Int64.GetHashCode() implementation is likely to collide more often than if the hash code implementation only considers the 44 bits that are actually in use (although mathematically, I wouldn't know where to begin to prove that theory).

This increases the number of calls to .Equals() and makes dictionary lookups slower.

The .NET implementation of Int64.GetHashCode() looks like this:

public override int GetHashCode()
{
return (int)this ^ (int)(this >> 32);
}
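
Note what this means for a value that only uses its low 44 bits: (int)(this >> 32) contributes at most 12 nonzero bits (bits 32-43), which are XORed into the low 32 bits. Distinct 44-bit keys that cancel under that XOR are easy to construct, for example:

   using System;

   class NativeHashCollision
   {
      static void Main()
      {
         // Two distinct 44-bit keys whose native Int64 hash codes collide,
         // because (low 32 bits) ^ (high 32 bits) is 5 for both:
         long k1 = 0x000000005;   // low = 5, high = 0  ->  5 ^ 0 = 5
         long k2 = 0x100000004;   // low = 4, high = 1  ->  4 ^ 1 = 5
         Console.WriteLine(k1.GetHashCode());   // 5
         Console.WriteLine(k2.GetHashCode());   // 5

         // A hash that instead XORs the top 32 of the 44 bits with the low
         // 32 bits (the alternative tried in the answer below) tells them apart:
         Console.WriteLine((int)(k1 >> 12) ^ (int)k1);   // 5
         Console.WriteLine((int)(k2 >> 12) ^ (int)k2);   // 1048580
      }
   }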


How would I best implement GetHashCode()?

Answer

Here's my attempt to answer this question, which I'm posting despite the fact that the answer is the opposite of what I was expecting. (Although I may have made a mistake somewhere - I almost hope so, and am open to criticism regarding my test technique.)

   using System;

   class Program
   {
      // Number of Dictionary hash buckets found here:
      // http://stackoverflow.com/questions/24366444/how-many-hash-buckets-does-a-net-dictionary-use
      const int CNumberHashBuckets = 4999559;

      static void Main(string[] args)
      {
         Random randomNumberGenerator = new Random();

         int[] dictionaryBuckets1 = new int[CNumberHashBuckets];
         int[] dictionaryBuckets2 = new int[CNumberHashBuckets];

         for (int i = 0; i < 5000000; i++)
         {
            // A random 44-bit key. (Caveat: Random.NextDouble() carries fewer
            // than 44 bits of entropy, so the low bits are not perfectly uniform.)
            ulong randomKey = (ulong)(randomNumberGenerator.NextDouble() * 0x0FFFFFFFFFFF);

            // Hash 1: the native hash (low 32 bits XOR high 32 bits).
            int simpleHash = randomKey.GetHashCode();
            BumpHashBucket(dictionaryBuckets1, simpleHash);

            // Hash 2: XOR the top 32 of the 44 bits with the low 32 bits.
            // (int.GetHashCode() returns the value itself, so the XOR can be
            // written directly on the casts.)
            int superHash = (int)(randomKey >> 12) ^ (int)randomKey;
            BumpHashBucket(dictionaryBuckets2, superHash);
         }

         int collisions1 = ComputeCollisions(dictionaryBuckets1);
         int collisions2 = ComputeCollisions(dictionaryBuckets2);

         Console.WriteLine("Native hash collisions: {0}", collisions1);
         Console.WriteLine("Super hash collisions:  {0}", collisions2);
      }

      private static void BumpHashBucket(int[] dictionaryBuckets, int hashedKey)
      {
         // Map the 32-bit hash code onto a bucket index, as Dictionary does.
         int bucketIndex = (int)((uint)hashedKey % CNumberHashBuckets);
         dictionaryBuckets[bucketIndex]++;
      }

      private static int ComputeCollisions(int[] dictionaryBuckets)
      {
         // Every key after the first in a bucket counts as one collision.
         int collisions = 0;
         foreach (int dictionaryBucket in dictionaryBuckets)
            collisions += Math.Max(dictionaryBucket - 1, 0);
         return collisions;
      }
   }

I try to simulate the processing a Dictionary would perform. The OP says he has "around 5,000,000" objects in a Dictionary, and according to the referenced source there will be either 4999559 or 5999471 "buckets" in the Dictionary.

Then I generate 5,000,000 random 44-bit keys to simulate the OP's Dictionary entries, and hash each key in two different ways: the simple ulong.GetHashCode() and the alternative that I suggested in a comment. I then turn each hash code into a bucket index using modulo - I assume that is close enough to what Dictionary does (see the note just below). Each bucket counter is incremented accordingly, as a way of computing the number of collisions.
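
For reference, the real .NET Framework Dictionary doesn't reinterpret the sign bit the way my (uint) cast does; it clears the sign bit before taking the modulo:

   // Bucket selection as the .NET Framework Dictionary performs it: the sign
   // bit of the hash code is cleared, then modulo the bucket count is taken.
   static int DictionaryBucketIndex(int hashCode, int bucketCount)
   {
      return (hashCode & 0x7FFFFFFF) % bucketCount;
   }

The two mappings differ only for negative hash codes and should spread keys about equally well, so the simulation should still be representative.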

Unfortunately (for me) the results are not what I was hoping for. With 4999559 buckets the simulation typically indicates around 1.8 million collisions, with my "super hash" technique actually having a few (around 0.01%) MORE collisions. With 5999471 buckets there are typically around 1.6 million collisions, and my so-called super hash gives maybe 0.1% fewer collisions.

So my "gut feeling" was wrong, and there seems to be no justification for trying to find a better hash code technique.
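
As a sanity check on those numbers: if the hash were perfectly uniform, throwing n keys into m buckets would leave an expected n - m * (1 - (1 - 1/m)^n) collisions. A quick calculation:

   using System;

   class ExpectedCollisions
   {
      // Expected collisions for n uniformly hashed keys in m buckets: every
      // key after the first in a bucket is a collision, so the count is
      // n minus the expected number of occupied buckets.
      static double Expected(double n, double m)
         => n - m * (1.0 - Math.Pow(1.0 - 1.0 / m, n));

      static void Main()
      {
         Console.WriteLine(Expected(5000000, 4999559));   // ~1.84 million
         Console.WriteLine(Expected(5000000, 5999471));   // ~1.60 million
      }
   }

Those figures match the ~1.8 million and ~1.6 million collisions observed above almost exactly, which suggests ulong.GetHashCode() already behaves like a uniform hash on these keys - there simply isn't much room for improvement.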