This post gives a brief introduction to jump consistent hash.

jump consistent hash

Jump consistent hash is an implementation of consistent hashing; see the paper A Fast, Minimal Memory, Consistent Hash Algorithm.
The classic consistent hashing algorithm comes from Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web.
The main difference is that jump consistent hash handles growing the number of buckets well, but it cannot remove an arbitrary node: when the bucket count shrinks, only the most recently added buckets are dropped.

Algorithm code

int32_t JumpConsistentHash(uint64_t key, int32_t num_buckets) {
    int64_t b = -1, j = 0;
    while (j < num_buckets) {
        b = j;                                   // remember the last valid bucket we jumped to
        key = key * 2862933555777941757ULL + 1;  // advance a 64-bit linear congruential generator
        // next candidate bucket; always strictly greater than b, so the loop terminates
        j = (b + 1) * (double(1LL << 31) / double((key >> 33) + 1));
    }
    return b;                                    // the last candidate that was still < num_buckets
}
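
To make the "minimal remapping" property concrete, here is a small sketch of my own (not from the paper): a direct Java port of the C++ code above, plus a loop that counts how many of 100,000 keys land in a different bucket when the bucket count grows from 10 to 11. Roughly 1/11 of the keys (about 9%) should move; the class and method names are just for illustration.

public class JumpHashDemo {
    // Straight port of the C++ reference code; key is treated as an unsigned 64-bit value.
    static int jumpConsistentHash(long key, int numBuckets) {
        long b = -1, j = 0;
        while (j < numBuckets) {
            b = j;
            key = key * 2862933555777941757L + 1;
            j = (long) ((b + 1) * ((double) (1L << 31) / (double) ((key >>> 33) + 1)));
        }
        return (int) b;
    }

    public static void main(String[] args) {
        int total = 100_000, moved = 0;
        for (long key = 0; key < total; key++) {
            if (jumpConsistentHash(key, 10) != jumpConsistentHash(key, 11)) {
                moved++;
            }
        }
        // Expect roughly total / 11 keys (about 9%) to change bucket.
        System.out.println("moved " + moved + " of " + total + " keys");
    }
}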

Java implementation

Guava ships with a ready-made implementation:
guava-22.0-sources.jar!/com/google/common/hash/Hashing.java

/**
   * Assigns to {@code hashCode} a "bucket" in the range {@code [0, buckets)}, in a uniform manner
   * that minimizes the need for remapping as {@code buckets} grows. That is, {@code
   * consistentHash(h, n)} equals:
   *
   * <ul>
   * <li>{@code n - 1}, with approximate probability {@code 1/n}
   * <li>{@code consistentHash(h, n - 1)}, otherwise (probability {@code 1 - 1/n})
   * </ul>
   *
   * <p>This method is suitable for the common use case of dividing work among buckets that meet the
   * following conditions:
   *
   * <ul>
   * <li>You want to assign the same fraction of inputs to each bucket.
   * <li>When you reduce the number of buckets, you can accept that the most recently added buckets
   * will be removed first. More concretely, if you are dividing traffic among tasks, you can
   * decrease the number of tasks from 15 to 10, killing off the final 5 tasks, and {@code
   * consistentHash} will handle it. If, however, you are dividing traffic among servers {@code
   * alpha}, {@code bravo}, and {@code charlie} and you occasionally need to take each of the
   * servers offline, {@code consistentHash} will be a poor fit: It provides no way for you to
   * specify which of the three buckets is disappearing. Thus, if your buckets change from {@code
   * [alpha, bravo, charlie]} to {@code [bravo, charlie]}, it will assign all the old {@code alpha}
   * traffic to {@code bravo} and all the old {@code bravo} traffic to {@code charlie}, rather than
   * letting {@code bravo} keep its traffic.
   * </ul>
   *
   *
   * <p>See the <a href="http://en.wikipedia.org/wiki/Consistent_hashing">Wikipedia article on
   * consistent hashing</a> for more information.
   */
  public static int consistentHash(HashCode hashCode, int buckets) {
    return consistentHash(hashCode.padToLong(), buckets);
  }

  // (Javadoc identical to the HashCode overload above; omitted for brevity.)
  public static int consistentHash(long input, int buckets) {
    checkArgument(buckets > 0, "buckets must be positive: %s", buckets);
    LinearCongruentialGenerator generator = new LinearCongruentialGenerator(input);
    int candidate = 0;
    int next;

    // Jump from bucket to bucket until we go out of range
    while (true) {
      next = (int) ((candidate + 1) / generator.nextDouble());
      if (next >= 0 && next < buckets) {
        candidate = next;
      } else {
        return candidate;
      }
    }
  }

/**
   * Linear congruential generator to use for consistent hashing. See
   * http://en.wikipedia.org/wiki/Linear_congruential_generator
   */
  private static final class LinearCongruentialGenerator {
    private long state;

    public LinearCongruentialGenerator(long seed) {
      this.state = seed;
    }

    public double nextDouble() {
      state = 2862933555777941757L * state + 1;
      return ((double) ((int) (state >>> 33) + 1)) / (0x1.0p31);
    }
  }
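
Reading the two implementations side by side: nextDouble() returns ((state >>> 33) + 1) / 2^31, so (candidate + 1) / generator.nextDouble() is exactly the paper's (b + 1) * (double(1LL << 31) / double((key >> 33) + 1)), using the same LCG multiplier and increment; the next >= 0 && next < buckets check plays the role of the while (j < num_buckets) loop condition.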

Usage example

    @Test
    public void testJumpHash(){
        List<String> nodes = Arrays.asList("ins1","ins2","ins3","ins4");
        List<String> keys = Arrays.asList("key1","key2","key3","key4");
        keys.stream().forEach(e -> {
            int bucket = Hashing.consistentHash(Hashing.md5().hashString(e, Charsets.UTF_8), nodes.size());
            String node = nodes.get(bucket);
            System.out.println(e + " >> " + node);
        });
    }
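
As a follow-up sketch (again my own, with a made-up class name, not part of the original post), the snippet below adds a fifth node and prints each key's old and new assignment. With jump consistent hash each key either keeps its node or moves to the newly added ins5; it never shuffles between the existing nodes.

import com.google.common.base.Charsets;
import com.google.common.hash.Hashing;

import java.util.Arrays;
import java.util.List;

public class JumpHashScaleOutDemo {
    public static void main(String[] args) {
        List<String> oldNodes = Arrays.asList("ins1", "ins2", "ins3", "ins4");
        List<String> newNodes = Arrays.asList("ins1", "ins2", "ins3", "ins4", "ins5");
        List<String> keys = Arrays.asList("key1", "key2", "key3", "key4");
        for (String key : keys) {
            long h = Hashing.md5().hashString(key, Charsets.UTF_8).padToLong();
            String before = oldNodes.get(Hashing.consistentHash(h, oldNodes.size()));
            String after = newNodes.get(Hashing.consistentHash(h, newNodes.size()));
            // "after" is either equal to "before" or the newly added "ins5".
            System.out.println(key + ": " + before + " -> " + after);
        }
    }
}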

doc

A Fast, Minimal Memory, Consistent Hash Algorithm (John Lamping, Eric Veach)
Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web (David Karger et al.)