
I have to store 111 million key-value pairs (one key can have multiple values, at most 2 or 3) whose keys are 50-bit integers and whose values are 32-bit (maximum) integers. Now, my requirements are:

  1. Fast insertion of (key, value) pairs [allowing duplicates]
  2. Fast retrieval of the value/values for a given key.

A nice solution based on MultiMap is given here. However, I want to store more key-value pairs in main memory with no, or only a small, performance penalty. From web articles I have learned that B+ trees, R+ trees, B-trees, compact multimaps, etc. can be nice solutions for this. Can anybody help me:

Is there any Java library which properly satisfies all those needs (the data structures mentioned above, or others, are also acceptable; no issue with that)? Essentially, I want an efficient Java library data structure to store/retrieve key-value/values pairs that has a small memory footprint and lives entirely in main memory.

NB: I have tried HashMultimap (Guava, with some modifications using Trove) as mentioned by Louis Wasserman, Kyoto/Tokyo Cabinet, etc. My experience with the disk-backed solutions was not good, so please avoid those :). Another important point for choosing a library/data structure: the keys are 50-bit (so if we assign 64 bits, 14 bits are wasted), and the values are 32-bit ints at maximum, but mostly only 10-14 bits, so there is room to save space there as well.


6 Answers


I don't think there is anything in the JDK which will do this.

However, implementing such a thing is a simple matter of programming. Here is an open-addressed hash table with linear probing, with keys and values stored in parallel arrays:

public class LongIntParallelHashMultimap {

    private static final long NULL = 0L;

    private final long[] keys;
    private final int[] values;
    private int size;

    public LongIntParallelHashMultimap(int capacity) {
        keys = new long[capacity];
        values = new int[capacity];
    }

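    /**
     * Inserts a (key, value) pair, allowing duplicate keys: probes linearly
     * from the key's home slot and stores the pair in the first free slot.
     */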
    public void put(long key, int value) {
        if (key == NULL) throw new IllegalArgumentException("key cannot be " + NULL);
        // keep at least one slot empty so the probe loops in get() always terminate
        if (size == keys.length - 1) throw new IllegalStateException("map is full");

        int index = indexFor(key);
        while (keys[index] != NULL) {
            index = successor(index);
        }
        keys[index] = key;
        values[index] = value;
        ++size;
    }

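    /**
     * Returns all values stored under the key: probes from the key's home
     * slot until the first empty slot, which terminates the cluster.
     */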
    public int[] get(long key) {
        if (key == NULL) throw new IllegalArgumentException("key cannot be " + NULL);

        int index = indexFor(key);
        int count = countHits(key, index);

        int[] hits = new int[count];
        int hitIndex = 0;

        while (keys[index] != NULL) {
            if (keys[index] == key) {
                hits[hitIndex] = values[index];
                ++hitIndex;
            }
            index = successor(index);
        }

        return hits;
    }

    private int countHits(long key, int index) {
        int numHits = 0;
        while (keys[index] != NULL) {
            if (keys[index] == key) ++numHits;
            index = successor(index);
        }
        return numHits;
    }

    private int indexFor(long key) {
        // the hashing constant is (the golden ratio * Long.MAX_VALUE) + 1
        // see The Art of Computer Programming, section 6.4
        // the constant has two important properties:
        // (1) it is coprime with 2^64, so multiplication by it is a bijective function, and does not generate collisions in the hash
        // (2) it has a 1 in the bottom bit, so it does not add zeroes in the bottom bits of the hash, and does not generate (gratuitous) collisions in the index
        long hash = key * 5700357409661598721L;
        return Math.abs((int) (hash % keys.length));
    }

    private int successor(int index) {
        return (index + 1) % keys.length;
    }

    public int size() {
        return size;
    }

}

Note that this is a fixed-size structure. You will need to create it big enough to hold all your data; my 110 million entries take up 1.32 GB. The bigger you make it, over and above what is needed to store the data, the faster insertions and lookups will be. I found that with 110 million entries, at a load factor of 0.5 (2.64 GB, twice the space needed), it took on average 403 nanoseconds to look up a key, but at a load factor of 0.75 (1.76 GB, a third more space than needed), it took 575 nanoseconds. Decreasing the load factor below 0.5 usually doesn't make much difference; indeed, at a load factor of 0.33 (4.00 GB, three times the space needed) I get an average time of 394 nanoseconds. So even if you have 5 GB available, don't use it all.
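To make the sizing arithmetic concrete, here is a small sketch of where those figures come from (the class name is just illustrative; 12 bytes per slot is the 8-byte long key plus the 4-byte int value):

public class CapacityCalculator {

    // each slot in the parallel arrays costs 8 bytes (long key) + 4 bytes (int value)
    private static final int BYTES_PER_SLOT = 8 + 4;

    // capacity needed to hold the given number of entries at the given load factor
    static int capacityFor(int entries, double loadFactor) {
        return (int) Math.ceil(entries / loadFactor);
    }

    // memory used by the keys[] and values[] arrays, in gigabytes
    static double gigabytes(int capacity) {
        return capacity * (double) BYTES_PER_SLOT / 1e9;
    }

    public static void main(String[] args) {
        int entries = 110 * 1000 * 1000;
        for (double loadFactor : new double[] {1.0, 0.75, 0.5, 0.33}) {
            System.out.printf("load factor %.2f -> %.2f GB%n",
                    loadFactor, gigabytes(capacityFor(entries, loadFactor)));
        }
    }
}

This reproduces the figures above: 1.32 GB at a load factor of 1.0, 1.76 GB at 0.75, 2.64 GB at 0.5, and 4.00 GB at 0.33.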

Also note that zero is not allowed as a key. If that is a problem, change the NULL value to something else, and pre-fill the keys array with that value on creation.
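For example, a minimal sketch of that change, assuming Long.MIN_VALUE can never occur as a key (true here, since keys fit in 50 bits):

private static final long NULL = Long.MIN_VALUE; // sentinel: cannot collide with a 50-bit key

public LongIntParallelHashMultimap(int capacity) {
    keys = new long[capacity];
    values = new int[capacity];
    java.util.Arrays.fill(keys, NULL); // mark every slot as empty up front
}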

Answered 2012-04-08T23:00:41.647

Is there any Java library which satisfies all those needs properly?

AFAIK, no. Or at least, not one that minimizes the memory footprint.

However, it should be easy to write a custom map class specialized to exactly these requirements.
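Since the question notes that keys fit in 50 bits and values are mostly 10-14 bits, one thing such a custom class could do is pack a small value into the 14 bits left over in the key's long, storing one long per entry instead of a long plus an int. This is only a sketch of the idea (the names are made up, and it assumes a value fits in 14 bits; the occasional wider value, up to 32 bits, would need a separate overflow path):

public final class KeyValuePacking {

    private static final int KEY_BITS = 50;
    private static final long KEY_MASK = (1L << KEY_BITS) - 1;
    private static final long MAX_SMALL_VALUE = (1L << (64 - KEY_BITS)) - 1; // 14 bits

    /** Packs a 50-bit key and a value of at most 14 bits into a single long. */
    static long pack(long key, int value) {
        if ((key & ~KEY_MASK) != 0) throw new IllegalArgumentException("key wider than 50 bits");
        if (value < 0 || value > MAX_SMALL_VALUE) throw new IllegalArgumentException("value wider than 14 bits");
        return ((long) value << KEY_BITS) | key;
    }

    static long unpackKey(long packed) {
        return packed & KEY_MASK;
    }

    static int unpackValue(long packed) {
        return (int) (packed >>> KEY_BITS);
    }
}

With one long per entry, the parallel values array disappears and each slot costs 8 bytes instead of 12.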

Answered 2012-04-08T16:47:33.330

Building on @Tom Anderson's solution, I removed the need to allocate objects and added a performance test.

import java.util.Arrays;
import java.util.Random;

public class LongIntParallelHashMultimap {
    private static final long NULL = Long.MIN_VALUE;

    private final long[] keys;
    private final int[] values;
    private int size;

    public LongIntParallelHashMultimap(int capacity) {
        keys = new long[capacity];
        values = new int[capacity];
        Arrays.fill(keys, NULL);
    }

    public void put(long key, int value) {
        if (key == NULL) throw new IllegalArgumentException("key cannot be " + NULL);
        // keep at least one slot empty so the probe loops in get() always terminate
        if (size == keys.length - 1) throw new IllegalStateException("map is full");

        int index = indexFor(key);
        while (keys[index] != NULL) {
            index = successor(index);
        }
        keys[index] = key;
        values[index] = value;
        ++size;
    }

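    /**
     * Collects values for the key into the caller-supplied hits array
     * (no allocation), stopping early once hits is full, and returns
     * the number of hits written.
     */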
    public int get(long key, int[] hits) {
        if (key == NULL) throw new IllegalArgumentException("key cannot be " + NULL);

        int index = indexFor(key);

        int hitIndex = 0;

        while (keys[index] != NULL) {
            if (keys[index] == key) {
                hits[hitIndex] = values[index];
                ++hitIndex;
                if (hitIndex == hits.length)
                    break;
            }
            index = successor(index);
        }

        return hitIndex;
    }

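    // NB: unlike the answer above, this uses the raw key modulo the capacity
    // as the index; fine for random keys, but sequential keys will cluster.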
    private int indexFor(long key) {
        return Math.abs((int) (key % keys.length));
    }

    private int successor(int index) {
        index++;
        return index >= keys.length ? index - keys.length : index;
    }

    public int size() {
        return size;
    }

    public static class PerfTest {
        public static void main(String... args) {
            int values = 110 * 1000 * 1000;
            long start0 = System.nanoTime();
            long[] keysValues = generateKeys(values);

            LongIntParallelHashMultimap map = new LongIntParallelHashMultimap(222222227);
            long start = System.nanoTime();
            addKeyValues(values, keysValues, map);
            long mid = System.nanoTime();
            int sum = lookUpKeyValues(values, keysValues, map);
            long time = System.nanoTime();
            System.out.printf("Generated %.1f M keys/s, Added %.1f M/s and looked up %.1f M/s%n",
                    values * 1e3 / (start - start0), values * 1e3 / (mid - start), values * 1e3 / (time - mid));
            System.out.println("Expected " + values + " got " + sum);
        }

        private static long[] generateKeys(int values) {
            Random rand = new Random();
            long[] keysValues = new long[values];
            for (int i = 0; i < values; i++)
                keysValues[i] = rand.nextLong();
            return keysValues;
        }

        private static void addKeyValues(int values, long[] keysValues, LongIntParallelHashMultimap map) {
            for (int i = 0; i < values; i++) {
                map.put(keysValues[i], i);
            }
            assert map.size() == values;
        }

        private static int lookUpKeyValues(int values, long[] keysValues, LongIntParallelHashMultimap map) {
            int[] found = new int[8];
            int sum = 0;
            for (int i = 0; i < values; i++) {
                sum += map.get(keysValues[i], found);
            }
            return sum;
        }
    }
}

This prints:

Generated 34.8 M keys/s, Added 11.1 M/s and looked up 7.6 M/s

run on a 3.8 GHz i7 with Java 7 update 3.

This is much slower than the earlier test because you are accessing main memory, rather than the cache, at random. This is really a test of the speed of your memory. The writes are faster because they can be performed asynchronously to main memory.


Using this collection:

// imports needed (Guava and Trove 3):
import java.util.Collection;
import java.util.Set;

import com.google.common.base.Supplier;
import com.google.common.collect.Multimaps;
import com.google.common.collect.SetMultimap;

import gnu.trove.TDecorators;
import gnu.trove.map.hash.TLongObjectHashMap;
import gnu.trove.set.hash.TIntHashSet;

final SetMultimap<Long, Integer> map = Multimaps.newSetMultimap(
        TDecorators.wrap(new TLongObjectHashMap<Collection<Integer>>()),
        new Supplier<Set<Integer>>() {
            public Set<Integer> get() {
                return TDecorators.wrap(new TIntHashSet());
            }
        });

and running the same test with 50 million entries (which used about 16 GB) and -mx20g, I get the following results.

 Generated 47.2 M keys/s, Added 0.5 M/s and looked up 0.7 M/s

For 110 million entries you would need about 35 GB of memory and a machine 10x faster than mine (3.8 GHz) to perform 5 million adds per second.

Answered 2012-04-09T07:34:05.933

Looking for a database is a good idea, because problems like this are exactly what they are designed for. In recent years key-value databases have become very popular, e.g. for web services (keyword "NoSQL"), so you should be able to find something.

The choice of a custom data structure also depends on whether you want to use a hard drive to store the data (and how safe that has to be), or whether it is completely lost on program exit.

If implementing it manually, and the whole database fits into memory fairly easily, I would just implement a hashmap in C. Create a hash function that gives a (well-spread) memory address from a value. Insert there, or next to it, if the slot is already taken. Assigning and retrieving is then O(1). If you implement it in Java, you will have the 4-byte overhead for each (primitive) object.

Answered 2012-04-08T17:01:34.027

If you must use Java, then implement your own hashtable/hashmap. An important property of your table is to use a linked list to handle collisions. Hence, when you do a lookup, you may return all the elements on the list.
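A minimal sketch of that idea (separate chaining; the names are illustrative, and a real implementation would also want resizing and a stronger hash):

import java.util.ArrayList;
import java.util.List;

public class ChainedLongIntMultimap {

    // each bucket is a singly linked list; colliding and duplicate keys share it
    private static final class Node {
        final long key;
        final int value;
        Node next;
        Node(long key, int value, Node next) { this.key = key; this.value = value; this.next = next; }
    }

    private final Node[] buckets;

    public ChainedLongIntMultimap(int capacity) {
        buckets = new Node[capacity];
    }

    private int indexFor(long key) {
        return Math.abs((int) (key % buckets.length));
    }

    /** Prepends the pair to its bucket's list, so duplicate keys are naturally allowed. */
    public void put(long key, int value) {
        int i = indexFor(key);
        buckets[i] = new Node(key, value, buckets[i]);
    }

    /** Walks the bucket's list and returns every value stored under the key. */
    public List<Integer> get(long key) {
        List<Integer> hits = new ArrayList<>();
        for (Node n = buckets[indexFor(key)]; n != null; n = n.next) {
            if (n.key == key) hits.add(n.value);
        }
        return hits;
    }
}

Note that returning a boxed List here is just for brevity; as the earlier answers show, filling a caller-supplied int[] keeps the memory footprint and garbage pressure down.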

Answered 2012-04-08T17:11:08.513

I may be answering this question late, but Elasticsearch will solve your problem.

Answered 2015-10-07T11:22:15.467