
I ran an experiment in Redis to test the memory usage of large keys. I loaded 16 million strings of 50-60 characters (bytes) each, taking roughly 802 MB on disk, into a sorted set in Redis. This sorted set used up 3.12 GB of RAM.

Then I loaded 16 million short strings (10-12 characters), occupying 220 MB on disk, into another sorted set, which still used up 2.5 GB of RAM. So the reduction in disk usage is quite high (~72%), but the sorted set of short strings still uses about 80% of the memory taken by the long strings.

The same is the case with Redis hashes (the short strings use up roughly 80% of the memory used by the long strings). Does the memory used by a Redis data structure (sorted set or hash) depend only on the number of elements, and not on the length of each element? It seems natural to expect that shorter strings would mean less memory.

It would be great if I could understand why

16 million long strings use up almost the same space as 16 million short strings

in a sorted set, and whether there is anything I can do to reduce the memory taken up by the short strings (any memory optimization)?
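For context, here is a rough sketch of how the experiment above could be reproduced and measured. The redis-py client, the key name, and the scaled-down element count are my own assumptions; the question does not say how the data was loaded.

    # Load N random fixed-length strings into a sorted set and read Redis's
    # own memory counter before and after, to get an approximate per-element cost.
    import random
    import string

    import redis

    r = redis.Redis()
    N = 1_000_000          # scaled down from 16 million for a quicker run
    STRING_LEN = 12        # 10-12 char "short" strings; use ~55 for the long case

    before = r.info("memory")["used_memory"]

    pipe = r.pipeline(transaction=False)
    for i in range(N):
        member = "".join(random.choices(string.ascii_lowercase, k=STRING_LEN))
        pipe.zadd("test:zset", {member: i})
        if i % 10_000 == 0:
            pipe.execute()
    pipe.execute()

    after = r.info("memory")["used_memory"]
    print("approx bytes per element: %.1f" % ((after - before) / N))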


1 Answer


This question is similar to this one: Redis 10x more memory usage than data

A sorted set is the least memory-efficient data structure in Redis. It is implemented as a dictionary plus a skip list. Both structures carry metadata and pointers on a per-item basis, whose combined size is well above 10, 12, 50, or 60 bytes.

A 50-byte difference in the size of your strings does not result in a significant difference in the overall memory footprint, because most of the memory is used by pointers, metadata, and internal fragmentation. Of course, a larger difference would have a larger impact.
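For a rough sense of scale (approximate figures for a 64-bit build, not from the original answer): each sorted-set entry pays for a dictionary entry (three pointers), an object header, an sds string header, and a skip-list node with its score, backward pointer, and level pointers, plus allocator rounding on each of those allocations. That fixed cost is on the order of 100 bytes per entry, so paying 10 versus 60 bytes for the member itself only moves the total by a modest fraction.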

To benefit from the memory optimizations, you need to split your data structures (as described in the link above), along the lines of the sketch below. It is easier to do with hashes or sets, and generally difficult (or not possible at all) for sorted sets.
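Here is a minimal sketch of the "many small hashes" variant of that splitting idea, assuming the redis-py client; the bucket count and key naming are illustrative, and hash-max-ziplist-entries in redis.conf must be at least as large as the expected bucket size for buckets to keep the compact encoding.

    # Instead of one huge hash, bucket the keys into many small hashes so
    # each stays under hash-max-ziplist-entries and is stored as a ziplist.
    import binascii

    import redis

    r = redis.Redis()
    NUM_BUCKETS = 200_000   # ~16M keys / 200k buckets = ~80 fields per bucket

    def bucket_for(key):
        # Stable, cheap hash (CRC32) to pick a bucket for a key.
        return "strings:%d" % (binascii.crc32(key.encode()) % NUM_BUCKETS)

    def put(key, value):
        r.hset(bucket_for(key), key, value)

    def get(key):
        return r.hget(bucket_for(key), key)

With buckets this small, Redis keeps each bucket in the compact ziplist encoding instead of a full hash table, which usually cuts the per-key overhead substantially; the trade-off is that you lose per-key expiry and can only iterate bucket by bucket.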

Answered 2013-02-17T08:51:36.523