I ran an experiment in Redis to test the memory usage of large keys. I loaded 16 million strings of 50-60 characters (bytes) each, taking roughly 802 MB on disk, into a sorted set in Redis. The sorted set ended up using 3.12 GB of RAM.
Then I loaded 16 million short strings (10-12 characters), occupying 220 MB on disk, into another sorted set, which still used 2.5 GB of RAM. So although the on-disk size dropped by ~72%, the sorted set of short strings still uses a large share of the memory used by the long strings.
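For reference, this is roughly how I loaded and measured each set. It is only a minimal sketch assuming redis-py, with hypothetical key and file names (`short_strings`, `short_strings.txt`) and one member per line; every member gets score 0 since I only care about memory, not ordering.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def load_sorted_set(key, path, batch=10_000):
    """Load one member per line into a sorted set, all with score 0."""
    pipe = r.pipeline(transaction=False)
    with open(path) as f:
        for i, line in enumerate(f, 1):
            pipe.zadd(key, {line.rstrip("\n"): 0})
            if i % batch == 0:
                pipe.execute()   # flush the pipeline in batches
    pipe.execute()

load_sorted_set("short_strings", "short_strings.txt")

# MEMORY USAGE (Redis >= 4.0) reports the bytes attributed to one key;
# samples=0 walks every element instead of sampling.
print(r.memory_usage("short_strings", samples=0))
```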
The same holds for Redis hashes (the short strings use up roughly 80% of the memory used by the long strings). Does the memory used by a Redis data structure (sorted set or hash) depend only on the number of elements and not on the length of each element? Intuitively, shorter strings should mean less memory.
It would be great to understand why 16 million long strings take up almost the same space as 16 million short strings in a sorted set, and whether there is anything I can do to reduce the memory taken up by the short strings (any memory optimization)?
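For what it's worth, here is the rough arithmetic I did on the numbers above. It only re-derives per-element figures from my own measurements; the overhead breakdown in the last comment is my guess at where the bytes go, not something I have verified.

```python
# Rough per-element figures, derived only from the numbers quoted above.
GB, MB = 1024 ** 3, 1024 ** 2
n = 16_000_000

mem_long  = 3.12 * GB / n   # ~209 bytes of RAM per element (long strings)
mem_short = 2.50 * GB / n   # ~168 bytes of RAM per element (short strings)

disk_long  = 802 * MB / n   # ~53 bytes per string on disk
disk_short = 220 * MB / n   # ~14 bytes per string on disk

# The ~41-byte drop in RAM per element roughly tracks the ~39-byte drop in
# payload size, which suggests a fixed per-element overhead of ~150 bytes
# (skiplist node, hash-table entry, sds header, pointers) dominating both.
print(round(mem_long), round(mem_short), round(disk_long), round(disk_short))
```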