
I have a JSON dataset containing roughly 8.7 million key-value pairs extracted from a Redis store, where each key is guaranteed to be an 8-digit number and each value is an 8-character alphanumeric string, i.e.

[{
"91201544":"INXX0019",
"90429396":"THXX0020",
"20140367":"ITXX0043",
 ...
}]

To reduce Redis memory usage, I want to convert this into a hash of hashes, where each hash's key is the first 6 characters of the original key (see this link), and then store it back into Redis.

Specifically, I want the JSON data structure I generate (I will then write some code to parse this JSON structure and produce a file of Redis commands consisting of HSET and the like) to look more like

[{
 "000000": { "00000023": "INCD1234",
             "00000027": "INCF1423",
              ....
           },
 ....
 "904293": { "90429300": "THXX0020",
             "90429302": "THXX0024",
             "90429305": "THXY0013"}
 }]
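
(For reference, the command file I have in mind would just contain lines like those below; input.json is a placeholder name, and this untested one-liner only sketches how they might eventually be emitted:)

% jq -r '.[0] | to_entries[] | "HSET \(.key[0:6]) \(.key) \(.value)"' input.json
HSET 912015 91201544 INXX0019
HSET 904293 90429396 THXX0020
...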

Since jq has made quite an impression on me, and since I'm trying to become more fluent in functional programming, I wanted to use jq for this task. So far I've come up with the following:

% jq '.[0] | to_entries | map({key: .key, pfx: .key[0:6], value: .value}) | group_by(.pfx)'

which gives me something like

[
  [
    {
      "key": "00000130",
      "pfx": "000001",
      "value": "CAXX3231"
    },
    {
      "key": "00000162",
      "pfx": "000001",
      "value": "CAXX4606"
    }
  ],
  [
    {
      "key": "00000238",
      "pfx": "000002",
      "value": "CAXX1967"
    },
    {
      "key": "00000256",
      "pfx": "000002",
      "value": "CAXX0727"
    }
  ],
  ....
]

I then tried the following:

% jq 'map(map({key: .pfx, value: {key, value}})) 
      | map(reduce .[] as $item ({}; {key: $item.key, value: [.value[], $item.value]} )) 
      | map( {key, value: .value | from_entries} ) 
      | from_entries'

This does give me the correct result, but it also prints an error for (I believe) each reduce:

jq: error: Cannot iterate over null
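
(I believe what happens is that each reduce starts from {}, whose .value is null, and iterating over null is an error in jq; a minimal reproduction of just the message, assuming the same jq build:)

% jq -n '.value[]'
jq: error: Cannot iterate over null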

The final result is

{
   "000001": {
     "00000130": "CAXX3231",
     "00000162": "CAXX4606"
   },
   "000002": {
     "00000238": "CAXX1967",
     "00000256": "CAXX0727"
   },
   ...
}

This is correct, but how can I avoid having this warning thrown on stderr?


3 Answers


I'm not sure there's enough data here to evaluate what the source of the problem is. I find it hard to believe that what you tried gives you that result; I kept getting errors.

Try this filter:

.[0]
    | to_entries
    | group_by(.key[0:6])                  # group the entries by their 6-character prefix
    | map({
          key:   .[0].key[0:6],            # the shared prefix becomes the hash name
          value: map(.key=.key[6:8]) | from_entries   # keep only the 2-character suffix as the field
      })
    | from_entries

Given data that looks like this:

[{
    "91201544":"INXX0019",
    "90429396":"THXX0020",
    "20140367":"ITXX0043",
    "00000023":"INCD1234",
    "00000027":"INCF1423",
    "90429300":"THXX0020",
    "90429302":"THXX0024",
    "90429305":"THXY0013"
}]

The result is this:

{
  "000000": {
    "23": "INCD1234",
    "27": "INCF1423"
  },
  "201403": {
    "67": "ITXX0043"
  },
  "904293": {
    "00": "THXX0020",
    "02": "THXX0024",
    "05": "THXY0013",
    "96": "THXX0020"
  },
  "912015": {
    "44": "INXX0019"
  }
}
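
If you'd rather keep the full 8-digit keys as the inner field names, matching the structure sketched in the question, the key-shortening step can simply be dropped (an untested variant of the same filter):

.[0]
    | to_entries
    | group_by(.key[0:6])
    | map({
          key:   .[0].key[0:6],
          value: from_entries
      })
    | from_entries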
Answered 2014-07-18T06:55:16.307

I understand that this is not what you are asking for but, just for the reference, I think it will be MUCH faster to do this with Redis's built-in Lua scripting.

And it turns out that it is a bit more straightforward:

-- KEYS fetches everything in one blocking call; acceptable for a one-off migration
for _,key in pairs(redis.call('keys', '*')) do
  local val = redis.call('get', key)
  local short_key = string.sub(key, 1, 6)  -- first 6 characters become the hash key
  redis.call('hset', short_key, key, val)  -- the full key becomes the field inside the hash
  redis.call('del', key)                   -- drop the old plain key
end

This is done in place, without transferring data from/to Redis or converting to/from JSON.

Run it from the console as:

$ redis-cli eval "$(cat script.lua)" 0
Answered 2014-07-18T07:41:47.787

For the record, jq's group_by relies on sorting, which will of course slow things down noticeably once the input is large enough. Even with an input array of just 100,000 items, the following is about 40% faster:

def compress:
  . as $in
  | reduce keys[] as $key ({};
      $key[0:6] as $k6             # the 6-character hash name
      | $key[6:] as $k2            # the 2-character field
      | .[$k6] += {($k2): $in[$key]} );

.[0] | compress

Given Jeff's input, the output is identical.
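
(If you want to try this yourself: save the def together with the final line as, say, compress.jq, a placeholder name, and run it with jq's -f option. Where available, keys_unsorted should shave off a bit more, since unlike keys it skips sorting the key names:)

% jq -f compress.jq data.json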

Answered 2015-12-14T18:43:51.200