Since you have 30GB of input data, you probably don't want something that tries to hold it all in an in-memory data structure. Let's use disk space instead.
Here's one way to do it: load all the data into a sqlite database, generate an id for each unique last name and address pair, and then join everything back together:
#!/bin/sh
csv="$1"
# Use an on-disk database instead of in-memory because source data is 30gb.
# This will take a while to run.
db=$(mktemp -p .)
# Clean up the scratch database even if sqlite3 fails partway through.
trap 'rm -f "${db}"' EXIT
sqlite3 -batch -csv -header "${db}" <<EOF
.import "${csv}" people
-- One id per distinct (lname, address) pair.
CREATE TABLE ids(id INTEGER PRIMARY KEY, lname, address, UNIQUE(lname, address));
INSERT OR IGNORE INTO ids(lname, address) SELECT lname, address FROM people;
-- Join the ids back onto the original rows, preserving input order.
SELECT p.*, i.id AS ID
FROM people AS p
JOIN ids AS i ON (p.lname, p.address) = (i.lname, i.address)
ORDER BY p.rowid;
EOF
rm -f "${db}"
Example:
$ ./makeids.sh data.csv
D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address,ID
2,66M,J,Rock,F,1995,201211.0,J,1
3,David,HM,Lee,M,1991,201211.0,J,2
6,66M,"",Rock,F,1990,201211.0,J,1
0,David,"H M",Lee,M,1990,201211.0,B,3
3,Marc,H,Robert,M,2000,201211.0,C,4
6,Marc,M,Robert,M,1988,201211.0,C,4
6,Marc,MS,Robert,M,2000,201211.0,D,5
Ideally the IDs would consist only of numbers.
If that restriction can be relaxed, you can do it in a single pass by using a cryptographic hash of the last name and address as the ID:
$ perl -MDigest::SHA=sha1_hex -F, -lane '
BEGIN { $" = $, = "," }
if ($. == 1) { print @F, "ID" }
else { print @F, sha1_hex("@F[3,7]") }' data.csv
D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address,ID
2,66M,J,Rock,F,1995,201211.0,J,5c99211a841bd2b4c9cdcf72d7e95e46b2ae08b5
3,David,HM,Lee,M,1991,201211.0,J,c263f9d1feb4dc789de17a8aab8f2808aea2876a
6,66M,,Rock,F,1990,201211.0,J,5c99211a841bd2b4c9cdcf72d7e95e46b2ae08b5
0,David,H M,Lee,M,1990,201211.0,B,e86e81ab2715a8202e41b92ad979ca3a67743421
3,Marc,H,Robert,M,2000,201211.0,C,363ed8175fdf441ed59ac19cea3c37b6ce9df152
6,Marc,M,Robert,M,1988,201211.0,C,363ed8175fdf441ed59ac19cea3c37b6ce9df152
6,Marc,MS,Robert,M,2000,201211.0,D,cf5135dc402efe16cd170191b03b690d58ea5189
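The hashing step is easy to reproduce outside perl if you ever need to look up a single pair. Here's a minimal sketch using `sha1sum` from GNU coreutils (assumed to be available), hashing one lname,address pair the same way the one-liner does:

```shell
#!/bin/sh
# SHA-1 over the string "lname,address" -- the same bytes the perl
# one-liner feeds to sha1_hex for the Rock/J rows above.
lname=Rock
address=J
id=$(printf '%s,%s' "$lname" "$address" | sha1sum | cut -d ' ' -f 1)
echo "$id"  # should match the ID on the Rock/J rows in the output above
```

Note the `printf` with no trailing newline: hashing `Rock,J\n` would give a completely different digest.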
Alternatively, if the number of unique lname, address pairs is small enough that a hash table holding them all fits comfortably in memory on your system:
#!/usr/bin/gawk -f
BEGIN {
    FS = OFS = ","
}
# Pass the header row through with the new column name appended.
NR == 1 {
    print $0, "ID"
    next
}
# First time we see this (lname, address) pair: assign it the next id.
!(($4, $8) in ids) {
    ids[$4, $8] = ++counter
}
{
    print $0, ids[$4, $8]
}
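For completeness, here is the same hash-table logic run as a one-shot awk invocation on two of the sample rows, inlined so it's self-contained (the standalone script above would instead be saved to a file, made executable, and invoked with data.csv as its argument):

```shell
#!/bin/sh
# Two rows sharing (LNAME, Address) = (Rock, J) should receive the same ID.
printf '%s\n' \
    'D,FNAME,MNAME,LNAME,GENDER,DOB,snapshot,Address' \
    '2,66M,J,Rock,F,1995,201211.0,J' \
    '6,66M,,Rock,F,1990,201211.0,J' |
awk 'BEGIN { FS = OFS = "," }
     NR == 1 { print $0, "ID"; next }
     !(($4, $8) in ids) { ids[$4, $8] = ++counter }
     { print $0, ids[$4, $8] }'
```

Both data rows come out with ID 1, matching the sqlite output above.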