
I am trying to compute the average response time per hour from a log file with millions of records; below is an excerpt of the log.

Right now I create a temporary file containing one line per unique ID together with its start and end times, and a second script then runs over this temporary file to compute the average response time per hour. Just creating the temporary file takes my script more than an hour.

Is there any way we can do this faster, or a better script with a shorter execution time? Note: these UNIQIDs do not appear in order.

Log file format:
2012-06-04 13:04:19,324 UNIQID1
2012-06-04 13:04:20,120 UNIQID1
2012-06-04 13:05:19,324 UNIQID2
2012-06-04 13:06:20,120 UNIQID2
2012-06-04 13:07:19,324 UNIQID3
2012-06-04 13:08:20,120 UNIQID3
2012-06-04 13:08:49,324 UNIQID4
2012-06-04 13:09:50,120 UNIQID4

Here is my code:

uids=`cat $i|grep "UNIQ" |sort -u` >> $log
for uid in ${uids}; do  
    count=`grep "$uid" test.log|wc -l`
    if [ "${count}" -ne "0" ]; then
        unique_uids[counter]="$uid"
        let counter=counter+1   
    fi   
done


echo ${unique_uids[@]}   
echo $counter  
echo " Unique No:" ${#unique_uids[@]}
echo "uid StartTime EndTime" > $log

for unique_uids in ${unique_uids[@]} ; do
    responseTime=`cat $i|grep "${unique_uids}" |awk '{split($2,Arr,":|,"); print Arr[1]*3600000+Arr[2]*60000+Arr[3]*1000+Arr[4]}'|sort -n`
    echo $unique_uids $responseTime >> $log
done

Thanks for your time!


2 Answers


A few simple fixes:

  • You don't need the `cat` call; just pass the file name to `grep`.
  • You shouldn't save the values both to a file and to a variable; use whichever is faster. Often you don't need either; a `while read -r date time id` loop would probably be faster.
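The loop from the last bullet can be sketched roughly as follows. This is a minimal sketch under my own assumptions, not the answerer's code: the pairing logic with a bash (4+) associative array is mine, and the sample log and file names are illustrative. It makes a single pass over the log, which also handles the out-of-order UNIQIDs the question mentions:

```shell
# Single pass over the log: remember the first timestamp seen for each ID,
# and emit "id start end" when the ID shows up a second time.
log=$(mktemp)
cat > "$log" <<'EOF'
2012-06-04 13:04:19,324 UNIQID1
2012-06-04 13:05:19,324 UNIQID2
2012-06-04 13:04:20,120 UNIQID1
2012-06-04 13:06:20,120 UNIQID2
EOF

pairs=$(mktemp)
declare -A start                      # bash 4+ associative array
while read -r date time id; do
    if [ -z "${start[$id]}" ]; then
        start[$id]="$date $time"                  # first sighting: remember the start
    else
        echo "$id ${start[$id]} $date $time"      # second sighting: emit the pair
    fi
done < "$log" > "$pairs"
cat "$pairs"
```

Each log line is visited exactly once, so there is no `grep` spawned per ID; the downstream hourly-average script can then consume the `id start end` pairs as before.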
Answered 2013-06-05T13:36:02.300

There are several problems with your script, and I think you'll find that the approach below suits your needs better. First, you don't need to spawn all those processes to do the work; it's fairly simple to do it all in awk. Also, the code you posted assumes that a given UNIQID only occurs on the same date. If your records cross midnight into the next day, that assumption will cause a lot of pain.

The awk script below does what you want. It assumes you are using gawk (GNU awk). If you are not, you can find awk implementations of mktime online, including here.

BEGIN {
  while ((getline uid < UIDFILE) > 0) {  # Check the return value: a bare getline loops forever if the file is missing
    x[uid] = "";          # Awk maintains x as an associative array, so lookups are hashed;
  }                       # seed with "" so the first-time check below works
}


{
  r = $NF;                  # Extract the unique ID from the record into r
  if (r in x) {             # If the UID is something we are interested in, then ...
    ts = $1 " " $2;         # concatenate the date and time fields
    gsub ("[:-]", " ", ts); # Replace the : and - with spaces
    gsub (",.*", "", ts);   # Remove everything after the comma (the milliseconds)
    # print ts, mktime(ts)  # If you want to see what mktime does

    if (x[r] == "")         # First time seeing this unique ID?
      x[r] = mktime(ts);    # Store the timestamp
    else {                  # We're seeing it for the second time
      now = mktime(ts);     # Keep track of the current log time
      rt = now - x[r];      # Compute the delta
      delete x[r];          # We don't need it any more
      # printf "Record <%s> has response time %f\n", r, rt;  # Print it out if you'd like
      hourrt += rt;         # Add it to this hour's total response time
      num++;                # And also keep track of how many records ended in this hour
      if (now % 3600 == 0) {  # Have we switched to a new hour?
        printf "Average response time = %f\n", hourrt / num;   # Dump the average
        num = hourrt = 0;
      }
    }
  }
}

You will need to invoke the script as follows:

gawk -v UIDFILE=name_of_uid_file  -f scriptname.awk 
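The answer does not specify how `name_of_uid_file` is produced; one way (my assumption) is to pull the last field out of the log itself and de-duplicate it, which replaces the question's per-ID `grep` loop with a single pass:

```shell
# Build the UID file from the log: the ID is the last whitespace-separated
# field on each line; sort -u removes the duplicates.
log=$(mktemp)
cat > "$log" <<'EOF'
2012-06-04 13:04:19,324 UNIQID1
2012-06-04 13:04:20,120 UNIQID1
2012-06-04 13:05:19,324 UNIQID2
EOF

uidfile=$(mktemp)
awk '{print $NF}' "$log" | sort -u > "$uidfile"
cat "$uidfile"
```

The resulting file contains one UNIQID per line and can be passed directly as `-v UIDFILE="$uidfile"`.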
Answered 2013-06-05T13:56:30.440