
How can I optimize searching two tuples from a large TSV file in Python?

Hello. I'm a Python novice and have been working on matching tuple elements between two separate tuples. I'm working with files of up to 3M rows, and what I've come up with is very slow. I've been reading posts for a few weeks but can't seem to piece the code together correctly. Here is what I have so far. (The data has been edited and simplified for clarity.) Say I have:

authList = ('jennifer', 35, 20), ('john', 20, 34), ('fred', 34, 89)
# a tuple of unique tweet authors with their x, y coordinates, exported from
# MS Access as a txt file

rtAuthors = ('larry', 57, 24, 'simon'), ('jeremy', 24, 15, 'john'), ('sandra', 39, 24, 'fred')
# a tuple of tuples holding the author, their x, y coordinates, and the author
# they are retweeting (taken from the "RT @" portion of their tweet)

I'm trying to create a new tuple (rtAuthList) that pulls the x, y coordinates from authList for any retweeted author that appears in rtAuthors.

So I would end up with a new tuple like this:

rtAuthList = ('jeremy', 24, 15, 'john', 20, 34), ('sandra', 39, 24, 'fred', 34, 89)

My question really has two parts, so I'm not sure whether I should post two questions or rename this one to cover both. First, running this process the way I've written it takes about an hour. There must be a faster way.

The other part of my question is why it only gets through about half of the final tuple. With my current dataset, after the first two steps I have roughly 250,000 rows in authList and 500,000 rows in rtAuthors. But when I run the third step and open rtAuthList at the end, it only covers my first 10 days of data and ignores the last 20 (I'm processing a month of tweets). I'm not sure why it isn't checking the whole rtAuthors list.

I've included my entire code below so you can see what I'm trying to do, but after the authList and rtAuthors tuples are created, it's really Step 3 I need help with. Please understand that I'm completely new to programming, so write your answer as if I know nothing, though that's probably obvious from my code anyway.

import csv
import sys
import os

authors= ""

class TwitterFields:             ### associated with monthly tweets from Twitter API
    def __init__(self, ID, COORD1, COORD2,TIME, AUTH, TEXT): 
        self.ID = ID
        self.COORD1 = COORD1
        self.COORD2 = COORD2
        self.TIME = TIME
        self.AUTH=AUTH
        self.TEXT=TEXT
        self.RTAUTH=""
        self.RTX=""
        self.RTY=""

        description="Twitter Data Class: holds twitter data fields from API "
        author=""

class AuthorFields:             ## associated with the txt file exported from MS Access
    def __init__(self, AUTH, COORD1, COORD2):
        self.AUTH=AUTH
        self.COORD1 = COORD1
        self.COORD2 = COORD2
        self.RTAUTH=""
        self.RTX=""
        self.RTY=""

        description="Author Data Class: holds author data fields from MS Access export"
        author=""


tw = [] # empty list to hold AuthorFields objects from the MS Access export
rt = [] # empty list to hold TwitterFields objects from the tweet file


authList = ()        ## tuple for holding auth, x, and y from tw list
rtAuthors = ()      ## tuple for holding tuples from rt where "RT @" is in tweet text
rtAuthList =()      ## tuple for holding results of set intersection 

e = ()                  # tuple for authList
b=()                    # tuple for rtAuthors
c=()                    # tuple for rtAuthList
bad_data = []      #A container for bad data 

with open(r'C:\Users\Amy\Desktop\Code\Merge2.txt') as g:   #open MS Access export file
    for line in g:                                             
        strLine = line.rstrip('\r\n').split("\t")
        tw.append(AuthorFields( str(strLine[0]),   #reads author name       
                                 strLine[1],       # x coordinate
                                 strLine[2]))      # y coordinate


## Step 1 ##
# Loop through the unique author dataset (tw) and make a list of all authors,x, y
try:
    for i in range(1, len(tw)):
        e = (tw[i].AUTH[:tw[i].AUTH.index(" (")], tw[i].COORD1, tw[i].COORD2)
        authList = authList + (e,)
except:
    bad_data.append(i)

print "length of authList = ", len(authList)    


# Loop through tweet txt file from MS Access 

with open(r'C:\Users\Amy\Desktop\Code\Syria_2012_08UTCedits3.txt') as f:
    for line in f:
        strLine=line.rstrip('\r\n').split('\t') # parse each line for tab spaces
        rt.append(TwitterFields(str(strLine[0]) ,      #reads tweet ID              
                              strLine[5],                         # x coordinate
                              strLine[6],                         # y coordinate
                              strLine[8],                         # time stamp
                              strLine[9],                         # author
                              strLine[12] ))                    # tweet text

## Step 2 ##
## Loop through new list (rt) to find all instances of "RT @" and retrieve author name

for i in range(1, len(rt)):        # creates tuple of (auth, x, y, time, rtauth)
    if (rt[i].TEXT[:4] == 'RT @'): # finds author in tweet text between "RT @" and ":"
        end = rt[i].TEXT.find(":")
        rt[i].RTAUTH = rt[i].TEXT[4:end]
        b = (rt[i].AUTH, rt[i].COORD1, rt[i].COORD2, rt[i].TIME, rt[i].RTAUTH)
        rtAuthors = rtAuthors + (b,)
    else:
        pass

print "length of rtAuthors = ", len(rtAuthors)


## Step 3 ##

## Loop through new rtAuthors tuple and find where rt[i].RTAUTH matches tw[i].AUTH in
## authList.


set1 = set(k[4] for k in rtAuthors).intersection(x[0] for x in authList)
#e = iter(set1).next()
set2 = list(set1)


print "Length of first set = ", len(set2)

# For each match, grab the x and y from authList and copy to rt[i].RTX and rt[i].RTY

for i in range(1, len(rtAuthors)):
    if rt[i].RTAUTH in set2:
        authListIndex = [x[0] for x in authList].index(rt[i].RTAUTH) #get record # 
        rt[i].RTX= authList[authListIndex][1] # grab the x 
        rt[i].RTY = authList[authListIndex][2] # grab the y
        c = ((rt[i].AUTH, rt[i].COORD1, rt[i].COORD2, rt[i].TIME, rt[i].RTAUTH,
        rt[i].RTX, rt[i].RTY))
        rtAuthList = rtAuthList + (c,)   # create new tuple of tuples with matches

else:
    pass

print "length of rtAuthList = ", len(rtAuthList)

1 Answer


In Step 3 you're using an O(n²) algorithm to match the tuples. If you build a lookup dictionary for authList, you can do it in O(n) instead...

>>> authList = ('jennifer', 35, 20), ('john', 20, 34), ('fred', 34, 89)
>>> rtAuthors = ('larry', 57, 24, 'simon'), ('jeremy', 24, 15, 'john'), ('sandra', 39, 24, 'fred')
>>> authDict = {t[0]: t[1:] for t in authList}
>>> rtAuthList = [t + authDict[t[-1]] for t in rtAuthors if t[-1] in authDict]
>>> print rtAuthList
[('jeremy', 24, 15, 'john', 20, 34), ('sandra', 39, 24, 'fred', 34, 89)]
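
The same dictionary can be dropped straight into Step 3 of your script. Here is a minimal sketch, assuming rt and authList have been built exactly as in Steps 1 and 2 of the question (the tweet loop variable is new; everything else uses your names). It loops over rt itself rather than range(1, len(rtAuthors)); since rtAuthors only holds the retweets it is shorter than rt, and using its length to index rt only reaches the earliest tweets, which may be why rtAuthList stops after the first ten days. It also appends to a list instead of concatenating tuples, so the result is not rebuilt on every match.

# sketch only: assumes rt and authList exist as built in Steps 1 and 2
authDict = {a[0]: a[1:] for a in authList}   # author -> (x, y)

rtAuthList = []                              # a list is cheap to append to
for tweet in rt:
    if tweet.RTAUTH and tweet.RTAUTH in authDict:   # RTAUTH is "" for non-retweets
        tweet.RTX, tweet.RTY = authDict[tweet.RTAUTH]
        rtAuthList.append((tweet.AUTH, tweet.COORD1, tweet.COORD2,
                           tweet.TIME, tweet.RTAUTH, tweet.RTX, tweet.RTY))

print "length of rtAuthList = ", len(rtAuthList)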
answered 2013-06-20T17:31:57