
I'm an R user, and I can't figure out the pandas equivalent of match(). I need to use this function to iterate over a bunch of files, grab key information, and merge it back into the current data structure on 'url'. In R I would do something like this:

logActions <- read.csv("data/logactions.csv")
logActions$class <- NA

files <- dir("data/textContentClassified/", full.names = TRUE)
for (i in 1:length(files)) {
    tmp <- read.csv(files[i])
    logActions$class[match(logActions$url, tmp$url)] <-
            tmp$class[match(tmp$url, logActions$url)]
}

I don't think I can use merge() or join(), because each would overwrite logActions$class every time. I also can't use update() or combine_first(), since neither has the indexing behavior I need. I also tried building a match() function based on this SO post, but couldn't figure out how to make it work with DataFrame objects. Apologies if I'm missing something obvious.

Here is some Python code summarizing my unsuccessful attempts at doing something like match() in pandas:

import numpy as np
import pandas as pd

left = pd.DataFrame({'url': ['foo.com', 'foo.com', 'bar.com'], 'action': [0, 1, 0]})
left["class"] = np.nan
right1 = pd.DataFrame({'url': ['foo.com'], 'class': [0]})
right2 = pd.DataFrame({'url': ['bar.com'], 'class': [1]})

# Doesn't work:
left.join(right1, on='url')
pd.merge(left, right1, on='url')

# Also doesn't work the way I need it to:
left = left.combine_first(right1)
left = left.combine_first(right2)
left 

# Also does something funky and doesn't really work the way match() does:
left = left.set_index('url', drop=False)
right1 = right1.set_index('url', drop=False)
right2 = right2.set_index('url', drop=False)

left = left.combine_first(right1)
left = left.combine_first(right2)
left

The desired output is:

       url  action  class
0  foo.com       0      0
1  foo.com       1      0
2  bar.com       0      1

However, I need to be able to call this over and over again so that I can iterate over each file.
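For reference, the R loop above can be sketched in pandas with Series.map, filling only the still-missing classes on each pass. The file contents below are toy stand-ins written to a temporary directory so the snippet is self-contained:

```python
import glob
import os
import tempfile
import numpy as np
import pandas as pd

# Toy stand-ins for data/textContentClassified/*.csv
d = tempfile.mkdtemp()
pd.DataFrame({'url': ['foo.com'], 'class': [0]}).to_csv(os.path.join(d, 'a.csv'), index=False)
pd.DataFrame({'url': ['bar.com'], 'class': [1]}).to_csv(os.path.join(d, 'b.csv'), index=False)

logActions = pd.DataFrame({'url': ['foo.com', 'foo.com', 'bar.com'],
                           'action': [0, 1, 0]})
logActions['class'] = np.nan

for path in glob.glob(os.path.join(d, '*.csv')):
    tmp = pd.read_csv(path)
    mapping = tmp.drop_duplicates('url').set_index('url')['class']
    # Only fill rows that are still NaN, so later files never overwrite earlier ones
    logActions['class'] = logActions['class'].combine_first(
        logActions['url'].map(mapping))
```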


3 Answers


Note the existence of pandas.match, which does exactly what R's match does.
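At the time of this answer, pandas exposed pandas.match(to_match, values), returning integer positions with -1 for misses, just like R's match (shifted to 0-based indexing). It has since been removed; in current pandas the same positions come from Index.get_indexer. A sketch of the match-style assignment, using the question's data:

```python
import numpy as np
import pandas as pd

left = pd.DataFrame({'url': ['foo.com', 'foo.com', 'bar.com'],
                     'action': [0, 1, 0]})
left['class'] = np.nan
right1 = pd.DataFrame({'url': ['foo.com'], 'class': [0]})

# Position of each left url inside right1.url, -1 where not found
# (the same contract as R's match(), 0-based)
idx = pd.Index(right1['url']).get_indexer(left['url'])
found = idx != -1
left.loc[found, 'class'] = right1['class'].to_numpy()[idx[found]]
```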

Answered 2013-04-07T18:35:18.360

Edit

If the urls in each right DataFrame are unique, you can convert the right DataFrame into a Series of klass indexed by url; then you can get the klass for every url on the left by indexing into it.

import numpy as np
import pandas as pd

left = pd.DataFrame({'url': ['foo.com', 'bar.com', 'foo.com', 'tmp', 'foo.com'], 'action': [0, 1, 0, 2, 4]})
left["klass"] = np.nan
right1 = pd.DataFrame({'url': ['foo.com', 'tmp'], 'klass': [10, 20]})
right2 = pd.DataFrame({'url': ['bar.com'], 'klass': [30]})

# reindex yields NaN for urls absent from the right DataFrame
left["klass"] = left.klass.combine_first(right1.set_index('url').klass.reindex(left.url).reset_index(drop=True))
left["klass"] = left.klass.combine_first(right2.set_index('url').klass.reindex(left.url).reset_index(drop=True))

print(left)

Is this what you want?

import numpy as np
import pandas as pd

left = pd.DataFrame({'url': ['foo.com', 'foo.com', 'bar.com'], 'action': [0, 1, 0]})
left["class"] = np.nan
right1 = pd.DataFrame({'url': ['foo.com'], 'class': [0]})
right2 = pd.DataFrame({'url': ['bar.com'], 'class': [1]})

pd.merge(left.drop("class", axis=1), pd.concat([right1, right2]), on="url")

Output:

   action      url  class
0       0  foo.com      0
1       1  foo.com      0
2       0  bar.com      1

If the class column on the left isn't all NaN, you can combine it with the merge result using combine_first.
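For instance, assuming one class on the left is already set (the pre-set value 5.0 below is illustrative), combine_first keeps the existing values and fills only the gaps. This relies on every left url appearing in the combined right frame, so the inner merge has the same length and row order as left:

```python
import numpy as np
import pandas as pd

left = pd.DataFrame({'url': ['foo.com', 'foo.com', 'bar.com'],
                     'action': [0, 1, 0],
                     'class': [np.nan, 5.0, np.nan]})  # one class pre-set
right = pd.concat([pd.DataFrame({'url': ['foo.com'], 'class': [0]}),
                   pd.DataFrame({'url': ['bar.com'], 'class': [1]})])

merged = pd.merge(left.drop('class', axis=1), right, on='url')
# Existing classes win; the merge result fills the NaNs
left['class'] = left['class'].combine_first(merged['class'])
```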

Answered 2013-04-06T22:36:59.913

Here is the complete code I ended up using:

import csv
from datetime import date, timedelta
import numpy as np
import pandas as pd

# read in df containing actions in chunks:
tp = pd.read_csv('/data/logactions.csv',
  quoting=csv.QUOTE_NONNUMERIC,
  iterator=True, chunksize=1000,
  encoding='utf-8', skipinitialspace=True,
  error_bad_lines=False)
df = pd.concat([chunk for chunk in tp], ignore_index=True)

# set classes to NaN
df["klass"] = np.nan
df = df[pd.notnull(df['url'])]
df = df.reset_index(drop=True)

# iterate over text files, match, grab klass
startdate = date(2013, 1, 1)
enddate = date(2013, 1, 26) 
d = startdate

while d <= enddate:
    dstring = d.isoformat()
    print(dstring)

    # Read in each file w/ classifications in chunks
    # Read in each file w/ classifications in chunks
    tp = pd.read_csv('/data/textContentClassified/content{dstring}classfied.tsv'.format(**locals()),
        sep=',', quoting=csv.QUOTE_NONNUMERIC,
        iterator=True, chunksize=1000,
        encoding='utf-8', skipinitialspace=True,
        error_bad_lines=False)
    thisdatedf = pd.concat([chunk for chunk in tp], ignore_index=True)
    thisdatedf = thisdatedf.drop_duplicates(['url'])
    thisdatedf = thisdatedf.reset_index(drop=True)

    thisdatedf = thisdatedf[pd.notnull(thisdatedf['url'])]
    df["klass"] = df.klass.combine_first(thisdatedf.set_index('url').klass.reindex(df.url).reset_index(drop=True))

    # Now iterate
    d = d + timedelta(days=1)
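The lookup line above depends on positional alignment: indexing the url-indexed klass Series by df.url yields one value per row of df (NaN for unseen urls), and reset_index(drop=True) relabels it 0..n-1 so that combine_first lines up with df's default integer index. Note that in modern pandas, selecting a Series with labels that may be missing raises a KeyError, so .reindex(...) is the safe spelling. A toy illustration (the data here is hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'url': ['foo.com', 'bar.com', 'foo.com'],
                   'klass': [np.nan, np.nan, 7.0]})
right = pd.DataFrame({'url': ['foo.com'], 'klass': [10.0]})

# Look up each url; urls absent from `right` become NaN
looked_up = right.set_index('url')['klass'].reindex(df.url)
looked_up = looked_up.reset_index(drop=True)  # realign with df's 0..n-1 index

# Fill only the rows that are still NaN; the existing 7.0 is kept
df['klass'] = df['klass'].combine_first(looked_up)
```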
Answered 2013-04-07T02:42:31.607