I want to find the top pages visited by users in the 18 to 25 age group. I have two files: one contains user name and age, the other contains user name and website name. Example:
users.txt:
john,22

pages.txt:
john,google.com
I wrote the following Python script, and it works as expected outside Hadoop.
import os

os.chdir("/home/pythonlab")

# Top sites visited by users aged 18 to 25

# Read the users file: user name, age (e.g. john,22)
with open("users.txt") as userfile:
    users = [line.strip().split(",") for line in userfile]
userlist = [(u[0], int(u[1])) for u in users]

# Read the page-visit file: user name, website visited (e.g. john,google.com)
with open("pages.txt") as pagefile:
    pagelist = [line.strip().split(",") for line in pagefile]

# Join users to page visits and keep only the 18-25 age group
usrpage = [[p[1], u[0]] for u in userlist for p in pagelist
           if u[0] == p[0] and 18 <= u[1] <= 25]

for z in usrpage:
    print(z[0] + ",1")  # website name, 1
Sample output:

yahoo.com,1
google.com,1
Now I want to solve this with Hadoop Streaming.

My question is: how do I handle these two named files (users.txt, pages.txt) in my mapper? Normally we only pass an input directory to Hadoop Streaming.
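One approach I'm considering is a reduce-side join, where the mapper tags each record with the file it came from. This is only a minimal sketch: it assumes Hadoop Streaming exposes the current split's path to the mapper as the mapreduce_map_input_file environment variable (map_input_file on older releases), and the script names join_mapper.py / join_reducer.py are made up for illustration.

join_mapper.py:

#!/usr/bin/env python
import os
import sys

# Hadoop Streaming publishes job configuration to the task's environment
# with dots replaced by underscores, so the path of the split currently
# being read should be available here.
input_file = os.environ.get("mapreduce_map_input_file",
                            os.environ.get("map_input_file", ""))

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    user, value = line.split(",", 1)
    if "users.txt" in input_file:
        print("%s\tA\t%s" % (user, value))  # age record
    else:
        print("%s\tP\t%s" % (user, value))  # page-visit record

join_reducer.py:

#!/usr/bin/env python
import sys

def flush(age, sites):
    # Emit one "site,1" pair per visit for users aged 18 to 25.
    if age is not None and 18 <= age <= 25:
        for site in sites:
            print("%s,1" % site)

current_user, age, sites = None, None, []

for line in sys.stdin:
    user, tag, value = line.strip().split("\t", 2)
    if user != current_user:
        flush(age, sites)
        current_user, age, sites = user, None, []
    if tag == "A":
        age = int(value)
    else:
        sites.append(value)

flush(age, sites)

The job would then be launched with both files as inputs (the jar and HDFS paths below are placeholders):

hadoop jar /path/to/hadoop-streaming.jar \
    -input /data/users.txt \
    -input /data/pages.txt \
    -mapper join_mapper.py \
    -reducer join_reducer.py \
    -file join_mapper.py \
    -file join_reducer.py \
    -output /data/usrpage_out

Since all records with the same user name sort to the same reducer, the reducer can pair ages with page visits without loading either file into memory. The output is the same site,1 pairs as my local script, so a second aggregation pass would still be needed to actually rank the top sites. Is relying on mapreduce_map_input_file the right way to tell the two files apart, or is there a cleaner mechanism?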