
How do I get smoothly from Part 1 to Part 2 and save the results in Part 3? So far I cannot process the scraped URL links unless I paste one into Part 2 myself. I also cannot save the output, because the last URL link overwrites all the others.

import urllib
import mechanize
from bs4 import BeautifulSoup
import os, os.path
import urlparse
import re
import csv

Part 1:

path = '/Users/.../Desktop/parsing/1.html'

f = open(path,"r")
if f.mode == 'r':       
    contents = f.read()

soup = BeautifulSoup(contents)
search = soup.findAll('div',attrs={'class':'mf_oH mf_nobr mf_pRel'})
searchtext = str(search)
soup1 = BeautifulSoup(searchtext)   

for tag in soup1.findAll('a', href = True):
    raw_url = tag['href'][:-7]
    url = urlparse.urlparse(raw_url)
    p = "http"+str(url.path)

Part 2:

for i in url:
    url = "A SCRAPED URL LINK FROM ABOVE"

    homepage = urllib.urlopen(url)
    soup = BeautifulSoup(homepage)

    for tag in soup.findAll('a',attrs={'name':'g_my.main.right.gifts.link-send'}):
        searchtext = str(tag['href'])
        original = searchtext
        removed = original.replace("gifts?send=", "")
        print removed

Part 3:

i = 0
for i in removed:
    f = open("1.csv", "a+")
    f.write(removed)
    i += 1
    f.close

Update 1. Following the suggestion, I still get this:

Traceback (most recent call last):
  File "page.py", line 31, in <module>
    homepage = urllib.urlopen(url)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py", line 87, in urlopen
    return opener.open(url)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py", line 180, in open
    fullurl = unwrap(toBytes(fullurl))
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py", line 1057, in unwrap
    url = url.strip()
AttributeError: 'ParseResult' object has no attribute 'strip'


1 Answer


In Part 1, you keep overwriting url with each new URL. You should use a list and append the URLs to it:

urls = []
for tag in soup1.findAll('a', href = True):
    raw_url = tag['href'][:-7]
    url = urlparse.urlparse(raw_url)
    urls.append(url)
    p = "http"+str(url.path) # don't know what that's for, you're not using it later

Then, in Part 2, you can iterate directly over urls. Likewise, removed should not be overwritten on every iteration. Also, the variable original is unnecessary: your searchtext is not altered by the replace operation, because replace returns a new string and leaves the original one untouched:

removed_list = []
for url in urls:
    homepage = urllib.urlopen(url)
    soup = BeautifulSoup(homepage)

    for tag in soup.findAll('a',attrs={'name':'g_my.main.right.gifts.link-send'}):
        searchtext = str(tag['href'])
        removed = searchtext.replace("gifts?send=", "")
        print removed
        removed_list.append(removed)

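(Incidentally, this also explains the traceback from Update 1: urllib.urlopen expects a URL string, but the urls list built above holds urlparse.ParseResult objects, which is exactly what raises that AttributeError. A minimal fix, assuming the parsed URLs are only needed as plain strings again, would be to reassemble each one before appending it in Part 1:

urls.append(urlparse.urlunparse(url))  # turn the ParseResult back into a plain URL string

or simply append raw_url and skip the urlparse call altogether.)
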
Then, in Part 3, you don't have to open and close the file for every line of output. In fact, you are not even closing it properly, because you write f.close without parentheses, which never actually calls the close() method. The right way anyhow is to use the with statement:

with open("1.csv", "w") as outfile:
    for item in removed_list:
        outfile.write(item + "\n")

Though I can't see how this makes a CSV file (only one item per line?)...

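If actual CSV output is wanted, the csv module would be the idiomatic route; a minimal sketch, assuming each extracted value should become its own row (the column name here is made up):

import csv

with open("1.csv", "wb") as outfile:    # 'wb' is what Python 2's csv module expects
    writer = csv.writer(outfile)
    writer.writerow(["removed"])        # hypothetical header row
    for item in removed_list:
        writer.writerow([item])
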
answered 2014-11-16T08:47:38.080