I have this code:
import urllib
from bs4 import BeautifulSoup

base_url = 'https://en.wikipedia.org'
start_url = 'https://en.wikipedia.org/wiki/Computer_programming'
outfile_name = 'Computer_programming.csv'
no_of_links = 10

fp = open(outfile_name, 'wb')

def get_links(link):
    # fetch the page and collect up to no_of_links internal /wiki/... links
    # that appear inside paragraph text
    html = urllib.urlopen(link).read()
    soup = BeautifulSoup(html, "lxml")
    ret_list = soup.select('p a[href]')
    count = 0
    ret = []
    for tag in ret_list:
        link = tag['href']
        if link[0] == '/' and ':' not in link and link[:5] == '/wiki' and '#' not in link:
            ret.append(base_url + link)
            count = count + 1
            if count == no_of_links:
                return ret

# expand three levels out from the start page, writing 'source;target' rows
l1 = get_links(start_url)
for link in l1:
    fp.write('%s;%s\n' % (start_url, link))
for link1 in l1:
    l2 = get_links(link1)
    for link in l2:
        fp.write('%s;%s\n' % (link1, link))
    for link2 in l2:
        l3 = get_links(link2)
        for link in l3:
            fp.write('%s;%s\n' % (link2, link))
fp.close()
It saves the neighborhood of a node to a CSV file. But when I try to run it, I get this error:
for link in l3:
TypeError: 'NoneType' object is not iterable
I get the same error when I run the code on other Wikipedia links, such as https://en.wikipedia.org/wiki/Technology. The only page it works for is https://en.wikipedia.org/wiki/Computer_science. This is a problem, because I need to collect data from many more pages, not just the computer science one.
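My guess is that get_links only returns when it has collected exactly no_of_links matches, so on a page whose paragraphs contain fewer than ten qualifying links the function falls off the end, and a Python function that ends without hitting a return statement hands back None. A minimal sketch of the pattern I mean (take_first_n is just a made-up name for illustration):

def take_first_n(items, n):
    out = []
    for item in items:
        out.append(item)
        if len(out) == n:
            return out    # only reached when n items were found
    # no return here, so the caller silently gets None otherwise

print(take_first_n(['a', 'b', 'c'], 2))  # ['a', 'b']
print(take_first_n(['a'], 2))            # None -- iterating over it raises my TypeError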
Can anyone give me a hint on how to deal with it?
Thanks a lot.