I have my code working. Now I want to modify it so that it fetches data from multiple URLs, but the URLs differ by only one word.
Here is my code, which currently fetches from only one URL:
from string import punctuation, whitespace
import urllib2
import datetime
import re
from bs4 import BeautifulSoup as Soup
import csv
today = datetime.date.today()
html = urllib2.urlopen("http://www.99acres.com/property-in-velachery-chennai-south-ffid").read()
soup = Soup(html)
print "INSERT INTO `property` (`date`,`Url`,`Rooms`,`place`,`PId`,`Phonenumber1`,`Phonenumber2`,`Phonenumber3`,`Typeofperson`,` Nameofperson`,`typeofproperty`,`Sq.Ft`,`PerSq.Ft`,`AdDate`,`AdYear`)"
print 'VALUES'
re_digit = re.compile('(\d+)')
properties = soup.findAll('a', title=re.compile('Bedroom'))
for eachproperty in soup.findAll('div', {'class':'sT'}):
    a = eachproperty.find('a', title=re.compile('Bedroom'))
    pdate = eachproperty.find('i', {'class':'pdate'})
    pdates = re.sub('(\s{2,})', ' ', pdate.text)
    div = eachproperty.find('div', {'class': 'sT_disc grey'})
    try:
        project = div.find('span').find('b').text.strip()
    except:
        project = 'NULL'
    area = re.findall(re_digit, div.find('i', {'class': 'blk'}).text.strip())
    print ' ('
    print today,","+ (a['href'] if a else '`NULL`')+",", (a.string if a else 'NULL, NULL')+ "," +",".join(re.findall("'([a-zA-Z0-9,\s]*)'", (a['onclick'] if a else 'NULL, NULL, NULL, NULL, NULL, NULL')))+","+ ", ".join([project] + area),","+pdates+""
    print ' ), '
These are the URLs I want to fetch at the same time:
http://www.99acres.com/property-in-velachery-chennai-south-ffid
http://www.99acres.com/property-in-thoraipakkam-chennai-south-ffid
http://www.99acres.com/property-in-madipakkam-chennai-south-ffid
As you can see, only one word differs between the URLs.
I am trying to create an array like the one below:
for locality in areas (http://www.99acres.com/property-in-velachery-chennai-south-ffid, http://www.99acres.com/property-in-thoraipakkam-chennai-south-ffid, http://www.99acres.com/property-in-madipakkam-chennai-south-ffid):
    link = "str(locality)"
    html = urllib2.urlopen(link)
    soup = Soup(html)
This doesn't seem to work. What I actually want is to pass just the one word into the URL, like this:
for locality in areas(madipakkam, thoraipakkam, velachery):
    link = "http://www.99acres.com/property-in-+ str(locality)+-chennai-south-ffid"
    html = urllib2.urlopen(link)
    soup = BeautifulSoup(html)
I hope that makes it clear.
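In other words, something like this sketch is what I am after (a minimal example assuming the three locality names from the URLs above; the parsing body would stay the same as in my working code):

```python
# Build the three URLs from the locality names; only this one word changes.
localities = ["velachery", "thoraipakkam", "madipakkam"]

def build_url(locality):
    # Insert the locality into the fixed URL template.
    return "http://www.99acres.com/property-in-%s-chennai-south-ffid" % locality

urls = [build_url(loc) for loc in localities]

# Then fetch and parse each page in turn, reusing the existing parsing code:
# for url in urls:
#     html = urllib2.urlopen(url).read()
#     soup = Soup(html)
#     ...  # same findAll/print logic as before
```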