I am new to Scrapy, and I am crawling a job-listing site: clicking a job title opens a new page containing the data I need to extract.
For example, a results page contains a table in the following format:
Job Title Full/Part Time Location/Affiliates
1. Accountant Full Time Mount Sinai Medical Center (Manhattan)
2. Accountant Full Time Mount Sinai Medical Center (Manhattan)
3. Admin Assistant Full Time Mount Sinai Hospital (Queens)
4. Administrative Assistant Full Time Mount Sinai Medical Center (Manhattan)
Page: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
Every job title above is a JavaScript-generated link, and following one means submitting a form with certain values (which I found with Firebug). How can I submit several forms at once, or write a loop over all the job links, so that I can fetch the data behind each one?
I also need to handle the pagination shown above: clicking page 2 opens a page with the same table layout but different jobs, and so on. How do I paginate through these pages in Scrapy?
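For the pagination part, this is the idea I have in mind: build one set of form data per page by varying the "InitPage" field (which I saw in Firebug). This is only a sketch with a hypothetical helper; in the spider each dict would be passed to FormRequest.from_response:

```python
# Sketch: generate per-page form data by varying the "InitPage" field.
# Assumption: the site selects the results page via "InitPage" (seen in
# Firebug); in the spider, each dict would be wrapped in a
# FormRequest.from_response(response, formdata=..., callback=...) call.

def page_formdata(base_formdata, page_number):
    """Return a copy of the base form data pointing at the given page."""
    formdata = dict(base_formdata)
    formdata["InitPage"] = str(page_number)
    return formdata

# Minimal stand-in for the full form data captured with Firebug.
base = {"Type": "CSS", "InitPage": "1",
        "InitURL": "CSSPage_SearchAndBrowseJobs.ASP"}

# One form submission per results page (pages 1..45 in my case).
all_pages = [page_formdata(base, n) for n in range(1, 46)]
```

Is this the right way to drive the pagination, or does Scrapy have a better mechanism for it?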
My code is:
from scrapy.spider import BaseSpider
from scrapy.http import FormRequest


class MountSinaiSpider(BaseSpider):
    name = "mountsinai"
    allowed_domains = ["mountsinaicss.igreentree.com"]
    start_urls = [
        "https://mountsinaicss.igreentree.com/css_external/CSSPage_SearchAndBrowseJobs.ASP?T=20120517011617&",
    ]

    # Submit the start page with the values behind the "Find Jobs" button.
    def parse(self, response):
        return [FormRequest.from_response(response,
                    formdata={ "Type":"CSS","SRCH":"Search Jobs","InitURL":"CSSPage_SearchAndBrowseJobs.ASP","RetColsQS":"Requisition.Key¤Requisition.JobTitle¤Requisition.fk_Code_Full_Part¤[Requisition.fk_Code_Full_Part]OLD.Description(sysfk_Code_Full_PartDesc)¤Requisition.fk_Code_Location¤[Requisition.fk_Code_Location]OLD.Description(sysfk_Code_LocationDesc)¤Requisition.fk_Code_Dept¤[Requisition.fk_Code_Dept]OLD.Description(sysfk_Code_DeptDesc)¤Requisition.Req¤","RetColsGR":"Requisition.Key¤Requisition.JobTitle¤Requisition.fk_Code_Full_Part¤[Requisition.fk_Code_Full_Part]OLD.Description(sysfk_Code_Full_PartDesc)¤Requisition.fk_Code_Location¤[Requisition.fk_Code_Location]OLD.Description(sysfk_Code_LocationDesc)¤Requisition.fk_Code_Dept¤[Requisition.fk_Code_Dept]OLD.Description(sysfk_Code_DeptDesc)¤Requisition.Req¤","ResultSort":"" },
                    callback=self.parse_main_list)]

    # Submit the search form to reach the first page of results.
    def parse_main_list(self, response):
        return [FormRequest.from_response(response,
                    formdata={ "Key":"","Type":"CSS","InitPage":"1","InitURL":"CSSPage_SearchAndBrowseJobs.ASP","SRCH":"Search Jobs","Search":"ISNULL(Requisition.DatePostedExternal, '12/31/9999')¤BETWEEN 1/1/1753 AND Today¥","RetColsQS":"Requisition.Key¤Requisition.JobTitle¤Requisition.fk_Code_Full_Part¤[Requisition.fk_Code_Full_Part]OLD.Description(sysfk_Code_Full_PartDesc)¤Requisition.fk_Code_Location¤[Requisition.fk_Code_Location]OLD.Description(sysfk_Code_LocationDesc)¤Requisition.fk_Code_Dept¤[Requisition.fk_Code_Dept]OLD.Description(sysfk_Code_DeptDesc)¤Requisition.Req¤","RetColsGR":"Requisition.Key¤Requisition.JobTitle¤Requisition.fk_Code_Full_Part¤[Requisition.fk_Code_Full_Part]OLD.Description(sysfk_Code_Full_PartDesc)¤Requisition.fk_Code_Location¤[Requisition.fk_Code_Location]OLD.Description(sysfk_Code_LocationDesc)¤Requisition.fk_Code_Dept¤[Requisition.fk_Code_Dept]OLD.Description(sysfk_Code_DeptDesc)¤Requisition.Req¤","ResultSort":"[sysfk_Code_Full_PartDesc]" },
                    dont_click=True,
                    callback=self.parse_fir_pag_urls)]

    # This is where I am stuck: I can only print the response so far.
    def parse_fir_pag_urls(self, response):
        print response
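To show what I mean by looping over the job links: each row's JavaScript link seems to carry a job key, and I would like parse_fir_pag_urls to extract every key and submit one form per key. Below is only a sketch of the extraction step; the SubmitJob name, the regex, and the sample HTML are my guesses from Firebug, not the site's real markup. In the spider, this would run on response.body and each key would become its own FormRequest:

```python
import re

# Sketch: pull every job key out of the JavaScript links on a results page.
# Assumption: each link looks like javascript:SubmitJob('KEY'); the real
# markup may differ. Each extracted key would then be submitted as its own
# FormRequest with a callback that scrapes the job-detail page.
JOB_KEY_RE = re.compile(r"SubmitJob\('([^']+)'\)")

def extract_job_keys(html):
    """Return the list of job keys found in the page source."""
    return JOB_KEY_RE.findall(html)

# Hypothetical snippet standing in for two table rows of a results page.
sample = """
<a href="javascript:SubmitJob('12345')">Accountant</a>
<a href="javascript:SubmitJob('12346')">Admin Assistant</a>
"""
```

Is yielding one FormRequest per extracted key inside parse_fir_pag_urls the idiomatic Scrapy way to do this?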