I'll answer these questions step by step:
I want to search over many such pages which have the tag TV and then later perform the check over those pages to satisfy the above condition.
Well, if you want to crawl many such pages, you have to start from the topic's root page, which lists many questions related to that particular topic, and scrape the links to those questions from the root page.
Also, I want to scrape multiple questions (as many as 40 or so)
For that you need to simulate scrolling, so that more and more questions load as you move down the page. You cannot simulate events like scrolling with Requests and BeautifulSoup alone. Below is a piece of Python code that uses the Selenium library to do what you are asking.
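To see the contrast: on a *static* page (one that does not load content on scroll), extracting the question links needs no browser automation at all. Here is a minimal standalone sketch using only the standard library's `html.parser`; the HTML snippet and the URLs in it are made up for illustration:

```python
from html.parser import HTMLParser

class QuestionLinkParser(HTMLParser):
    """Collect the href of every <a class="question_link"> element."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("class") == "question_link":
            self.links.append(attrs.get("href"))

# Dummy stand-in for a topic page's HTML
html = """
<a class="question_link" href="/q/how-do-tvs-work">How do TVs work?</a>
<a class="question_link" href="/q/best-tv-2020">Best TV in 2020?</a>
"""

parser = QuestionLinkParser()
parser.feed(html)
print(parser.links)  # ['/q/how-do-tvs-work', '/q/best-tv-2020']
```

This only works for markup already present in the initial response; questions that appear as you scroll exist only after JavaScript runs, which is why Selenium is needed below.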
Note:

- Install the ChromeDriver that matches your Chrome version.
- Install Selenium with pip install -U selenium.
- If you are on Windows, use executable_path='/path/to/chromedriver.exe'.
The code below asks for a link (a second one is commented out) and then scrapes the question, its answer count, its tags, and four answers, saving them in CSV format.

Keys.PAGE_DOWN is used to mimic pressing the Page Down key. The scraped details are appended to the row list, which is finally saved to the csv file. You can also change the value of the no_of_pagedowns variable to increase the number of scrolls you want.
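Since some entries in row (the tags and the answers) are themselves lists, it is worth seeing how csv.writer serializes them: non-string fields are converted with str(), so a list column ends up as its Python repr inside one quoted CSV field. A standalone sketch with dummy data:

```python
import csv
import io

# A row shaped like the one the scraper builds: question, answer count,
# list of tags, list of answers (all values here are made up)
row = ["How do TVs work?", "12 answers",
       ["TV", "Electronics"], ["ans1", "ans2", "ans3", "ans4"]]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(row)

print(buf.getvalue().strip())
# How do TVs work?,12 answers,"['TV', 'Electronics']","['ans1', 'ans2', 'ans3', 'ans4']"
```

If you would rather have clean columns, join the lists yourself first, e.g. ";".join(tag_field), instead of letting the writer stringify them.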
```python
import csv
import time

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

# NOTE: this uses the Selenium 3 API (find_element_by_* and executable_path);
# Selenium 4 replaced these with find_element(By...) and Service objects.

# Write the CSV header once
with open('submission.csv', 'w') as file:
    file.write("Question,No. of answers,Tags,4 answers\n")

link1 = input("Enter first link")
#link2 = input("Enter second link")
manylinks = list()
manylinks.append(link1)
#manylinks.append(link2)

for olink in manylinks:
    qlinks = list()
    browser = webdriver.Chrome(executable_path='/Users/ajay/Downloads/chromedriver')
    browser.get(olink)
    time.sleep(1)

    # Page-down repeatedly so the infinite-scroll page loads more questions
    elem = browser.find_element_by_tag_name("body")
    no_of_pagedowns = 50
    while no_of_pagedowns:
        elem.send_keys(Keys.PAGE_DOWN)
        time.sleep(0.2)
        no_of_pagedowns -= 1

    # Collect the links of the questions listed on the topic's root page
    post_elems = browser.find_elements_by_xpath("//a[@class='question_link']")
    for post in post_elems:
        qlink = post.get_attribute("href")
        print(qlink)
        qlinks.append(qlink)

    for qlink in qlinks:
        append_status = 0
        row = list()
        browser.get(qlink)
        time.sleep(1)

        elem = browser.find_element_by_tag_name("body")
        no_of_pagedowns = 1
        while no_of_pagedowns:
            elem.send_keys(Keys.PAGE_DOWN)
            time.sleep(0.2)
            no_of_pagedowns -= 1

        # Question text
        qname = browser.find_elements_by_xpath("//div[@class='question_text_edit']")
        for q in qname:
            print(q.text)
            row.append(q.text)

        # Answer count, e.g. "12 Answers" -> 12
        no_ans = browser.find_elements_by_xpath("//div[@class='answer_count']")
        for count in no_ans:
            append_status = int(count.text[:2])
            row.append(count.text)

        # Tags
        tags = browser.find_elements_by_xpath("//div[@class='header']")
        tag_field = list()
        for t in tags:
            tag_field.append(t.text)
        row.append(tag_field)

        # First 4 answers
        all_ans = browser.find_elements_by_xpath("//div[@class='ui_qtext_expanded']")
        i = 1
        answer_field = list()
        for post in all_ans:
            if i <= 4:
                i = i + 1
                answer_field.append(post.text)
            else:
                break
        row.append(answer_field)

        print('append_status', append_status)
        # Keep only questions with at least 4 answers
        if append_status >= 4:
            with open('submission.csv', 'a') as file:
                writer = csv.writer(file)
                writer.writerow(row)
```
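One fragile spot in the script above is int(count.text[:2]): it works for one- and two-digit counts ("4 Answers", "12 Answers") because int() strips trailing whitespace, but it silently truncates anything with three or more digits ("100 Answers" becomes 10). A more tolerant helper, sketched here as a pure function you could drop in:

```python
def answer_count(text):
    """Parse the leading integer out of an answer-count label.

    A more robust replacement for the int(count.text[:2]) trick:
    it reads digits until the first non-digit, so it handles counts
    of any length, and returns 0 when there is no leading number.
    """
    digits = ""
    for ch in text:
        if ch.isdigit():
            digits += ch
        else:
            break
    return int(digits) if digits else 0

print(answer_count("12 Answers"))    # 12
print(answer_count("4 Answers"))     # 4
print(answer_count("100+ Answers"))  # 100
print(answer_count("No answers"))    # 0
```

With this, the filtering condition becomes append_status = answer_count(count.text), and the >= 4 check at the end stays the same.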