
The script reads a list of URLs, which I push onto a queue and then process with python-newspaper3k. I have many different URLs, many of them from not-very-popular sites. The problem is that processing never finishes. Sometimes it nearly finishes, but some workers hit a problem and stall. The issue occurs when python-newspaper tries to parse each HTML page. The code is:

Here I load the URLs onto the queue, then use newspaper to download and parse each HTML page.

def grab_data_from_queue():
    while True:
        if q.empty():
            break
        try:
            urlinit = q.get(timeout=10) # get the next item from the queue
            if urlinit is None:
                print('urlinit is None')
                q.task_done()
                continue # without continue, the code below runs on None and raises
            url = urlinit.split("\t")[0]
            url = url.strip('/')
            if ',' in url:
                print(', in url')
                q.task_done()
                continue
            datecsv = urlinit.split("\t\t\t\t\t")[1]
            url2 = url
            time_started = time.time()
            timelimit = 2

            if len(url) > 30:
                if photo == 'wp':
                    article = Article(url, browser_user_agent='Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20100101 Firefox/10.0')
                else:
                    article = Article(url, browser_user_agent='Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20100101 Firefox/10.0', fetch_images=False)
                    imgUrl = ""

                article.download()
                article.parse()
                print(str(q.qsize()) + " parse passed")

            q.task_done() # every get() needs a matching task_done(), or q.join() never returns
        except Exception as e:
            exc_type, exc_obj, exc_tb = sys.exc_info()
            print(str(exc_tb.tb_lineno) + ' => ' + str(e))
            q.task_done()
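A likely cause of runs that never end is that `article.download()` can block for a very long time on slow hosts. newspaper3k accepts a `Config` object whose `request_timeout` bounds each HTTP fetch; a configuration sketch (the 10-second value and the user agent are illustrative, and this assumes the installed version honors `request_timeout`):

```python
from newspaper import Article, Config

config = Config()
config.browser_user_agent = ('Mozilla/5.0 (X11; Linux x86_64; rv:10.0) '
                             'Gecko/20100101 Firefox/10.0')
config.fetch_images = False
config.request_timeout = 10  # seconds; bounds each HTTP fetch

article = Article(url, config=config)
article.download()
article.parse()
```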

Then I start the threads:

for i in range(4): # number of worker threads
    try:
        t1 = Thread(target=grab_data_from_queue) # target is the worker above
        t1.daemon = True # setDaemon() is deprecated in Python 3.10+
        t1.start() # start the thread
    except Exception as e:
        exc_type, exc_obj, exc_tb = sys.exc_info()
        print(str(exc_tb.tb_lineno) + ' => ' + str(e))


q.join()

Is there a way to find out which URL is problematic and takes so long that the run never exits? And if I can't identify the URL, is it possible to stop the daemon threads?
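One way to answer the first question is to have each worker record the URL it is currently working on, with a timestamp, in a shared dict, and have the main thread report entries held too long. A minimal sketch (`track`, `untrack`, and `stuck_urls` are hypothetical helper names, not part of newspaper):

```python
import threading
import time

# Shared map of thread name -> (url, start time).
in_progress = {}
progress_lock = threading.Lock()

def track(url):
    # Call just before article.download()
    with progress_lock:
        in_progress[threading.current_thread().name] = (url, time.time())

def untrack():
    # Call after article.parse(), e.g. in a finally block
    with progress_lock:
        in_progress.pop(threading.current_thread().name, None)

def stuck_urls(limit=30):
    # URLs a worker has been holding for longer than `limit` seconds
    now = time.time()
    with progress_lock:
        return [url for url, started in in_progress.values()
                if now - started > limit]
```

The main thread can poll `stuck_urls()` periodically while waiting on `q.join()`. As for the second question: CPython provides no way to forcibly kill a thread from outside, daemon or not, so the usual fix is to bound the blocking call itself (e.g. with a request timeout) rather than try to stop the thread.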

