
I am trying to get .xml data from SEC filings. It is in the second table. However, if I land on a page that has no .xml file, I want the html version instead, which is in the first and only table. Can someone help me understand how to iterate over or skip the first table, so that if there are two tables I take the second, and if there is only one table I take the first a['href'] from it?

from urllib2 import urlopen
from bs4 import BeautifulSoup

linklist = ['https://www.sec.gov/Archives/edgar/data/1070789/000149315217011092/0001493152-17-011092-index.htm',
            'https://www.sec.gov/Archives/edgar/data/1592603/000139160917000254/0001391609-17-000254-index.htm']
for l in linklist:
    html = urlopen(l)
    soup = BeautifulSoup(html.read().decode('latin-1', 'ignore'), "lxml")
    tables = soup.findAll(class_='tableFile')  # works for getting all .htm links
    if len(tables) >= 2:
        # two tables: the second one holds the .xml links
        url = tables[1].a["href"]
    else:
        # only one table: fall back to the html version
        url = tables[0].a["href"]

1 Answer


In both cases you always want the information from the last table, so you can use the list index -1 to get the last one:

import requests
from bs4 import BeautifulSoup

urls = ['https://www.sec.gov/Archives/edgar/data/1070789/000149315217011092/0001493152-17-011092-index.htm',
        'https://www.sec.gov/Archives/edgar/data/1592603/000139160917000254/0001391609-17-000254-index.htm']
for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    tables = soup.findAll('table', class_='tableFile')

    # assume xml table always comes after html one
    table = tables[-1]
    for a in table.findAll('a'):
        print(a['href'])  # you may filter out txt or xsd here
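Building on the comment in that loop, here is a minimal sketch of the filtering step, keeping only .htm and .xml links. The sample hrefs and the suffix list are assumptions; adjust them for the file types you actually need:

```python
# Sketch: keep only .htm and .xml links, dropping .txt/.xsd entries.
# The hrefs below are hypothetical placeholders for the a['href'] values.
hrefs = ['/Archives/doc.xml', '/Archives/doc.txt',
         '/Archives/doc.xsd', '/Archives/doc.htm']

# str.endswith accepts a tuple of suffixes, so one call covers both types
wanted = [h for h in hrefs if h.endswith(('.htm', '.xml'))]
print(wanted)  # ['/Archives/doc.xml', '/Archives/doc.htm']
```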
Answered 2017-09-30T02:59:35.947