
Here is my code:

import urllib2
from bs4 import BeautifulSoup

url = "http://www.sec.gov/Archives/edgar/data/1288776/000119312512312575/goog-20120630.xml"

req = urllib2.Request(url)
response = urllib2.urlopen(req)
xml = response.read()

soup = BeautifulSoup(xml, features="xml")
print soup.prettify()

The output only shows the first few lines of XML from the target:

>>> 
<?xml version="1.0" encoding="utf-8"?>
<!-- EDGAR Online I-Metrix Xcelerate Instance Document, based on XBRL 2.1  http://www.edgar-online.com/ -->
<!-- Version:  6.17.6 -->
<!-- Round: 8321e8af-cc4a-498e-a38d-da694ed77a41 -->
<!-- Creation date: 2012-07-24T16:17:46Z -->
<xbrl xmlns="http://www.xbrl.org/2003/instance" xmlns:country="http://xbr" xmlns:iso4217="http://www.xbrl.org/2003/iso4217" xmlns:xbrll="http://www.xbrl.org/2003/linkbase" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>

Any ideas on how to extract all of the XML?
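One quick way to narrow this down (a minimal sketch; the length check is only illustrative) is to look at the raw response before it goes to BeautifulSoup, which shows whether the document is already truncated at download time or only after parsing:

import urllib2

url = "http://www.sec.gov/Archives/edgar/data/1288776/000119312512312575/goog-20120630.xml"

# Inspect the raw payload before parsing; if the full filing arrives here,
# the truncation happens in the parsing step, not in the download.
xml = urllib2.urlopen(url).read()
print len(xml)
print xml[:200]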


2 Answers


I actually ran into this problem myself, though in my case it was after I had pulled the complete SGML document from the SEC site via FTP and was reading it from disk. I had:

soup = bs4.BeautifulSoup(xbrl, ["lxml", "xml"])

I changed it to:

soup = bs4.BeautifulSoup(xbrl, "lxml")

...and after that I was able to get all of the XML. I believe your problem may be the extra features="xml" argument in the BeautifulSoup call. That would be consistent with Inbar Rose's answer, which passes no extra arguments to the BeautifulSoup() call.
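To illustrate the difference, here is a minimal sketch (assuming bs4 and lxml are installed; the local filename is hypothetical) that parses the same document with both parser selections and counts how many tags each one recovers:

import bs4

# Hypothetical local copy of the filing; any XBRL instance document will do.
with open("goog-20120630.xml") as f:
    xbrl = f.read()

xml_soup = bs4.BeautifulSoup(xbrl, features="xml")   # strict XML parser
html_soup = bs4.BeautifulSoup(xbrl, "lxml")          # lenient HTML parser

# A very small first count would reproduce the truncation described in the
# question, while the second count should cover the whole document.
print len(xml_soup.find_all(True))
print len(html_soup.find_all(True))

One thing to keep in mind with the "lxml" HTML parser is that it may lowercase tag names and does not treat XML namespaces specially, so element lookups may need adjusting.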

Good luck!

Answered 2013-08-11T18:52:54.130

Have you tried using an opener?

import urllib2
from BeautifulSoup import BeautifulSoup

url = "http://www.sec.gov/Archives/edgar/data/1288776/000119312512312575/goog-20120630.xml"

opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]  
resource = opener.open(url)
data = resource.read()
resource.close()
soup = BeautifulSoup(data)
print soup.prettify()

The code above works for me.
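If you are on bs4 rather than the old BeautifulSoup 3 package shown above, a roughly equivalent sketch (the User-agent string and the "lxml" parser choice are illustrative) would be:

import urllib2
from bs4 import BeautifulSoup

url = "http://www.sec.gov/Archives/edgar/data/1288776/000119312512312575/goog-20120630.xml"

# Build an opener that sends a browser-style User-agent header with the request.
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]

resource = opener.open(url)
data = resource.read()
resource.close()

soup = BeautifulSoup(data, "lxml")  # lenient HTML parser, as in the answer above
print soup.prettify()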

Answered 2012-08-07T11:09:02.457