I am trying to bulk-download the "end-user"-visible text (I don't care about tables) from 10-K SEC Edgar reports and save it to a text file. I found the code below on YouTube, but I am facing two challenges:

  1. I'm not sure whether I am capturing all of the text; when I print from the URL below, I get very strange output (special characters, e.g., at the very end of the printed output).

  2. I can't seem to save the text to a .txt file; I'm not sure whether this is due to encoding (I am completely new to programming).

import re
import requests
import unicodedata
from bs4 import BeautifulSoup

def restore_windows_1252_characters(restore_string):
    def to_windows_1252(match):
        try:
            return bytes([ord(match.group(0))]).decode('windows-1252')
        except UnicodeDecodeError:
            # No character at the corresponding code point: remove it.
            return ''

    return re.sub(r'[\u0080-\u0099]', to_windows_1252, restore_string)

# define the url to specific html_text file
new_html_text = r"https://www.sec.gov/Archives/edgar/data/796343/0000796343-14-000004.txt"

# grab the response
response = requests.get(new_html_text)
page_soup = BeautifulSoup(response.content,'html5lib')

page_text = page_soup.html.body.get_text(' ',strip = True)

# normalize the text, remove characters. Additionally, restore missing window characters.
page_text_norm = restore_windows_1252_characters(unicodedata.normalize('NFKD', page_text)) 

# print: this works however gives me weird special characters in the print (e.g., at the very end)
print(page_text_norm)

# save to file: this only gives me an empty text file
with open('testfile.txt','w') as file:
    file.write(page_text_norm)
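A likely cause of the empty file, assuming the normalized text contains non-ASCII characters: `open()` without an `encoding` argument uses the platform default (often cp1252 on Windows), so `file.write()` raises `UnicodeEncodeError` and leaves an empty file behind. A minimal sketch of the fix, using sample text in place of the scraped page:

```python
# Minimal sketch (not the full script above): pass encoding='utf-8'
# to open() so characters outside the platform's default codec can
# be written. The sample string stands in for page_text_norm.

sample = 'Revenue \u2013 fiscal 2014 \u2022 10-K'  # en dash and bullet, not ASCII

# Write with an explicit encoding instead of the platform default.
with open('testfile.txt', 'w', encoding='utf-8') as file:
    file.write(sample)

# Read it back with the same encoding to confirm the round trip.
with open('testfile.txt', 'r', encoding='utf-8') as file:
    restored = file.read()

print(restored == sample)  # True
```

The same one-argument change, `open('testfile.txt', 'w', encoding='utf-8')`, should apply directly to the original script.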

1 Answer


Try this. If you give an example of the data you expect, it will be easier for people to understand what you need.

from simplified_scrapy import SimplifiedDoc,req,utils
url = 'https://www.sec.gov/Archives/edgar/data/796343/0000796343-14-000004.txt'
html = req.get(url)
doc = SimplifiedDoc(html)
# text = doc.body.text
text = doc.body.unescape() # Converting HTML entities
utils.saveFile("testfile.txt",text)
answered 2020-04-13T22:39:23