
The following code downloads and unzips an archive containing thousands of text files:

import io
import zipfile
import requests

zip_file_url = "https://docsia-temp.s3-sa-east-1.amazonaws.com/docsia-desafio-dataset.zip"
res = requests.get(zip_file_url, stream=True)  # request the zipped data
print("downloading...")
z = zipfile.ZipFile(io.BytesIO(res.content))
print("extracting the data")
z.extractall("./")
print("ok..")

How can I load these files into a pandas DataFrame?


1 Answer

  • See the inline explanations in the code below.
  • The code uses the pathlib module to find the files that have already been extracted.
  • There are 20 article types, which means there are 20 keys in the dict of dataframes, dd.
  • The value for each key is a dataframe containing all of the articles for that article type.
    • Each dataframe has 1000 rows, 1 row per article.
  • In total there are 20000 articles.
  • This implementation preserves the shape of the articles.
    • When a row is printed from a dataframe, the article appears in readable form, with newlines and punctuation.
  • To create a single dataframe from the individual dataframes (see the sketch after the output below):
    • dfc = pd.concat(dd.values()).reset_index(drop=True)
    • This is why the 'type' column was added when the dataframes were first created: in the combined dataframe, the article type remains identifiable.
  • This answers the question of how to load all the files into a dataframe.
  • For further questions about processing the text, please open a new question.
from pathlib import Path
from io import BytesIO
import requests
import pandas as pd
from collections import defaultdict
from zipfile import ZipFile

######################################################################
# download and save zipped files

# location to save files; this creates a pathlib object of the path, and pathlib objects have methods like rglob, parts, and is_file
save_path = Path('data/zipped')

zip_file_url = "https://docsia-temp.s3-sa-east-1.amazonaws.com/docsia-desafio-dataset.zip"
res = requests.get(zip_file_url, stream=True)

with ZipFile(BytesIO(res.content), 'r') as zip_ref:
    zip_ref.extractall(save_path)
######################################################################

# find all the files; the methods in this list comprehension are pathlib methods
files = [file for file in save_path.rglob('*') if file.is_file()]

# dict to save dataframes for each file
dd = defaultdict(list)
for file in files:
    
    # extract the type of article from the path
    article_type = file.parts[-2].replace('.', '_')
    
    # open the file
    with file.open(mode='r', encoding='utf-8', errors='ignore') as f:
        # read the lines and combine them into one string inside a list
        text = [' '.join([line for line in f.readlines() if line.strip()])]

    # create a dataframe from the text
    df = pd.DataFrame(text, columns=['article'])
    
    # add a column for the article type
    df['type'] = article_type
    
    # add the dataframe to the default dict
    dd[article_type].append(df.copy())

# each value of the dict is a list of dataframes, iterate through all keys and create a single dataframe for each key
for k, v in dd.items():
    # for each article type, combine all the dataframes into a single dataframe
    dd[k] = pd.concat(v).reset_index(drop=True)
print(dd.keys())
[out]:
dict_keys(['alt_atheism', 'comp_graphics', 'comp_os_ms-windows_misc', 'comp_sys_ibm_pc_hardware', 'comp_sys_mac_hardware', 'comp_windows_x', 'misc_forsale', 'rec_autos', 'rec_motorcycles', 'rec_sport_baseball', 'rec_sport_hockey', 'sci_crypt', 'sci_electronics', 'sci_med', 'sci_space', 'soc_religion_christian', 'talk_politics_guns', 'talk_politics_mideast', 'talk_politics_misc', 'talk_religion_misc'])
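
A quick sanity check against the list above: there should be 20 keys, and each key should hold a 1000-row, 2-column dataframe, assuming every file in the archive extracted cleanly:

# verify the structure described in the list above
print(len(dd))                  # expected: 20 article types
print(dd['alt_atheism'].shape)  # expected: (1000, 2) -> columns: article, type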

# print the first article for the alt_atheism key
print(dd['alt_atheism'].iloc[0, 0])
[out]:
Xref: cantaloupe.srv.cs.cmu.edu alt.atheism:49960 alt.atheism.moderated:713 news.answers:7054 alt.answers:126
 Path: cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!bb3.andrew.cmu.edu!news.sei.cmu.edu!cis.ohio-state.edu!magnus.acs.ohio-state.edu!usenet.ins.cwru.edu!agate!spool.mu.edu!uunet!pipex!ibmpcug!mantis!mathew
 From: mathew <mathew@mantis.co.uk>
 Newsgroups: alt.atheism,alt.atheism.moderated,news.answers,alt.answers
 Subject: Alt.Atheism FAQ: Atheist Resources
 Summary: Books, addresses, music -- anything related to atheism
 Keywords: FAQ, atheism, books, music, fiction, addresses, contacts
 Message-ID: <19930329115719@mantis.co.uk>
 Date: Mon, 29 Mar 1993 11:57:19 GMT
 Expires: Thu, 29 Apr 1993 11:57:19 GMT
 Followup-To: alt.atheism
 Distribution: world
 Organization: Mantis Consultants, Cambridge. UK.
 Approved: news-answers-request@mit.edu
 Supersedes: <19930301143317@mantis.co.uk>
 Lines: 290
 Archive-name: atheism/resources
...
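
To combine the per-type dataframes into a single dataframe, as noted in the list above; a minimal sketch, where the expected shape assumes all 20000 articles loaded:

# combine all per-type dataframes into one dataframe
dfc = pd.concat(dd.values()).reset_index(drop=True)

# the 'type' column added earlier keeps each article's category identifiable
print(dfc.shape)              # expected: (20000, 2)
print(dfc['type'].nunique())  # expected: 20
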
answered 2020-09-26T16:35:22.290