I have an application that keeps crashing, and I believe it is because there is too much data in a list.
I call an API that returns XML and append each response to a list. I have to make roughly 300K API calls, and I think the app crashes because of how much data piles up in the list as I keep appending to it.
Below is the code that calls the API and saves the results into a list. lst1 is the list of IDs I pass to the API. I also have to account for HTTP requests timing out; ideally I would either build a mechanism that clears the list of appended data, or restart the requests from the ID in lst1 where I left off, passing that ID into the API URL (a rough sketch of what I have in mind for the restart part follows the code below).
import requests
import pandas as pd
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup
import time
from concurrent import futures

lst1 = [1, 2, 3]
lst = []
for i in lst1:
    url = 'urlId={}'.format(i)
    while True:
        try:
            xml_data1 = requests.get(url).text
            print(xml_data1)
            break
        except requests.exceptions.RequestException as e:
            print(e)
    lst.append(xml_data1)
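For the restart part, something like the following is what I have in mind. It is only a rough sketch: the checkpoint file name checkpoint.txt, the 10-second timeout, and the one-second retry pause are placeholders I made up, and the URL is the same placeholder as above.

import os
import time
import requests

CHECKPOINT_FILE = 'checkpoint.txt'  # hypothetical file that remembers the last ID fetched

def load_last_id():
    # Return the last successfully fetched ID, or None on a fresh run.
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            content = f.read().strip()
            return int(content) if content else None
    return None

def save_last_id(i):
    # Overwrite the checkpoint with the most recently completed ID.
    with open(CHECKPOINT_FILE, 'w') as f:
        f.write(str(i))

lst1 = [1, 2, 3]
last_id = load_last_id()
if last_id is not None and last_id in lst1:
    # Resume right after the last ID that was already fetched.
    lst1 = lst1[lst1.index(last_id) + 1:]

for i in lst1:
    url = 'urlId={}'.format(i)  # same placeholder URL as in the question
    while True:
        try:
            # timeout so a hung request raises instead of blocking forever
            xml_data1 = requests.get(url, timeout=10).text
            break
        except requests.exceptions.RequestException as e:
            print(e)
            time.sleep(1)  # brief pause before retrying
    # process xml_data1 here instead of appending it to a big list (see the end of the post)
    save_last_id(i)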
I was thinking that if I could apply the function below to unpack the XML into a dataframe and do the required dataframe operations, while clearing lst (the list of appended data) as I go, that would free up memory. If that is not the right approach, I am open to any suggestion that keeps the code or application from crashing under what I believe is too much XML data in the list:
def create_dataframe(xml):
    soup = BeautifulSoup(xml, "xml")

    # Get attributes from all nodes
    attrs = []
    for elm in soup():  # soup() is equivalent to soup.find_all()
        attrs.append(elm.attrs)

    # Since you want the data in a dataframe, it makes sense for each field to be a new row consisting of all the other node attributes
    fields_attribute_list = [x for x in attrs if 'Id' in x.keys()]
    other_attribute_list = [x for x in attrs if 'Id' not in x.keys() and x != {}]

    # Make a single dictionary with the attributes of all nodes except for the `Field` nodes.
    attribute_dict = {}
    for d in other_attribute_list:
        for k, v in d.items():
            attribute_dict.setdefault(k, v)

    # Update each field row with attributes from all other nodes.
    full_list = []
    for field in fields_attribute_list:
        field.update(attribute_dict)
        full_list.append(field)

    # Make dataframe
    df = pd.DataFrame(full_list)
    return df
with futures.ThreadPoolExecutor() as executor:  # Or use ProcessPoolExecutor
    df_list = executor.map(create_dataframe, lst)

full_df = pd.concat(df_list)
print(full_df)

# final pivoted dataframe
final_df = pd.pivot_table(full_df, index='Id', columns='FieldTitle', values='Value', aggfunc='first').reset_index()
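To keep memory bounded instead of building a 300K-element list of raw XML strings, my idea is roughly the sketch below: convert each response with create_dataframe (defined above) as soon as it arrives, drop the raw XML, and flush the partial dataframes to disk every so often. The timeout, the batch size of 1000, and the file name partial_results.csv are assumptions I made for the sketch, and the URL is the same placeholder as above.

import os
import requests
import pandas as pd

PARTIAL_FILE = 'partial_results.csv'  # hypothetical on-disk buffer for partial results
lst1 = [1, 2, 3]
df_parts = []

def flush(parts):
    # Append the accumulated small dataframes to the CSV and clear the in-memory buffer.
    pd.concat(parts).to_csv(PARTIAL_FILE, mode='a', header=not os.path.exists(PARTIAL_FILE), index=False)
    parts.clear()

for i in lst1:
    url = 'urlId={}'.format(i)  # same placeholder URL as in the question
    while True:
        try:
            xml_data1 = requests.get(url, timeout=10).text
            break
        except requests.exceptions.RequestException as e:
            print(e)
    # Convert the XML to a small dataframe right away and drop the raw string,
    # so the raw responses are never all held in memory at once.
    df_parts.append(create_dataframe(xml_data1))
    del xml_data1
    if len(df_parts) >= 1000:
        flush(df_parts)

# Write out whatever is left in the final partial batch.
if df_parts:
    flush(df_parts)

# Read everything back only once, for the final pivot.
full_df = pd.read_csv(PARTIAL_FILE)
final_df = pd.pivot_table(full_df, index='Id', columns='FieldTitle', values='Value', aggfunc='first').reset_index()

Whether writing to disk is needed at all probably depends on how large each per-ID dataframe is; if they are small, just appending the small dataframes (and never the raw XML) may already be enough. I am not sure this is the best way, so I would appreciate any correction.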