I have a simple netCDF file with a data cube, i.e. it has LAT, LONG, TIME as its 3 dimensions, and it stores temperature. The data is held in NumPy as a masked array. The code below extracts it to CSV format, but processing a 20 MB file is very slow: each iteration takes 20 seconds, so in total that is 4 * 548 * 20 seconds = 43840 seconds ≈ 731 minutes ≈ 12 hours.

If you look at the line with the comment TAKES_LONG_TIME, that is where the time goes. I believe that for every single cell, a switch from Python to C code happens inside NumPy. I'm not sure how to get around this. Please advise. Thanks.

# conda install -y -c conda-forge iris

import iris
import cf_units as unit
import numpy as np
import datetime
import urllib.request
from os import path


def make_data_object_name(dataset_name, year, month, day, hour, realization, forecast_period):
    template_string = "prods_op_{}_{:02d}{:02d}{:02d}_{:02d}_{:02d}_{:03d}.nc"
    return template_string.format(dataset_name, year, month, day, hour, realization, forecast_period)


def download_data_object(dataset_name, data_object_name):
    url = "https://s3.eu-west-2.amazonaws.com/" + dataset_name + "/" + data_object_name
    urllib.request.urlretrieve(url, data_object_name)  # save in this directory with same name


def load_data():
    filename = 'prods_op_mogreps-uk_20140101_03_02_003.nc'
    if not path.exists(filename):
        # obj_name = make_data_object_name('mogreps-uk', 2014, 1, 1, 3, 2, 3)
        download_data_object('mogreps-uk', filename)

    listofcubes = iris.load(filename)
    air_temps = listofcubes.extract('air_temperature')
    surface_temp = air_temps[0]
    dim_time, dim_lat, dim_long = "time", "grid_latitude", "grid_longitude"

    time_cords = surface_temp.coord(dim_time).points
    time_since = str(surface_temp.coord(dim_time).units)
    lat_cords = surface_temp.coord(dim_lat).points
    long_cords = surface_temp.coord(dim_long).points

    time_records = [str(unit.num2date(point, time_since, unit.CALENDAR_STANDARD))
                    for point in time_cords]
    lat_records = list(lat_cords)
    long_records = list(long_cords)

    print(len(time_records), len(lat_records), len(long_records))
    print(surface_temp.shape)
    data_size = len(surface_temp.shape)
    print(" File write start -->  ", datetime.datetime.now())
    with open(filename + '.curated', 'w') as filehandle:
        for t, time_record in enumerate(time_records):  # Iterate TIME - 4
            t_a = surface_temp[t] if data_size == 3 else surface_temp[t][0]
            for lat, lat_record in enumerate(lat_records):  # Iterate LAT - 548
                lat_a = t_a[lat]
                iter_start_time = datetime.datetime.now()
                lat_lines = list()
                for lng, long_record in enumerate(long_records):  # Iterate Long 421
                    data = str(lat_a[lng].data.min()) # TAKES_LONG_TIME
                    lat_lines.append(time_record + ',' + str(lat_record) + ',' + str(long_record) + ',' + data + '\n')
                filehandle.writelines(lat_lines)
                print(t, time_record, lat, lat_record, " time taken in seconds -> ",
                      (datetime.datetime.now() - iter_start_time).seconds)


if __name__ == "__main__":
    load_data()



1 Answer


When you first read the cube with iris.load, the actual data array does not get loaded into memory (see Real and Lazy Data in the Iris User Guide). Because you slice the cube before accessing subcube.data, the actual loading happens separately for each slice. So every time your TAKES_LONG_TIME line executes, you are going back and touching the NetCDF file.
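You can see this for yourself: a cube reports whether its payload is still lazy via has_lazy_data(). A minimal sketch, assuming the same file the question's code downloads:

import iris

cubes = iris.load('prods_op_mogreps-uk_20140101_03_02_003.nc')
surface_temp = cubes.extract('air_temperature')[0]

print(surface_temp.has_lazy_data())  # True: nothing has been read from disk yet
_ = surface_temp.data                # realises the whole array into memory
print(surface_temp.has_lazy_data())  # False: indexing is now plain NumPy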

To load everything into memory before your loops start, simply add a line saying

surface_temp.data

This should speed things up, but it may not be ideal, depending on how much memory you have available. A compromise can be found by realising the data at a different level within your for loops (i.e. t_a.data or lat_a.data).
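For instance, here is a minimal sketch of that middle ground, reusing the names from the question's load_data() and realising one time step at a time, so at most one 548 × 421 slab is held in memory (masked cells will print as '--', which is how NumPy stringifies masked values):

    with open(filename + '.curated', 'w') as filehandle:
        for t, time_record in enumerate(time_records):  # 4 time steps
            t_a = surface_temp[t] if data_size == 3 else surface_temp[t][0]
            t_data = t_a.data  # one NetCDF read per time step: a 548 x 421 masked array
            for lat, lat_record in enumerate(lat_records):
                row = t_data[lat]  # in-memory NumPy indexing from here on
                filehandle.writelines(
                    time_record + ',' + str(lat_record) + ',' + str(long_record) + ',' + str(row[lng]) + '\n'
                    for lng, long_record in enumerate(long_records))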

Answered 2019-11-01T13:11:08.217