
I am trying to apply Gaussian smoothing to a large GIS dataset (a 10000 x 10000 array). My current approach loads the entire array into memory, smooths it, and writes it back out. It looks like this:

import numpy as np
import scipy.ndimage

big_array = band_on_disk.ReadAsArray()         # read the whole raster band into RAM
smoothed_array = np.empty_like(big_array)      # second full-size buffer for the output
scipy.ndimage.gaussian_filter(big_array, sigma, output=smoothed_array)
output_band.WriteArray(smoothed_array)

For large rasters I get a MemoryError, so I would like to load subblocks of the array instead, but I am not sure how to handle the Gaussian smoothing in the regions where one subblock influences its neighbors.

Any suggestions on how to fix the algorithm above so that it works with a smaller memory footprint while still correctly smoothing the entire array? To make the block idea concrete, a sketch of the kind of scheme I have in mind follows.
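The idea is to filter each tile together with a halo of surrounding pixels and keep only the tile's interior. This is only a sketch, not working code: smooth_in_blocks, the 2048-pixel tile size, and the halo width are placeholders, and it assumes scipy's default truncate=4.0, so the filter only reaches about int(4 * sigma + 0.5) pixels.

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_in_blocks(src, dst, sigma, block=2048, truncate=4.0):
    # radius of the kernel scipy actually builds: int(truncate * sigma + 0.5)
    halo = int(truncate * sigma + 0.5)
    rows, cols = src.shape
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            r1, c1 = min(r0 + block, rows), min(c0 + block, cols)
            # grow the read window by the halo, clipped at the array edges
            rr0, cc0 = max(r0 - halo, 0), max(c0 - halo, 0)
            rr1, cc1 = min(r1 + halo, rows), min(c1 + halo, cols)
            tile = gaussian_filter(src[rr0:rr1, cc0:cc1], sigma, truncate=truncate)
            # crop the halo back off and write only this tile's interior
            dst[r0:r1, c0:c1] = tile[r0 - rr0 : r1 - rr0, c0 - cc0 : c1 - cc0]

Each call then only touches one (block + 2*halo)-sized tile at a time, so memory stays bounded regardless of raster size.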


1 Answer


Try using memory-mapped files.

Moderate memory use, tolerably fast

If you can afford to hold one of the arrays in memory, this is tolerably fast:

import numpy as np
from scipy.ndimage import gaussian_filter

# create some fake data, save it to disk, and free up its memory
shape = (10000, 10000)
orig = np.random.random_sample(shape)
orig.tofile('orig.dat')
print('saved original')
del orig

# allocate memory for the smoothed data
smoothed = np.zeros(shape)

# memory-map the original data, so it isn't read into memory all at once
orig = np.memmap('orig.dat', np.float64, 'r', shape=shape)
print('memmapped')

sigma = 10  # I have no idea what a reasonable value is here
gaussian_filter(orig, sigma, output=smoothed)
# save the smoothed data to disk
smoothed.tofile('smoothed.dat')
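If I understand scipy's implementation correctly, gaussian_filter is separable (one 1-D pass per axis) and can use the output array as its working buffer, which is why only the one full-size smoothed array has to live in RAM while the memmapped input is paged in as needed.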

Low memory use, very slow

If you cannot fit either array in memory at once, you can memory-map both the original and the smoothed arrays. At least on my machine, this has very low memory usage, but it is very slow.

Ignore the first part of this code; it cheats by creating the original array in memory all at once and then saving it to disk. Replace it with code that loads data you have built up incrementally on disk.

import numpy as np
from scipy.ndimage import gaussian_filter

# create some fake data, save it to disk, and free up its memory
shape = (10000, 10000)
orig = np.random.random_sample(shape)
orig.tofile('orig.dat')
print('saved original')
del orig

# memory-map the original data, so it isn't read into memory all at once
orig = np.memmap('orig.dat', np.float64, 'r', shape=shape)
# create a memory-mapped array for the smoothed data
smoothed = np.memmap('smoothed.dat', np.float64, 'w+', shape=shape)
print('memmapped')

sigma = 10  # I have no idea what a reasonable value is here
gaussian_filter(orig, sigma, output=smoothed)
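# make sure the memory-mapped output is flushed to disk before exiting
smoothed.flush()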
Answered 2012-10-11T18:29:24.967