
I wrote a function in Python 2.7 (on 64-bit Windows) to compute the average of the intersection areas between a reference polygon (Ref) and one or more segmented (Seg) polygons, all in ESRI shapefile format. The code is very slow because I have more than 2,000 reference polygons, and for each Ref_polygon the function runs against all of the Seg polygons (more than 7,000) every time. My apologies, the function is a prototype.

I would like to know whether multiprocessing could help me speed up the loop, or whether there are better performance solutions. If multiprocessing is a viable option, I would like to know the best way to optimize the following function:

import os

import numpy as np
from osgeo import ogr
from shapely.geometry import Polygon

def AreaInter(reference,segmented,outFile):
     # open shapefile
     ref = ogr.Open(reference)
     if ref is None:
          raise SystemExit('Unable to open %s' % reference)
     seg = ogr.Open(segmented)
     if seg is None:
          raise SystemExit('Unable to open %s' % segmented)
     ref_layer = ref.GetLayer()
     seg_layer = seg.GetLayer()
     # create outfile
     if not os.path.split(outFile)[0]:
          file_path, file_name_ext = os.path.split(os.path.abspath(reference))
          outFile_filename = os.path.splitext(os.path.basename(outFile))[0]
          file_out = open(os.path.join(file_path, "{0}.txt".format(outFile_filename)), "w")
     else:
          file_path_name, file_ext = os.path.splitext(outFile)
          file_out = open(os.path.abspath("{0}.txt".format(file_path_name)), "w")
     # For each reference objects-i
     for index in xrange(ref_layer.GetFeatureCount()):
          ref_feature = ref_layer.GetFeature(index)
          # get FID (=Feature ID)
          FID = str(ref_feature.GetFID())
          ref_geometry = ref_feature.GetGeometryRef()
          pts = ref_geometry.GetGeometryRef(0)
          points = []
          for p in xrange(pts.GetPointCount()):
               points.append((pts.GetX(p), pts.GetY(p)))
          # convert in a shapely polygon
          ref_polygon = Polygon(points)
          # get the area
          ref_Area = ref_polygon.area
          # create empty lists for segment areas and intersection areas
          seg_Area, intersect_Area = [], []
          # For each segmented objects-j
          for segment in xrange(seg_layer.GetFeatureCount()):
               seg_feature = seg_layer.GetFeature(segment)
               seg_geometry = seg_feature.GetGeometryRef()
               pts = seg_geometry.GetGeometryRef(0)
               points = []
               for p in xrange(pts.GetPointCount()):
                    points.append((pts.GetX(p), pts.GetY(p)))
               seg_polygon = Polygon(points)
               seg_Area.append(seg_polygon.area)
               # intersection (overlap) of reference object with the segmented object
               intersect_polygon = ref_polygon.intersection(seg_polygon)
               # area of intersection (= 0, No intersection)
               intersect_Area.append(intersect_polygon.area)
          # Average over all segmented objects (one or more segmented polygons can intersect the reference polygon)
          seg_Area_average = np.average(seg_Area)
          intersect_Area_average = np.average(intersect_Area)
          file_out.write(" ".join("%s" % i for i in [FID, ref_Area, seg_Area_average, intersect_Area_average]) + "\n")
     file_out.close()

2 Answers


You can use the multiprocessing package, and especially the Pool class. First, create a function that performs everything you currently do inside the for loop, taking only the index as an argument:

def process_reference_object(index):
      ref_feature = ref_layer.GetFeature(index)
      # all your code goes here
      return (" ".join(["%s" %i for i in [FID, ref_Area,seg_Area_average,intersect_Area_average]])+ "\n")

Note that this doesn't write to a file itself, which would be messy because you would have multiple processes writing to the same file at the same time. Instead, it returns the string that needs to be written. Also note that there are objects in this function, such as ref_layer or ref_geometry, that need to reach it somehow; how you get them there is up to you (you could make process_reference_object a method of a class initialized with them, or it can be as ugly as just defining them globally).
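
One workable pattern for getting those objects into the workers (a sketch, not from the original answer; init_worker and the underscore-prefixed module-level names are hypothetical) is to let each worker process open the shapefiles itself via a Pool initializer, since OGR datasource and layer objects cannot be pickled and shipped between processes:

from multiprocessing import Pool
from osgeo import ogr

# Hypothetical worker-side globals; each worker process gets its own copies.
_ref_ds = _seg_ds = None
_ref_layer = _seg_layer = None

def init_worker(reference, segmented):
    # Runs once in each worker process: open the shapefiles locally,
    # because OGR datasource/layer objects cannot be pickled.
    global _ref_ds, _seg_ds, _ref_layer, _seg_layer
    _ref_ds = ogr.Open(reference)   # keep the datasources referenced,
    _seg_ds = ogr.Open(segmented)   # or the layers become invalid
    _ref_layer = _ref_ds.GetLayer()
    _seg_layer = _seg_ds.GetLayer()

# Construct the pool with the initializer instead of the bare Pool() below:
# p = Pool(initializer=init_worker, initargs=(reference, segmented))

process_reference_object would then read _ref_layer and _seg_layer instead of relying on objects from the enclosing scope.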

Then, you create a pool of process resources and run all the indices with Pool.imap_unordered (which will itself allocate each index to a different process as needed):

from multiprocessing import Pool
p = Pool()  # run multiple processes
for l in p.imap_unordered(process_reference_object, range(ref_layer.GetFeatureCount())):
    file_out.write(l)

This will parallelize the independent processing of your reference objects across multiple processes and write the results to the file (in an arbitrary order, note).
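
As a small usage note (not part of the original answer): Pool() with no arguments starts one worker per CPU core, as reported by multiprocessing.cpu_count(); pass an integer, e.g. Pool(4), to control the number of processes explicitly.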

Answered on 2013-01-07T19:10:42.550

Threading can help to a degree, but first you should make sure you can't simplify the algorithm. If you're checking each of 2,000 reference polygons against all 7,000 segmented polygons (perhaps I misunderstood), then you should start there. Anything that runs in O(n²) is going to be slow, so maybe you can prune away pairs that will definitely not intersect, or find some other way to speed things up. Otherwise, running multiple processes or threads will only give a linear improvement while the amount of work grows quadratically with the data.
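
As a minimal sketch of the pruning idea (the bboxes_overlap helper below is illustrative, not part of the original code), a cheap bounding-box test can reject most pairs before the expensive intersection() call:

def bboxes_overlap(a, b):
    # Cheap rejection test on Shapely .bounds tuples (minx, miny, maxx, maxy):
    # if the bounding boxes do not overlap, the polygons cannot intersect.
    aminx, aminy, amaxx, amaxy = a.bounds
    bminx, bminy, bmaxx, bmaxy = b.bounds
    return not (amaxx < bminx or bmaxx < aminx or
                amaxy < bminy or bmaxy < aminy)

# Inside the inner loop, skip non-candidates early:
# if not bboxes_overlap(ref_polygon, seg_polygon):
#     continue

Note that skipping non-overlapping pairs changes what goes into the averages (zero-area intersections would no longer contribute), so the averaging step would need adjusting to match. A spatial index, for example the rtree package, takes the same idea further by making the candidate lookup sublinear instead of scanning all 7,000 segments per reference polygon.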

Answered on 2013-01-07T19:12:05.030