I currently have a shortest-path algorithm that takes a graph and an origin node as inputs, and returns the costs for all nodes in the graph plus the tree (the predecessor of each node). The graph is a dictionary, and so are the costs and the tree.
Since I have to compute the shortest-path trees rooted at every node, it seems only natural to do it in parallel (the trees are independent of each other).
I'm doing it with a pool of workers from the multiprocessing module, appending each result to a list via a callback (so I end up with a list of dictionaries).
It runs without errors, but the interesting part is that the processing time does not change at all with the number of workers.
Any insight into why this happens would be much appreciated. The code follows below.
from LoadData import *
from ShortestPathTree import shortestPath
from time import clock, sleep
from multiprocessing import Pool, Process, cpu_count, Queue


def funcao(G, i):
    costs, pred = shortestPath(G, i)
    return pred


def main():
    # loads the graph
    graph = "graph.graph"
    G = load_graph(graph)

    # loads the relevant nodes (CENTROIDS)
    destinations = "destinations.graph"
    DEST = load_relevant_nodes(destinations)

    f = open('output_parallel.out', 'w')

    start = clock()
    pool = Pool()
    resultados = []

    def adder(value):
        resultados.append(value)

    #for i in range(len(DEST)):
    for i in range(486):
        pool.apply_async(funcao, args=(G, DEST[i]), callback=adder)
    pool.close()
    pool.join()

    print clock() - start
    print >> f, resultados
    print >> f, 'seconds: ' + str(clock() - start)