You could use numpy's random module to generate random vectors and use them to fill the matrix. For example:
import numpy as np

N = 100
gamma = 0.7
connect = np.zeros((N, N), dtype=np.int32)
for i in range(N):
    # Random 0/1 values placed on the i-th diagonal of a full NxN matrix
    dval = np.diag((np.random.random_sample(size=N - i) < gamma).astype(np.int32), i)
    connect += dval
    if i > 0:
        # Mirror the off-diagonal values to keep the matrix symmetric
        connect += dval.T
This builds the matrix diagonal by diagonal using numpy.diag, but you could also do it row-wise to assemble the upper or lower triangular portion and then use addition to form the symmetric matrix. I don't have a feeling for which might be faster.
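To make the numpy.diag call concrete, here is a tiny standalone example (my own illustration, not from the original answer) showing how a vector is placed on the k-th diagonal of a larger matrix:

import numpy as np

# Place the vector [1, 0, 1] on the first superdiagonal (k=1) of a 4x4 matrix
print(np.diag(np.array([1, 0, 1], dtype=np.int32), 1))
# [[0 1 0 0]
#  [0 0 0 0]
#  [0 0 0 1]
#  [0 0 0 0]]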
EDIT:
In fact, this row-wise version is about 5 times faster than the diagonal version, which I guess shouldn't be all that surprising given its memory access pattern compared to diagonal assembly.
N = 100
gamma = 0.7
connect = np.zeros((N, N), dtype=np.int32)
for i in range(N):
    # Random 0/1 values for row i, from the diagonal to the end of the row
    rval = (np.random.random_sample(size=N - i) < gamma).astype(np.int32)
    connect[i, i:] = rval
# Mirror the strictly upper triangle into the lower triangle
connect += np.triu(connect, 1).T
EDIT 2:
This is even simpler and about 4 times faster than the row-wise version above. Here an upper triangular matrix is formed directly from a full matrix of random values, and the transpose of its strictly upper triangular part is added to it to produce the symmetric matrix:
N = 100
gamma = 0.7
# Keep only the upper triangle (including the diagonal) of a full random 0/1 matrix
a = np.triu((np.random.random_sample(size=(N, N)) < gamma).astype(np.int32))
# Add the transpose of the strict upper triangle to fill in the lower triangle
connect = a + np.triu(a, 1).T
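Whichever version you use, a quick sanity check (my addition, not part of the original answer) is to confirm that the result is symmetric and contains only zeros and ones:

# Illustrative checks, assuming 'connect' was built by one of the versions above
assert np.array_equal(connect, connect.T)   # symmetric
assert np.isin(connect, [0, 1]).all()       # binary entries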
On the Linux system I tested it on, version 1 takes about 6.5 milliseconds, version 2 takes about 1.5 milliseconds, and version 3 takes about 450 microseconds.
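For reference, timings like these can be measured with the standard timeit module. The sketch below is my own (the original answer doesn't show its timing code, and the function name version3 is just a placeholder); it times the third version, and the exact numbers will of course depend on your machine and NumPy version:

import timeit
import numpy as np

N = 100
gamma = 0.7

def version3():
    # EDIT 2 approach: triangular matrix plus the transpose of its strict upper triangle
    a = np.triu((np.random.random_sample(size=(N, N)) < gamma).astype(np.int32))
    return a + np.triu(a, 1).T

# Average seconds per call over 1000 runs
print(timeit.timeit(version3, number=1000) / 1000)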