I am drafting and testing a technique I devised for solving differential equations quickly and efficiently.
It would require storing, manipulating, resizing, and (at some point) probably diagonalizing very large sparse matrices. I would like to have rows consisting of zeros and a few (say < 5) ones, and to add rows a few at a time (on the order of the number of CPUs in use).
I thought it would be useful to have GPU acceleration, so any suggestions as to the best way to take advantage of that would be appreciated too (say PyCUDA, Theano, etc.).
The most efficient storage method for symmetric sparse matrices is probably sparse skyline format (this is what Intel MKL uses, for example). AFAIK scipy.sparse doesn't contain a sparse, symmetric matrix format. Pysparse, however, does. Using Pysparse, you can build the matrix incrementally using the linked-list format, then convert it to sparse skyline format. Performance-wise, I have usually found Pysparse to be superior to scipy with large sparse systems, and all the basic building blocks (matrix product, eigenvalue solver, direct solver, iterative solver) are present, although the range of routines is perhaps a little smaller than what is available in scipy.
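The build-incrementally-then-convert workflow described above can be sketched with scipy.sparse instead of Pysparse (Pysparse is Python 2-era and may not install cleanly today). Since scipy has no symmetric skyline format, CSR stands in for the converted format here; the matrix and values are made up for illustration:

```python
import numpy as np
from scipy import sparse

n = 1000
# Build incrementally in LIL (linked-list) format: cheap per-entry insertion.
A = sparse.lil_matrix((n, n))
A[0, 1] = 1.0
A[1, 0] = 1.0        # store both triangles; scipy has no symmetric format
A[10, 500] = 1.0
A[500, 10] = 1.0

# Convert once to CSR for fast arithmetic and solvers.
A_csr = A.tocsr()

# Example operation: a sparse matrix-vector product.
x = np.ones(n)
y = A_csr @ x
print(y[0])   # row 0 has a single 1 at column 1, so y[0] == 1.0
```

The same pattern applies in Pysparse: build with its `ll_mat` linked-list type, then convert to the symmetric sparse-skyline type before doing heavy numerics.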
You can use a dictionary and tuples to access the data:
>>> size = (4, 4)
>>> mat = {}
>>> mat[0, 1] = 3
>>> mat[2, 3] = 5
>>> for i in range(size[0]):
...     for j in range(size[1]):
...         print(mat.get((i, j), 0), end=" ")
...     print()
...
0 3 0 0
0 0 0 0
0 0 0 5
0 0 0 0
Of course you should make a class for that and add the methods you need:
class Sparse(dict):
    pass
BTW, you can also use scipy.sparse from the SciPy library.
Use scipy.sparse.
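scipy.sparse's DOK (dictionary-of-keys) format is essentially the dict approach above wrapped in a real matrix API, with conversion to faster formats built in; a small sketch using the same toy entries:

```python
from scipy import sparse

# DOK: a dict keyed by (i, j), good for incremental construction.
mat = sparse.dok_matrix((4, 4))
mat[0, 1] = 3
mat[2, 3] = 5
print(mat.toarray())

# Convert to CSR before doing heavy arithmetic or passing to solvers.
csr = mat.tocsr()
print(csr.nnz)   # 2 stored entries
```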