
Is there any efficient way to dynamically change a compressed_matrix in Boost?

I am using ublas::compressed_matrix to work with UMFPACK, a sparse linear solver. Since I am running a simulation, the linear system is constructed slightly differently each time, which can involve enlarging/shrinking the coefficient matrix and some sparse matrix multiplications. The size of the linear system is around 25k.

Even though there is a bindings patch for Boost to work with UMFPACK, I still need to change the matrix from time to time, and sometimes even figuring out the number of non-zero values is time-consuming (ideally, I have to give the number of non-zero values when I initialize a matrix). I also use ublas::range to append columns/rows dynamically.

So my question is: is there an efficient way to do this? Right now it is too slow for me. Transposing a matrix of dimension around 15k takes nearly 6 s, and appending about 12k rows is fast (I guess because it is a row-major matrix), but appending the same number of columns can take up to 20 s (I guess for the same reason, so even if I used a column-major matrix the total time would be about the same).
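For reference, here is roughly what my setup looks like (all names, sizes, and values below are illustrative, not my actual code): the compressed_matrix constructor takes the non-zero estimate, and resize with preserve = true is how the matrix gets enlarged in place.

```cpp
#include <boost/numeric/ublas/matrix_sparse.hpp>

namespace ublas = boost::numeric::ublas;

int main() {
    const std::size_t n = 25000;           // system size
    const std::size_t nnz_guess = 10 * n;  // has to be estimated up front

    // row-major compressed storage; the third argument reserves room for nonzeros
    ublas::compressed_matrix<double> A(n, n, nnz_guess);

    A(0, 0) = 4.0;   // element insertion; cheap only if filled in storage order
    A(0, 1) = -1.0;

    // growing the matrix later, keeping the existing entries
    A.resize(n + 100, n + 100, /*preserve=*/true);
}
```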

Kinda getting desperate here. Any suggestion is welcome.

Cheers.


I am not familiar with your packages, but why do you (ideally) have to specify the number of non-zero elements in your matrix? Can't you over-specify and then reduce in size?

I'm also confused by why adding columns should cost so much. A sparse format should be able to handle that. I would conclude that one of two things is happening: either your matrix is somehow being converted to a non-sparse matrix and back (which seems horrible, and should be impossible in any decent sparse matrix package), or the insertion code is quadratic, because it repeatedly inserts values, shifting all the others over each time.

The latter seems likely. I would try to roll my own "insert column" code which takes the current sparse matrix, figures out how many more entries there are, allocates a bigger block, and copies everything over sequentially, inserting the new columns as it goes. This is linear, and should be essentially instantaneous. I don't know whether that's sufficient to solve the entire problem, but it should be a start.
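A sketch of that linear pass, written against a plain CSR (row-major compressed) layout rather than uBLAS's own type, and assuming the entries for the new columns have already been collected per row and sorted:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Plain CSR storage: row_ptr has nrows + 1 entries; col_ind/values hold
// each row's nonzeros back to back.
struct Csr {
    std::size_t nrows = 0, ncols = 0;
    std::vector<std::size_t> row_ptr{0};
    std::vector<std::size_t> col_ind;
    std::vector<double> values;
};

// Append extra_cols new columns in one linear pass over the old matrix.
// new_entries[i] holds (offset into the new columns, value) pairs for row i,
// sorted by offset, so the result stays in CSR order.
Csr append_columns(const Csr& a, std::size_t extra_cols,
                   const std::vector<std::vector<std::pair<std::size_t, double>>>& new_entries)
{
    Csr out;
    out.nrows = a.nrows;
    out.ncols = a.ncols + extra_cols;

    std::size_t total = a.values.size();
    for (const auto& row : new_entries) total += row.size();
    out.col_ind.reserve(total);
    out.values.reserve(total);
    out.row_ptr.reserve(a.nrows + 1);

    for (std::size_t i = 0; i < a.nrows; ++i) {
        // copy the existing entries of row i unchanged
        for (std::size_t k = a.row_ptr[i]; k < a.row_ptr[i + 1]; ++k) {
            out.col_ind.push_back(a.col_ind[k]);
            out.values.push_back(a.values[k]);
        }
        // the new columns all sit to the right of the old ones, so they go last
        for (const auto& [off, v] : new_entries[i]) {
            out.col_ind.push_back(a.ncols + off);
            out.values.push_back(v);
        }
        out.row_ptr.push_back(out.col_ind.size());
    }
    return out;
}
```

Appending rows to a row-major matrix is the cheap direction for the same reason: the new entries simply land at the end of the arrays.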

Furthermore, if the matrix has on the order of 25k entries, there is no reasonable answer to why copying it or transposing it should take more than a couple of milliseconds. I think you need to benchmark individual pieces of this problem and really determine exactly where the time is going, unless the above solution for adding the columns solves your problem.
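For the timing, something as small as this is enough to see where the seconds go (the routine names in the usage comments are placeholders for whatever your assembly steps are called):

```cpp
#include <chrono>

// Time a single step of the pipeline and return the elapsed seconds.
template <class F>
double seconds(F&& step) {
    auto t0 = std::chrono::steady_clock::now();
    step();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// Usage:
//   double t_transpose = seconds([&] { At = ublas::trans(A); });
//   double t_append    = seconds([&] { append_columns_step(A); });
```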


How do you construct the matrix each time? Are you interfacing from some other kind of software? In that case, the time spent on interfacing is, I guess, pretty low.

And you are using the -DNDEBUG flag for uBLAS, right? Without it, uBLAS runs with expensive debug checks enabled.

I am still not sure what the problem is...

Best,
Umut


Rather than constructing A by joining several different sets of values, have you considered keeping them in separate matrices and using the existing solver routines to construct your own overall solver? Basically, you would apply the appropriate decomposition (LU, QR, whatever) to one component matrix, run the corresponding update/transformation on the subsequent components, and repeat for each subsequent matrix. You would then use the factored component matrices to compute your solution. It's not clear whether the library you've been working with would support this directly, though, or whether you'd have to write some/all of the numerical routines yourself.
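One concrete reading of that, as a sketch: if the system splits into a 2x2 block form with an invertible leading block, you factor the leading block once and fold the remaining blocks in through the Schur complement,

$$
\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix},
\qquad
S = A_{22} - A_{21} A_{11}^{-1} A_{12},
$$

$$
S\,x_2 = b_2 - A_{21} A_{11}^{-1} b_1,
\qquad
A_{11}\,x_1 = b_1 - A_{12}\,x_2 .
$$

Whether this pays off depends on whether the blocks that change between simulation steps are small compared to the ones you can keep factored.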


Have you tried Eigen for this kind of problem? They recently completed their sparse matrix support.
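For example, with a reasonably recent Eigen 3 the usual pattern is to collect the entries as triplets in any order and build the compressed matrix in one pass, which sidesteps the incremental-insertion cost entirely. A sketch only (Eigen also ships an UMFPACK wrapper in its UmfPackSupport module if you want to keep that solver):

```cpp
#include <Eigen/Sparse>
#include <vector>

int main() {
    const int n = 25000;

    // collect entries in any order, then build the compressed matrix in one pass
    std::vector<Eigen::Triplet<double>> entries;
    entries.emplace_back(0, 0, 4.0);
    entries.emplace_back(0, 1, -1.0);
    // ... one triplet per nonzero ...

    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(entries.begin(), entries.end());

    Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

    // built-in sparse LU factorization and solve
    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);
}
```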
