f = @(w) test(w, phis, t, r_coeff);
function G = test(w, phis, t, r_coeff)   % output argument G was missing
M = size(phis, 2);
expt = exp(-t .* (phis * w'));           % N x 1
coeff = expt .* t.^2 ./ (1 + expt).^2;   % N x 1 per-sample weights
averaging_coef = 1.0 / M;                % replaces taking the mean
G = bsxfun(@times, phis', coeff' * averaging_coef) * phis + 2 * r_coeff * eye(M);
end
- w has size 1xM
- phis has size NxM
- t has size Nx1
- r_coeff is a constant scalar
Please help me optimize this piece of code: the function f runs a thousand times, N is around 300-800, and M is mostly of similar size. The final multiplication by phis is where the performance drops. As you can see, the result depends only on w, which I don't know in advance.
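[Editor's note: since only w changes between calls, one thing worth trying is hoisting everything that does not depend on w out of the handle, and writing the weighted product so that phis is never explicitly transposed (MATLAB dispatches a product written directly as A' * B to BLAS without materializing A'). A minimal sketch, not a guaranteed win; the name test_fast is hypothetical, and coeff .* phis needs implicit expansion (R2016b+), otherwise keep bsxfun:

t2 = t.^2;                                  % N x 1, constant across calls
reg = 2 * r_coeff * eye(size(phis, 2));     % M x M, constant across calls
avg = 1 / size(phis, 2);                    % mean folded into the weights
f = @(w) test_fast(w, phis, t, t2, reg, avg);
function G = test_fast(w, phis, t, t2, reg, avg)
expt = exp(-t .* (phis * w'));              % N x 1
coeff = avg * expt .* t2 ./ (1 + expt).^2;  % N x 1 weights
% phis' * (coeff .* phis) forms one N x M temporary and avoids
% building phis' explicitly before the final product.
G = phis' * (coeff .* phis) + reg;
end

The result is the same MxM matrix; any speedup comes from skipping the explicit transpose and the repeated eye(M), so time it against the original on your sizes.]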
It already is fairly well "optimized". Just because you want it to run faster does not make that possible.
You can buy/find/borrow/lease a faster/bigger computer. You can choose to solve smaller problems. You can just let it run overnight. Or you can change your problem to be something simpler and approximate, that runs more quickly.
This is what happens in research problems. They expand to the limits of your capability and just a bit more, because solving something easy is not worth a paper or a thesis. Computers also make it easy to gather a great deal of data, so problems get surprisingly large very quickly.
A special, rare skill in mathematics and modeling is knowing how to simplify your problem, deleting terms that are not truly important, while retaining the same basic behavior that you wish to study. This often involves linear approximations to nonlinear terms.
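[Editor's note: to make that last point concrete for this G, the per-sample weight is t.^2 times the logistic derivative exp(-x)./(1 + exp(-x)).^2 evaluated at x = t .* (phis * w'), and that derivative never exceeds 1/4, its value at x = 0. If those arguments stay near zero in your problem, replacing the derivative by the constant 1/4 removes the w-dependence entirely, so G can be built once and reused. A rough sketch, not a recommendation; check the approximation error on your data first:

% Constant-curvature approximation: exp(-x)/(1+exp(-x))^2 <= 1/4, attained at x = 0.
% Substituting 1/4 for the logistic derivative makes G independent of w.
M = size(phis, 2);
G_approx = (0.25 / M) * (phis' * (t.^2 .* phis)) + 2 * r_coeff * eye(M);
]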