I am working with some graph-like data, mostly gathered in vectors or lists.
Most of the time I need to inspect the vectors/lists by given indexes and do some logic to determine the resulting value for the current element. To be a bit more precise, consider this fragment of code:
for (i in 1:(length1 - 1))
  for (j in (i + 1):length2)
    for (k in 1:length3) {
      d1 <- data[i, k]
      d2 <- data[j, k]
      if (d1 != d2)
        otherData[i, j, k] <- list(c(min(d1, d2), max(d1, d2)))
      else
        otherData[i, j, k] <- list(c(1, 1))
    }
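For reference, the fragment can be made runnable with a minimal setup; the dimensions, the sample data matrix, and the list-mode otherData array below are all assumptions chosen just for illustration:

```r
# Hypothetical setup so the fragment above can actually run.
set.seed(42)
length1 <- 3; length2 <- 3; length3 <- 2
data <- matrix(sample(1:5, length2 * length3, replace = TRUE),
               nrow = length2, ncol = length3)
# A list-mode array, so each cell can hold a length-2 vector.
otherData <- array(vector("list", length1 * length2 * length3),
                   dim = c(length1, length2, length3))

for (i in 1:(length1 - 1))
  for (j in (i + 1):length2)
    for (k in 1:length3) {
      d1 <- data[i, k]
      d2 <- data[j, k]
      if (d1 != d2)
        otherData[i, j, k] <- list(c(min(d1, d2), max(d1, d2)))
      else
        otherData[i, j, k] <- list(c(1, 1))
    }
```

Each filled cell ends up holding a sorted pair (or c(1, 1) for equal values).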
My question is: would it be a good solution to
- create vectors of indexes, and then
- lapply inner functions (that take a vector of indexes), which see the data objects declared in the enclosing function and use the provided indexes to perform the logic?
Sample code (simpler, with no connection to the code above):
someFunc <- function(data) {
  n <- length(data)
  f <- function(i) {
    # do some logic with both the data and the index
    return(doSthWith(data[i], i))
  }
  return(sapply(1:n, f))
}
Another solution I came up with is to create a data.frame and make the indexes part of the data, so the lapply functions would basically have the indexes in the input row as well.
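The data.frame variant can be sketched like this; doSthWith is a hypothetical placeholder for the per-element logic, and mapply stands in for iterating over the rows:

```r
# Sketch of the data.frame idea: store each value together with its index,
# then apply the per-element logic over both columns in parallel.
doSthWith <- function(value, i) value * i   # hypothetical placeholder logic

data <- c(10, 20, 30)
df <- data.frame(i = seq_along(data), value = data)

result <- mapply(doSthWith, df$value, df$i)
# Equivalent to: sapply(seq_along(data), function(i) doSthWith(data[i], i))
```

This avoids relying on the closure to capture data from the enclosing scope, at the cost of building an extra data.frame.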
I will be very grateful for your thoughts on these approaches.
Well, you can do vectorized indexing, which can give you a significant speedup. In general, instead of:
for(a in A) for(b in B) something(x[a,b])
You can do:
something_vectorized(x[as.matrix(expand.grid(A,B))])
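A small self-contained demonstration of this pattern, assuming the per-cell operation vectorizes (squaring is used here as a stand-in); note that expand.grid varies its first argument fastest, so the element order differs from the nested loop:

```r
x <- matrix(1:12, nrow = 3)
A <- 1:2          # row indexes
B <- c(1, 3)      # column indexes

# Loop version: one scalar operation per (a, b) pair.
loop_result <- c()
for (a in A) for (b in B) loop_result <- c(loop_result, x[a, b]^2)

# Vectorized version: one matrix-indexing call, one vectorized operation.
idx <- as.matrix(expand.grid(A, B))
vec_result <- x[idx]^2
```

Both produce the same set of values; only the traversal order differs.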
*apply functions are basically loop wrappers, so by converting loops to them you will gain at most clearer code, not speed.
EDIT: A small illustration to supplement the comment:

> system.time(replicate(100, sum(sapply(1:1000, function(x) x^2))))
   user  system elapsed
  0.385   0.001   0.388
> system.time(replicate(100, sum((1:1000)^2)))
   user  system elapsed
  0.002   0.001   0.003