Is there a built-in function for sampling a large delimited data set?

I have a few large data files I'd like to sample when loading into R. I can load the entire data set, but it's really too large to work with. sample does roughly the right thing, but I'd like to take random samples of the input as I read it.

I can imagine how to build that with a loop and readLines and what-not, but surely this has been done hundreds of times.

Is there something in CRAN or even base that can do this?


You can do that in one line of code using sqldf. See part 6e of example 6 on the sqldf home page.
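For reference, a minimal sketch of that approach (the file name and sample size here are placeholders): read.csv.sql exposes the file to SQLite as a table named file, so the random sampling is done by the database engine rather than in R.

library(sqldf)
# Draw 1000 random rows without loading the whole file into R.
# "mydata.csv" is a placeholder file name.
DF <- read.csv.sql("mydata.csv",
                   sql = "select * from file order by random() limit 1000")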


There are no pre-built facilities. The best approach would be to use a database management program. (It seems as though this was addressed on either SO or R-help in the last week.)

Take a look at: Read csv from specific row, and especially note Grothendieck's comments. I consider him a "class A wizaRd". He has first-hand experience with sqldf. (He's the author, IIRC.)

And another "huge files" problem with a Grothendieck solution that succeeded: R: how to rbind two huge data-frames without running out of memory


I wrote the following function, which comes close to what I want:

readBigBz2 <- function(fn, sample_size=1000) {
    # Open the bzip2-compressed file as a text connection
    f <- bzfile(fn, "r")
    rv <- c()
    repeat {
        # Read the next chunk of up to sample_size lines
        lines <- readLines(f, sample_size)
        if (length(lines) == 0) break
        # Keep one randomly chosen line from each chunk
        rv <- append(rv, sample(lines, 1))
    }
    close(f)
    rv
}

I may want to go with sqldf in the long run, but this is a pretty efficient way of sampling the file itself. I just don't quite know how to wrap that around a connection for read.csv or similar.
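One possibility, just as a sketch (it assumes the first line of the file is a header, and ignores the small chance that the header line itself ends up in the sample), would be to read the header separately and feed the sampled lines back through a textConnection:

# "big.csv.bz2" is a placeholder file name.
con <- bzfile("big.csv.bz2", "r")
header <- readLines(con, 1)   # grab the header line on its own
close(con)
sampled <- readBigBz2("big.csv.bz2")
# Parse the sampled lines as CSV, reusing the original column names
df <- read.csv(textConnection(c(header, sampled)), stringsAsFactors = FALSE)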
