In the example below, I have two rows where First, Last, and Address 1/2 match. They have two different email addresses, and one row has an extra custom field. What I would like to do is systematically select the first row when presented with multiple rows that look like this.
Lines <- "
First, Last, Address, Address 2, Email, Custom1, Custom2, Custom3
A, B, C, D, E@E.com,1,2,3
A, B, C, D, F@G.com,1,2,
"
con <- textConnection(Lines)
In words:
IF First & Last & Address & Address 2 match, choose the row where Custom3 is not NA.
How would I go about applying this to a large set of data?
Note: Speed is not important here, an example with plyr would be most useful.
Here are some non-plyr examples, because I can't stand not being useful. ;-)
x <- read.csv(con)
close(con)
# Using split-apply-combine: split on the first four columns, keep rows with a non-NA Custom3, then rbind
do.call(rbind, lapply(split(x, x[,1:4]), function(X) X[!is.na(X$Custom3),]))
# Using ave: flag NA Custom3 values within each group and drop those rows
x[!ave(x$Custom3, x[,1:4], FUN=is.na),]
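For comparison, here's a sketch of another base-R route (my own addition, not part of the answer above): sort so rows with a non-NA Custom3 come first within each group, then keep the first row per First/Last/Address/Address.2 combination with duplicated(). Unlike the two lines above, this still keeps one row when every row in a group has an NA Custom3.
# Sort so non-NA Custom3 sorts first within each group
x_sorted <- x[order(x$First, x$Last, x$Address, x$Address.2, is.na(x$Custom3)), ]
# Keep the first row of each First/Last/Address/Address.2 group
x_sorted[!duplicated(x_sorted[, 1:4]), ]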
Here's a plyr solution:
con <- textConnection(Lines <- "
First, Last, Address, Address 2, Email, Custom1, Custom2, Custom3
A, B, C, D, E@E.com,1,2,3
A, B, C, D, F@G.com,1,2,
")
x <- read.csv(con)
close(con)
library(plyr)
ddply(x, .(First, Last, Address, Address.2), function(x2) x2[!is.na(x2$Custom3), c("Email", "Custom1", "Custom2", "Custom3")])
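The one-liner above drops every row whose Custom3 is NA within its group. If a group could contain several non-NA rows, or none at all, here is a hedged variant (my own extension, not from the original answer) that always returns exactly the first qualifying row per group, falling back to the group's first row when no Custom3 value is present:
ddply(x, .(First, Last, Address, Address.2), function(x2) {
  keep <- x2[!is.na(x2$Custom3), , drop = FALSE]  # rows with a usable Custom3
  if (nrow(keep) == 0) keep <- x2                 # assumption: fall back to the group as-is if all are NA
  keep[1, c("Email", "Custom1", "Custom2", "Custom3")]
})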