Antonio2100 · 4 years ago · 158
R Question

How to improve performance of this linear interpolation in R

For a given column in a dataframe, I want to construct a new vector in which each point is the average of the points on either side of it. For the last observation, it should instead take the second-to-last value, and for the first observation, the second value. I wrote this R code to solve the problem, but I call it repeatedly and it is extremely slow. Can someone give some tips on how to do it more efficiently? Thanks.

x1 <- c(rep('a',100),rep('b',100),rep('c',100))
x2 <- rnorm(300)
x <- data.frame(x1,x2)
names(x) <- c('col1','data1')

library(data.table)
library(zoo)  # for na.locf

a.linear.interpolation <- function(x) {

  a.dattab <- data.table(x)

  # replace any NA values using LOCF / NOCB
  a.dattab[, data1 := na.locf(data1, na.rm = FALSE), by = col1]
  a.dattab[, data1 := na.locf(data1, na.rm = FALSE, fromLast = TRUE), by = col1]

  # add a within-group sequence number and a size-of-group field to facilitate
  # row-by-row processing
  a.dattab[, grpseq := seq_len(.N), by = col1]
  a.dattab[, grpseq_max := .N, by = col1]

  # convert back to data.frame
  # data.frame seems faster than data.table for this row-by-row type processing
  a.df <- data.frame(a.dattab)

  new.col <- vector(length = nrow(a.df))

  for (i in seq(nrow(a.df))) {
    if (a.df[i, "grpseq"] == 1) {
      new.col[i] <- a.df[i + 1, "data1"]
    } else if (a.df[i, "grpseq"] == a.df[i, "grpseq_max"]) {
      new.col[i] <- a.df[i - 1, "data1"]
    } else {
      new.col[i] <- (a.df[i - 1, "data1"] + a.df[i + 1, "data1"]) / 2
    }
  }

  new.col
}

Answer Source

Apart from using zoo::rollmean, the base R filter function can do this sort of thing as well. E.g.:

linint <- function(vec) {
  c(vec[2], filter(vec, c(0.5, 0, 0.5))[-c(1, length(vec))], vec[length(vec) - 1])
}

x <- c(1,3,6,10,1)
linint(x)
#[1]  3.0  3.5  6.5  3.5 10.0
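Since the question's data is grouped by col1, linint would need to be applied within each group rather than across the whole column. A minimal sketch using base R's ave (the grouped application is my addition, not part of the original answer; linint is repeated so the snippet is self-contained):

```r
# The answer's function, repeated here so the snippet runs on its own.
linint <- function(vec) {
  c(vec[2], stats::filter(vec, c(0.5, 0, 0.5))[-c(1, length(vec))],
    vec[length(vec) - 1])
}

# The question's example data
x1 <- c(rep('a', 100), rep('b', 100), rep('c', 100))
x2 <- rnorm(300)
x <- data.frame(col1 = x1, data1 = x2)

# ave() applies linint within each level of col1 and preserves row order,
# so group boundaries don't bleed into each other
x$new.col <- ave(x$data1, x$col1, FUN = linint)
```

This keeps the per-group endpoint rule intact: row 100 (last of group 'a') takes the value of row 99, and row 101 (first of group 'b') takes the value of row 102.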

And it's pretty quick, chewing through 10M cases in less than a second:

x <- rnorm(1e7)
system.time(linint(x))
#   user  system elapsed 
#   0.57    0.18    0.75 
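For comparison, the same neighbor average can be built from shifted copies of the vector with no call to filter at all. This is a sketch of an alternative I'm adding for illustration, not part of the original answer:

```r
# Equivalent vectorized version using shifted copies of the vector.
# Endpoints follow the question's rule: first point takes the second value,
# last point takes the second-to-last value.
linint2 <- function(vec) {
  n <- length(vec)
  out <- (c(vec[-1], NA) + c(NA, vec[-n])) / 2  # mean of right and left neighbors
  out[1] <- vec[2]
  out[n] <- vec[n - 1]
  out
}

linint2(c(1, 3, 6, 10, 1))
#[1]  3.0  3.5  6.5  3.5 10.0
```

Both versions do a constant number of passes over the vector, so either should handle the 10M-element case comfortably.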