I have seen that others have had this problem too, but I didn't really understand the answers given.
I fitted some linear mixed models, starting with an intercept-only model, and then wanted to add more variables. When I try to compare the models, R reports "models were not all fitted to the same size of dataset". What do I have to do to fit the models to the same dataset?
The R syntax is:
mod_zero <- lmer(quality ~ 1 + (1|subject_id))
mod_one <- lmer(quality ~ ps + an + int + ch + boredom + (1|subject_id),dat)
The error is likely caused by missing values in one or more of the predictors in the second model. lmer drops the affected rows (listwise deletion), so the second model is fitted to a subset of the original data, and you cannot meaningfully compare two models fitted to different datasets. To compare the models, fit both to the rows that are complete on ps, an, int, ch, and boredom. Try:
dat2 <- dat[which(complete.cases(dat[, c('ps', 'an', 'int', 'ch', 'boredom')])), ]
mod_zero <- lmer(quality ~ 1 + (1|subject_id), dat2)
mod_one <- lmer(quality ~ ps + an + int + ch + boredom + (1|subject_id), dat2)
anova(mod_zero, mod_one)
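If you are unsure which predictors are responsible for the dropped rows, counting the missing values per column shows where the loss comes from. A minimal, self-contained sketch (the toy data frame and its values are hypothetical stand-ins for your dat):

```r
# Toy data frame standing in for 'dat' (hypothetical values; your data differ)
dat <- data.frame(
  quality = c(3, 4, 5, 2),
  ps      = c(1, NA, 2, 3),
  an      = c(2, 2, NA, 1),
  boredom = c(NA, 1, 1, 2)
)

# Missing values per predictor
colSums(is.na(dat[, c('ps', 'an', 'boredom')]))

# Number of rows that would survive listwise deletion
sum(complete.cases(dat[, c('ps', 'an', 'boredom')]))
```

If the number of complete cases is much smaller than the number of original rows, dropping the incomplete rows is likely to matter.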
This resolves the error, but you should also ask yourself why the data are missing. Removing incomplete rows can bias your results, depending on the missing-data mechanism. If a lot of missingness is systematically related to your outcome variable, your model estimates will be biased and you will need to look into ways of reducing that bias (e.g. multiple imputation). Graham has written many books and articles explaining the different missing-data mechanisms and their solutions. Comparing the output of mod_zero fitted to the full data with mod_zero fitted to dat2 may give a first indication of possible bias (although similar output does not guarantee the absence of bias).
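If you do go the multiple-imputation route, the mice package offers a standard workflow: create several completed datasets, fit the model to each, and pool the results. A rough sketch, assuming your data frame is called dat (the settings shown are defaults, not recommendations, and pooling lmer fits additionally requires the broom.mixed package to be installed):

```r
library(mice)   # multiple imputation
library(lme4)

# Create 5 imputed datasets, fit the model in each, and pool the estimates
imp  <- mice(dat, m = 5, seed = 123)
fits <- with(imp, lmer(quality ~ ps + an + int + ch + boredom + (1|subject_id)))
summary(pool(fits))
```

How well this works depends on the imputation model; the predictors used for imputation should include everything in the analysis model, which mice does by default.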