Predict method for plmm class
Arguments
- object: An object of class plmm.
- newX: Matrix of values at which predictions are to be made (not used for type = "coefficients" or for some of the other type settings in predict). This can be either an FBM object or a 'matrix' object. Note: the columns of this argument must be named!
- type: A character argument indicating what type of prediction should be returned. Options are "lp", "coefficients", "vars", "nvars", and "blup". See details.
- lambda: A numeric vector of regularization parameter lambda values at which predictions are requested.
- idx: Vector of indices of the penalty parameter lambda at which predictions are required. By default, all indices are returned.
- X: Original design matrix (not including the intercept column) from object. Required only if type == 'blup' and the design matrix is too large to be returned in the plmm object.
- y: Original continuous outcome vector from object. Required only if type == 'blup'.
- ...: Additional optional arguments
Details
Define beta-hat as the coefficients estimated at the value of lambda that minimizes cross-validation error (CVE). Then options for type
are as follows:
'lp' (default): uses the product of newX and beta-hat to predict new values of the outcome. This does not incorporate the correlation structure of the data. For the stats folks out there, this is simply the linear predictor.
'blup' (acronym for Best Linear Unbiased Predictor): adds to the linear predictor a value that represents the estimated random effect. This addition is a way of incorporating the estimated correlation structure of the data into our prediction of the outcome.
'coefficients': returns the estimated beta-hat.
'vars': returns the indices of variables (e.g., SNPs) with nonzero coefficients at each value of lambda. EXCLUDES the intercept.
'nvars': returns the number of variables (e.g., SNPs) with nonzero coefficients at each value of lambda. EXCLUDES the intercept.
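As a sketch of how the non-prediction type options behave: the calls below assume a fitted model object named fit, produced by plmm() as in the Examples section; these type settings do not require newX.

```r
# Sketch only: 'fit' is assumed to be a fitted plmm model.
# These type options inspect the fitted coefficients rather than predict outcomes.
b  <- predict(fit, type = "coefficients")  # estimated beta-hats, one column per lambda
v  <- predict(fit, type = "vars")          # indices of nonzero coefficients (intercept excluded)
nv <- predict(fit, type = "nvars")         # number of nonzero coefficients at each lambda
```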
Examples
set.seed(123)
train_idx <- sample(1:nrow(admix$X), 100)
# Note: ^ shuffling is important here! Keeps test and train groups comparable.
train <- list(X = admix$X[train_idx,], y = admix$y[train_idx])
train_design <- create_design(X = train$X, y = train$y)
test <- list(X = admix$X[-train_idx,], y = admix$y[-train_idx])
fit <- plmm(design = train_design)
# make predictions for all lambda values
pred1 <- predict(object = fit, newX = test$X, type = "lp")
# look at mean squared prediction error
mspe <- apply(pred1, 2, function(c){crossprod(test$y - c)/length(c)})
min(mspe)
#> [1] 2.93678
# compare the MSPE of our model to a null model, for reference
# null model = intercept only -> y_hat is always mean(y)
crossprod(mean(test$y) - test$y)/length(test$y)
#> [,1]
#> [1,] 6.381748