diff --git a/DESCRIPTION b/DESCRIPTION index e43897a..a5a04c4 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,6 +1,6 @@ Package: MADMMplasso Title: Multi Variate Multi Response 'ADMM' with Interaction Effects -Version: 0.0.0.9012 +Version: 0.0.0.9013 Authors@R: c( person( diff --git a/R/MADMMplasso.R b/R/MADMMplasso.R index eb68524..356339a 100644 --- a/R/MADMMplasso.R +++ b/R/MADMMplasso.R @@ -3,28 +3,27 @@ #' @description This function fits a multi-response pliable lasso model over a path of regularization values. #' @param X N by p matrix of predictors #' @param Z N by K matrix of modifying variables. The elements of Z may represent quantitative or categorical variables, or a mixture of the two. -#' Categorical varables should be coded by 0-1 dummy variables: for a k-level variable, one can use either k or k-1 dummy variables. +#' Categorical variables should be coded by 0-1 dummy variables: for a k-level variable, one can use either k or k-1 dummy variables. #' @param y N by D matrix of responses. The X and Z variables are centered in the function. We recommend that X and Z also be standardized before the call -#' @param maxgrid number of lambda_3 values desired (default 50) -#' @param nlambda number of lambda_3 values desired (default 50). Similar to maxgrid but can have a value less than or equal to maxgrid. -#' @param alpha mixing parameter- default 0.5. When the goal is to include more interactions, alpha should be very small and vice versa. -#' @param max_it maximum number of iterations in the ADMM algorithm for one lambda. Default 50000 -#' @param rho the Lagrange variable for the ADMM (default 5 ). This value is updated during the ADMM call based on a certain condition. -#' @param e.abs absolute error for the admm. default is 1E-3 -#' @param e.rel relative error for the admm-default is 1E-3 +#' @param maxgrid number of lambda_3 values desired +#' @param nlambda number of lambda_3 values desired. 
Similar to maxgrid but can have a value less than or equal to maxgrid. +#' @param alpha mixing parameter. When the goal is to include more interactions, alpha should be very small and vice versa. +#' @param max_it maximum number of iterations in the ADMM algorithm for one lambda +#' @param rho the Lagrange variable for the ADMM. This value is updated during the ADMM call based on a certain condition. +#' @param e.abs absolute error for the ADMM +#' @param e.rel relative error for the ADMM #' @param gg penalty term for the tree structure. This is a 2x2 matrix values in the first row representing the maximum to the minimum values for lambda_1 and the second row representing the maximum to the minimum values for lambda_2. In the current setting, we set both maximum and the minimum to be same because cross validation is not carried across the lambda_1 and lambda_2. However, setting different values will work during the model fit. -#' @param my_lambda user specified lambda_3 values. Default NULL -#' @param lambda_min the smallest value for lambda_3 , as a fraction of max(lambda_3), the (data derived (lammax)) entry value (i.e. the smallest value for which all coefficients are zero). Default is 0.001 if N>p, and 0.01 if N< p. -#' @param max_it maximum number of iterations in loop for one lambda during the ADMM optimization. Default 50000 -#' @param my_print Should information form each ADMM iteration be printed along the way? Default FALSE. This prints the dual and primal residuals -#' @param alph an overrelaxation parameter in \[1, 1.8\]. Default 1. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code} +#' @param my_lambda user specified lambda_3 values +#' @param lambda_min the smallest value for lambda_3, as a fraction of max(lambda_3), the (data derived (lammax)) entry value (i.e. 
the smallest value for which all coefficients are zero) +#' @param max_it maximum number of iterations in loop for one lambda during the ADMM optimization +#' @param my_print Should information from each ADMM iteration be printed along the way? This prints the dual and primal residuals +#' @param alph an overrelaxation parameter in \[1, 1.8\]. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code} #' @param tree The results from the hierarchical clustering of the response matrix. The easy way to obtain this is by using the function (tree_parms) which gives a default clustering. However, user decide on a specific structure and then input a tree that follows such structure. -#' @param parallel should parallel processing be used or not? Defaults to `TRUE`. If set to `TRUE`, pal should be set to `FALSE`. -#' @param pal Should the lapply function be applied for an alternative quicker optimization when there no parallel package available? Default is `FALSE`. -#' @param tol threshold for the non-zero coefficients. Default 1E-4 -#' @param cl The number of cpu to be used for parallel processing. default 4 -#' @param legacy If \code{TRUE}, use the R version of the algorithm. Defaults to -C++. +#' @param parallel Should parallel processing be used or not? If set to `TRUE`, pal should be set to `FALSE`. +#' @param pal Should the lapply function be applied for an alternative quicker optimization when there is no parallel package available? +#' @param tol threshold for the non-zero coefficients +#' @param cl The number of CPUs to be used for parallel processing +#' @param legacy If \code{TRUE}, use the R version of the algorithm
#' diff --git a/R/admm_MADMMplasso.R b/R/admm_MADMMplasso.R index 6bb0775..e96e357 100644 --- a/R/admm_MADMMplasso.R +++ b/R/admm_MADMMplasso.R @@ -1,35 +1,19 @@ #' @title Fit the ADMM part of model for the given lambda values #' @description This function fits a multi-response pliable lasso model over a path of regularization values. -#' @param X N by p matrix of predictors -#' @param Z N by nz matrix of modifying variables. The elements of z -#' may represent quantitative or categorical variables, or a mixture of the two. -#' Categorical varables should be coded by 0-1 dummy variables: for a k-level -#' variable, one can use either k or k-1 dummy variables. -#' @param y N by D matrix of responses. The X and Z variables are centered in the function. We recommend that X and Z also be standardized before the call +#' @inheritParams MADMMplasso +#' @inheritParams cv_MADMMplasso #' @param beta0 a vector of length ncol(y) of estimated beta_0 coefficients #' @param theta0 matrix of the initial theta_0 coefficients ncol(Z) by ncol(y) #' @param beta a matrix of the initial beta coefficients ncol(X) by ncol(y) #' @param beta_hat a matrix of the initial beta and theta coefficients (ncol(X)+ncol(X) by ncol(Z)) by ncol(y) #' @param theta an array of initial theta coefficients ncol(X) by ncol(Z) by ncol(y) #' @param rho1 the Lagrange variable for the ADMM which is usually included as rho in the MADMMplasso call. -#' @param max_it maximum number of iterations in loop for one lambda during the ADMM optimization. This is usually included in the MADMMplasso call #' @param W_hat N by (p+(p by nz)) of the main and interaction predictors. This generated internally when MADMMplasso is called or by using the function generate_my_w. #' @param XtY a matrix formed by multiplying the transpose of X by y. #' @param N nrow(X) -#' @param e.abs absolute error for the admm. This is included int the call of MADMMplasso. -#' @param e.rel relative error for the admm. 
This is included int the call of MADMMplasso. -#' @param alpha mixing parameter, usually obtained from the MADMMplasso call. When the goal is to include more interactions, alpha should be very small and vice versa. -#' @param lambda a vector lambda_3 values for the admm call with length ncol(y). This is usually calculated in the MADMMplasso call. In our current setting, we use the same the lambda_3 value for all responses. -#' @param alph an overrelaxation parameter in \[1, 1.8\], usually obtained from the MADMMplasso call. #' @param svd.w singular value decomposition of W -#' @param tree The results from the hierarchical clustering of the response matrix. -#' The easy way to obtain this is by using the function (tree_parms) which gives a default clustering. -#' However, user decide on a specific structure and then input a tree that follows such structure. -#' @param my_print Should information form each ADMM iteration be printed along the way? Default TRUE. This prints the dual and primal residuals #' @param invmat A list of length ncol(y), each containing the C_d part of equation 32 in the paper #' @param gg penalty terms for the tree structure for lambda_1 and lambda_2 for the admm call. -#' @param legacy If \code{TRUE}, use the R version of the algorithm. Defaults to -#' C++. 
#' @return predicted values for the ADMM part #' beta0: estimated beta_0 coefficients having a size of 1 by ncol(y) @@ -48,7 +32,7 @@ #' @export -admm_MADMMplasso <- function(beta0, theta0, beta, beta_hat, theta, rho1, X, Z, max_it, W_hat, XtY, y, N, e.abs, e.rel, alpha, lambda, alph, svd.w, tree, my_print = TRUE, invmat, gg = 0.2, legacy = FALSE) { +admm_MADMMplasso <- function(beta0, theta0, beta, beta_hat, theta, rho1, X, Z, max_it, W_hat, XtY, y, N, e.abs, e.rel, alpha, lambda, alph, svd.w, tree, my_print, invmat, gg = 0.2, legacy = FALSE) { if (!legacy) { out <- admm_MADMMplasso_cpp( beta0, theta0, beta, beta_hat, theta, rho1, X, Z, max_it, W_hat, XtY, y, diff --git a/R/cv_MADMMplasso.R b/R/cv_MADMMplasso.R index 2f71801..f46202e 100644 --- a/R/cv_MADMMplasso.R +++ b/R/cv_MADMMplasso.R @@ -1,33 +1,13 @@ -#' @title Carries out cross-validation for a MADMMplasso model over a path of regularization values -#' @description Carries out cross-validation for a MADMMplasso model over a path of regularization values +#' @title Carries out cross-validation for a MADMMplasso model over a path of regularization values +#' @description Carries out cross-validation for a MADMMplasso model over a path of regularization values +#' @inheritParams MADMMplasso #' @param fit object returned by the MADMMplasso function -#' @param X N by p matrix of predictors -#' @param Z N by K matrix of modifying variables. The elements of Z may -#' represent quantitative or categorical variables, or a mixture of the two. -#' Categorical variables should be coded by 0-1 dummy variables: for a k-level -#' variable, one can use either k or k-1 dummy variables. -#' @param y N by D-matrix of responses. The X and Z variables are centered in -#' the function. We recommend that x and z also be standardized before the call #' @param nfolds number of cross-validation folds #' @param foldid vector with values in 1:K, indicating folds for K-fold CV. 
Default NULL -#' @param alpha mixing parameter- default 0.5. This value should be same as the one used for the MADMMplasso call. -#' @param lambda user specified lambda_3 values. Default fit$Lambdas. -#' @param max_it maximum number of iterations in loop for one lambda during the ADMM optimization. Default 50000 -#' @param e.abs absolute error for the admm. default is 1E-3 -#' @param e.rel relative error for the admm-default is 1E-3 -#' @param nlambda number of lambda_3 values desired (default 50). Similar to maxgrid but can have a value less than or equal to maxgrid. -#' @param rho the Lagrange variable for the ADMM (default 5 ). This value is updated during the ADMM call based on a certain condition. -#' @param my_print Should information form each ADMM iteration be printed along the way? Default FALSE. This prints the dual and primal residuals -#' @param alph an overelaxation parameter in \[1, 1.8\]. Default 1. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code} -#' @param parallel should parallel processing be used during the admm call or not? Default True. If set to true, pal should be set `FALSE`. -#' @param pal Should the lapply function be applied for an alternative quicker optimization when there no parallel package available. Default is `FALSE`. -#' @param gg penalty term for the tree structure obtained from the fit. +#' @param lambda user specified lambda_3 values. +#' @param rho the Lagrange variable for the ADMM. This value is updated during the ADMM call based on a certain condition. #' @param TT The results from the hierarchical clustering of the response matrix. #' This should same as the parameter tree used during the MADMMplasso call. -#' @param tol threshold for the non-zero coefficients. Default 1E-4 -#' @param cl The number of cpu to be used for parallel processing. default 2 -#' @param legacy If \code{TRUE}, use the R version of the algorithm. Defaults to -#' C++. 
#' @return results containing the CV values #' @example inst/examples/cv_MADMMplasso_example.R #' @export diff --git a/man/MADMMplasso.Rd b/man/MADMMplasso.Rd index 2b546d7..d49ffb3 100644 --- a/man/MADMMplasso.Rd +++ b/man/MADMMplasso.Rd @@ -34,46 +34,45 @@ MADMMplasso( \item{X}{N by p matrix of predictors} \item{Z}{N by K matrix of modifying variables. The elements of Z may represent quantitative or categorical variables, or a mixture of the two. -Categorical varables should be coded by 0-1 dummy variables: for a k-level variable, one can use either k or k-1 dummy variables.} +Categorical variables should be coded by 0-1 dummy variables: for a k-level variable, one can use either k or k-1 dummy variables.} \item{y}{N by D matrix of responses. The X and Z variables are centered in the function. We recommend that X and Z also be standardized before the call} -\item{alpha}{mixing parameter- default 0.5. When the goal is to include more interactions, alpha should be very small and vice versa.} +\item{alpha}{mixing parameter. When the goal is to include more interactions, alpha should be very small and vice versa.} -\item{my_lambda}{user specified lambda_3 values. Default NULL} +\item{my_lambda}{user specified lambda_3 values} -\item{lambda_min}{the smallest value for lambda_3 , as a fraction of max(lambda_3), the (data derived (lammax)) entry value (i.e. the smallest value for which all coefficients are zero). Default is 0.001 if N>p, and 0.01 if N< p.} +\item{lambda_min}{the smallest value for lambda_3, as a fraction of max(lambda_3), the (data derived (lammax)) entry value (i.e. the smallest value for which all coefficients are zero)} -\item{max_it}{maximum number of iterations in loop for one lambda during the ADMM optimization. Default 50000} +\item{max_it}{maximum number of iterations in loop for one lambda during the ADMM optimization} -\item{e.abs}{absolute error for the admm. 
default is 1E-3} +\item{e.abs}{absolute error for the ADMM} -\item{e.rel}{relative error for the admm-default is 1E-3} +\item{e.rel}{relative error for the ADMM} -\item{maxgrid}{number of lambda_3 values desired (default 50)} +\item{maxgrid}{number of lambda_3 values desired} -\item{nlambda}{number of lambda_3 values desired (default 50). Similar to maxgrid but can have a value less than or equal to maxgrid.} +\item{nlambda}{number of lambda_3 values desired. Similar to maxgrid but can have a value less than or equal to maxgrid.} -\item{rho}{the Lagrange variable for the ADMM (default 5 ). This value is updated during the ADMM call based on a certain condition.} +\item{rho}{the Lagrange variable for the ADMM. This value is updated during the ADMM call based on a certain condition.} -\item{my_print}{Should information form each ADMM iteration be printed along the way? Default FALSE. This prints the dual and primal residuals} +\item{my_print}{Should information from each ADMM iteration be printed along the way? This prints the dual and primal residuals} -\item{alph}{an overrelaxation parameter in [1, 1.8]. Default 1. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code}} +\item{alph}{an overrelaxation parameter in [1, 1.8]. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code}} \item{tree}{The results from the hierarchical clustering of the response matrix. The easy way to obtain this is by using the function (tree_parms) which gives a default clustering. However, user decide on a specific structure and then input a tree that follows such structure.} -\item{parallel}{should parallel processing be used or not? Defaults to \code{TRUE}. If set to \code{TRUE}, pal should be set to \code{FALSE}.} +\item{parallel}{Should parallel processing be used or not? 
If set to \code{TRUE}, pal should be set to \code{FALSE}.} -\item{pal}{Should the lapply function be applied for an alternative quicker optimization when there no parallel package available? Default is \code{FALSE}.} +\item{pal}{Should the lapply function be applied for an alternative quicker optimization when there is no parallel package available?} \item{gg}{penalty term for the tree structure. This is a 2x2 matrix values in the first row representing the maximum to the minimum values for lambda_1 and the second row representing the maximum to the minimum values for lambda_2. In the current setting, we set both maximum and the minimum to be same because cross validation is not carried across the lambda_1 and lambda_2. However, setting different values will work during the model fit.} -\item{tol}{threshold for the non-zero coefficients. Default 1E-4} +\item{tol}{threshold for the non-zero coefficients} -\item{cl}{The number of cpu to be used for parallel processing. default 4} +\item{cl}{The number of CPUs to be used for parallel processing} -\item{legacy}{If \code{TRUE}, use the R version of the algorithm. Defaults to -C++.} +\item{legacy}{If \code{TRUE}, use the R version of the algorithm} } \value{ predicted values for the MADMMplasso object with the following components: diff --git a/man/admm_MADMMplasso.Rd b/man/admm_MADMMplasso.Rd index 18996a8..67c5c1a 100644 --- a/man/admm_MADMMplasso.Rd +++ b/man/admm_MADMMplasso.Rd @@ -25,7 +25,7 @@ admm_MADMMplasso( alph, svd.w, tree, - my_print = TRUE, + my_print, invmat, gg = 0.2, legacy = FALSE @@ -46,12 +46,10 @@ admm_MADMMplasso( \item{X}{N by p matrix of predictors} -\item{Z}{N by nz matrix of modifying variables. The elements of z -may represent quantitative or categorical variables, or a mixture of the two. -Categorical varables should be coded by 0-1 dummy variables: for a k-level -variable, one can use either k or k-1 dummy variables.} +\item{Z}{N by K matrix of modifying variables. 
The elements of Z may represent quantitative or categorical variables, or a mixture of the two. +Categorical variables should be coded by 0-1 dummy variables: for a k-level variable, one can use either k or k-1 dummy variables.} -\item{max_it}{maximum number of iterations in loop for one lambda during the ADMM optimization. This is usually included in the MADMMplasso call} +\item{max_it}{maximum number of iterations in loop for one lambda during the ADMM optimization} \item{W_hat}{N by (p+(p by nz)) of the main and interaction predictors. This generated internally when MADMMplasso is called or by using the function generate_my_w.} @@ -61,30 +59,27 @@ variable, one can use either k or k-1 dummy variables.} \item{N}{nrow(X)} -\item{e.abs}{absolute error for the admm. This is included int the call of MADMMplasso.} +\item{e.abs}{absolute error for the ADMM} -\item{e.rel}{relative error for the admm. This is included int the call of MADMMplasso.} +\item{e.rel}{relative error for the ADMM} -\item{alpha}{mixing parameter, usually obtained from the MADMMplasso call. When the goal is to include more interactions, alpha should be very small and vice versa.} +\item{alpha}{mixing parameter. When the goal is to include more interactions, alpha should be very small and vice versa.} -\item{lambda}{a vector lambda_3 values for the admm call with length ncol(y). This is usually calculated in the MADMMplasso call. In our current setting, we use the same the lambda_3 value for all responses.} +\item{lambda}{user specified lambda_3 values.} -\item{alph}{an overrelaxation parameter in [1, 1.8], usually obtained from the MADMMplasso call.} +\item{alph}{an overrelaxation parameter in [1, 1.8]. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code}} \item{svd.w}{singular value decomposition of W} -\item{tree}{The results from the hierarchical clustering of the response matrix. 
-The easy way to obtain this is by using the function (tree_parms) which gives a default clustering. -However, user decide on a specific structure and then input a tree that follows such structure.} +\item{tree}{The results from the hierarchical clustering of the response matrix. The easy way to obtain this is by using the function (tree_parms) which gives a default clustering. However, the user can decide on a specific structure and then input a tree that follows such a structure.} -\item{my_print}{Should information form each ADMM iteration be printed along the way? Default TRUE. This prints the dual and primal residuals} +\item{my_print}{Should information from each ADMM iteration be printed along the way? This prints the dual and primal residuals} \item{invmat}{A list of length ncol(y), each containing the C_d part of equation 32 in the paper} \item{gg}{penalty terms for the tree structure for lambda_1 and lambda_2 for the admm call.} -\item{legacy}{If \code{TRUE}, use the R version of the algorithm. Defaults to -C++.} +\item{legacy}{If \code{TRUE}, use the R version of the algorithm} } \value{ predicted values for the ADMM part diff --git a/man/cv_MADMMplasso.Rd b/man/cv_MADMMplasso.Rd index fe47051..fe034da 100644 --- a/man/cv_MADMMplasso.Rd +++ b/man/cv_MADMMplasso.Rd @@ -2,8 +2,7 @@ % Please edit documentation in R/cv_MADMMplasso.R \name{cv_MADMMplasso} \alias{cv_MADMMplasso} -\title{Carries out cross-validation for a MADMMplasso model over a path of regularization values -@description Carries out cross-validation for a MADMMplasso model over a path of regularization values} +\title{Carries out cross-validation for a MADMMplasso model over a path of regularization values} \usage{ cv_MADMMplasso( fit, @@ -37,56 +36,51 @@ cv_MADMMplasso( \item{X}{N by p matrix of predictors} -\item{Z}{N by K matrix of modifying variables. The elements of Z may -represent quantitative or categorical variables, or a mixture of the two. 
-Categorical variables should be coded by 0-1 dummy variables: for a k-level -variable, one can use either k or k-1 dummy variables.} +\item{Z}{N by K matrix of modifying variables. The elements of Z may represent quantitative or categorical variables, or a mixture of the two. +Categorical variables should be coded by 0-1 dummy variables: for a k-level variable, one can use either k or k-1 dummy variables.} -\item{y}{N by D-matrix of responses. The X and Z variables are centered in -the function. We recommend that x and z also be standardized before the call} +\item{y}{N by D matrix of responses. The X and Z variables are centered in the function. We recommend that X and Z also be standardized before the call} -\item{alpha}{mixing parameter- default 0.5. This value should be same as the one used for the MADMMplasso call.} +\item{alpha}{mixing parameter. When the goal is to include more interactions, alpha should be very small and vice versa.} -\item{lambda}{user specified lambda_3 values. Default fit$Lambdas.} +\item{lambda}{user specified lambda_3 values.} -\item{max_it}{maximum number of iterations in loop for one lambda during the ADMM optimization. Default 50000} +\item{max_it}{maximum number of iterations in loop for one lambda during the ADMM optimization} -\item{e.abs}{absolute error for the admm. default is 1E-3} +\item{e.abs}{absolute error for the ADMM} -\item{e.rel}{relative error for the admm-default is 1E-3} +\item{e.rel}{relative error for the ADMM} -\item{nlambda}{number of lambda_3 values desired (default 50). Similar to maxgrid but can have a value less than or equal to maxgrid.} +\item{nlambda}{number of lambda_3 values desired. Similar to maxgrid but can have a value less than or equal to maxgrid.} -\item{rho}{the Lagrange variable for the ADMM (default 5 ). This value is updated during the ADMM call based on a certain condition.} +\item{rho}{the Lagrange variable for the ADMM. 
This value is updated during the ADMM call based on a certain condition.} -\item{my_print}{Should information form each ADMM iteration be printed along the way? Default FALSE. This prints the dual and primal residuals} +\item{my_print}{Should information from each ADMM iteration be printed along the way? This prints the dual and primal residuals} -\item{alph}{an overelaxation parameter in [1, 1.8]. Default 1. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code}} +\item{alph}{an overrelaxation parameter in [1, 1.8]. The implementation is borrowed from Stephen Boyd's \href{https://stanford.edu/~boyd/papers/admm/lasso/lasso.html}{MATLAB code}} \item{foldid}{vector with values in 1:K, indicating folds for K-fold CV. Default NULL} -\item{parallel}{should parallel processing be used during the admm call or not? Default True. If set to true, pal should be set \code{FALSE}.} +\item{parallel}{Should parallel processing be used or not? If set to \code{TRUE}, pal should be set to \code{FALSE}.} -\item{pal}{Should the lapply function be applied for an alternative quicker optimization when there no parallel package available. Default is \code{FALSE}.} +\item{pal}{Should the lapply function be applied for an alternative quicker optimization when there is no parallel package available?} -\item{gg}{penalty term for the tree structure obtained from the fit.} +\item{gg}{penalty term for the tree structure. This is a 2x2 matrix with values in the first row representing the maximum to the minimum values for lambda_1 and the second row representing the maximum to the minimum values for lambda_2. In the current setting, we set both the maximum and the minimum to be the same because cross-validation is not carried out across lambda_1 and lambda_2. However, setting different values will work during the model fit.} \item{TT}{The results from the hierarchical clustering of the response matrix. 
This should same as the parameter tree used during the MADMMplasso call.} -\item{tol}{threshold for the non-zero coefficients. Default 1E-4} +\item{tol}{threshold for the non-zero coefficients} -\item{cl}{The number of cpu to be used for parallel processing. default 2} +\item{cl}{The number of CPUs to be used for parallel processing} -\item{legacy}{If \code{TRUE}, use the R version of the algorithm. Defaults to -C++.} +\item{legacy}{If \code{TRUE}, use the R version of the algorithm} } \value{ results containing the CV values } \description{ -Carries out cross-validation for a MADMMplasso model over a path of regularization values -@description Carries out cross-validation for a MADMMplasso model over a path of regularization values +Carries out cross-validation for a MADMMplasso model over a path of regularization values } \examples{ # nolint start: indentation_linter