The following conventions for the ElasticNet objective function are used:
sklearn: of = 1/(2N) C + \alpha \mu P_1 + 0.5 \alpha (1-\mu) P_2
McConaughy: of = C + \lambda \rho P_1 + \lambda(1-\rho) P_2
Here, C is the squared two-norm of the residuals, and P_1 and P_2 are the L1 and L2 regularization terms, respectively.
McConaughy presumably also implies a factor of 1/N in front of C; otherwise the relative weight of the regularization terms would scale with the number of samples, which does not make any sense.
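For concreteness, a minimal sketch of the two objectives as written above (the function and variable names are illustrative, not part of either library; P_1 is assumed to be the L1 norm and P_2 the squared L2 norm of the coefficients, and the 1/N factor is assumed for the McConaughy form):

```python
import numpy as np

def sklearn_objective(residuals, coef, alpha, mu):
    """of = 1/(2N) C + alpha*mu*P_1 + 0.5*alpha*(1-mu)*P_2 (sklearn convention)."""
    N = residuals.shape[0]
    C = np.sum(residuals ** 2)      # squared two-norm of the residuals
    P1 = np.sum(np.abs(coef))       # L1 penalty
    P2 = np.sum(coef ** 2)          # squared L2 penalty
    return C / (2 * N) + alpha * mu * P1 + 0.5 * alpha * (1 - mu) * P2

def mcconaughy_objective(residuals, coef, lam, rho):
    """of = 1/N C + lambda*rho*P_1 + lambda*(1-rho)*P_2 (McConaughy convention, 1/N assumed)."""
    N = residuals.shape[0]
    C = np.sum(residuals ** 2)
    P1 = np.sum(np.abs(coef))
    P2 = np.sum(coef ** 2)
    return C / N + lam * rho * P1 + lam * (1 - rho) * P2
```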
Assuming this factor, the following formulae should be applied when mapping the regularization parameters from sparsereg's interface to that of sklearn:
\alpha = \lambda ( 1-\rho/2)
\mu = \rho/(2-\rho)
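These follow from multiplying the sklearn objective by 2 and matching the coefficients of P_1 and P_2; with the mapped parameters the McConaughy objective is exactly twice the sklearn one for every coefficient vector, so both are minimized by the same solution. A minimal sketch of the mapping, reusing the illustrative objectives from the snippet above (map_params is a hypothetical helper, not part of either library's API):

```python
def map_params(lam, rho):
    """Map McConaughy-style (lambda, rho) to sklearn-style (alpha, l1_ratio)."""
    alpha = lam * (1 - rho / 2)
    mu = rho / (2 - rho)
    return alpha, mu

# Quick consistency check on random data.
rng = np.random.default_rng(0)
residuals = rng.normal(size=50)
coef = rng.normal(size=10)

lam, rho = 0.3, 0.7
alpha, mu = map_params(lam, rho)

lhs = mcconaughy_objective(residuals, coef, lam, rho)
rhs = 2 * sklearn_objective(residuals, coef, alpha, mu)
assert np.isclose(lhs, rhs)   # identical up to the overall factor of 2
```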