is (setting to 'random') often leads to significantly faster convergence,
especially when tol is higher than 1e-4.

Attributes
----------
alpha_ : float
    The amount of penalization chosen by cross validation.

l1_ratio_ : float
    The compromise between l1 and l2 penalization chosen by cross
    validation.

coef_ : ndarray of shape (n_features,) or (n_targets, n_features)
    Parameter vector (w in the cost function formula).

intercept_ : float or ndarray of shape (n_targets, n_features)
    Independent term in the decision function.

mse_path_ : ndarray of shape (n_l1_ratio, n_alpha, n_folds)
    Mean square error for the test set on each fold, varying l1_ratio
    and alpha.

alphas_ : ndarray of shape (n_alphas,) or (n_l1_ratio, n_alphas)
    The grid of alphas used for fitting, for each l1_ratio.

dual_gap_ : float
    The dual gap at the end of the optimization for the optimal alpha.

n_iter_ : int
    Number of iterations run by the coordinate descent solver to reach
    the specified tolerance for the optimal alpha.

n_features_in_ : int
    Number of features seen during :term:`fit`.

    .. versionadded:: 0.24

feature_names_in_ : ndarray of shape (`n_features_in_`,)
    Names of features seen during :term:`fit`. Defined only when `X`
    has feature names that are all strings.

    .. versionadded:: 1.0

See Also
--------
enet_path : Compute elastic net path with coordinate descent.
ElasticNet : Linear regression with combined L1 and L2 priors as
    regularizer.

Notes
-----
In `fit`, once the best parameters `l1_ratio` and `alpha` are found
through cross-validation, the model is fit again using the entire
training set.

To avoid unnecessary memory duplication, the `X` argument of the `fit`
method should be directly passed as a Fortran-contiguous numpy array.

The parameter `l1_ratio` corresponds to alpha in the glmnet R package
while alpha corresponds to the lambda parameter in glmnet.
More specifically, the optimization objective is::

    1 / (2 * n_samples) * ||y - Xw||^2_2
    + alpha * l1_ratio * ||w||_1
    + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

If you are interested in controlling the L1 and L2 penalty separately,
keep in mind that this is equivalent to::

    a * L1 + b * L2

for::

    alpha = a + b and l1_ratio = a / (a + b).

For an example, see
:ref:`examples/linear_model/plot_lasso_model_selection.py`.

Examples
--------
>>> from sklearn.linear_model import ElasticNetCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=2, random_state=0)
>>> regr = ElasticNetCV(cv=5, random_state=0)
>>> regr.fit(X, y)
ElasticNetCV(cv=5, random_state=0)
>>> print(regr.alpha_)
0.199...
>>> print(regr.intercept_)
0.398...
>>> print(regr.predict([[0, 0]]))
[0.398...]
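As a quick sanity check of the `(a, b)` to `(alpha, l1_ratio)` reparameterization above, the sketch below (with illustrative values for `a` and `b`, and an arbitrary coefficient vector `w`) evaluates both forms of the penalty and confirms they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)  # arbitrary coefficient vector for the check

a, b = 0.3, 0.7  # illustrative separate L1 / L2 strengths

# Penalty written directly as a * L1 + b * L2 (the L2 term carries the
# same 0.5 factor as in the objective above).
penalty_ab = a * np.abs(w).sum() + 0.5 * b * (w ** 2).sum()

# Equivalent (alpha, l1_ratio) parameterization used by ElasticNetCV.
alpha = a + b
l1_ratio = a / (a + b)
penalty_enet = (alpha * l1_ratio * np.abs(w).sum()
                + 0.5 * alpha * (1 - l1_ratio) * (w ** 2).sum())

print(np.isclose(penalty_ab, penalty_enet))  # True
```

The agreement is exact, since `alpha * l1_ratio` simplifies to `a` and `alpha * (1 - l1_ratio)` to `b`.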