sklearn.linear_model.lars_path_gram
sklearn.linear_model.lars_path_gram(Xy, Gram, *, n_samples, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.220446049250313e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False)
lars_path in the sufficient statistics mode [1]: the path is computed from the precomputed statistics Xy = X.T @ y and Gram = X.T @ X rather than from X and y directly.
The optimization objective for the case method='lasso' is:

(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1

In the case of method='lar', the objective function is only known in the form of an implicit equation (see discussion in [1]).
Read more in the User Guide.
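A minimal usage sketch (illustrative, not part of the original page): the sufficient statistics are precomputed with NumPy and passed directly to lars_path_gram; make_regression is used only to generate toy data.

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import lars_path_gram
>>> X, y = make_regression(n_samples=100, n_features=10, random_state=0)
>>> Xy = np.dot(X.T, y)      # shape (n_features,)
>>> Gram = np.dot(X.T, X)    # shape (n_features, n_features)
>>> alphas, active, coefs = lars_path_gram(
...     Xy, Gram, n_samples=X.shape[0], method='lasso')
>>> coefs.shape[0] == X.shape[1]    # one row per feature
True
>>> alphas.shape[0] == coefs.shape[1]    # one column per path node
True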
Parameters

Xy : array-like of shape (n_features,) or (n_features, n_targets)
    Xy = np.dot(X.T, y).
Gram : array-like of shape (n_features, n_features)
    Gram = np.dot(X.T, X).
n_samples : int or float
    Equivalent size of sample.
max_iter : int, default=500
    Maximum number of iterations to perform, set to infinity for no limit.
alpha_min : float, default=0
    Minimum correlation along the path. It corresponds to the regularization parameter alpha in the Lasso.
method : {'lar', 'lasso'}, default='lar'
    Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.
copy_X : bool, default=True
    If False, X is overwritten.
eps : float, default=np.finfo(float).eps
    The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Gram : bool, default=True
    If False, Gram is overwritten.
verbose : int, default=0
    Controls output verbosity.
return_path : bool, default=True
    If True, returns the entire path; otherwise returns only the last point of the path.
return_n_iter : bool, default=False
    Whether to return the number of iterations.
positive : bool, default=False
    Restrict coefficients to be >= 0. This option is only allowed with method 'lasso'. Note that the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent lasso_path function. See the sketch after this parameter list.
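A minimal sketch of the positive option (illustrative, not from the original page): with method='lasso' and positive=True, every coefficient along the returned path is constrained to be non-negative.

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import lars_path_gram
>>> X, y = make_regression(n_samples=60, n_features=8, random_state=0)
>>> alphas, active, coefs = lars_path_gram(
...     np.dot(X.T, y), np.dot(X.T, X), n_samples=X.shape[0],
...     method='lasso', positive=True)
>>> bool((coefs >= 0).all())    # every coefficient on the path is >= 0
True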
Returns
alphas : array-like of shape (n_alphas + 1,)
    Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features, or the number of nodes in the path with alpha >= alpha_min, whichever is smaller.
active : array-like of shape (n_alphas,)
    Indices of active variables at the end of the path.
coefs : array-like of shape (n_features, n_alphas + 1)
    Coefficients along the path.
n_iter : int
    Number of iterations run. Returned only if return_n_iter is set to True.
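To illustrate what the sufficient statistics mode means in practice, the following sketch (assuming toy data from make_regression; not part of the original page) checks that lars_path_gram reproduces the path that lars_path computes from the raw X and y:

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import lars_path, lars_path_gram
>>> X, y = make_regression(n_samples=50, n_features=5, random_state=0)
>>> alphas_X, _, coefs_X = lars_path(X, y, method='lasso')
>>> alphas_g, _, coefs_g = lars_path_gram(
...     np.dot(X.T, y), np.dot(X.T, X), n_samples=X.shape[0], method='lasso')
>>> bool(np.allclose(alphas_X, alphas_g))    # same path nodes
True
>>> bool(np.allclose(coefs_X, coefs_g))      # same coefficients
True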
See also

lars_path
lasso_path
lasso_path_gram
LassoLars
Lars
LassoLarsCV
LarsCV
sklearn.decomposition.sparse_encode
References

[1] "Least Angle Regression", Efron et al. http://statweb.stanford.edu/~tibs/ftp/lars.pdf