
Commit 03a40ac

Pushing the docs to dev/ for branch: master, commit 613a3338f37a32577573055fdb5178359b50e9ce

1 parent 5547c7f

File tree

1,207 files changed: +4063 −4063 lines


dev/_downloads/22b5d928415782a90ca8864871748096/plot_sparse_logistic_regression_mnist.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 """
 =====================================================
-MNIST classfification using multinomial logistic + L1
+MNIST classification using multinomial logistic + L1
 =====================================================
 
 Here we fit a multinomial logistic regression with L1 penalty on a subset of
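The docstring being corrected describes fitting an L1-penalized multinomial logistic regression with the saga solver, which handles the non-smooth l1 term and yields sparse, more interpretable weight vectors. A minimal sketch of that setup follows; it uses scikit-learn's built-in 8x8 digits dataset instead of the full MNIST download (that substitution, and the specific `C`/`tol` values, are assumptions here, not the example's actual settings):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small stand-in for MNIST: 1797 samples of 8x8 digit images.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]; saga converges faster on scaled data

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# saga supports the l1 penalty and is suited to n_samples >> n_features;
# C=1.0 is an illustrative regularization strength, not a tuned value.
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0,
                         tol=0.01, max_iter=1000)
clf.fit(X_train, y_train)

# With l1, many coefficients are driven exactly to zero (sparsity).
sparsity = np.mean(clf.coef_ == 0) * 100
score = clf.score(X_test, y_test)
print(f"Sparsity: {sparsity:.1f}%, test accuracy: {score:.3f}")
```

As in the example's description, the point of the l1 penalty is the sparsity of `clf.coef_` (one weight row per class), traded against some accuracy relative to an l2-penalized model.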

dev/_downloads/d033936f4550c8b2cfc0de9448c57fc3/plot_sparse_logistic_regression_mnist.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n=====================================================\nMNIST classfification using multinomial logistic + L1\n=====================================================\n\nHere we fit a multinomial logistic regression with L1 penalty on a subset of\nthe MNIST digits classification task. We use the SAGA algorithm for this\npurpose: this a solver that is fast when the number of samples is significantly\nlarger than the number of features and is able to finely optimize non-smooth\nobjective functions which is the case with the l1-penalty. Test accuracy\nreaches > 0.8, while weight vectors remains *sparse* and therefore more easily\n*interpretable*.\n\nNote that this accuracy of this l1-penalized linear model is significantly\nbelow what can be reached by an l2-penalized linear model or a non-linear\nmulti-layer perceptron model on this dataset.\n"
+"\n=====================================================\nMNIST classification using multinomial logistic + L1\n=====================================================\n\nHere we fit a multinomial logistic regression with L1 penalty on a subset of\nthe MNIST digits classification task. We use the SAGA algorithm for this\npurpose: this a solver that is fast when the number of samples is significantly\nlarger than the number of features and is able to finely optimize non-smooth\nobjective functions which is the case with the l1-penalty. Test accuracy\nreaches > 0.8, while weight vectors remains *sparse* and therefore more easily\n*interpretable*.\n\nNote that this accuracy of this l1-penalized linear model is significantly\nbelow what can be reached by an l2-penalized linear model or a non-linear\nmulti-layer perceptron model on this dataset.\n"
 ]
 },
 {

dev/_downloads/scikit-learn-docs.pdf

29.1 KB
Binary file not shown.

dev/_images/iris.png

Binary image changed (107 Bytes → 96 Bytes); not shown.

0 commit comments
