
Commit 311032a

Pushing the docs to dev/ for branch: master, commit e0f0b3de2f92a0a57839cda7858a016465fb413a
1 parent 8a98c73 commit 311032a


1,415 files changed: 13,244 additions and 6,904 deletions


dev/_downloads/00701bf1048deb8daeb5ad086596d260/plot_lasso_lars.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n# Lasso path using LARS\n\n\nComputes Lasso Path along the regularization parameter using the LARS\nalgorithm on the diabetes dataset. Each color represents a different\nfeature of the coefficient vector, and this is displayed as a function\nof the regularization parameter.\n\n\n"
+   "\n# Lasso path using LARS\n\n\nComputes Lasso Path along the regularization parameter using the LARS\nalgorithm on the diabetes dataset. Each color represents a different\nfeature of the coefficient vector, and this is displayed as a function\nof the regularization parameter.\n"
   ]
  },
  {
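For context, a minimal sketch of the Lasso-path computation this notebook describes, using sklearn.linear_model.lars_path on the diabetes dataset (not part of this commit; plotting choices are illustrative):

# Sketch: compute and plot the Lasso path with LARS on the diabetes dataset.
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.linear_model import lars_path

X, y = datasets.load_diabetes(return_X_y=True)

# method="lasso" makes lars_path return the Lasso solution path.
alphas, _, coefs = lars_path(X, y, method="lasso")

# Plot each coefficient as a function of the (normalized) path position.
xx = np.sum(np.abs(coefs.T), axis=1)
xx /= xx[-1]
plt.plot(xx, coefs.T)
plt.xlabel("|coef| / max|coef|")
plt.ylabel("Coefficients")
plt.title("LASSO Path")
plt.show()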

dev/_downloads/00727cbc15047062964b3f55fc4571b7/plot_label_propagation_digits_active_learning.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n# Label Propagation digits active learning\n\n\nDemonstrates an active learning technique to learn handwritten digits\nusing label propagation.\n\nWe start by training a label propagation model with only 10 labeled points,\nthen we select the top five most uncertain points to label. Next, we train\nwith 15 labeled points (original 10 + 5 new ones). We repeat this process\nfour times to have a model trained with 30 labeled examples. Note you can\nincrease this to label more than 30 by changing `max_iterations`. Labeling\nmore than 30 can be useful to get a sense for the speed of convergence of\nthis active learning technique.\n\nA plot will appear showing the top 5 most uncertain digits for each iteration\nof training. These may or may not contain mistakes, but we will train the next\nmodel with their true labels.\n\n"
+   "\n# Label Propagation digits active learning\n\n\nDemonstrates an active learning technique to learn handwritten digits\nusing label propagation.\n\nWe start by training a label propagation model with only 10 labeled points,\nthen we select the top five most uncertain points to label. Next, we train\nwith 15 labeled points (original 10 + 5 new ones). We repeat this process\nfour times to have a model trained with 30 labeled examples. Note you can\nincrease this to label more than 30 by changing `max_iterations`. Labeling\nmore than 30 can be useful to get a sense for the speed of convergence of\nthis active learning technique.\n\nA plot will appear showing the top 5 most uncertain digits for each iteration\nof training. These may or may not contain mistakes, but we will train the next\nmodel with their true labels.\n"
   ]
  },
  {
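A hedged sketch of the active-learning loop this notebook describes: fit a semi-supervised model on a few labels, rank the remaining points by prediction entropy, and reveal the true labels of the five most uncertain ones. The gamma value and batch size below are illustrative assumptions, not the notebook's exact settings:

# Sketch: four rounds of label-propagation active learning on digits.
import numpy as np
from scipy.stats import entropy
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)
n_labeled = 10
y_train = np.full_like(y, -1)          # -1 marks unlabeled points
y_train[:n_labeled] = y[:n_labeled]

for iteration in range(4):             # 10 -> 30 labels over four rounds
    model = LabelSpreading(gamma=0.25, max_iter=20)   # gamma is an assumption
    model.fit(X, y_train)
    # Entropy of the predicted label distribution measures uncertainty.
    uncertainty = entropy(model.label_distributions_.T)
    # Reveal the true labels of the 5 most uncertain, still-unlabeled points.
    ranked = np.argsort(uncertainty)[::-1]
    picked = [i for i in ranked if y_train[i] == -1][:5]
    y_train[picked] = y[picked]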

dev/_downloads/021e92302b91d52cc0533a6d8c7016e7/plot_tree_regression.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n# Decision Tree Regression\n\n\nA 1D regression with decision tree.\n\nThe `decision trees <tree>` is\nused to fit a sine curve with addition noisy observation. As a result, it\nlearns local linear regressions approximating the sine curve.\n\nWe can see that if the maximum depth of the tree (controlled by the\n`max_depth` parameter) is set too high, the decision trees learn too fine\ndetails of the training data and learn from the noise, i.e. they overfit.\n\n"
+   "\n# Decision Tree Regression\n\n\nA 1D regression with decision tree.\n\nThe `decision trees <tree>` is\nused to fit a sine curve with addition noisy observation. As a result, it\nlearns local linear regressions approximating the sine curve.\n\nWe can see that if the maximum depth of the tree (controlled by the\n`max_depth` parameter) is set too high, the decision trees learn too fine\ndetails of the training data and learn from the noise, i.e. they overfit.\n"
   ]
  },
  {
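A rough sketch of the behaviour this notebook demonstrates (not the commit's code): two DecisionTreeRegressor models of different max_depth fit to a noisy sine curve, with the deeper tree chasing the noise:

# Sketch: regression trees of depth 2 and 5 fit to a noisy sine curve.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))     # add noise to every 5th observation

X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
for depth, color in [(2, "yellowgreen"), (5, "cornflowerblue")]:
    pred = DecisionTreeRegressor(max_depth=depth).fit(X, y).predict(X_test)
    plt.plot(X_test, pred, color=color, label=f"max_depth={depth}")

plt.scatter(X, y, s=20, c="darkorange", label="data")
plt.legend()
plt.show()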

dev/_downloads/0285d04f43da1e9f71cd93c44a262e4a/plot_sparse_coding.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n# Sparse coding with a precomputed dictionary\n\n\nTransform a signal as a sparse combination of Ricker wavelets. This example\nvisually compares different sparse coding methods using the\n:class:`sklearn.decomposition.SparseCoder` estimator. The Ricker (also known\nas Mexican hat or the second derivative of a Gaussian) is not a particularly\ngood kernel to represent piecewise constant signals like this one. It can\ntherefore be seen how much adding different widths of atoms matters and it\ntherefore motivates learning the dictionary to best fit your type of signals.\n\nThe richer dictionary on the right is not larger in size, heavier subsampling\nis performed in order to stay on the same order of magnitude.\n\n"
+   "\n# Sparse coding with a precomputed dictionary\n\n\nTransform a signal as a sparse combination of Ricker wavelets. This example\nvisually compares different sparse coding methods using the\n:class:`sklearn.decomposition.SparseCoder` estimator. The Ricker (also known\nas Mexican hat or the second derivative of a Gaussian) is not a particularly\ngood kernel to represent piecewise constant signals like this one. It can\ntherefore be seen how much adding different widths of atoms matters and it\ntherefore motivates learning the dictionary to best fit your type of signals.\n\nThe richer dictionary on the right is not larger in size, heavier subsampling\nis performed in order to stay on the same order of magnitude.\n"
   ]
  },
  {
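A compact sketch of the idea this notebook describes: build a small dictionary of Ricker wavelets and encode a piecewise-constant signal with sklearn.decomposition.SparseCoder. The wavelet helper, atom widths, and algorithm settings below are illustrative assumptions, not the notebook's exact configuration:

# Sketch: encode a step signal against a precomputed Ricker-wavelet dictionary.
import numpy as np
from sklearn.decomposition import SparseCoder

def ricker(resolution, center, width):
    """Discrete Ricker ('Mexican hat') wavelet centered at `center`."""
    x = np.linspace(0, resolution - 1, resolution)
    u = (x - center) / width
    return (2 / (np.sqrt(3 * width) * np.pi ** 0.25)) * (1 - u ** 2) * np.exp(-u ** 2 / 2)

resolution, width, n_atoms = 1024, 100, 30
centers = np.linspace(0, resolution, n_atoms)
D = np.array([ricker(resolution, c, width) for c in centers])  # (n_atoms, n_features)
D /= np.sqrt(np.sum(D ** 2, axis=1))[:, np.newaxis]            # normalize atoms

signal = np.zeros(resolution)
signal[resolution // 2:] = 1.0                                  # piecewise-constant step

coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
code = coder.transform(signal.reshape(1, -1))
print("non-zero atoms used:", np.count_nonzero(code))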

dev/_downloads/02c23dfdb2c05d59dbdb168b1fdcb3d8/plot_feature_agglomeration_vs_univariate_selection.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n==============================================\nFeature agglomeration vs. univariate selection\n==============================================\n\nThis example compares 2 dimensionality reduction strategies:\n\n- univariate feature selection with Anova\n\n- feature agglomeration with Ward hierarchical clustering\n\nBoth methods are compared in a regression problem using\na BayesianRidge as supervised estimator.\n\n"
+   "\n==============================================\nFeature agglomeration vs. univariate selection\n==============================================\n\nThis example compares 2 dimensionality reduction strategies:\n\n- univariate feature selection with Anova\n\n- feature agglomeration with Ward hierarchical clustering\n\nBoth methods are compared in a regression problem using\na BayesianRidge as supervised estimator.\n"
   ]
  },
  {
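To make the comparison concrete, a hedged sketch of the two strategies on synthetic regression data, each placed in a Pipeline in front of a BayesianRidge (the dataset, cluster count, and k are arbitrary choices, not the notebook's):

# Sketch: FeatureAgglomeration vs. SelectKBest(f_regression) before BayesianRidge.
from sklearn.cluster import FeatureAgglomeration
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=200, n_features=100, n_informative=10,
                       noise=10.0, random_state=0)

ward = make_pipeline(FeatureAgglomeration(n_clusters=10), BayesianRidge())
anova = make_pipeline(SelectKBest(f_regression, k=10), BayesianRidge())

for name, model in [("ward agglomeration", ward), ("anova selection", anova)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: R^2 = {score:.3f}")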

dev/_downloads/0314576f362d54c36e37904494f4c7d9/plot_gpc_iris.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n=====================================================\nGaussian process classification (GPC) on iris dataset\n=====================================================\n\nThis example illustrates the predicted probability of GPC for an isotropic\nand anisotropic RBF kernel on a two-dimensional version for the iris-dataset.\nThe anisotropic RBF kernel obtains slightly higher log-marginal-likelihood by\nassigning different length-scales to the two feature dimensions.\n\n"
+   "\n=====================================================\nGaussian process classification (GPC) on iris dataset\n=====================================================\n\nThis example illustrates the predicted probability of GPC for an isotropic\nand anisotropic RBF kernel on a two-dimensional version for the iris-dataset.\nThe anisotropic RBF kernel obtains slightly higher log-marginal-likelihood by\nassigning different length-scales to the two feature dimensions.\n"
   ]
  },
  {
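A minimal sketch of the isotropic vs. anisotropic comparison this notebook describes, using the first two iris features; the kernel initializations are assumptions for illustration:

# Sketch: isotropic vs. anisotropic RBF kernels for GPC on two iris features.
from sklearn import datasets
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

iris = datasets.load_iris()
X = iris.data[:, :2]          # two-dimensional version of the dataset
y = iris.target

for name, kernel in [("isotropic", 1.0 * RBF([1.0])),
                     ("anisotropic", 1.0 * RBF([1.0, 1.0]))]:
    gpc = GaussianProcessClassifier(kernel=kernel).fit(X, y)
    lml = gpc.log_marginal_likelihood(gpc.kernel_.theta)
    print(f"{name}: log-marginal-likelihood = {lml:.3f}")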

dev/_downloads/039de292122a5274c38d249b744c9b55/plot_pca_vs_lda.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n# Comparison of LDA and PCA 2D projection of Iris dataset\n\n\nThe Iris dataset represents 3 kind of Iris flowers (Setosa, Versicolour\nand Virginica) with 4 attributes: sepal length, sepal width, petal length\nand petal width.\n\nPrincipal Component Analysis (PCA) applied to this data identifies the\ncombination of attributes (principal components, or directions in the\nfeature space) that account for the most variance in the data. Here we\nplot the different samples on the 2 first principal components.\n\nLinear Discriminant Analysis (LDA) tries to identify attributes that\naccount for the most variance *between classes*. In particular,\nLDA, in contrast to PCA, is a supervised method, using known class labels.\n\n"
+   "\n# Comparison of LDA and PCA 2D projection of Iris dataset\n\n\nThe Iris dataset represents 3 kind of Iris flowers (Setosa, Versicolour\nand Virginica) with 4 attributes: sepal length, sepal width, petal length\nand petal width.\n\nPrincipal Component Analysis (PCA) applied to this data identifies the\ncombination of attributes (principal components, or directions in the\nfeature space) that account for the most variance in the data. Here we\nplot the different samples on the 2 first principal components.\n\nLinear Discriminant Analysis (LDA) tries to identify attributes that\naccount for the most variance *between classes*. In particular,\nLDA, in contrast to PCA, is a supervised method, using known class labels.\n"
   ]
  },
  {
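A hedged sketch of the projection comparison this notebook describes (plot styling omitted; not the commit's code):

# Sketch: project iris onto 2 components with PCA (unsupervised) and LDA (supervised).
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

iris = datasets.load_iris()
X, y = iris.data, iris.target

X_pca = PCA(n_components=2).fit_transform(X)                       # ignores class labels
X_lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y).transform(X)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, X_2d, title in [(axes[0], X_pca, "PCA"), (axes[1], X_lda, "LDA")]:
    ax.scatter(X_2d[:, 0], X_2d[:, 1], c=y)
    ax.set_title(f"{title} of IRIS dataset")
plt.show()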

dev/_downloads/053f85f102c07abae7dc20a87b112911/plot_gpr_noisy_targets.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n=========================================================\nGaussian Processes regression: basic introductory example\n=========================================================\n\nA simple one-dimensional regression example computed in two different ways:\n\n1. A noise-free case\n2. A noisy case with known noise-level per datapoint\n\nIn both cases, the kernel's parameters are estimated using the maximum\nlikelihood principle.\n\nThe figures illustrate the interpolating property of the Gaussian Process\nmodel as well as its probabilistic nature in the form of a pointwise 95%\nconfidence interval.\n\nNote that the parameter ``alpha`` is applied as a Tikhonov\nregularization of the assumed covariance between the training points.\n\n"
+   "\n=========================================================\nGaussian Processes regression: basic introductory example\n=========================================================\n\nA simple one-dimensional regression example computed in two different ways:\n\n1. A noise-free case\n2. A noisy case with known noise-level per datapoint\n\nIn both cases, the kernel's parameters are estimated using the maximum\nlikelihood principle.\n\nThe figures illustrate the interpolating property of the Gaussian Process\nmodel as well as its probabilistic nature in the form of a pointwise 95%\nconfidence interval.\n\nNote that the parameter ``alpha`` is applied as a Tikhonov\nregularization of the assumed covariance between the training points.\n"
   ]
  },
  {
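A small sketch of the noisy case this notebook describes: per-point noise passed through ``alpha`` (added to the diagonal of the kernel matrix) and a pointwise 95% interval from the predictive standard deviation. The target function and kernel bounds below are assumptions:

# Sketch: GP regression on noisy 1-D data with a pointwise 95% interval.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

rng = np.random.RandomState(0)
X = np.atleast_2d(np.linspace(0.1, 9.9, 20)).T
noise_std = 0.5 + rng.rand(X.shape[0])            # known noise level per datapoint
y = X.ravel() * np.sin(X.ravel()) + rng.normal(0, noise_std)

kernel = C(1.0, (1e-3, 1e3)) * RBF(1.0, (1e-2, 1e2))
# alpha = per-point noise variance, added to the diagonal of the kernel matrix.
gpr = GaussianProcessRegressor(kernel=kernel, alpha=noise_std ** 2,
                               n_restarts_optimizer=5).fit(X, y)

x_pred = np.atleast_2d(np.linspace(0, 10, 200)).T
y_mean, y_std = gpr.predict(x_pred, return_std=True)
lower, upper = y_mean - 1.96 * y_std, y_mean + 1.96 * y_std   # 95% interval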

dev/_downloads/05af92ebbf361ad6b838e9d4e34901da/plot_sgd_penalties.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n==============\nSGD: Penalties\n==============\n\nContours of where the penalty is equal to 1\nfor the three penalties L1, L2 and elastic-net.\n\nAll of the above are supported by\n:class:`sklearn.linear_model.stochastic_gradient`.\n\n\n"
+   "\n==============\nSGD: Penalties\n==============\n\nContours of where the penalty is equal to 1\nfor the three penalties L1, L2 and elastic-net.\n\nAll of the above are supported by\n:class:`sklearn.linear_model.stochastic_gradient`.\n"
   ]
  },
  {
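The unit-level contours mentioned above can be sketched directly with numpy and matplotlib; the elastic-net mixing ratio below is an illustrative assumption, not the notebook's setting:

# Sketch: contours where the L1, L2 and elastic-net penalties equal 1.
import matplotlib.pyplot as plt
import numpy as np

l1_ratio = 0.5                         # elastic-net mixing parameter (assumption)
xs = np.linspace(-1.5, 1.5, 300)
xx, yy = np.meshgrid(xs, xs)

l1 = np.abs(xx) + np.abs(yy)
l2 = xx ** 2 + yy ** 2
enet = l1_ratio * l1 + (1 - l1_ratio) * l2

for z, color in [(l1, "red"), (l2, "blue"), (enet, "green")]:
    plt.contour(xx, yy, z, levels=[1], colors=color)
plt.title("Penalty == 1: L1 (red), L2 (blue), elastic-net (green)")
plt.gca().set_aspect("equal")
plt.show()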

dev/_downloads/06d1bf4510bd82b665c44c2bd2c364ae/plot_feature_selection_pipeline.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n# Pipeline Anova SVM\n\n\nSimple usage of Pipeline that runs successively a univariate\nfeature selection with anova and then a SVM of the selected features.\n\nUsing a sub-pipeline, the fitted coefficients can be mapped back into\nthe original feature space.\n\n"
+   "\n# Pipeline Anova SVM\n\n\nSimple usage of Pipeline that runs successively a univariate\nfeature selection with anova and then a SVM of the selected features.\n\nUsing a sub-pipeline, the fitted coefficients can be mapped back into\nthe original feature space.\n"
   ]
  },
  {
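To illustrate the pipeline described above, a hedged sketch combining SelectKBest (anova) with a linear SVM and mapping the fitted coefficients back to the input space via inverse_transform; the synthetic dataset and k are arbitrary choices, not the notebook's:

# Sketch: anova feature selection followed by a linear SVM in one Pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=20, n_informative=3,
                           random_state=42)

anova_filter = SelectKBest(f_classif, k=3)
clf = LinearSVC()
anova_svm = make_pipeline(anova_filter, clf)
anova_svm.fit(X, y)
print(anova_svm.predict(X[:5]))

# Map the SVM coefficients back into the 20-dimensional input space:
# unselected features get a coefficient of zero.
coef_full = anova_filter.inverse_transform(clf.coef_)
print(np.nonzero(coef_full)[1])        # indices of the selected features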
