
Commit 5b86246

Pushing the docs to dev/ for branch: main, commit 0f43434e0c6906929dc2cfbd5e848e0d77df4f5e
1 parent 7687d83 commit 5b86246


1,325 files changed (+5755, -5755 lines changed)


dev/.buildinfo

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 388ca4e58da662c2c7775f653cc4c463
+config: 6b45f82bc808975a706211de4fe3aebe
 tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file not shown.

dev/_downloads/1e0968da80ca868bbdf21c1d0547f68c/plot_lle_digits.ipynb

Lines changed: 2 additions & 2 deletions
@@ -87,14 +87,14 @@
 },
 "outputs": [],
 "source": [
-"from sklearn.decomposition import TruncatedSVD\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.ensemble import RandomTreesEmbedding\nfrom sklearn.manifold import (\n    MDS,\n    TSNE,\n    Isomap,\n    LocallyLinearEmbedding,\n    SpectralEmbedding,\n)\nfrom sklearn.neighbors import NeighborhoodComponentsAnalysis\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.random_projection import SparseRandomProjection\n\nembeddings = {\n    \"Random projection embedding\": SparseRandomProjection(\n        n_components=2, random_state=42\n    ),\n    \"Truncated SVD embedding\": TruncatedSVD(n_components=2),\n    \"Linear Discriminant Analysis embedding\": LinearDiscriminantAnalysis(\n        n_components=2\n    ),\n    \"Isomap embedding\": Isomap(n_neighbors=n_neighbors, n_components=2),\n    \"Standard LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"standard\"\n    ),\n    \"Modified LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"modified\"\n    ),\n    \"Hessian LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"hessian\"\n    ),\n    \"LTSA LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"ltsa\"\n    ),\n    \"MDS embedding\": MDS(\n        n_components=2, n_init=1, max_iter=120, n_jobs=2, normalized_stress=\"auto\"\n    ),\n    \"Random Trees embedding\": make_pipeline(\n        RandomTreesEmbedding(n_estimators=200, max_depth=5, random_state=0),\n        TruncatedSVD(n_components=2),\n    ),\n    \"Spectral embedding\": SpectralEmbedding(\n        n_components=2, random_state=0, eigen_solver=\"arpack\"\n    ),\n    \"t-SNE embeedding\": TSNE(\n        n_components=2,\n        n_iter=500,\n        n_iter_without_progress=150,\n        n_jobs=2,\n        random_state=0,\n    ),\n    \"NCA embedding\": NeighborhoodComponentsAnalysis(\n        n_components=2, init=\"pca\", random_state=0\n    ),\n}"
+"from sklearn.decomposition import TruncatedSVD\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.ensemble import RandomTreesEmbedding\nfrom sklearn.manifold import (\n    MDS,\n    TSNE,\n    Isomap,\n    LocallyLinearEmbedding,\n    SpectralEmbedding,\n)\nfrom sklearn.neighbors import NeighborhoodComponentsAnalysis\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.random_projection import SparseRandomProjection\n\nembeddings = {\n    \"Random projection embedding\": SparseRandomProjection(\n        n_components=2, random_state=42\n    ),\n    \"Truncated SVD embedding\": TruncatedSVD(n_components=2),\n    \"Linear Discriminant Analysis embedding\": LinearDiscriminantAnalysis(\n        n_components=2\n    ),\n    \"Isomap embedding\": Isomap(n_neighbors=n_neighbors, n_components=2),\n    \"Standard LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"standard\"\n    ),\n    \"Modified LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"modified\"\n    ),\n    \"Hessian LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"hessian\"\n    ),\n    \"LTSA LLE embedding\": LocallyLinearEmbedding(\n        n_neighbors=n_neighbors, n_components=2, method=\"ltsa\"\n    ),\n    \"MDS embedding\": MDS(\n        n_components=2, n_init=1, max_iter=120, n_jobs=2, normalized_stress=\"auto\"\n    ),\n    \"Random Trees embedding\": make_pipeline(\n        RandomTreesEmbedding(n_estimators=200, max_depth=5, random_state=0),\n        TruncatedSVD(n_components=2),\n    ),\n    \"Spectral embedding\": SpectralEmbedding(\n        n_components=2, random_state=0, eigen_solver=\"arpack\"\n    ),\n    \"t-SNE embedding\": TSNE(\n        n_components=2,\n        n_iter=500,\n        n_iter_without_progress=150,\n        n_jobs=2,\n        random_state=0,\n    ),\n    \"NCA embedding\": NeighborhoodComponentsAnalysis(\n        n_components=2, init=\"pca\", random_state=0\n    ),\n}"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Once we declared all the methodes of interest, we can run and perform the projection\nof the original data. We will store the projected data as well as the computational\ntime needed to perform each projection.\n\n"
+"Once we declared all the methods of interest, we can run and perform the projection\nof the original data. We will store the projected data as well as the computational\ntime needed to perform each projection.\n\n"
 ]
 },
 {
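The corrected markdown cell above refers to a projection loop that follows in the notebook. A minimal sketch of that loop, assuming the `embeddings` dict from the cell above and the digits data loaded earlier in the example (variable names here are illustrative, not necessarily the notebook's exact code):

from time import time

from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)  # the example loads these earlier

projections, timings = {}, {}
for name, transformer in embeddings.items():
    start = time()
    # Supervised methods (LDA, NCA) use y; the unsupervised ones ignore it.
    projections[name] = transformer.fit_transform(X, y)
    timings[name] = time() - start
    print(f"{name}: {timings[name]:.3f}s")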

dev/_downloads/264f6891fa2130246a013d5f089e7b2e/plot_svm_kernels.ipynb

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Training a :class:`~sklearn.svm.SVC` on a linear kernel results in an\nuntransformed feature space, where the hyperplane and the margins are\nstraight lines. Due to the lack of expressivity of the linear kernel, the\ntrained classes do not perfectly capture the training data.\n\n### Polynomial kernel\nThe polynomial kernel changes the notion of similarity. The kernel function\nis defined as:\n\n\\begin{align}K(\\mathbf{x}_1, \\mathbf{x}_2) = (\\gamma \\cdot \\\n    \\mathbf{x}_1^\\top\\mathbf{x}_2 + r)^d\\end{align}\n\nwhere ${d}$ is the degree (`degree`) of the polynomial, ${\\gamma}$\n(`gamma`) controls the influence of each individual training sample on the\ndecision boundary and ${r}$ is the bias term (`coef0`) that shifts the\ndata up or down. Here, we use the default value for the degree of the\npolynomal in the kernel funcion (`degree=3`). When `coef0=0` (the default),\nthe data is only transformed, but no additional dimension is added. Using a\npolynomial kernel is equivalent to creating\n:class:`~sklearn.preprocessing.PolynomialFeatures` and then fitting a\n:class:`~sklearn.svm.SVC` with a linear kernel on the transformed data,\nalthough this alternative approach would be computationally expensive for most\ndatasets.\n\n"
+"Training a :class:`~sklearn.svm.SVC` on a linear kernel results in an\nuntransformed feature space, where the hyperplane and the margins are\nstraight lines. Due to the lack of expressivity of the linear kernel, the\ntrained classes do not perfectly capture the training data.\n\n### Polynomial kernel\nThe polynomial kernel changes the notion of similarity. The kernel function\nis defined as:\n\n\\begin{align}K(\\mathbf{x}_1, \\mathbf{x}_2) = (\\gamma \\cdot \\\n    \\mathbf{x}_1^\\top\\mathbf{x}_2 + r)^d\\end{align}\n\nwhere ${d}$ is the degree (`degree`) of the polynomial, ${\\gamma}$\n(`gamma`) controls the influence of each individual training sample on the\ndecision boundary and ${r}$ is the bias term (`coef0`) that shifts the\ndata up or down. Here, we use the default value for the degree of the\npolynomial in the kernel function (`degree=3`). When `coef0=0` (the default),\nthe data is only transformed, but no additional dimension is added. Using a\npolynomial kernel is equivalent to creating\n:class:`~sklearn.preprocessing.PolynomialFeatures` and then fitting a\n:class:`~sklearn.svm.SVC` with a linear kernel on the transformed data,\nalthough this alternative approach would be computationally expensive for most\ndatasets.\n\n"
 ]
 },
 {
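The kernel/explicit-expansion equivalence described in this cell can be checked with a rough sketch (not the example's own code; with gamma=1 and coef0=1 the explicit degree-3 expansion matches the kernel up to monomial scaling, so the boundaries are close but not bit-identical):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Implicit degree-3 polynomial kernel.
kernel_svc = SVC(kernel="poly", degree=3, gamma=1.0, coef0=1.0).fit(X, y)

# Explicit expansion: all monomials up to degree 3, then a linear SVC.
# This feature matrix grows quickly with n_features and degree, which is
# why the cell calls this route computationally expensive.
explicit_svc = make_pipeline(
    PolynomialFeatures(degree=3), SVC(kernel="linear")
).fit(X, y)

agreement = np.mean(kernel_svc.predict(X) == explicit_svc.predict(X))
print(f"prediction agreement: {agreement:.2%}")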

dev/_downloads/2e4791a177381a6102b21e44083615c8/plot_poisson_regression_non_normal_loss.ipynb

Lines changed: 1 addition & 1 deletion
@@ -112,7 +112,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## (Generalized) linear models\n\nWe start by modeling the target variable with the (l2 penalized) least\nsquares linear regression model, more comonly known as Ridge regression. We\nuse a low penalization `alpha`, as we expect such a linear model to under-fit\non such a large dataset.\n\n"
+"## (Generalized) linear models\n\nWe start by modeling the target variable with the (l2 penalized) least\nsquares linear regression model, more commonly known as Ridge regression. We\nuse a low penalization `alpha`, as we expect such a linear model to under-fit\non such a large dataset.\n\n"
 ]
 },
 {
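As a rough sketch of the modeling step this cell describes, on synthetic stand-in data (the real example uses the French motor claims dataset with a preprocessing ColumnTransformer not shown here):

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=10_000, n_features=20, noise=10.0,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A weakly penalized Ridge: with this much data a linear model is more
# likely to under-fit than over-fit, so alpha can stay small.
ridge = Ridge(alpha=1e-6).fit(X_train, y_train)
print(f"test R^2: {ridge.score(X_test, y_test):.3f}")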
Binary file not shown.

dev/_downloads/7011de1f31ecdc52f138d7e582a6a455/plot_voting_probas.ipynb

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Plot class probabilities calculated by the VotingClassifier\n\n.. currentmodule:: sklearn\n\nPlot the class probabilities of the first sample in a toy dataset predicted by\nthree different classifiers and averaged by the\n:class:`~ensemble.VotingClassifier`.\n\nFirst, three examplary classifiers are initialized\n(:class:`~linear_model.LogisticRegression`, :class:`~naive_bayes.GaussianNB`,\nand :class:`~ensemble.RandomForestClassifier`) and used to initialize a\nsoft-voting :class:`~ensemble.VotingClassifier` with weights `[1, 1, 5]`, which\nmeans that the predicted probabilities of the\n:class:`~ensemble.RandomForestClassifier` count 5 times as much as the weights\nof the other classifiers when the averaged probability is calculated.\n\nTo visualize the probability weighting, we fit each classifier on the training\nset and plot the predicted class probabilities for the first sample in this\nexample dataset.\n"
+"\n# Plot class probabilities calculated by the VotingClassifier\n\n.. currentmodule:: sklearn\n\nPlot the class probabilities of the first sample in a toy dataset predicted by\nthree different classifiers and averaged by the\n:class:`~ensemble.VotingClassifier`.\n\nFirst, three exemplary classifiers are initialized\n(:class:`~linear_model.LogisticRegression`, :class:`~naive_bayes.GaussianNB`,\nand :class:`~ensemble.RandomForestClassifier`) and used to initialize a\nsoft-voting :class:`~ensemble.VotingClassifier` with weights `[1, 1, 5]`, which\nmeans that the predicted probabilities of the\n:class:`~ensemble.RandomForestClassifier` count 5 times as much as the weights\nof the other classifiers when the averaged probability is calculated.\n\nTo visualize the probability weighting, we fit each classifier on the training\nset and plot the predicted class probabilities for the first sample in this\nexample dataset.\n"
 ]
 },
 {
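A minimal sketch of the setup this docstring describes, with iris as a stand-in dataset (not the example's own toy data):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Soft voting averages predict_proba outputs; with weights [1, 1, 5] the
# random forest's probabilities count five times as much as the others'.
eclf = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("gnb", GaussianNB()),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",
    weights=[1, 1, 5],
).fit(X, y)

print(eclf.predict_proba(X[:1]))  # weighted-average class probabilities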

dev/_downloads/7012baed63b9a27f121bae611b8285c2/plot_cyclical_feature_engineering.ipynb

Lines changed: 1 addition & 1 deletion
@@ -592,7 +592,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Those features are then combined with the ones already computed in the\nprevious spline-base pipeline. We can observe a nice performance improvemnt\nby modeling this pairwise interaction explicitly:\n\n"
+"Those features are then combined with the ones already computed in the\nprevious spline-base pipeline. We can observe a nice performance improvement\nby modeling this pairwise interaction explicitly:\n\n"
 ]
 },
 {
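A rough sketch of the "explicit pairwise interaction" idea this cell mentions, on toy stand-in data (the example builds its interactions from the spline pipeline, not shown in this hunk; PolynomialFeatures is one assumed way to form the products):

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Toy stand-in: 4 spline features for the hour of day plus a workday flag.
rng = np.random.default_rng(0)
hour_splines = rng.random((5, 4))
workday = rng.integers(0, 2, size=(5, 1))
X = np.hstack([hour_splines, workday])

# interaction_only=True keeps only pairwise products (no squares), so each
# hour spline gets multiplied by the workday indicator, letting a linear
# model learn a different hourly profile for working vs. non-working days.
interactions = PolynomialFeatures(
    degree=2, interaction_only=True, include_bias=False
).fit_transform(X)
print(interactions.shape)  # 5 original features + C(5, 2) = 10 products -> 15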

dev/_downloads/8975399471ae75debd0b26fbe3013719/plot_svm_kernels.py

Lines changed: 1 addition & 1 deletion
@@ -184,7 +184,7 @@ def plot_training_data_with_decision_boundary(kernel):
 # (`gamma`) controls the influence of each individual training sample on the
 # decision boundary and :math:`{r}` is the bias term (`coef0`) that shifts the
 # data up or down. Here, we use the default value for the degree of the
-# polynomal in the kernel funcion (`degree=3`). When `coef0=0` (the default),
+# polynomial in the kernel function (`degree=3`). When `coef0=0` (the default),
 # the data is only transformed, but no additional dimension is added. Using a
 # polynomial kernel is equivalent to creating
 # :class:`~sklearn.preprocessing.PolynomialFeatures` and then fitting a

dev/_downloads/93cd12369459b2e432d0a2665e19ef8a/plot_voting_probas.py

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
 three different classifiers and averaged by the
 :class:`~ensemble.VotingClassifier`.
 
-First, three examplary classifiers are initialized
+First, three exemplary classifiers are initialized
 (:class:`~linear_model.LogisticRegression`, :class:`~naive_bayes.GaussianNB`,
 and :class:`~ensemble.RandomForestClassifier`) and used to initialize a
 soft-voting :class:`~ensemble.VotingClassifier` with weights `[1, 1, 5]`, which
