
Commit 66babb2 (parent: be0a1cd)

Pushing the docs to dev/ for branch: master, commit 14f5302b7000e9096de93beef37dcdb08f55f128

File tree: 1,249 files changed (+4743, -4655 lines)


dev/_downloads/0469b1db532e2049dcabff76dcfa3407/plot_cv_indices.py

Lines changed: 2 additions & 1 deletion

@@ -103,7 +103,8 @@ def plot_cv_indices(cv, X, y, group, ax, n_splits, lw=10):
 
 
 ###############################################################################
-# Let's see how it looks for the `KFold` cross-validation object:
+# Let's see how it looks for the :class:`~sklearn.model_selection.KFold`
+# cross-validation object:
 
 fig, ax = plt.subplots()
 cv = KFold(n_splits)
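For context on what the `KFold` object in this hunk produces: it partitions sample indices into consecutive folds, each serving once as the test set. A minimal stdlib-only sketch of the splitting logic (an illustration, not sklearn's implementation):

```python
def kfold_indices(n_samples, n_splits):
    """Yield (train, test) index lists, mimicking an unshuffled KFold."""
    fold_sizes = [n_samples // n_splits] * n_splits
    for i in range(n_samples % n_splits):
        fold_sizes[i] += 1  # spread the remainder over the first folds
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if not start <= i < start + size]
        yield train, test
        start += size

# 10 samples, 5 folds: each fold holds out 2 consecutive samples.
splits = list(kfold_indices(10, 5))
```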

dev/_downloads/0f2070eb0ba0c1cd77d1ae6069402bea/plot_random_multilabel_dataset.ipynb

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Plot randomly generated multilabel dataset\n\n\nThis illustrates the `datasets.make_multilabel_classification` dataset\ngenerator. Each sample consists of counts of two features (up to 50 in\ntotal), which are differently distributed in each of two classes.\n\nPoints are labeled as follows, where Y means the class is present:\n\n ===== ===== ===== ======\n 1 2 3 Color\n ===== ===== ===== ======\n Y N N Red\n N Y N Blue\n N N Y Yellow\n Y Y N Purple\n Y N Y Orange\n Y Y N Green\n Y Y Y Brown\n ===== ===== ===== ======\n\nA star marks the expected sample for each class; its size reflects the\nprobability of selecting that class label.\n\nThe left and right examples highlight the ``n_labels`` parameter:\nmore of the samples in the right plot have 2 or 3 labels.\n\nNote that this two-dimensional example is very degenerate:\ngenerally the number of features would be much greater than the\n\"document length\", while here we have much larger documents than vocabulary.\nSimilarly, with ``n_classes > n_features``, it is much less likely that a\nfeature distinguishes a particular class.\n"
+"\n# Plot randomly generated multilabel dataset\n\n\nThis illustrates the :func:`~sklearn.datasets.make_multilabel_classification`\ndataset generator. Each sample consists of counts of two features (up to 50 in\ntotal), which are differently distributed in each of two classes.\n\nPoints are labeled as follows, where Y means the class is present:\n\n ===== ===== ===== ======\n 1 2 3 Color\n ===== ===== ===== ======\n Y N N Red\n N Y N Blue\n N N Y Yellow\n Y Y N Purple\n Y N Y Orange\n Y Y N Green\n Y Y Y Brown\n ===== ===== ===== ======\n\nA star marks the expected sample for each class; its size reflects the\nprobability of selecting that class label.\n\nThe left and right examples highlight the ``n_labels`` parameter:\nmore of the samples in the right plot have 2 or 3 labels.\n\nNote that this two-dimensional example is very degenerate:\ngenerally the number of features would be much greater than the\n\"document length\", while here we have much larger documents than vocabulary.\nSimilarly, with ``n_classes > n_features``, it is much less likely that a\nfeature distinguishes a particular class.\n"
 ]
 },
 {
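The generator this docstring describes draws, per sample, a set of labels and then feature counts whose distribution depends on those labels. A loose stdlib-only caricature of that idea (the function and parameter names here are hypothetical, not sklearn's API):

```python
import random

def toy_multilabel(n_samples, n_classes=3, p_label=0.5, seed=0):
    """Each sample: a label set, plus two counts skewed by which labels are on."""
    rng = random.Random(seed)
    X, Y = [], []
    for _ in range(n_samples):
        labels = tuple(c for c in range(1, n_classes + 1) if rng.random() < p_label)
        # classes 1 and 2 inflate feature 1; classes 2 and 3 inflate feature 2
        f1 = sum(rng.randint(1, 10) for c in labels if c in (1, 2))
        f2 = sum(rng.randint(1, 10) for c in labels if c in (2, 3))
        X.append((f1, f2))
        Y.append(labels)
    return X, Y

X, Y = toy_multilabel(100)
```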

dev/_downloads/18e2721d4cbbd390f886d71f471ce223/plot_species_distribution_modeling.py

Lines changed: 2 additions & 3 deletions

@@ -9,9 +9,8 @@
 mammals given past observations and 14 environmental
 variables. Since we have only positive examples (there are
 no unsuccessful observations), we cast this problem as a
-density estimation problem and use the `OneClassSVM` provided
-by the package `sklearn.svm` as our modeling tool.
-The dataset is provided by Phillips et. al. (2006).
+density estimation problem and use the :class:`sklearn.svm.OneClassSVM`
+as our modeling tool. The dataset is provided by Phillips et. al. (2006).
 If available, the example uses
 `basemap <https://matplotlib.org/basemap/>`_
 to plot the coast lines and national boundaries of South America.
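The docstring change above concerns the positives-only, density-estimation framing. As a toy illustration of scoring a point by its density around positive examples (a hand-rolled Gaussian-kernel score, not `OneClassSVM`):

```python
import math

def density_score(positives, x, bandwidth=1.0):
    """Mean Gaussian-kernel similarity of point x to the positive examples."""
    return sum(
        math.exp(-sum((a - b) ** 2 for a, b in zip(p, x)) / (2 * bandwidth ** 2))
        for p in positives
    ) / len(positives)

obs = [(0.0, 0.0), (0.1, -0.1), (-0.2, 0.1)]  # observed presences only
near = density_score(obs, (0.0, 0.05))
far = density_score(obs, (5.0, 5.0))          # far from every observation
```

A point near the observed presences scores higher than one far away, which is the decision the one-class model makes with a learned boundary instead of a fixed threshold.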

dev/_downloads/2ae02325ffd71a3699f433ae3baecd85/plot_cv_predict.ipynb

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Plotting Cross-Validated Predictions\n\n\nThis example shows how to use `cross_val_predict` to visualize prediction\nerrors.\n"
+"\n# Plotting Cross-Validated Predictions\n\n\nThis example shows how to use\n:func:`~sklearn.model_selection.cross_val_predict` to visualize prediction\nerrors.\n"
 ]
 },
 {

dev/_downloads/336608b7fc391cd88b2587817f48ffdd/plot_cv_predict.py

Lines changed: 2 additions & 1 deletion

@@ -3,7 +3,8 @@
 Plotting Cross-Validated Predictions
 ====================================
 
-This example shows how to use `cross_val_predict` to visualize prediction
+This example shows how to use
+:func:`~sklearn.model_selection.cross_val_predict` to visualize prediction
 errors.
 
 """
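The point of `cross_val_predict`, referenced in this docstring, is that every sample's prediction comes from a model trained without that sample. A stdlib-only sketch with a deliberately trivial "predict the training mean" model:

```python
def out_of_fold_mean(y, n_splits=2):
    """Each y[i] is 'predicted' by the mean of the folds that exclude it."""
    n = len(y)
    preds = [0.0] * n
    bounds = [round(k * n / n_splits) for k in range(n_splits + 1)]
    for k in range(n_splits):
        lo, hi = bounds[k], bounds[k + 1]
        train = y[:lo] + y[hi:]          # everything outside the test fold
        mu = sum(train) / len(train)
        for i in range(lo, hi):
            preds[i] = mu                # the model never saw sample i
    return preds

# Two clusters: each half is predicted by the other half's mean.
preds = out_of_fold_mean([1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0], n_splits=2)
```

Plotting `preds` against `y` is exactly the prediction-error visualization the example builds, with a real estimator in place of the mean.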
Binary file not shown.

dev/_downloads/7e4c1ab4eea03643011a811d08c1ab91/plot_voting_probas.ipynb

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Plot class probabilities calculated by the VotingClassifier\n\n\nPlot the class probabilities of the first sample in a toy dataset\npredicted by three different classifiers and averaged by the\n`VotingClassifier`.\n\nFirst, three examplary classifiers are initialized (`LogisticRegression`,\n`GaussianNB`, and `RandomForestClassifier`) and used to initialize a\nsoft-voting `VotingClassifier` with weights `[1, 1, 5]`, which means that\nthe predicted probabilities of the `RandomForestClassifier` count 5 times\nas much as the weights of the other classifiers when the averaged probability\nis calculated.\n\nTo visualize the probability weighting, we fit each classifier on the training\nset and plot the predicted class probabilities for the first sample in this\nexample dataset.\n"
+"\n# Plot class probabilities calculated by the VotingClassifier\n\n\n.. currentmodule:: sklearn\n\nPlot the class probabilities of the first sample in a toy dataset predicted by\nthree different classifiers and averaged by the\n:class:`~ensemble.VotingClassifier`.\n\nFirst, three examplary classifiers are initialized\n(:class:`~linear_model.LogisticRegression`, :class:`~naive_bayes.GaussianNB`,\nand :class:`~ensemble.RandomForestClassifier`) and used to initialize a\nsoft-voting :class:`~ensemble.VotingClassifier` with weights `[1, 1, 5]`, which\nmeans that the predicted probabilities of the\n:class:`~ensemble.RandomForestClassifier` count 5 times as much as the weights\nof the other classifiers when the averaged probability is calculated.\n\nTo visualize the probability weighting, we fit each classifier on the training\nset and plot the predicted class probabilities for the first sample in this\nexample dataset.\n"
 ]
 },
 {
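The weights `[1, 1, 5]` in this docstring act as a weighted average over the per-classifier probability vectors. A stdlib-only sketch of that averaging step:

```python
def soft_vote(probas, weights):
    """Weighted average of class-probability vectors from several classifiers."""
    total = sum(weights)
    n_classes = len(probas[0])
    return [
        sum(w * p[c] for w, p in zip(weights, probas)) / total
        for c in range(n_classes)
    ]

# Two light voters favor class 0; the weight-5 voter favors class 1 and wins.
avg = soft_vote([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]], weights=[1, 1, 5])
```

With these numbers the averaged vector is `[2.7/7, 4.3/7]`, so the heavily weighted classifier flips the ensemble's decision to class 1 even though two of three voters prefer class 0.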

dev/_downloads/9d49033f7775e9c8e115aa32c938e827/plot_cv_indices.ipynb

Lines changed: 1 addition & 1 deletion

@@ -69,7 +69,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Let's see how it looks for the `KFold` cross-validation object:\n\n"
+"Let's see how it looks for the :class:`~sklearn.model_selection.KFold`\ncross-validation object:\n\n"
 ]
 },
 {

dev/_downloads/a9a92784a7617f5a14aa93d32f95dff7/plot_voting_regressor.ipynb

Lines changed: 1 addition & 1 deletion

@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Plot individual and voting regression predictions\n\n\nPlot individual and averaged regression predictions for Boston dataset.\n\nFirst, three exemplary regressors are initialized (`GradientBoostingRegressor`,\n`RandomForestRegressor`, and `LinearRegression`) and used to initialize a\n`VotingRegressor`.\n\nThe red starred dots are the averaged predictions.\n"
+"\n# Plot individual and voting regression predictions\n\n\n.. currentmodule:: sklearn\n\nPlot individual and averaged regression predictions for Boston dataset.\n\nFirst, three exemplary regressors are initialized\n(:class:`~ensemble.GradientBoostingRegressor`,\n:class:`~ensemble.RandomForestRegressor`, and\n:class:`~linear_model.LinearRegression`) and used to initialize a\n:class:`~ensemble.VotingRegressor`.\n\nThe red starred dots are the averaged predictions.\n"
 ]
 },
 {

dev/_downloads/acb1430b51f399d6660add7428cadb67/plot_voting_regressor.py

Lines changed: 7 additions & 3 deletions

@@ -3,11 +3,15 @@
 Plot individual and voting regression predictions
 =================================================
 
+.. currentmodule:: sklearn
+
 Plot individual and averaged regression predictions for Boston dataset.
 
-First, three exemplary regressors are initialized (`GradientBoostingRegressor`,
-`RandomForestRegressor`, and `LinearRegression`) and used to initialize a
-`VotingRegressor`.
+First, three exemplary regressors are initialized
+(:class:`~ensemble.GradientBoostingRegressor`,
+:class:`~ensemble.RandomForestRegressor`, and
+:class:`~linear_model.LinearRegression`) and used to initialize a
+:class:`~ensemble.VotingRegressor`.
 
 The red starred dots are the averaged predictions.
 
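For the regression counterpart: the "averaged predictions" this docstring mentions are, in the unweighted case, just the per-sample mean of the individual regressors' outputs. A stdlib-only sketch with hypothetical prediction arrays:

```python
def vote_regress(per_model_preds):
    """Column-wise mean across models: one averaged prediction per sample."""
    n_models = len(per_model_preds)
    return [sum(col) / n_models for col in zip(*per_model_preds)]

# Three hypothetical regressors' predictions for four samples:
gbr = [2.0, 2.5, 3.0, 3.5]
rf = [2.2, 2.4, 3.1, 3.6]
lr = [1.8, 2.6, 2.9, 3.4]
avg = vote_regress([gbr, rf, lr])
```

The `avg` values are what the example draws as red starred dots alongside each model's own predictions.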
