
Commit 19762de

Pushing the docs to dev/ for branch: master, commit 00bc681e1045541fdcecfd3b22e33cfce07ffec8
1 parent ac9e838 commit 19762de


1,233 files changed: +3770 additions, -3770 deletions


dev/_downloads/2108844cb1b17bae9a6b4c0b0fb3b211/plot_isolation_forest.py

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 IsolationForest example
 ==========================================
 
-An example using :class:`sklearn.ensemble.IsolationForest` for anomaly
+An example using :class:`~sklearn.ensemble.IsolationForest` for anomaly
 detection.
 
 The IsolationForest 'isolates' observations by randomly selecting a feature
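
The recurring edit throughout this commit prefixes each Sphinx `:class:` target with `~`: the tilde makes Sphinx render only the final component (e.g. ``IsolationForest``) as the link text rather than the full dotted path, while the cross-reference still resolves to the same API page.

For readers unfamiliar with the estimator this first example documents, a minimal usage sketch follows; the data and parameter values are illustrative and not taken from the example itself.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.RandomState(42)
    X_train = rng.randn(200, 2)                      # mostly "normal" points
    X_outliers = rng.uniform(low=-6, high=6, size=(10, 2))

    # Each tree isolates points by recursively picking a random feature and a
    # random split value; anomalies need fewer splits to end up alone.
    clf = IsolationForest(n_estimators=100, contamination=0.05, random_state=42)
    clf.fit(X_train)
    print(clf.predict(X_outliers))                   # -1 = anomaly, 1 = inlier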

dev/_downloads/21fba8cadc21699d2b4699b4ccdad10f/plot_beta_divergence.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "\n# Beta-divergence loss functions\n\n\nA plot that compares the various Beta-divergence loss functions supported by\nthe Multiplicative-Update ('mu') solver in :class:`sklearn.decomposition.NMF`.\n"
+    "\n# Beta-divergence loss functions\n\n\nA plot that compares the various Beta-divergence loss functions supported by\nthe Multiplicative-Update ('mu') solver in :class:`~sklearn.decomposition.NMF`.\n"
   ]
  },
  {

dev/_downloads/2a14e362a70d246e83fa6a89ca069cee/plot_sparse_coding.py

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 
 Transform a signal as a sparse combination of Ricker wavelets. This example
 visually compares different sparse coding methods using the
-:class:`sklearn.decomposition.SparseCoder` estimator. The Ricker (also known
+:class:`~sklearn.decomposition.SparseCoder` estimator. The Ricker (also known
 as Mexican hat or the second derivative of a Gaussian) is not a particularly
 good kernel to represent piecewise constant signals like this one. It can
 therefore be seen how much adding different widths of atoms matters and it
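
The SparseCoder usage pattern this example relies on is, roughly: build a dictionary whose rows are the atoms, then call ``transform`` to obtain sparse codes. A minimal sketch with a random dictionary (the real example uses Ricker wavelets of several widths):

    import numpy as np
    from sklearn.decomposition import SparseCoder

    rng = np.random.RandomState(0)
    D = rng.randn(15, 64)                            # 15 atoms of length 64
    D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm atoms

    signal = rng.randn(1, 64)
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    code = coder.transform(signal)                   # shape (1, 15), mostly zeros
    reconstruction = code @ D                        # sparse approximation of the signal
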
Binary file not shown.

dev/_downloads/3c9b7bcd0b16f172ac12ffad61f3b5f0/plot_stack_predictors.ipynb

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "Make pipeline to preprocess the data\n#############################################################################\n\n Before we can use Ames dataset we still need to do some preprocessing.\n First, the dataset has many missing values. To impute them, we will exchange\n categorical missing values with the new category 'missing' while the\n numerical missing values with the 'mean' of the column. We will also encode\n the categories with either :class:`sklearn.preprocessing.OneHotEncoder\n <sklearn.preprocessing.OneHotEncoder>` or\n :class:`sklearn.preprocessing.OrdinalEncoder\n <sklearn.preprocessing.OrdinalEncoder>` depending for which type of model we\n will use them (linear or non-linear model). To falicitate this preprocessing\n we will make two pipelines.\n You can skip this section if your data is ready to use and does\n not need preprocessing\n\n"
+    "Make pipeline to preprocess the data\n#############################################################################\n\n Before we can use Ames dataset we still need to do some preprocessing.\n First, the dataset has many missing values. To impute them, we will exchange\n categorical missing values with the new category 'missing' while the\n numerical missing values with the 'mean' of the column. We will also encode\n the categories with either :class:`~sklearn.preprocessing.OneHotEncoder\n <sklearn.preprocessing.OneHotEncoder>` or\n :class:`~sklearn.preprocessing.OrdinalEncoder\n <sklearn.preprocessing.OrdinalEncoder>` depending for which type of model we\n will use them (linear or non-linear model). To facilitate this preprocessing\n we will make two pipelines.\n You can skip this section if your data is ready to use and does\n not need preprocessing\n\n"
   ]
  },
  {

dev/_downloads/6d09465eed1ee4ede505244049097627/plot_beta_divergence.py

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 ==============================
 
 A plot that compares the various Beta-divergence loss functions supported by
-the Multiplicative-Update ('mu') solver in :class:`sklearn.decomposition.NMF`.
+the Multiplicative-Update ('mu') solver in :class:`~sklearn.decomposition.NMF`.
 """
 import numpy as np
 import matplotlib.pyplot as plt
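
For context, the 'mu' solver is the only NMF solver that accepts beta-divergence losses other than the Frobenius norm. A minimal sketch, with illustrative data and parameter values:

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.RandomState(0)
    X = np.abs(rng.randn(100, 20))           # NMF requires non-negative input

    # beta_loss may be 'frobenius', 'kullback-leibler', 'itakura-saito' or a float;
    # values other than 'frobenius' require solver='mu'.
    model = NMF(n_components=5, solver="mu", beta_loss="kullback-leibler",
                init="random", max_iter=500, random_state=0)
    W = model.fit_transform(X)
    H = model.components_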

dev/_downloads/8452fc8dfe9850cfdaa1b758e5a2748b/plot_gradient_boosting_early_stopping.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "\n# Early stopping of Gradient Boosting\n\n\nGradient boosting is an ensembling technique where several weak learners\n(regression trees) are combined to yield a powerful single model, in an\niterative fashion.\n\nEarly stopping support in Gradient Boosting enables us to find the least number\nof iterations which is sufficient to build a model that generalizes well to\nunseen data.\n\nThe concept of early stopping is simple. We specify a ``validation_fraction``\nwhich denotes the fraction of the whole dataset that will be kept aside from\ntraining to assess the validation loss of the model. The gradient boosting\nmodel is trained using the training set and evaluated using the validation set.\nWhen each additional stage of regression tree is added, the validation set is\nused to score the model. This is continued until the scores of the model in\nthe last ``n_iter_no_change`` stages do not improve by atleast `tol`. After\nthat the model is considered to have converged and further addition of stages\nis \"stopped early\".\n\nThe number of stages of the final model is available at the attribute\n``n_estimators_``.\n\nThis example illustrates how the early stopping can used in the\n:class:`sklearn.ensemble.GradientBoostingClassifier` model to achieve\nalmost the same accuracy as compared to a model built without early stopping\nusing many fewer estimators. This can significantly reduce training time,\nmemory usage and prediction latency.\n"
+    "\n# Early stopping of Gradient Boosting\n\n\nGradient boosting is an ensembling technique where several weak learners\n(regression trees) are combined to yield a powerful single model, in an\niterative fashion.\n\nEarly stopping support in Gradient Boosting enables us to find the least number\nof iterations which is sufficient to build a model that generalizes well to\nunseen data.\n\nThe concept of early stopping is simple. We specify a ``validation_fraction``\nwhich denotes the fraction of the whole dataset that will be kept aside from\ntraining to assess the validation loss of the model. The gradient boosting\nmodel is trained using the training set and evaluated using the validation set.\nWhen each additional stage of regression tree is added, the validation set is\nused to score the model. This is continued until the scores of the model in\nthe last ``n_iter_no_change`` stages do not improve by atleast `tol`. After\nthat the model is considered to have converged and further addition of stages\nis \"stopped early\".\n\nThe number of stages of the final model is available at the attribute\n``n_estimators_``.\n\nThis example illustrates how the early stopping can used in the\n:class:`~sklearn.ensemble.GradientBoostingClassifier` model to achieve\nalmost the same accuracy as compared to a model built without early stopping\nusing many fewer estimators. This can significantly reduce training time,\nmemory usage and prediction latency.\n"
   ]
  },
  {

dev/_downloads/be911e971b87fe80b6899069dbcfb737/plot_gradient_boosting_early_stopping.py

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@
 ``n_estimators_``.
 
 This example illustrates how the early stopping can used in the
-:class:`sklearn.ensemble.GradientBoostingClassifier` model to achieve
+:class:`~sklearn.ensemble.GradientBoostingClassifier` model to achieve
 almost the same accuracy as compared to a model built without early stopping
 using many fewer estimators. This can significantly reduce training time,
 memory usage and prediction latency.
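
The parameters named in this description map onto the estimator directly; a minimal early-stopping sketch, with illustrative values:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=2000, random_state=0)

    # Setting n_iter_no_change holds out validation_fraction of the training data;
    # boosting stops once the validation score has not improved by at least `tol`
    # for n_iter_no_change consecutive stages.
    clf = GradientBoostingClassifier(n_estimators=500, validation_fraction=0.1,
                                     n_iter_no_change=10, tol=1e-4, random_state=0)
    clf.fit(X, y)
    print(clf.n_estimators_)    # stages actually fitted, usually far fewer than 500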

dev/_downloads/c536eb92f539255e80e2b3ef5200e7a1/plot_gradient_boosting_regression.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "\n# Gradient Boosting regression\n\n\nThis example demonstrates Gradient Boosting to produce a predictive\nmodel from an ensemble of weak predictive models. Gradient boosting can be used\nfor regression and classification problems. Here, we will train a model to\ntackle a diabetes regression task. We will obtain the results from\n:class:`~sklearn.ensemble.GradientBoostingRegressor` with least squares loss\nand 500 regression trees of depth 4.\n\nNote: For larger datasets (n_samples >= 10000), please refer to\n:class:`sklearn.ensemble.HistGradientBoostingRegressor`.\n"
+    "\n# Gradient Boosting regression\n\n\nThis example demonstrates Gradient Boosting to produce a predictive\nmodel from an ensemble of weak predictive models. Gradient boosting can be used\nfor regression and classification problems. Here, we will train a model to\ntackle a diabetes regression task. We will obtain the results from\n:class:`~sklearn.ensemble.GradientBoostingRegressor` with least squares loss\nand 500 regression trees of depth 4.\n\nNote: For larger datasets (n_samples >= 10000), please refer to\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor`.\n"
   ]
  },
  {
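
A minimal sketch matching this description (500 trees of depth 4 on the diabetes data); the learning rate and split are illustrative, and omitting `loss` keeps the least-squares default:

    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=13)

    # 500 shallow trees, as in the example description.
    reg = GradientBoostingRegressor(n_estimators=500, max_depth=4,
                                    learning_rate=0.01, random_state=13)
    reg.fit(X_train, y_train)
    print(mean_squared_error(y_test, reg.predict(X_test)))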

dev/_downloads/c6ccb1a9c5f82321f082e9767a2706f3/plot_stack_predictors.py

Lines changed: 3 additions & 3 deletions
@@ -76,11 +76,11 @@ def load_ames_housing():
 # First, the dataset has many missing values. To impute them, we will exchange
 # categorical missing values with the new category 'missing' while the
 # numerical missing values with the 'mean' of the column. We will also encode
-# the categories with either :class:`sklearn.preprocessing.OneHotEncoder
+# the categories with either :class:`~sklearn.preprocessing.OneHotEncoder
 # <sklearn.preprocessing.OneHotEncoder>` or
-# :class:`sklearn.preprocessing.OrdinalEncoder
+# :class:`~sklearn.preprocessing.OrdinalEncoder
 # <sklearn.preprocessing.OrdinalEncoder>` depending for which type of model we
-# will use them (linear or non-linear model). To falicitate this preprocessing
+# will use them (linear or non-linear model). To facilitate this preprocessing
 # we will make two pipelines.
 # You can skip this section if your data is ready to use and does
 # not need preprocessing
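
A minimal sketch of the two preprocessors this comment describes, assuming placeholder column names (the real example derives them from the Ames dataset):

    from sklearn.compose import make_column_transformer
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

    cat_cols = ["Neighborhood", "Heating"]   # placeholder categorical columns
    num_cols = ["LotArea", "YearBuilt"]      # placeholder numerical columns

    # For the linear model: impute missing categories with 'missing', one-hot
    # encode, and impute missing numbers with the column mean.
    linear_preprocessor = make_column_transformer(
        (make_pipeline(SimpleImputer(strategy="constant", fill_value="missing"),
                       OneHotEncoder(handle_unknown="ignore")), cat_cols),
        (SimpleImputer(strategy="mean"), num_cols),
    )

    # For the tree-based model: an ordinal encoding of the categories is enough.
    tree_preprocessor = make_column_transformer(
        (make_pipeline(SimpleImputer(strategy="constant", fill_value="missing"),
                       OrdinalEncoder()), cat_cols),
        (SimpleImputer(strategy="mean"), num_cols),
    )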
