
Commit a362c04

Pushing the docs to dev/ for branch: main, commit ee5a1b69d1dfa99635a10f0a5b54ec263cedf866
1 parent 72f3ec7 commit a362c04

File tree: 1,263 files changed (+4514 additions, −4514 deletions)


dev/_downloads/023324c27491610e7c0ccff87c59abf9/plot_kernel_pca.py

Lines changed: 1 addition & 1 deletion
@@ -152,7 +152,7 @@
 # :class:`~sklearn.decomposition.KernelPCA`.
 #
 # Indeed, :meth:`~sklearn.decomposition.KernelPCA.inverse_transform` cannot
-# rely on an analytical back-projection and thus an extact reconstruction.
+# rely on an analytical back-projection and thus an exact reconstruction.
 # Instead, a :class:`~sklearn.kernel_ridge.KernelRidge` is internally trained
 # to learn a mapping from the kernalized PCA basis to the original feature
 # space. This method therefore comes with an approximation introducing small
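
The corrected comment describes how KernelPCA's inverse_transform works: an internal KernelRidge learns an approximate back-projection. A minimal sketch of that API follows; the dataset, kernel, and regularization values are illustrative, not the example's actual settings.

# Sketch: approximate reconstruction through KernelPCA.inverse_transform.
# Parameter values here are illustrative, not the example's settings.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, _ = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# fit_inverse_transform=True makes KernelPCA train an internal KernelRidge so
# that inverse_transform can map points back to the original feature space.
kpca = KernelPCA(
    n_components=2, kernel="rbf", gamma=10,
    fit_inverse_transform=True, alpha=0.1, random_state=0,
)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)

# The back-projection is learned rather than analytical, so it is approximate.
print("mean reconstruction error:", np.mean((X - X_back) ** 2))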

dev/_downloads/067cd5d39b097d2c49dd98f563dac13a/plot_iterative_imputer_variants_comparison.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Imputing missing values with variants of IterativeImputer\n\n.. currentmodule:: sklearn\n\nThe :class:`~impute.IterativeImputer` class is very flexible - it can be\nused with a variety of estimators to do round-robin regression, treating every\nvariable as an output in turn.\n\nIn this example we compare some estimators for the purpose of missing feature\nimputation with :class:`~impute.IterativeImputer`:\n\n* :class:`~linear_model.BayesianRidge`: regularized linear regression\n* :class:`~tree.RandomForestRegressor`: Forests of randomized trees regression\n* :func:`~pipeline.make_pipeline`(:class:`~kernel_approximation.Nystroem`,\n :class:`~linear_model.Ridge`): a pipeline with the expansion of a degree 2\n polynomial kernel and regularized linear regression\n* :class:`~neighbors.KNeighborsRegressor`: comparable to other KNN\n imputation approaches\n\nOf particular interest is the ability of\n:class:`~impute.IterativeImputer` to mimic the behavior of missForest, a\npopular imputation package for R.\n\nNote that :class:`~neighbors.KNeighborsRegressor` is different from KNN\nimputation, which learns from samples with missing values by using a distance\nmetric that accounts for missing values, rather than imputing them.\n\nThe goal is to compare different estimators to see which one is best for the\n:class:`~impute.IterativeImputer` when using a\n:class:`~linear_model.BayesianRidge` estimator on the California housing\ndataset with a single value randomly removed from each row.\n\nFor this particular pattern of missing values we see that\n:class:`~linear_model.BayesianRidge` and\n:class:`~ensemble.RandomForestRegressor` give the best results.\n\nIt shoud be noted that some estimators such as\n:class:`~ensemble.HistGradientBoostingRegressor` can natively deal with\nmissing features and are often recommended over building pipelines with\ncomplex and costly missing values imputation strategies.\n"
+"\n# Imputing missing values with variants of IterativeImputer\n\n.. currentmodule:: sklearn\n\nThe :class:`~impute.IterativeImputer` class is very flexible - it can be\nused with a variety of estimators to do round-robin regression, treating every\nvariable as an output in turn.\n\nIn this example we compare some estimators for the purpose of missing feature\nimputation with :class:`~impute.IterativeImputer`:\n\n* :class:`~linear_model.BayesianRidge`: regularized linear regression\n* :class:`~tree.RandomForestRegressor`: Forests of randomized trees regression\n* :func:`~pipeline.make_pipeline`(:class:`~kernel_approximation.Nystroem`,\n :class:`~linear_model.Ridge`): a pipeline with the expansion of a degree 2\n polynomial kernel and regularized linear regression\n* :class:`~neighbors.KNeighborsRegressor`: comparable to other KNN\n imputation approaches\n\nOf particular interest is the ability of\n:class:`~impute.IterativeImputer` to mimic the behavior of missForest, a\npopular imputation package for R.\n\nNote that :class:`~neighbors.KNeighborsRegressor` is different from KNN\nimputation, which learns from samples with missing values by using a distance\nmetric that accounts for missing values, rather than imputing them.\n\nThe goal is to compare different estimators to see which one is best for the\n:class:`~impute.IterativeImputer` when using a\n:class:`~linear_model.BayesianRidge` estimator on the California housing\ndataset with a single value randomly removed from each row.\n\nFor this particular pattern of missing values we see that\n:class:`~linear_model.BayesianRidge` and\n:class:`~ensemble.RandomForestRegressor` give the best results.\n\nIt should be noted that some estimators such as\n:class:`~ensemble.HistGradientBoostingRegressor` can natively deal with\nmissing features and are often recommended over building pipelines with\ncomplex and costly missing values imputation strategies.\n"
 ]
 },
 {
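
The corrected markdown cell describes plugging different estimators into IterativeImputer for round-robin imputation. A minimal sketch of that setup follows; the synthetic data, hyperparameters, and error metric are illustrative, not the example's California-housing experiment.

# Sketch: swapping estimators into IterativeImputer, as the markdown describes.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline

estimators = {
    "BayesianRidge": BayesianRidge(),
    "RandomForest": RandomForestRegressor(n_estimators=50, random_state=0),
    "Nystroem + Ridge": make_pipeline(
        Nystroem(kernel="polynomial", degree=2, random_state=0), Ridge(alpha=1e3)
    ),
    "KNeighbors": KNeighborsRegressor(n_neighbors=15),
}

rng = np.random.RandomState(0)
X = rng.rand(100, 5)
X_missing = X.copy()
X_missing[rng.choice(100, 40), rng.choice(5, 40)] = np.nan  # inject missing values

for name, est in estimators.items():
    imputer = IterativeImputer(estimator=est, max_iter=10, random_state=0)
    X_imputed = imputer.fit_transform(X_missing)
    print(name, "mean absolute imputation error:", np.abs(X_imputed - X).mean())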

dev/_downloads/1b8827af01c9a70017a4739bcf2e21a8/plot_gpr_co2.py

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@
 #
 # We will derive a dataset from the Mauna Loa Observatory that collected air
 # samples. We are interested in estimating the concentration of CO2 and
-# extrapolate it for futher year. First, we load the original dataset available
+# extrapolate it for further year. First, we load the original dataset available
 # in OpenML.
 from sklearn.datasets import fetch_openml
 
@@ -208,7 +208,7 @@
 gaussian_process.kernel_
 
 # %%
-# Thus, most of the target signal, with the mean substracted, is explained by a
+# Thus, most of the target signal, with the mean subtracted, is explained by a
 # long-term rising trend for ~45 ppm and a length-scale of ~52 years. The
 # periodic component has an amplitude of ~2.6ppm, a decay time of ~90 years and
 # a length-scale of ~1.5. The long decay time indicates that we have a
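
The corrected comments describe a Gaussian process fitted to the mean-subtracted CO2 signal with a long-term trend plus a periodic component. A minimal sketch of that kind of composite kernel follows; the synthetic series and all kernel hyperparameters are illustrative, not the Mauna Loa example's fitted values.

# Sketch: trend + seasonal composite GP kernel; hyperparameters illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

long_term = 50.0**2 * RBF(length_scale=50.0)  # smooth long-term rising trend
seasonal = 2.0**2 * RBF(length_scale=100.0) * ExpSineSquared(
    length_scale=1.0, periodicity=1.0  # yearly periodicity
)
noise = WhiteKernel(noise_level=0.1)
kernel = long_term + seasonal + noise

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 200).reshape(-1, 1)  # time in years
y = 2.5 * X.ravel() + np.sin(2 * np.pi * X.ravel()) + 0.1 * rng.normal(size=200)

# Subtracting the mean (as the example's comment discusses) lets the kernel
# model the residual structure; kernel_ shows the learned hyperparameters.
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(X, y - y.mean())
print(gp.kernel_)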

dev/_downloads/4c773264381a88c3d3933952c6040058/plot_swissroll.ipynb

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Computing the LLE and t-SNE embeddings, we find that LLE seems to unroll the\nSwiss Roll pretty effectively. t-SNE on the other hand, is able\nto preserve the general structure of the data, but, poorly represents the\ncontinous nature of our original data. Instead, it seems to unnecessarily\nclump sections of points together.\n\n"
+"Computing the LLE and t-SNE embeddings, we find that LLE seems to unroll the\nSwiss Roll pretty effectively. t-SNE on the other hand, is able\nto preserve the general structure of the data, but, poorly represents the\ncontinuous nature of our original data. Instead, it seems to unnecessarily\nclump sections of points together.\n\n"
 ]
 },
 {

dev/_downloads/51833337bfc73d152b44902e5baa50ff/plot_lasso_lars_ic.ipynb

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"To be in line with the defintion in [ZHT2007]_, we need to rescale the\nAIC and the BIC. Indeed, Zou et al. are ignoring some constant terms\ncompared to the original definition of AIC derived from the maximum\nlog-likelihood of a linear model. You can refer to\n`mathematical detail section for the User Guide <lasso_lars_ic>`.\n\n"
+"To be in line with the definition in [ZHT2007]_, we need to rescale the\nAIC and the BIC. Indeed, Zou et al. are ignoring some constant terms\ncompared to the original definition of AIC derived from the maximum\nlog-likelihood of a linear model. You can refer to\n`mathematical detail section for the User Guide <lasso_lars_ic>`.\n\n"
 ]
 },
 {
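
The corrected cell refers to rescaling the AIC/BIC reported by LassoLarsIC so they match the [ZHT2007]_ convention. A minimal sketch of fitting both criteria and inspecting the criterion path follows; the dataset and pipeline are illustrative, and the constant-term rescaling itself is only noted, since it depends on the noise-variance estimate described in the User Guide.

# Sketch: model selection with LassoLarsIC under AIC and BIC.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LassoLarsIC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)

for criterion in ("aic", "bic"):
    model = make_pipeline(StandardScaler(), LassoLarsIC(criterion=criterion))
    model.fit(X, y)
    lasso = model[-1]
    # alpha_ is the regularization strength selected by the criterion;
    # criterion_ holds the information-criterion values along the path
    # (up to the constant terms that [ZHT2007]_ drops).
    print(criterion, "selected alpha:", lasso.alpha_,
          "min criterion:", lasso.criterion_.min())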

dev/_downloads/54823a4305997fc1281f34ce676fb43e/plot_iterative_imputer_variants_comparison.py

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@
 :class:`~linear_model.BayesianRidge` and
 :class:`~ensemble.RandomForestRegressor` give the best results.
 
-It shoud be noted that some estimators such as
+It should be noted that some estimators such as
 :class:`~ensemble.HistGradientBoostingRegressor` can natively deal with
 missing features and are often recommended over building pipelines with
 complex and costly missing values imputation strategies.
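
The corrected sentence notes that HistGradientBoostingRegressor handles missing features natively, without an imputation step. A minimal sketch of that behavior follows; the synthetic data and parameters are illustrative.

# Sketch: HistGradientBoostingRegressor accepts NaN directly, so no imputer
# is needed; the data here is synthetic.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(200)
X[rng.rand(200, 3) < 0.2] = np.nan  # roughly 20% missing values

# Samples with NaN are routed to whichever child of each split minimizes the
# loss, so the model trains on incomplete rows without imputation.
model = HistGradientBoostingRegressor(random_state=0).fit(X, y)
print("training R^2 with missing values:", model.score(X, y))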

dev/_downloads/6ebaebc92484478ceb119d24fe9df21c/plot_pipeline_display.ipynb

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Displaying a Pipeline Chaining Multiple Preprocessing Steps & Classifier\n This section constructs a :class:`~sklearn.pipeline.Pipeline` with multiple\n preprocessing steps, :class:`~sklearn.preprocessing.PolynomialFeatures` and\n :class:`~sklearn.preprocessing.StandardScaler`, and a classifer step,\n :class:`~sklearn.linear_model.LogisticRegression`, and displays its visual\n representation.\n\n"
+"## Displaying a Pipeline Chaining Multiple Preprocessing Steps & Classifier\n This section constructs a :class:`~sklearn.pipeline.Pipeline` with multiple\n preprocessing steps, :class:`~sklearn.preprocessing.PolynomialFeatures` and\n :class:`~sklearn.preprocessing.StandardScaler`, and a classifier step,\n :class:`~sklearn.linear_model.LogisticRegression`, and displays its visual\n representation.\n\n"
 ]
 },
 {
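
The corrected markdown describes a pipeline chaining PolynomialFeatures, StandardScaler, and a LogisticRegression classifier whose diagram is then displayed. A minimal sketch follows; the step names and degree are illustrative, not necessarily the example's.

# Sketch: build the described pipeline; in a notebook,
# set_config(display="diagram") renders it as an HTML diagram.
from sklearn import set_config
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

pipeline = Pipeline(
    steps=[
        ("polynomial_features", PolynomialFeatures(degree=2)),
        ("standard_scaler", StandardScaler()),
        ("classifier", LogisticRegression()),
    ]
)

set_config(display="diagram")
pipeline  # evaluated in a notebook cell, this displays the visual representation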

dev/_downloads/7f0a2318ad82288d649c688011f52618/plot_swissroll.py

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@
 # Computing the LLE and t-SNE embeddings, we find that LLE seems to unroll the
 # Swiss Roll pretty effectively. t-SNE on the other hand, is able
 # to preserve the general structure of the data, but, poorly represents the
-# continous nature of our original data. Instead, it seems to unnecessarily
+# continuous nature of our original data. Instead, it seems to unnecessarily
 # clump sections of points together.
 
 sr_lle, sr_err = manifold.locally_linear_embedding(
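
The hunk's last context line shows the start of the locally_linear_embedding call that the corrected comment refers to. A minimal sketch of the LLE versus t-SNE comparison follows; the n_neighbors, n_components, and perplexity values are illustrative, not necessarily the example's.

# Sketch: embed a Swiss Roll with LLE and t-SNE; parameters illustrative.
from sklearn import datasets, manifold

sr_points, sr_color = datasets.make_swiss_roll(n_samples=1500, random_state=0)

# LLE tends to "unroll" the manifold into a flat 2D sheet.
sr_lle, sr_err = manifold.locally_linear_embedding(
    sr_points, n_neighbors=12, n_components=2
)

# t-SNE preserves local neighborhoods but can break up the continuous roll.
sr_tsne = manifold.TSNE(n_components=2, perplexity=40, random_state=0).fit_transform(
    sr_points
)

print("LLE reconstruction error:", sr_err)
print("t-SNE embedding shape:", sr_tsne.shape)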
