
Commit 415ef2f

Pushing the docs to dev/ for branch: master, commit 7ee8f97e94044e28d4ba5c0299e5544b4331fd22
1 parent 28a5761 commit 415ef2f

1,060 files changed: +3,206 / -3,206 lines


dev/_downloads/plot_calibration_curve.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Probability Calibration curves\n\n\nWhen performing classification one often wants to predict not only the class\nlabel, but also the associated probability. This probability gives some\nkind of confidence on the prediction. This example demonstrates how to display\nhow well calibrated the predicted probabilities are and how to calibrate an\nuncalibrated classifier.\n\nThe experiment is performed on an artificial dataset for binary classification\nwith 100.000 samples (1.000 of them are used for model fitting) with 20\nfeatures. Of the 20 features, only 2 are informative and 10 are redundant. The\nfirst figure shows the estimated probabilities obtained with logistic\nregression, Gaussian naive Bayes, and Gaussian naive Bayes with both isotonic\ncalibration and sigmoid calibration. The calibration performance is evaluated\nwith Brier score, reported in the legend (the smaller the better). One can\nobserve here that logistic regression is well calibrated while raw Gaussian\nnaive Bayes performs very badly. This is because of the redundant features\nwhich violate the assumption of feature-independence and result in an overly\nconfident classifier, which is indicated by the typical transposed-sigmoid\ncurve.\n\nCalibration of the probabilities of Gaussian naive Bayes with isotonic\nregression can fix this issue as can be seen from the nearly diagonal\ncalibration curve. Sigmoid calibration also improves the brier score slightly,\nalbeit not as strongly as the non-parametric isotonic regression. This can be\nattributed to the fact that we have plenty of calibration data such that the\ngreater flexibility of the non-parametric model can be exploited.\n\nThe second figure shows the calibration curve of a linear support-vector\nclassifier (LinearSVC). LinearSVC shows the opposite behavior as Gaussian\nnaive Bayes: the calibration curve has a sigmoid curve, which is typical for\nan under-confident classifier. In the case of LinearSVC, this is caused by the\nmargin property of the hinge loss, which lets the model focus on hard samples\nthat are close to the decision boundary (the support vectors).\n\nBoth kinds of calibration can fix this issue and yield nearly identical\nresults. This shows that sigmoid calibration can deal with situations where\nthe calibration curve of the base classifier is sigmoid (e.g., for LinearSVC)\nbut not where it is transposed-sigmoid (e.g., Gaussian naive Bayes).\n\n"
+"\n# Probability Calibration curves\n\n\nWhen performing classification one often wants to predict not only the class\nlabel, but also the associated probability. This probability gives some\nkind of confidence on the prediction. This example demonstrates how to display\nhow well calibrated the predicted probabilities are and how to calibrate an\nuncalibrated classifier.\n\nThe experiment is performed on an artificial dataset for binary classification\nwith 100,000 samples (1,000 of them are used for model fitting) with 20\nfeatures. Of the 20 features, only 2 are informative and 10 are redundant. The\nfirst figure shows the estimated probabilities obtained with logistic\nregression, Gaussian naive Bayes, and Gaussian naive Bayes with both isotonic\ncalibration and sigmoid calibration. The calibration performance is evaluated\nwith Brier score, reported in the legend (the smaller the better). One can\nobserve here that logistic regression is well calibrated while raw Gaussian\nnaive Bayes performs very badly. This is because of the redundant features\nwhich violate the assumption of feature-independence and result in an overly\nconfident classifier, which is indicated by the typical transposed-sigmoid\ncurve.\n\nCalibration of the probabilities of Gaussian naive Bayes with isotonic\nregression can fix this issue as can be seen from the nearly diagonal\ncalibration curve. Sigmoid calibration also improves the brier score slightly,\nalbeit not as strongly as the non-parametric isotonic regression. This can be\nattributed to the fact that we have plenty of calibration data such that the\ngreater flexibility of the non-parametric model can be exploited.\n\nThe second figure shows the calibration curve of a linear support-vector\nclassifier (LinearSVC). LinearSVC shows the opposite behavior as Gaussian\nnaive Bayes: the calibration curve has a sigmoid curve, which is typical for\nan under-confident classifier. In the case of LinearSVC, this is caused by the\nmargin property of the hinge loss, which lets the model focus on hard samples\nthat are close to the decision boundary (the support vectors).\n\nBoth kinds of calibration can fix this issue and yield nearly identical\nresults. This shows that sigmoid calibration can deal with situations where\nthe calibration curve of the base classifier is sigmoid (e.g., for LinearSVC)\nbut not where it is transposed-sigmoid (e.g., Gaussian naive Bayes).\n\n"
 ]
 },
 {
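
The markdown cell above describes the example's setup. A minimal, illustrative sketch of that setup, assuming scikit-learn's public API (parameter values such as random_state are assumptions for illustration, not taken from the example script):

# Minimal sketch of the experiment described in the cell above (illustrative, not the example script itself).
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# 100,000 samples with 20 features (2 informative, 10 redundant), as in the docstring.
X, y = make_classification(n_samples=100_000, n_features=20,
                           n_informative=2, n_redundant=10, random_state=42)

# 1,000 samples for model fitting; the rest held out to assess calibration.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=1_000, random_state=42)

# Raw Gaussian naive Bayes versus isotonic- and sigmoid-calibrated variants.
models = [
    ("GaussianNB", GaussianNB()),
    ("GaussianNB + isotonic", CalibratedClassifierCV(GaussianNB(), method="isotonic")),
    ("GaussianNB + sigmoid", CalibratedClassifierCV(GaussianNB(), method="sigmoid")),
]

for name, clf in models:
    clf.fit(X_train, y_train)
    prob_pos = clf.predict_proba(X_test)[:, 1]
    # Brier score: lower means better-calibrated probabilities.
    print(f"{name}: Brier score = {brier_score_loss(y_test, prob_pos):.3f}")

LinearSVC (the second figure) can be wrapped in CalibratedClassifierCV in the same way; for its uncalibrated curve the raw decision_function scores would first have to be mapped to [0, 1], since LinearSVC has no predict_proba.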

dev/_downloads/plot_calibration_curve.py

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@
 uncalibrated classifier.
 
 The experiment is performed on an artificial dataset for binary classification
-with 100.000 samples (1.000 of them are used for model fitting) with 20
+with 100,000 samples (1,000 of them are used for model fitting) with 20
 features. Of the 20 features, only 2 are informative and 10 are redundant. The
 first figure shows the estimated probabilities obtained with logistic
 regression, Gaussian naive Bayes, and Gaussian naive Bayes with both isotonic
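
The reliability diagrams described in this docstring can be drawn from calibration_curve output. A minimal plotting sketch, reusing y_test and prob_pos from the sketch above (matplotlib usage and figure labels are illustrative):

import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve

# Reliability-diagram coordinates: observed frequency of positives per bin
# versus the mean predicted probability in that bin.
frac_of_positives, mean_predicted_value = calibration_curve(y_test, prob_pos, n_bins=10)

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")  # diagonal reference
ax.plot(mean_predicted_value, frac_of_positives, "s-", label="Classifier")
ax.set_xlabel("Mean predicted value")
ax.set_ylabel("Fraction of positives")
ax.legend(loc="lower right")
plt.show()

A well-calibrated classifier tracks this diagonal, which is why the docstring calls the isotonic-calibrated curve "nearly diagonal".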

dev/_downloads/scikit-learn-docs.pdf

11 KB
Binary file not shown.

0 commit comments
