
Commit d96bd83

Pushing the docs to dev/ for branch: main, commit 72bd9f335f75bfd6386226bd6822d007c0cf77b5
1 parent 636e166 commit d96bd83


1,221 files changed: +4,371 additions, -4,372 deletions


dev/_downloads/b367e30cc681ed484e0148f4ce9eccb0/plot_calibration_multiclass.ipynb

Lines changed: 1 addition & 1 deletion
@@ -94,7 +94,7 @@
 94  94      "cell_type": "markdown",
 95  95      "metadata": {},
 96  96      "source": [
 97        -     "In the figure above, each vertex of the simplex represents\na perfectly predicted class (e.g., 1, 0, 0). The mid point\ninside the simplex represents predicting the three classes with equal\nprobability (i.e., 1/3, 1/3, 1/3). Each arrow starts at the\nuncalibrated probabilities and end with the arrow head at the calibrated\nprobability. The color of the arrow represents the true class of that test\nsample.\n\nThe uncalibrated classifier is overly confident in its predictions and\nincurs a large `log loss <log_loss>`. The calibrated classifier incurs\na lower `log loss <log_loss>` due to two factors. First, notice in the\nfigure above that the arrows generally point away from the edges of the\nsimplex, where the probability of one class is 0. Second, a large proportion\nof the arrows point towards the true class, e.g., green arrows (samples where\nthe true class is 'green') generally point towards the green vertex. This\nresults in fewer over-confident, 0 predicted probabilities and at the same\ntime an increase in the the predicted probabilities of the correct class.\nThus, the calibrated classifier produces more accurate predicted probablities\nthat incur a lower `log loss <log_loss>`\n\nWe can show this objectively by comparing the `log loss <log_loss>` of\nthe uncalibrated and calibrated classifiers on the predictions of the 1000\ntest samples. Note that an alternative would have been to increase the number\nof base estimators (trees) of the\n:class:`~sklearn.ensemble.RandomForestClassifier` which would have resulted\nin a similar decrease in `log loss <log_loss>`.\n\n"
       97  +     "In the figure above, each vertex of the simplex represents\na perfectly predicted class (e.g., 1, 0, 0). The mid point\ninside the simplex represents predicting the three classes with equal\nprobability (i.e., 1/3, 1/3, 1/3). Each arrow starts at the\nuncalibrated probabilities and end with the arrow head at the calibrated\nprobability. The color of the arrow represents the true class of that test\nsample.\n\nThe uncalibrated classifier is overly confident in its predictions and\nincurs a large `log loss <log_loss>`. The calibrated classifier incurs\na lower `log loss <log_loss>` due to two factors. First, notice in the\nfigure above that the arrows generally point away from the edges of the\nsimplex, where the probability of one class is 0. Second, a large proportion\nof the arrows point towards the true class, e.g., green arrows (samples where\nthe true class is 'green') generally point towards the green vertex. This\nresults in fewer over-confident, 0 predicted probabilities and at the same\ntime an increase in the predicted probabilities of the correct class.\nThus, the calibrated classifier produces more accurate predicted probablities\nthat incur a lower `log loss <log_loss>`\n\nWe can show this objectively by comparing the `log loss <log_loss>` of\nthe uncalibrated and calibrated classifiers on the predictions of the 1000\ntest samples. Note that an alternative would have been to increase the number\nof base estimators (trees) of the\n:class:`~sklearn.ensemble.RandomForestClassifier` which would have resulted\nin a similar decrease in `log loss <log_loss>`.\n\n"
 98  98      ]
 99  99    },
100 100    {
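
The markdown cell quoted above says the improvement can be shown objectively by comparing the log loss of the uncalibrated and calibrated classifiers on the 1000 test samples. A minimal sketch of that comparison is given below; it is not the example's own code: the toy dataset, the variable names, the 25-tree forest, and the sigmoid calibration with cv=3 are assumptions, while RandomForestClassifier, CalibratedClassifierCV, and log_loss are the scikit-learn APIs the text refers to:

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_blobs
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split

    # Three-class toy data with 1000 held-out test samples, as in the quoted text.
    X, y = make_blobs(n_samples=2000, centers=3, cluster_std=5.0, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1000, random_state=42)

    # Uncalibrated random forest (25 trees is an assumption).
    clf = RandomForestClassifier(n_estimators=25, random_state=42)
    clf.fit(X_train, y_train)

    # Sigmoid-calibrated version of the same model; cv=3 refits clones of the
    # forest internally (the real example's calibration split may differ).
    cal_clf = CalibratedClassifierCV(clf, method="sigmoid", cv=3)
    cal_clf.fit(X_train, y_train)

    print("log loss, uncalibrated:", log_loss(y_test, clf.predict_proba(X_test)))
    print("log loss, calibrated:  ", log_loss(y_test, cal_clf.predict_proba(X_test)))

Because calibration pulls probabilities away from the simplex edges, the second value is typically noticeably lower than the first, which is the effect the quoted cell describes.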

dev/_downloads/f4a2350e7cc794cdb19840052e96a1e7/plot_calibration_multiclass.py

Lines changed: 1 addition & 1 deletion
@@ -197,7 +197,7 @@ class of an instance (red: class 1, green: class 2, blue: class 3).
197 197    # of the arrows point towards the true class, e.g., green arrows (samples where
198 198    # the true class is 'green') generally point towards the green vertex. This
199 199    # results in fewer over-confident, 0 predicted probabilities and at the same
200       -  # time an increase in the the predicted probabilities of the correct class.
      200 +  # time an increase in the predicted probabilities of the correct class.
201 201    # Thus, the calibrated classifier produces more accurate predicted probablities
202 202    # that incur a lower :ref:`log loss <log_loss>`
203 203    #
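
The example text quoted above also notes that increasing the number of base estimators (trees) of the :class:`~sklearn.ensemble.RandomForestClassifier` would have produced a similar decrease in log loss. A hedged sketch of that alternative under the same assumed toy setup as before (the n_estimators values are purely illustrative):

    from sklearn.datasets import make_blobs
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split

    X, y = make_blobs(n_samples=2000, centers=3, cluster_std=5.0, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1000, random_state=42)

    # Averaging more trees smooths the hard 0/1 votes of individual trees,
    # so predicted probabilities move away from 0 and log loss tends to drop.
    for n_trees in (25, 100, 400):
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=42)
        rf.fit(X_train, y_train)
        print(f"{n_trees:>4} trees: log loss = "
              f"{log_loss(y_test, rf.predict_proba(X_test)):.3f}")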

dev/_downloads/scikit-learn-docs.zip

-1.7 KB (binary file, diff not shown)
