
Commit 52ad788

Pushing the docs to dev/ for branch: main, commit f16c987592d01890cf61679f7f323bc15cbd1269

1 parent 4c61dda commit 52ad788

File tree: 1,255 files changed (+4490 −4490 lines changed)


dev/_downloads/b367e30cc681ed484e0148f4ce9eccb0/plot_calibration_multiclass.ipynb

Lines changed: 1 addition & 1 deletion

@@ -94,7 +94,7 @@
       "cell_type": "markdown",
       "metadata": {},
       "source": [
-        "In the figure above, each vertex of the simplex represents\na perfectly predicted class (e.g., 1, 0, 0). The mid point\ninside the simplex represents predicting the three classes with equal\nprobability (i.e., 1/3, 1/3, 1/3). Each arrow starts at the\nuncalibrated probabilities and end with the arrow head at the calibrated\nprobability. The color of the arrow represents the true class of that test\nsample.\n\nThe uncalibrated classifier is overly confident in its predictions and\nincurs a large `log loss <log_loss>`. The calibrated classifier incurs\na lower `log loss <log_loss>` due to two factors. First, notice in the\nfigure above that the arrows generally point away from the edges of the\nsimplex, where the probability of one class is 0. Second, a large proportion\nof the arrows point towards the true class, e.g., green arrows (samples where\nthe true class is 'green') generally point towards the green vertex. This\nresults in fewer over-confident, 0 predicted probabilities and at the same\ntime an increase in the predicted probabilities of the correct class.\nThus, the calibrated classifier produces more accurate predicted probablities\nthat incur a lower `log loss <log_loss>`\n\nWe can show this objectively by comparing the `log loss <log_loss>` of\nthe uncalibrated and calibrated classifiers on the predictions of the 1000\ntest samples. Note that an alternative would have been to increase the number\nof base estimators (trees) of the\n:class:`~sklearn.ensemble.RandomForestClassifier` which would have resulted\nin a similar decrease in `log loss <log_loss>`.\n\n"
+        "In the figure above, each vertex of the simplex represents\na perfectly predicted class (e.g., 1, 0, 0). The mid point\ninside the simplex represents predicting the three classes with equal\nprobability (i.e., 1/3, 1/3, 1/3). Each arrow starts at the\nuncalibrated probabilities and end with the arrow head at the calibrated\nprobability. The color of the arrow represents the true class of that test\nsample.\n\nThe uncalibrated classifier is overly confident in its predictions and\nincurs a large `log loss <log_loss>`. The calibrated classifier incurs\na lower `log loss <log_loss>` due to two factors. First, notice in the\nfigure above that the arrows generally point away from the edges of the\nsimplex, where the probability of one class is 0. Second, a large proportion\nof the arrows point towards the true class, e.g., green arrows (samples where\nthe true class is 'green') generally point towards the green vertex. This\nresults in fewer over-confident, 0 predicted probabilities and at the same\ntime an increase in the predicted probabilities of the correct class.\nThus, the calibrated classifier produces more accurate predicted probabilities\nthat incur a lower `log loss <log_loss>`\n\nWe can show this objectively by comparing the `log loss <log_loss>` of\nthe uncalibrated and calibrated classifiers on the predictions of the 1000\ntest samples. Note that an alternative would have been to increase the number\nof base estimators (trees) of the\n:class:`~sklearn.ensemble.RandomForestClassifier` which would have resulted\nin a similar decrease in `log loss <log_loss>`.\n\n"
       ]
     },
     {

dev/_downloads/f4a2350e7cc794cdb19840052e96a1e7/plot_calibration_multiclass.py

Lines changed: 1 addition & 1 deletion

@@ -198,7 +198,7 @@ class of an instance (red: class 1, green: class 2, blue: class 3).
 # the true class is 'green') generally point towards the green vertex. This
 # results in fewer over-confident, 0 predicted probabilities and at the same
 # time an increase in the predicted probabilities of the correct class.
-# Thus, the calibrated classifier produces more accurate predicted probablities
+# Thus, the calibrated classifier produces more accurate predicted probabilities
 # that incur a lower :ref:`log loss <log_loss>`
 #
 # We can show this objectively by comparing the :ref:`log loss <log_loss>` of
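The docstring being fixed above describes comparing the log loss of an uncalibrated random forest against a sigmoid-calibrated one on held-out test samples. A minimal sketch of that comparison is below; the toy dataset (`make_blobs`) and all parameter values are illustrative assumptions, not the exact setup of `plot_calibration_multiclass.py`:

```python
# Illustrative sketch of the log-loss comparison described in the docs text.
# Dataset and hyperparameters here are assumptions, not those of the example.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Three-class toy data standing in for the example's setup.
X, y = make_blobs(n_samples=2000, centers=3, cluster_std=5.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1000, random_state=42
)

# Deliberately small forest, so its raw probabilities tend to be overconfident.
clf = RandomForestClassifier(n_estimators=25, random_state=42)
clf.fit(X_train, y_train)

# Sigmoid (Platt) calibration, fit via internal cross-validation.
cal_clf = CalibratedClassifierCV(clf, method="sigmoid", cv=3)
cal_clf.fit(X_train, y_train)

loss_uncal = log_loss(y_test, clf.predict_proba(X_test))
loss_cal = log_loss(y_test, cal_clf.predict_proba(X_test))
print(f"uncalibrated log loss: {loss_uncal:.3f}")
print(f"calibrated log loss:   {loss_cal:.3f}")
```

As the docs note, increasing `n_estimators` on the forest itself is an alternative route to a lower log loss.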

dev/_downloads/scikit-learn-docs.zip (-1.75 KB; binary file not shown)
