
Commit ee16a61

Pushing the docs to dev/ for branch: master, commit 0deaa3b4e1349ddb00fcfea293a85948c3622b0f
1 parent ec413d1 commit ee16a61

File tree: 1,226 files changed (+4082 / -3770 lines)


dev/_downloads/622fb50f5e367eda84eb7c32d306f659/plot_digits_classification.ipynb

Lines changed: 92 additions & 2 deletions
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Recognizing hand-written digits\n\n\nAn example showing how the scikit-learn can be used to recognize images of\nhand-written digits.\n\nThis example is commented in the\n`tutorial section of the user manual <introduction>`.\n"
+"\n# Recognizing hand-written digits\n\n\nThis example shows how scikit-learn can be used to recognize images of\nhand-written digits, from 0-9.\n"
 ]
 },
 {
@@ -26,7 +26,97 @@
 },
 "outputs": [],
 "source": [
-"print(__doc__)\n\n# Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>\n# License: BSD 3 clause\n\n# Standard scientific Python imports\nimport matplotlib.pyplot as plt\n\n# Import datasets, classifiers and performance metrics\nfrom sklearn import datasets, svm, metrics\nfrom sklearn.model_selection import train_test_split\n\n# The digits dataset\ndigits = datasets.load_digits()\n\n# The data that we are interested in is made of 8x8 images of digits, let's\n# have a look at the first 4 images, stored in the `images` attribute of the\n# dataset. If we were working from image files, we could load them using\n# matplotlib.pyplot.imread. Note that each image must have the same size. For these\n# images, we know which digit they represent: it is given in the 'target' of\n# the dataset.\n_, axes = plt.subplots(2, 4)\nimages_and_labels = list(zip(digits.images, digits.target))\nfor ax, (image, label) in zip(axes[0, :], images_and_labels[:4]):\n ax.set_axis_off()\n ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n ax.set_title('Training: %i' % label)\n\n# To apply a classifier on this data, we need to flatten the image, to\n# turn the data in a (samples, feature) matrix:\nn_samples = len(digits.images)\ndata = digits.images.reshape((n_samples, -1))\n\n# Create a classifier: a support vector classifier\nclassifier = svm.SVC(gamma=0.001)\n\n# Split data into train and test subsets\nX_train, X_test, y_train, y_test = train_test_split(\n data, digits.target, test_size=0.5, shuffle=False)\n\n# We learn the digits on the first half of the digits\nclassifier.fit(X_train, y_train)\n\n# Now predict the value of the digit on the second half:\npredicted = classifier.predict(X_test)\n\nimages_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))\nfor ax, (image, prediction) in zip(axes[1, :], images_and_predictions[:4]):\n ax.set_axis_off()\n ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n ax.set_title('Prediction: %i' % prediction)\n\nprint(\"Classification report for classifier %s:\\n%s\\n\"\n % (classifier, metrics.classification_report(y_test, predicted)))\ndisp = metrics.plot_confusion_matrix(classifier, X_test, y_test)\ndisp.figure_.suptitle(\"Confusion Matrix\")\nprint(\"Confusion matrix:\\n%s\" % disp.confusion_matrix)\n\nplt.show()"
+"print(__doc__)\n\n# Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>\n# License: BSD 3 clause\n\n# Standard scientific Python imports\nimport matplotlib.pyplot as plt\n\n# Import datasets, classifiers and performance metrics\nfrom sklearn import datasets, svm, metrics\nfrom sklearn.model_selection import train_test_split"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Digits dataset\n--------------\n\nThe digits dataset consists of 8x8\npixel images of digits. The ``images`` attribute of the dataset stores\n8x8 arrays of grayscale values for each image. We will use these arrays to\nvisualize the first 4 images. The ``target`` attribute of the dataset stores\nthe digit each image represents and this is included in the title of the 4\nplots below.\n\nNote: if we were working from image files (e.g., 'png' files), we would load\nthem using :func:`matplotlib.pyplot.imread`.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"digits = datasets.load_digits()\n\n_, axes = plt.subplots(nrows=1, ncols=4, figsize=(10, 3))\nfor ax, image, label in zip(axes, digits.images, digits.target):\n ax.set_axis_off()\n ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n ax.set_title('Training: %i' % label)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Classification\n--------------\n\nTo apply a classifier on this data, we need to flatten the images, turning\neach 2-D array of grayscale values from shape ``(8, 8)`` into shape\n``(64,)``. Subsequently, the entire dataset will be of shape\n``(n_samples, n_features)``, where ``n_samples`` is the number of images and\n``n_features`` is the total number of pixels in each image.\n\nWe can then split the data into train and test subsets and fit a support\nvector classifier on the train samples. The fitted classifier can\nsubsequently be used to predict the value of the digit for the samples\nin the test subset.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"# flatten the images\nn_samples = len(digits.images)\ndata = digits.images.reshape((n_samples, -1))\n\n# Create a classifier: a support vector classifier\nclf = svm.SVC(gamma=0.001)\n\n# Split data into 50% train and 50% test subsets\nX_train, X_test, y_train, y_test = train_test_split(\n data, digits.target, test_size=0.5, shuffle=False)\n\n# Learn the digits on the train subset\nclf.fit(X_train, y_train)\n\n# Predict the value of the digit on the test subset\npredicted = clf.predict(X_test)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Below we visualize the first 4 test samples and show their predicted\ndigit value in the title.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"_, axes = plt.subplots(nrows=1, ncols=4, figsize=(10, 3))\nfor ax, image, prediction in zip(axes, digits.images, predicted):\n ax.set_axis_off()\n ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n ax.set_title(f'Prediction: {prediction}')"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+":func:`~sklearn.metrics.classification_report` builds a text report showing\nthe main classification metrics.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"print(f\"Classification report for classifier {clf}:\\n\"\n f\"{metrics.classification_report(y_test, predicted)}\\n\")"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We can also plot a `confusion matrix <confusion_matrix>` of the\ntrue digit values and the predicted digit values.\n\n"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"collapsed": false
+},
+"outputs": [],
+"source": [
+"disp = metrics.plot_confusion_matrix(clf, X_test, y_test)\ndisp.figure_.suptitle(\"Confusion Matrix\")\nprint(f\"Confusion matrix:\\n{disp.confusion_matrix}\")\n\nplt.show()"
 ]
 }
 ],
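
The new "Classification" markdown cell above describes flattening each 8x8 image into a 64-element feature vector before fitting the classifier. A minimal sketch of that reshape step, using only datasets.load_digits and the images.reshape call from the diff (the printed shapes assume the standard digits dataset, which has 1,797 samples):

# Sketch of the flattening step described in the new "Classification" cell.
from sklearn import datasets

digits = datasets.load_digits()
print(digits.images.shape)   # (1797, 8, 8): one 8x8 grayscale array per digit

n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))  # flatten each image to 64 pixels
print(data.shape)            # (1797, 64) == (n_samples, n_features)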

dev/_downloads/b1e3674706d6abde2dae4b6cfa71be67/plot_digits_classification.py

Lines changed: 61 additions & 31 deletions
@@ -3,13 +3,10 @@
 Recognizing hand-written digits
 ================================
 
-An example showing how the scikit-learn can be used to recognize images of
-hand-written digits.
-
-This example is commented in the
-:ref:`tutorial section of the user manual <introduction>`.
-
+This example shows how scikit-learn can be used to recognize images of
+hand-written digits, from 0-9.
 """
+
 print(__doc__)
 
 # Author: Gael Varoquaux <gael dot varoquaux at normalesup dot org>
@@ -22,50 +19,83 @@
 from sklearn import datasets, svm, metrics
 from sklearn.model_selection import train_test_split
 
-# The digits dataset
+###############################################################################
+# Digits dataset
+# --------------
+#
+# The digits dataset consists of 8x8
+# pixel images of digits. The ``images`` attribute of the dataset stores
+# 8x8 arrays of grayscale values for each image. We will use these arrays to
+# visualize the first 4 images. The ``target`` attribute of the dataset stores
+# the digit each image represents and this is included in the title of the 4
+# plots below.
+#
+# Note: if we were working from image files (e.g., 'png' files), we would load
+# them using :func:`matplotlib.pyplot.imread`.
+
 digits = datasets.load_digits()
 
-# The data that we are interested in is made of 8x8 images of digits, let's
-# have a look at the first 4 images, stored in the `images` attribute of the
-# dataset. If we were working from image files, we could load them using
-# matplotlib.pyplot.imread. Note that each image must have the same size. For these
-# images, we know which digit they represent: it is given in the 'target' of
-# the dataset.
-_, axes = plt.subplots(2, 4)
-images_and_labels = list(zip(digits.images, digits.target))
-for ax, (image, label) in zip(axes[0, :], images_and_labels[:4]):
+_, axes = plt.subplots(nrows=1, ncols=4, figsize=(10, 3))
+for ax, image, label in zip(axes, digits.images, digits.target):
     ax.set_axis_off()
     ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
     ax.set_title('Training: %i' % label)
 
-# To apply a classifier on this data, we need to flatten the image, to
-# turn the data in a (samples, feature) matrix:
+###############################################################################
+# Classification
+# --------------
+#
+# To apply a classifier on this data, we need to flatten the images, turning
+# each 2-D array of grayscale values from shape ``(8, 8)`` into shape
+# ``(64,)``. Subsequently, the entire dataset will be of shape
+# ``(n_samples, n_features)``, where ``n_samples`` is the number of images and
+# ``n_features`` is the total number of pixels in each image.
+#
+# We can then split the data into train and test subsets and fit a support
+# vector classifier on the train samples. The fitted classifier can
+# subsequently be used to predict the value of the digit for the samples
+# in the test subset.
+
+# flatten the images
 n_samples = len(digits.images)
 data = digits.images.reshape((n_samples, -1))
 
 # Create a classifier: a support vector classifier
-classifier = svm.SVC(gamma=0.001)
+clf = svm.SVC(gamma=0.001)
 
-# Split data into train and test subsets
+# Split data into 50% train and 50% test subsets
 X_train, X_test, y_train, y_test = train_test_split(
     data, digits.target, test_size=0.5, shuffle=False)
 
-# We learn the digits on the first half of the digits
-classifier.fit(X_train, y_train)
+# Learn the digits on the train subset
+clf.fit(X_train, y_train)
 
-# Now predict the value of the digit on the second half:
-predicted = classifier.predict(X_test)
+# Predict the value of the digit on the test subset
+predicted = clf.predict(X_test)
 
-images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
-for ax, (image, prediction) in zip(axes[1, :], images_and_predictions[:4]):
+###############################################################################
+# Below we visualize the first 4 test samples and show their predicted
+# digit value in the title.
+
+_, axes = plt.subplots(nrows=1, ncols=4, figsize=(10, 3))
+for ax, image, prediction in zip(axes, digits.images, predicted):
     ax.set_axis_off()
     ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
-    ax.set_title('Prediction: %i' % prediction)
+    ax.set_title(f'Prediction: {prediction}')
+
+###############################################################################
+# :func:`~sklearn.metrics.classification_report` builds a text report showing
+# the main classification metrics.
+
+print(f"Classification report for classifier {clf}:\n"
+      f"{metrics.classification_report(y_test, predicted)}\n")
+
+###############################################################################
+# We can also plot a :ref:`confusion matrix <confusion_matrix>` of the
+# true digit values and the predicted digit values.
 
-print("Classification report for classifier %s:\n%s\n"
-      % (classifier, metrics.classification_report(y_test, predicted)))
-disp = metrics.plot_confusion_matrix(classifier, X_test, y_test)
+disp = metrics.plot_confusion_matrix(clf, X_test, y_test)
 disp.figure_.suptitle("Confusion Matrix")
-print("Confusion matrix:\n%s" % disp.confusion_matrix)
+print(f"Confusion matrix:\n{disp.confusion_matrix}")
 
 plt.show()
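
For reference, a condensed, runnable sketch of the refactored example as it stands after this commit. It follows the diff's own code and assumes the scikit-learn version this documentation build targets, where metrics.plot_confusion_matrix is available (later releases replaced it with ConfusionMatrixDisplay); the per-sample visualization steps are omitted here:

# Condensed sketch of the refactored plot_digits_classification.py workflow.
import matplotlib.pyplot as plt
from sklearn import datasets, svm, metrics
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()

# Flatten each 8x8 image into a 64-feature row vector.
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

# 50/50 split without shuffling, as in the example, then fit an SVC.
X_train, X_test, y_train, y_test = train_test_split(
    data, digits.target, test_size=0.5, shuffle=False)
clf = svm.SVC(gamma=0.001)
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)

# Text report of precision/recall/F1 per digit, plus a confusion matrix plot.
print(metrics.classification_report(y_test, predicted))
disp = metrics.plot_confusion_matrix(clf, X_test, y_test)
disp.figure_.suptitle("Confusion Matrix")
plt.show()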
dev/_downloads/scikit-learn-docs.pdf

-2.89 KB (binary file not shown)

dev/_images/iris.png

0 Bytes (binary file not shown)
