|
15 | 15 | "cell_type": "markdown",
|
16 | 16 | "metadata": {},
|
17 | 17 | "source": [
|
18 |
| - "\n# Partial Dependence Plots\n\n\nPartial dependence plots show the dependence between the target function [2]_\nand a set of 'target' features, marginalizing over the\nvalues of all other features (the complement features). Due to the limits\nof human perception the size of the target feature set must be small (usually,\none or two) thus the target features are usually chosen among the most\nimportant features.\n\nThis example shows how to obtain partial dependence plots from a\n:class:`~sklearn.neural_network.MLPRegressor` and a\n:class:`~sklearn.ensemble.GradientBoostingRegressor` trained on the\nCalifornia housing dataset. The example is taken from [1]_.\n\nThe plots show four 1-way and two 1-way partial dependence plots (ommitted for\n:class:`~sklearn.neural_network.MLPRegressor` due to computation time).\nThe target variables for the one-way PDP are: median income (`MedInc`),\naverage occupants per household (`AvgOccup`), median house age (`HouseAge`),\nand average rooms per household (`AveRooms`).\n\nWe can clearly see that the median house price shows a linear relationship\nwith the median income (top left) and that the house price drops when the\naverage occupants per household increases (top middle).\nThe top right plot shows that the house age in a district does not have\na strong influence on the (median) house price; so does the average rooms\nper household.\nThe tick marks on the x-axis represent the deciles of the feature values\nin the training data.\n\nWe also observe that :class:`~sklearn.neural_network.MLPRegressor` has much\nsmoother predictions than\n:class:`~sklearn.ensemble.GradientBoostingRegressor`. For the plots to be\ncomparable, it is necessary to subtract the average value of the target\n``y``: The 'recursion' method, used by default for\n:class:`~sklearn.ensemble.GradientBoostingRegressor`, does not account for\nthe initial predictor (in our case the average target). Setting the target\naverage to 0 avoids this bias.\n\nPartial dependence plots with two target features enable us to visualize\ninteractions among them. The two-way partial dependence plot shows the\ndependence of median house price on joint values of house age and average\noccupants per household. We can clearly see an interaction between the\ntwo features: for an average occupancy greater than two, the house price is\nnearly independent of the house age, whereas for values less than two there\nis a strong dependence on age.\n\nOn a third figure, we have plotted the same partial dependence plot, this time\nin 3 dimensions.\n\n.. [1] T. Hastie, R. Tibshirani and J. Friedman,\n \"Elements of Statistical Learning Ed. 2\", Springer, 2009.\n\n.. [2] For classification you can think of it as the regression score before\n the link function.\n\n" |
| 18 | + "\n# Partial Dependence Plots\n\n\nPartial dependence plots show the dependence between the target function [2]_\nand a set of 'target' features, marginalizing over the\nvalues of all other features (the complement features). Due to the limits\nof human perception the size of the target feature set must be small (usually,\none or two) thus the target features are usually chosen among the most\nimportant features.\n\nThis example shows how to obtain partial dependence plots from a\n:class:`~sklearn.neural_network.MLPRegressor` and a\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor` trained on the\nCalifornia housing dataset. The example is taken from [1]_.\n\nThe plots show four 1-way and two 1-way partial dependence plots (ommitted for\n:class:`~sklearn.neural_network.MLPRegressor` due to computation time).\nThe target variables for the one-way PDP are: median income (`MedInc`),\naverage occupants per household (`AvgOccup`), median house age (`HouseAge`),\nand average rooms per household (`AveRooms`).\n\n.. [1] T. Hastie, R. Tibshirani and J. Friedman,\n \"Elements of Statistical Learning Ed. 2\", Springer, 2009.\n\n.. [2] For classification you can think of it as the regression score before\n the link function.\n\n" |
19 | 19 | ]
|
20 | 20 | },
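To make "marginalizing over the values of all other features" concrete, here is a minimal brute-force sketch of a one-way partial dependence computation, mirroring what scikit-learn's `method='brute'` does internally. The helper name and its `est`/`X` arguments are illustrative assumptions, not part of this notebook:

```python
import numpy as np

def one_way_partial_dependence(est, X, feature_idx, grid):
    # For each grid value, clamp the target feature for every sample,
    # then average the model's predictions over the complement features.
    averaged = []
    for value in grid:
        X_eval = X.copy()
        X_eval[:, feature_idx] = value
        averaged.append(est.predict(X_eval).mean())
    return np.asarray(averaged)
```

In practice the notebook relies on `sklearn.inspection.partial_dependence`, which also implements the much faster tree-traversal 'recursion' method for gradient boosting models.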
|
21 | 21 | {
|
|
26 | 26 | },
|
27 | 27 | "outputs": [],
|
28 | 28 | "source": [
|
29 |
| - "print(__doc__)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom sklearn.inspection import partial_dependence\nfrom sklearn.inspection import plot_partial_dependence\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.neural_network import MLPRegressor\nfrom sklearn.datasets.california_housing import fetch_california_housing\n\n\ndef main():\n cal_housing = fetch_california_housing()\n\n X, y = cal_housing.data, cal_housing.target\n names = cal_housing.feature_names\n\n # Center target to avoid gradient boosting init bias: gradient boosting\n # with the 'recursion' method does not account for the initial estimator\n # (here the average target, by default)\n y -= y.mean()\n\n print(\"Training MLPRegressor...\")\n est = MLPRegressor(activation='logistic')\n est.fit(X, y)\n print('Computing partial dependence plots...')\n # We don't compute the 2-way PDP (5, 1) here, because it is a lot slower\n # with the brute method.\n features = [0, 5, 1, 2]\n plot_partial_dependence(est, X, features, feature_names=names,\n n_jobs=3, grid_resolution=50)\n fig = plt.gcf()\n fig.suptitle('Partial dependence of house value on non-___location features\\n'\n 'for the California housing dataset, with MLPRegressor')\n plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle\n\n print(\"Training GradientBoostingRegressor...\")\n est = GradientBoostingRegressor(n_estimators=100, max_depth=4,\n learning_rate=0.1, loss='huber',\n random_state=1)\n est.fit(X, y)\n print('Computing partial dependence plots...')\n features = [0, 5, 1, 2, (5, 1)]\n plot_partial_dependence(est, X, features, feature_names=names,\n n_jobs=3, grid_resolution=50)\n fig = plt.gcf()\n fig.suptitle('Partial dependence of house value on non-___location features\\n'\n 'for the California housing dataset, with Gradient Boosting')\n plt.subplots_adjust(top=0.9)\n\n print('Custom 3d plot via ``partial_dependence``')\n fig = plt.figure()\n\n target_feature = (1, 5)\n pdp, axes = partial_dependence(est, X, target_feature,\n grid_resolution=50)\n XX, YY = np.meshgrid(axes[0], axes[1])\n Z = pdp[0].T\n ax = Axes3D(fig)\n surf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1,\n cmap=plt.cm.BuPu, edgecolor='k')\n ax.set_xlabel(names[target_feature[0]])\n ax.set_ylabel(names[target_feature[1]])\n ax.set_zlabel('Partial dependence')\n # pretty init view\n ax.view_init(elev=22, azim=122)\n plt.colorbar(surf)\n plt.suptitle('Partial dependence of house value on median\\n'\n 'age and average occupancy, with Gradient Boosting')\n plt.subplots_adjust(top=0.9)\n\n plt.show()\n\n\n# Needed on Windows because plot_partial_dependence uses multiprocessing\nif __name__ == '__main__':\n main()" |
| 29 | + "print(__doc__)\n\nfrom time import time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import QuantileTransformer\nfrom sklearn.pipeline import make_pipeline\n\nfrom sklearn.inspection import partial_dependence\nfrom sklearn.inspection import plot_partial_dependence\nfrom sklearn.experimental import enable_hist_gradient_boosting # noqa\nfrom sklearn.ensemble import HistGradientBoostingRegressor\nfrom sklearn.neural_network import MLPRegressor\nfrom sklearn.datasets.california_housing import fetch_california_housing" |
| 30 | + ] |
| 31 | + }, |
| 32 | + { |
| 33 | + "cell_type": "markdown", |
| 34 | + "metadata": {}, |
| 35 | + "source": [ |
| 36 | + "California Housing data preprocessing\n-------------------------------------\n\nCenter target to avoid gradient boosting init bias: gradient boosting\nwith the 'recursion' method does not account for the initial estimator\n(here the average target, by default)\n\n\n" |
| 37 | + ] |
| 38 | + }, |
| 39 | + { |
| 40 | + "cell_type": "code", |
| 41 | + "execution_count": null, |
| 42 | + "metadata": { |
| 43 | + "collapsed": false |
| 44 | + }, |
| 45 | + "outputs": [], |
| 46 | + "source": [ |
| 47 | + "cal_housing = fetch_california_housing()\nnames = cal_housing.feature_names\nX, y = cal_housing.data, cal_housing.target\n\ny -= y.mean()\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1,\n random_state=0)" |
| 48 | + ] |
| 49 | + }, |
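As a quick sanity check (illustrative, not part of the original notebook), one can confirm the centering and the split sizes before fitting anything:

```python
# After centering, the target mean should be numerically zero, and the
# 10% test split should leave roughly 18.5k training rows.
print("target mean after centering: {:.2e}".format(y.mean()))
print("train/test sizes:", X_train.shape[0], X_test.shape[0])
```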
| 50 | + { |
| 51 | + "cell_type": "markdown", |
| 52 | + "metadata": {}, |
| 53 | + "source": [ |
| 54 | + "Partial Dependence computation for multi-layer perceptron\n---------------------------------------------------------\n\nLet's fit a MLPRegressor and compute single-variable partial dependence\nplots\n\n" |
| 55 | + ] |
| 56 | + }, |
| 57 | + { |
| 58 | + "cell_type": "code", |
| 59 | + "execution_count": null, |
| 60 | + "metadata": { |
| 61 | + "collapsed": false |
| 62 | + }, |
| 63 | + "outputs": [], |
| 64 | + "source": [ |
| 65 | + "print(\"Training MLPRegressor...\")\ntic = time()\nest = make_pipeline(QuantileTransformer(),\n MLPRegressor(hidden_layer_sizes=(50, 50),\n learning_rate_init=0.01,\n max_iter=200,\n early_stopping=True,\n n_iter_no_change=10,\n validation_fraction=0.1))\nest.fit(X_train, y_train)\nprint(\"done in {:.3f}s\".format(time() - tic))\nprint(\"Test R2 score: {:.2f}\".format(est.score(X_test, y_test)))\n\nprint('Computing partial dependence plots...')\ntic = time()\n# We don't compute the 2-way PDP (5, 1) here, because it is a lot slower\n# with the brute method.\nfeatures = [0, 5, 1, 2]\nplot_partial_dependence(est, X_train, features, feature_names=names,\n n_jobs=3, grid_resolution=20)\nprint(\"done in {:.3f}s\".format(time() - tic))\nfig = plt.gcf()\nfig.suptitle('Partial dependence of house value on non-___location features\\n'\n 'for the California housing dataset, with MLPRegressor')\nfig.tight_layout(rect=[0, 0.03, 1, 0.95])" |
| 66 | + ] |
| 67 | + }, |
| 68 | + { |
| 69 | + "cell_type": "markdown", |
| 70 | + "metadata": {}, |
| 71 | + "source": [ |
| 72 | + "Partial Dependence computation for Gradient Boosting\n----------------------------------------------------\n\nLet's now fit a GradientBoostingRegressor and compute the partial dependence\nplots either or one or two variables at a time.\n\n" |
| 73 | + ] |
| 74 | + }, |
| 75 | + { |
| 76 | + "cell_type": "code", |
| 77 | + "execution_count": null, |
| 78 | + "metadata": { |
| 79 | + "collapsed": false |
| 80 | + }, |
| 81 | + "outputs": [], |
| 82 | + "source": [ |
| 83 | + "print(\"Training GradientBoostingRegressor...\")\ntic = time()\nest = HistGradientBoostingRegressor(max_iter=100, max_leaf_nodes=64,\n learning_rate=0.1, random_state=1)\nest.fit(X_train, y_train)\nprint(\"done in {:.3f}s\".format(time() - tic))\nprint(\"Test R2 score: {:.2f}\".format(est.score(X_test, y_test)))\n\nprint('Computing partial dependence plots...')\ntic = time()\nfeatures = [0, 5, 1, 2, (5, 1)]\nplot_partial_dependence(est, X_train, features, feature_names=names,\n n_jobs=3, grid_resolution=20)\nprint(\"done in {:.3f}s\".format(time() - tic))\nfig = plt.gcf()\nfig.suptitle('Partial dependence of house value on non-___location features\\n'\n 'for the California housing dataset, with Gradient Boosting')\nfig.tight_layout(rect=[0, 0.03, 1, 0.95])" |
| 84 | + ] |
| 85 | + }, |
| 86 | + { |
| 87 | + "cell_type": "markdown", |
| 88 | + "metadata": {}, |
| 89 | + "source": [ |
| 90 | + "Analysis of the plots\n---------------------\n\nWe can clearly see that the median house price shows a linear relationship\nwith the median income (top left) and that the house price drops when the\naverage occupants per household increases (top middle).\nThe top right plot shows that the house age in a district does not have\na strong influence on the (median) house price; so does the average rooms\nper household.\nThe tick marks on the x-axis represent the deciles of the feature values\nin the training data.\n\nWe also observe that :class:`~sklearn.neural_network.MLPRegressor` has much\nsmoother predictions than\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor`. For the plots to be\ncomparable, it is necessary to subtract the average value of the target\n``y``: The 'recursion' method, used by default for\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor`, does not account\nfor the initial predictor (in our case the average target). Setting the\ntarget average to 0 avoids this bias.\n\nPartial dependence plots with two target features enable us to visualize\ninteractions among them. The two-way partial dependence plot shows the\ndependence of median house price on joint values of house age and average\noccupants per household. We can clearly see an interaction between the\ntwo features: for an average occupancy greater than two, the house price is\nnearly independent of the house age, whereas for values less than two there\nis a strong dependence on age.\n\n" |
| 91 | + ] |
| 92 | + }, |
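To back the centering argument with a check, here is a hedged sketch (assuming the fitted `est` and `X_train` from the cells above, and a scikit-learn version where `method='recursion'` supports `HistGradientBoostingRegressor`) comparing the two estimation methods; with the target centered, the two curves should nearly coincide:

```python
import numpy as np
from sklearn.inspection import partial_dependence

# Partial dependence of MedInc (feature 0) via both methods:
# 'recursion' traverses the trees, 'brute' averages clamped predictions.
pd_rec, values = partial_dependence(est, X_train, [0],
                                    grid_resolution=20, method='recursion')
pd_brute, _ = partial_dependence(est, X_train, [0],
                                 grid_resolution=20, method='brute')
print("max abs difference:", np.abs(pd_rec - pd_brute).max())  # expected small
```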
| 93 | + { |
| 94 | + "cell_type": "markdown", |
| 95 | + "metadata": {}, |
| 96 | + "source": [ |
| 97 | + "3D interaction plots\n--------------------\n\nLet's make the same partial dependence plot for the 2 features interaction,\nthis time in 3 dimensions.\n\n" |
| 98 | + ] |
| 99 | + }, |
| 100 | + { |
| 101 | + "cell_type": "code", |
| 102 | + "execution_count": null, |
| 103 | + "metadata": { |
| 104 | + "collapsed": false |
| 105 | + }, |
| 106 | + "outputs": [], |
| 107 | + "source": [ |
| 108 | + "fig = plt.figure()\n\ntarget_feature = (1, 5)\npdp, axes = partial_dependence(est, X_train, target_feature,\n grid_resolution=20)\nXX, YY = np.meshgrid(axes[0], axes[1])\nZ = pdp[0].T\nax = Axes3D(fig)\nsurf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1,\n cmap=plt.cm.BuPu, edgecolor='k')\nax.set_xlabel(names[target_feature[0]])\nax.set_ylabel(names[target_feature[1]])\nax.set_zlabel('Partial dependence')\n# pretty init view\nax.view_init(elev=22, azim=122)\nplt.colorbar(surf)\nplt.suptitle('Partial dependence of house value on median\\n'\n 'age and average occupancy, with Gradient Boosting')\nplt.subplots_adjust(top=0.9)\n\nplt.show()" |
30 | 109 | ]
|
31 | 110 | }
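As an optional follow-up (not in the original notebook), the same interaction surface is often easier to read as a 2D filled-contour plot; a sketch reusing `XX`, `YY`, `Z`, `names`, and `target_feature` from the cell above:

```python
# Alternative 2D view of the interaction between house age and occupancy.
fig2, ax2 = plt.subplots()
contours = ax2.contourf(XX, YY, Z, levels=20, cmap=plt.cm.BuPu)
ax2.set_xlabel(names[target_feature[0]])
ax2.set_ylabel(names[target_feature[1]])
fig2.colorbar(contours, label='Partial dependence')
plt.show()
```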
|
32 | 111 | ],
|
|