|
15 | 15 | "cell_type": "markdown",
|
16 | 16 | "metadata": {},
|
17 | 17 | "source": [
|
18 |
| - "\n# Gaussian process regression (GPR) with noise-level estimation\n\nThis example illustrates that GPR with a sum-kernel including a WhiteKernel can\nestimate the noise level of data. An illustration of the\nlog-marginal-likelihood (LML) landscape shows that there exist two local\nmaxima of LML. The first corresponds to a model with a high noise level and a\nlarge length scale, which explains all variations in the data by noise. The\nsecond one has a smaller noise level and shorter length scale, which explains\nmost of the variation by the noise-free functional relationship. The second\nmodel has a higher likelihood; however, depending on the initial value for the\nhyperparameters, the gradient-based optimization might also converge to the\nhigh-noise solution. It is thus important to repeat the optimization several\ntimes for different initializations.\n" |
| 18 | + "\n# Gaussian process regression (GPR) with noise-level estimation\n\nThis example shows the ability of the\n:class:`~sklearn.gaussian_process.kernels.WhiteKernel` to estimate the noise\nlevel in the data. Moreover, we show the importance of kernel hyperparameters\ninitialization.\n" |
19 | 19 | ]
|
20 | 20 | },
|
21 | 21 | {
|
|
26 | 26 | },
|
27 | 27 | "outputs": [],
|
28 | 28 | "source": [
|
29 |
| - "# Authors: Jan Hendrik Metzen < [email protected]>\n#\n# License: BSD 3 clause\n\nimport numpy as np\n\nfrom matplotlib import pyplot as plt\nfrom matplotlib.colors import LogNorm\n\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import RBF, WhiteKernel\n\n\nrng = np.random.RandomState(0)\nX = rng.uniform(0, 5, 20)[:, np.newaxis]\ny = 0.5 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.5, X.shape[0])\n\n# First run\nplt.figure()\nkernel = 1.0 * RBF(length_scale=100.0, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(\n noise_level=1, noise_level_bounds=(1e-10, 1e1)\n)\ngp = GaussianProcessRegressor(kernel=kernel, alpha=0.0).fit(X, y)\nX_ = np.linspace(0, 5, 100)\ny_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)\nplt.plot(X_, y_mean, \"k\", lw=3, zorder=9)\nplt.fill_between(\n X_,\n y_mean - np.sqrt(np.diag(y_cov)),\n y_mean + np.sqrt(np.diag(y_cov)),\n alpha=0.5,\n color=\"k\",\n)\nplt.plot(X_, 0.5 * np.sin(3 * X_), \"r\", lw=3, zorder=9)\nplt.scatter(X[:, 0], y, c=\"r\", s=50, zorder=10, edgecolors=(0, 0, 0))\nplt.title(\n \"Initial: %s\\nOptimum: %s\\nLog-Marginal-Likelihood: %s\"\n % (kernel, gp.kernel_, gp.log_marginal_likelihood(gp.kernel_.theta))\n)\nplt.tight_layout()\n\n# Second run\nplt.figure()\nkernel = 1.0 * RBF(length_scale=1.0, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(\n noise_level=1e-5, noise_level_bounds=(1e-10, 1e1)\n)\ngp = GaussianProcessRegressor(kernel=kernel, alpha=0.0).fit(X, y)\nX_ = np.linspace(0, 5, 100)\ny_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)\nplt.plot(X_, y_mean, \"k\", lw=3, zorder=9)\nplt.fill_between(\n X_,\n y_mean - np.sqrt(np.diag(y_cov)),\n y_mean + np.sqrt(np.diag(y_cov)),\n alpha=0.5,\n color=\"k\",\n)\nplt.plot(X_, 0.5 * np.sin(3 * X_), \"r\", lw=3, zorder=9)\nplt.scatter(X[:, 0], y, c=\"r\", s=50, zorder=10, edgecolors=(0, 0, 0))\nplt.title(\n \"Initial: %s\\nOptimum: %s\\nLog-Marginal-Likelihood: %s\"\n % (kernel, gp.kernel_, gp.log_marginal_likelihood(gp.kernel_.theta))\n)\nplt.tight_layout()\n\n# Plot LML landscape\nplt.figure()\ntheta0 = np.logspace(-2, 3, 49)\ntheta1 = np.logspace(-2, 0, 50)\nTheta0, Theta1 = np.meshgrid(theta0, theta1)\nLML = [\n [\n gp.log_marginal_likelihood(np.log([0.36, Theta0[i, j], Theta1[i, j]]))\n for i in range(Theta0.shape[0])\n ]\n for j in range(Theta0.shape[1])\n]\nLML = np.array(LML).T\n\nvmin, vmax = (-LML).min(), (-LML).max()\nvmax = 50\nlevel = np.around(np.logspace(np.log10(vmin), np.log10(vmax), 50), decimals=1)\nplt.contour(Theta0, Theta1, -LML, levels=level, norm=LogNorm(vmin=vmin, vmax=vmax))\nplt.colorbar()\nplt.xscale(\"log\")\nplt.yscale(\"log\")\nplt.xlabel(\"Length-scale\")\nplt.ylabel(\"Noise-level\")\nplt.title(\"Log-marginal-likelihood\")\nplt.tight_layout()\n\nplt.show()" |
| 29 | + "# Authors: Jan Hendrik Metzen <[email protected]>\n# Guillaume Lemaitre <[email protected]>\n# License: BSD 3 clause" |
| 30 | + ] |
| 31 | + }, |
| 32 | + { |
| 33 | + "cell_type": "markdown", |
| 34 | + "metadata": {}, |
| 35 | + "source": [ |
| 36 | + "## Data generation\n\nWe will work in a setting where `X` will contain a single feature. We create a\nfunction that will generate the target to be predicted. We will add an\noption to add some noise to the generated target.\n\n" |
| 37 | + ] |
| 38 | + }, |
| 39 | + { |
| 40 | + "cell_type": "code", |
| 41 | + "execution_count": null, |
| 42 | + "metadata": { |
| 43 | + "collapsed": false |
| 44 | + }, |
| 45 | + "outputs": [], |
| 46 | + "source": [ |
| 47 | + "import numpy as np\n\n\ndef target_generator(X, add_noise=False):\n target = 0.5 + np.sin(3 * X)\n if add_noise:\n rng = np.random.RandomState(1)\n target += rng.normal(0, 0.3, size=target.shape)\n return target.squeeze()" |
| 48 | + ] |
| 49 | + }, |
| 50 | + { |
| 51 | + "cell_type": "markdown", |
| 52 | + "metadata": {}, |
| 53 | + "source": [ |
| 54 | + "Let's have a look to the target generator where we will not add any noise to\nobserve the signal that we would like to predict.\n\n" |
| 55 | + ] |
| 56 | + }, |
| 57 | + { |
| 58 | + "cell_type": "code", |
| 59 | + "execution_count": null, |
| 60 | + "metadata": { |
| 61 | + "collapsed": false |
| 62 | + }, |
| 63 | + "outputs": [], |
| 64 | + "source": [ |
| 65 | + "X = np.linspace(0, 5, num=30).reshape(-1, 1)\ny = target_generator(X, add_noise=False)" |
| 66 | + ] |
| 67 | + }, |
| 68 | + { |
| 69 | + "cell_type": "code", |
| 70 | + "execution_count": null, |
| 71 | + "metadata": { |
| 72 | + "collapsed": false |
| 73 | + }, |
| 74 | + "outputs": [], |
| 75 | + "source": [ |
| 76 | + "import matplotlib.pyplot as plt\n\nplt.plot(X, y, label=\"Expected signal\")\nplt.legend()\nplt.xlabel(\"X\")\n_ = plt.ylabel(\"y\")" |
| 77 | + ] |
| 78 | + }, |
| 79 | + { |
| 80 | + "cell_type": "markdown", |
| 81 | + "metadata": {}, |
| 82 | + "source": [ |
| 83 | + "The target is transforming the input `X` using a sine function. Now, we will\ngenerate few noisy training samples. To illustrate the noise level, we will\nplot the true signal together with the noisy training samples.\n\n" |
| 84 | + ] |
| 85 | + }, |
| 86 | + { |
| 87 | + "cell_type": "code", |
| 88 | + "execution_count": null, |
| 89 | + "metadata": { |
| 90 | + "collapsed": false |
| 91 | + }, |
| 92 | + "outputs": [], |
| 93 | + "source": [ |
| 94 | + "rng = np.random.RandomState(0)\nX_train = rng.uniform(0, 5, size=20).reshape(-1, 1)\ny_train = target_generator(X_train, add_noise=True)" |
| 95 | + ] |
| 96 | + }, |
| 97 | + { |
| 98 | + "cell_type": "code", |
| 99 | + "execution_count": null, |
| 100 | + "metadata": { |
| 101 | + "collapsed": false |
| 102 | + }, |
| 103 | + "outputs": [], |
| 104 | + "source": [ |
| 105 | + "plt.plot(X, y, label=\"Expected signal\")\nplt.scatter(\n x=X_train[:, 0],\n y=y_train,\n color=\"black\",\n alpha=0.4,\n label=\"Observations\",\n)\nplt.legend()\nplt.xlabel(\"X\")\n_ = plt.ylabel(\"y\")" |
| 106 | + ] |
| 107 | + }, |
| 108 | + { |
| 109 | + "cell_type": "markdown", |
| 110 | + "metadata": {}, |
| 111 | + "source": [ |
| 112 | + "## Optimisation of kernel hyperparameters in GPR\n\nNow, we will create a\n:class:`~sklearn.gaussian_process.GaussianProcessRegressor`\nusing an additive kernel adding a\n:class:`~sklearn.gaussian_process.kernels.RBF` and\n:class:`~sklearn.gaussian_process.kernels.WhiteKernel` kernels.\nThe :class:`~sklearn.gaussian_process.kernels.WhiteKernel` is a kernel that\nwill able to estimate the amount of noise present in the data while the\n:class:`~sklearn.gaussian_process.kernels.RBF` will serve at fitting the\nnon-linearity between the data and the target.\n\nHowever, we will show that the hyperparameter space contains several local\nminima. It will highlights the importance of initial hyperparameter values.\n\nWe will create a model using a kernel with a high noise level and a large\nlength scale, which will explain all variations in the data by noise.\n\n" |
| 113 | + ] |
| 114 | + }, |
| 115 | + { |
| 116 | + "cell_type": "code", |
| 117 | + "execution_count": null, |
| 118 | + "metadata": { |
| 119 | + "collapsed": false |
| 120 | + }, |
| 121 | + "outputs": [], |
| 122 | + "source": [ |
| 123 | + "from sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import RBF, WhiteKernel\n\nkernel = 1.0 * RBF(length_scale=1e1, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(\n noise_level=1, noise_level_bounds=(1e-5, 1e1)\n)\ngpr = GaussianProcessRegressor(kernel=kernel, alpha=0.0)\ngpr.fit(X_train, y_train)\ny_mean, y_std = gpr.predict(X, return_std=True)" |
| 124 | + ] |
| 125 | + }, |
| 126 | + { |
| 127 | + "cell_type": "code", |
| 128 | + "execution_count": null, |
| 129 | + "metadata": { |
| 130 | + "collapsed": false |
| 131 | + }, |
| 132 | + "outputs": [], |
| 133 | + "source": [ |
| 134 | + "plt.plot(X, y, label=\"Expected signal\")\nplt.scatter(x=X_train[:, 0], y=y_train, color=\"black\", alpha=0.4, label=\"Observsations\")\nplt.errorbar(X, y_mean, y_std)\nplt.legend()\nplt.xlabel(\"X\")\nplt.ylabel(\"y\")\n_ = plt.title(\n f\"Initial: {kernel}\\nOptimum: {gpr.kernel_}\\nLog-Marginal-Likelihood: \"\n f\"{gpr.log_marginal_likelihood(gpr.kernel_.theta)}\",\n fontsize=8,\n)" |
| 135 | + ] |
| 136 | + }, |
| 137 | + { |
| 138 | + "cell_type": "markdown", |
| 139 | + "metadata": {}, |
| 140 | + "source": [ |
| 141 | + "We see that the optimum kernel found still have a high noise level and\nan even larger length scale. Furthermore, we observe that the\nmodel does not provide faithful predictions.\n\nNow, we will initialize the\n:class:`~sklearn.gaussian_process.kernels.RBF` with a\nlarger `length_scale` and the\n:class:`~sklearn.gaussian_process.kernels.WhiteKernel`\nwith a smaller noise level lower bound.\n\n" |
| 142 | + ] |
| 143 | + }, |
| 144 | + { |
| 145 | + "cell_type": "code", |
| 146 | + "execution_count": null, |
| 147 | + "metadata": { |
| 148 | + "collapsed": false |
| 149 | + }, |
| 150 | + "outputs": [], |
| 151 | + "source": [ |
| 152 | + "kernel = 1.0 * RBF(length_scale=1e-1, length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(\n noise_level=1e-2, noise_level_bounds=(1e-10, 1e1)\n)\ngpr = GaussianProcessRegressor(kernel=kernel, alpha=0.0)\ngpr.fit(X_train, y_train)\ny_mean, y_std = gpr.predict(X, return_std=True)" |
| 153 | + ] |
| 154 | + }, |
| 155 | + { |
| 156 | + "cell_type": "code", |
| 157 | + "execution_count": null, |
| 158 | + "metadata": { |
| 159 | + "collapsed": false |
| 160 | + }, |
| 161 | + "outputs": [], |
| 162 | + "source": [ |
| 163 | + "plt.plot(X, y, label=\"Expected signal\")\nplt.scatter(x=X_train[:, 0], y=y_train, color=\"black\", alpha=0.4, label=\"Observations\")\nplt.errorbar(X, y_mean, y_std)\nplt.legend()\nplt.xlabel(\"X\")\nplt.ylabel(\"y\")\n_ = plt.title(\n f\"Initial: {kernel}\\nOptimum: {gpr.kernel_}\\nLog-Marginal-Likelihood: \"\n f\"{gpr.log_marginal_likelihood(gpr.kernel_.theta)}\",\n fontsize=8,\n)" |
| 164 | + ] |
| 165 | + }, |
| 166 | + { |
| 167 | + "cell_type": "markdown", |
| 168 | + "metadata": {}, |
| 169 | + "source": [ |
| 170 | + "First, we see that the model's predictions are more precise than the\nprevious model's: this new model is able to estimate the noise-free\nfunctional relationship.\n\nLooking at the kernel hyperparameters, we see that the best combination found\nhas a smaller noise level and shorter length scale than the first model.\n\nWe can inspect the Log-Marginal-Likelihood (LML) of\n:class:`~sklearn.gaussian_process.GaussianProcessRegressor`\nfor different hyperparameters to get a sense of the local minima.\n\n" |
| 171 | + ] |
| 172 | + }, |
| 173 | + { |
| 174 | + "cell_type": "code", |
| 175 | + "execution_count": null, |
| 176 | + "metadata": { |
| 177 | + "collapsed": false |
| 178 | + }, |
| 179 | + "outputs": [], |
| 180 | + "source": [ |
| 181 | + "from matplotlib.colors import LogNorm\n\nlength_scale = np.logspace(-2, 4, num=50)\nnoise_level = np.logspace(-2, 1, num=50)\nlength_scale_grid, noise_level_grid = np.meshgrid(length_scale, noise_level)\n\nlog_marginal_likelihood = [\n gpr.log_marginal_likelihood(theta=np.log([0.36, scale, noise]))\n for scale, noise in zip(length_scale_grid.ravel(), noise_level_grid.ravel())\n]\nlog_marginal_likelihood = np.reshape(\n log_marginal_likelihood, newshape=noise_level_grid.shape\n)" |
| 182 | + ] |
| 183 | + }, |
| 184 | + { |
| 185 | + "cell_type": "code", |
| 186 | + "execution_count": null, |
| 187 | + "metadata": { |
| 188 | + "collapsed": false |
| 189 | + }, |
| 190 | + "outputs": [], |
| 191 | + "source": [ |
| 192 | + "vmin, vmax = (-log_marginal_likelihood).min(), 50\nlevel = np.around(np.logspace(np.log10(vmin), np.log10(vmax), num=50), decimals=1)\nplt.contour(\n length_scale_grid,\n noise_level_grid,\n -log_marginal_likelihood,\n levels=level,\n norm=LogNorm(vmin=vmin, vmax=vmax),\n)\nplt.colorbar()\nplt.xscale(\"log\")\nplt.yscale(\"log\")\nplt.xlabel(\"Length-scale\")\nplt.ylabel(\"Noise-level\")\nplt.title(\"Log-marginal-likelihood\")\nplt.show()" |
| 193 | + ] |
| 194 | + }, |
| 195 | + { |
| 196 | + "cell_type": "markdown", |
| 197 | + "metadata": {}, |
| 198 | + "source": [ |
| 199 | + "We see that there are two local minima that correspond to the combination\nof hyperparameters previously found. Depending on the initial values for the\nhyperparameters, the gradient-based optimization might converge whether or\nnot to the best model. It is thus important to repeat the optimization\nseveral times for different initializations.\n\n" |
30 | 200 | ]
|
31 | 201 | }
|
32 | 202 | ],
|
|