Commit ae62d1a

Pushing the docs to dev/ for branch: master, commit 465f29f4f6f839ec12c75ea80056041e5a87189c
1 parent 4ffcb74 commit ae62d1a

File tree: 1,199 files changed (+3574, -3574 lines)


dev/_downloads/2b2bebba7f9fb4d03b9c12d63c8b44ad/plot_topics_extraction_with_nmf_lda.py

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@
 Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
 =======================================================================================

-This is an example of applying :class:`sklearn.decomposition.NMF` and
-:class:`sklearn.decomposition.LatentDirichletAllocation` on a corpus
+This is an example of applying :class:`~sklearn.decomposition.NMF` and
+:class:`~sklearn.decomposition.LatentDirichletAllocation` on a corpus
 of documents and extract additive models of the topic structure of the
 corpus. The output is a list of topics, each represented as a list of
 terms (weights are not shown).
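
Note on the change itself: the leading `~` added to these Sphinx roles keeps the link target (the full dotted path) but shortens the rendered text to the final component, e.g. `NMF` instead of `sklearn.decomposition.NMF`. For context, the workflow this docstring describes looks roughly like the sketch below: a minimal, self-contained illustration on a synthetic corpus with arbitrary parameters, not code taken from the example (it assumes scikit-learn 1.0+ for get_feature_names_out):

# Minimal sketch (not the example's code): fit NMF on tf-idf features and
# LDA on raw counts, then print the top terms of each topic.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation

corpus = [
    "the cat sat on the mat",
    "dogs and cats make friendly pets",
    "stock markets fell sharply on monday",
    "investors sold shares as markets dropped",
]

# NMF is typically fit on tf-idf features (Frobenius norm objective).
tfidf = TfidfVectorizer(stop_words="english")
X_tfidf = tfidf.fit_transform(corpus)
nmf = NMF(n_components=2, random_state=0).fit(X_tfidf)

# LDA expects raw term counts.
counts = CountVectorizer(stop_words="english")
X_counts = counts.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_counts)

def print_top_terms(model, feature_names, n_top=3):
    # Each row of components_ is one topic; show its highest-weighted terms.
    for k, topic in enumerate(model.components_):
        top = [feature_names[i] for i in topic.argsort()[::-1][:n_top]]
        print(f"topic {k}: {', '.join(top)}")

print_top_terms(nmf, tfidf.get_feature_names_out())
print_top_terms(lda, counts.get_feature_names_out())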

dev/_downloads/3daf4e9ab9d86061e19a11d997a09779/plot_tomography_l1_reconstruction.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n======================================================================\nCompressive sensing: tomography reconstruction with L1 prior (Lasso)\n======================================================================\n\nThis example shows the reconstruction of an image from a set of parallel\nprojections, acquired along different angles. Such a dataset is acquired in\n**computed tomography** (CT).\n\nWithout any prior information on the sample, the number of projections\nrequired to reconstruct the image is of the order of the linear size\n``l`` of the image (in pixels). For simplicity we consider here a sparse\nimage, where only pixels on the boundary of objects have a non-zero\nvalue. Such data could correspond for example to a cellular material.\nNote however that most images are sparse in a different basis, such as\nthe Haar wavelets. Only ``l/7`` projections are acquired, therefore it is\nnecessary to use prior information available on the sample (its\nsparsity): this is an example of **compressive sensing**.\n\nThe tomography projection operation is a linear transformation. In\naddition to the data-fidelity term corresponding to a linear regression,\nwe penalize the L1 norm of the image to account for its sparsity. The\nresulting optimization problem is called the `lasso`. We use the\nclass :class:`sklearn.linear_model.Lasso`, that uses the coordinate descent\nalgorithm. Importantly, this implementation is more computationally efficient\non a sparse matrix, than the projection operator used here.\n\nThe reconstruction with L1 penalization gives a result with zero error\n(all pixels are successfully labeled with 0 or 1), even if noise was\nadded to the projections. In comparison, an L2 penalization\n(:class:`sklearn.linear_model.Ridge`) produces a large number of labeling\nerrors for the pixels. Important artifacts are observed on the\nreconstructed image, contrary to the L1 penalization. Note in particular\nthe circular artifact separating the pixels in the corners, that have\ncontributed to fewer projections than the central disk.\n"
+"\n======================================================================\nCompressive sensing: tomography reconstruction with L1 prior (Lasso)\n======================================================================\n\nThis example shows the reconstruction of an image from a set of parallel\nprojections, acquired along different angles. Such a dataset is acquired in\n**computed tomography** (CT).\n\nWithout any prior information on the sample, the number of projections\nrequired to reconstruct the image is of the order of the linear size\n``l`` of the image (in pixels). For simplicity we consider here a sparse\nimage, where only pixels on the boundary of objects have a non-zero\nvalue. Such data could correspond for example to a cellular material.\nNote however that most images are sparse in a different basis, such as\nthe Haar wavelets. Only ``l/7`` projections are acquired, therefore it is\nnecessary to use prior information available on the sample (its\nsparsity): this is an example of **compressive sensing**.\n\nThe tomography projection operation is a linear transformation. In\naddition to the data-fidelity term corresponding to a linear regression,\nwe penalize the L1 norm of the image to account for its sparsity. The\nresulting optimization problem is called the `lasso`. We use the\nclass :class:`~sklearn.linear_model.Lasso`, that uses the coordinate descent\nalgorithm. Importantly, this implementation is more computationally efficient\non a sparse matrix, than the projection operator used here.\n\nThe reconstruction with L1 penalization gives a result with zero error\n(all pixels are successfully labeled with 0 or 1), even if noise was\nadded to the projections. In comparison, an L2 penalization\n(:class:`~sklearn.linear_model.Ridge`) produces a large number of labeling\nerrors for the pixels. Important artifacts are observed on the\nreconstructed image, contrary to the L1 penalization. Note in particular\nthe circular artifact separating the pixels in the corners, that have\ncontributed to fewer projections than the central disk.\n"
 ]
 },
 {

dev/_downloads/69878e8e2864920aa874c5a68cecf1d3/plot_species_distribution_modeling.py

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
 mammals given past observations and 14 environmental
 variables. Since we have only positive examples (there are
 no unsuccessful observations), we cast this problem as a
-density estimation problem and use the :class:`sklearn.svm.OneClassSVM`
+density estimation problem and use the :class:`~sklearn.svm.OneClassSVM`
 as our modeling tool. The dataset is provided by Phillips et. al. (2006).
 If available, the example uses
 `basemap <https://matplotlib.org/basemap/>`_
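
The same one-character role change. As a side note, the positive-examples-only, density-estimation framing this docstring refers to can be sketched as follows on synthetic data; the 14-dimensional features, kernel, and parameters below are illustrative assumptions, not the species dataset or the example's settings:

# Minimal sketch (not the example's code): learn the support of a
# distribution from positive-only samples with OneClassSVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
# Stand-in for environmental feature vectors at observed presence sites:
# we only have positive examples, so we estimate where they concentrate.
presence = rng.normal(loc=0.0, scale=1.0, size=(200, 14))

clf = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
clf.fit(presence)

# Score candidate sites: positive decision values fall inside the estimated
# support (plausible habitat), negative values fall outside it.
candidates = rng.normal(loc=0.0, scale=3.0, size=(5, 14))
print(clf.decision_function(candidates))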

dev/_downloads/b26574ccf9c31e12ab2afd8d683f3279/plot_topics_extraction_with_nmf_lda.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation\n\n\nThis is an example of applying :class:`sklearn.decomposition.NMF` and\n:class:`sklearn.decomposition.LatentDirichletAllocation` on a corpus\nof documents and extract additive models of the topic structure of the\ncorpus. The output is a list of topics, each represented as a list of\nterms (weights are not shown).\n\nNon-negative Matrix Factorization is applied with two different objective\nfunctions: the Frobenius norm, and the generalized Kullback-Leibler divergence.\nThe latter is equivalent to Probabilistic Latent Semantic Indexing.\n\nThe default parameters (n_samples / n_features / n_components) should make\nthe example runnable in a couple of tens of seconds. You can try to\nincrease the dimensions of the problem, but be aware that the time\ncomplexity is polynomial in NMF. In LDA, the time complexity is\nproportional to (n_samples * iterations).\n"
+"\n# Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation\n\n\nThis is an example of applying :class:`~sklearn.decomposition.NMF` and\n:class:`~sklearn.decomposition.LatentDirichletAllocation` on a corpus\nof documents and extract additive models of the topic structure of the\ncorpus. The output is a list of topics, each represented as a list of\nterms (weights are not shown).\n\nNon-negative Matrix Factorization is applied with two different objective\nfunctions: the Frobenius norm, and the generalized Kullback-Leibler divergence.\nThe latter is equivalent to Probabilistic Latent Semantic Indexing.\n\nThe default parameters (n_samples / n_features / n_components) should make\nthe example runnable in a couple of tens of seconds. You can try to\nincrease the dimensions of the problem, but be aware that the time\ncomplexity is polynomial in NMF. In LDA, the time complexity is\nproportional to (n_samples * iterations).\n"
 ]
 },
 {

dev/_downloads/b5d1ec88ae06ced89813c50d00effe51/plot_species_distribution_modeling.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Species distribution modeling\n\n\nModeling species' geographic distributions is an important\nproblem in conservation biology. In this example we\nmodel the geographic distribution of two south american\nmammals given past observations and 14 environmental\nvariables. Since we have only positive examples (there are\nno unsuccessful observations), we cast this problem as a\ndensity estimation problem and use the :class:`sklearn.svm.OneClassSVM`\nas our modeling tool. The dataset is provided by Phillips et. al. (2006).\nIf available, the example uses\n`basemap <https://matplotlib.org/basemap/>`_\nto plot the coast lines and national boundaries of South America.\n\nThe two species are:\n\n - `\"Bradypus variegatus\"\n <http://www.iucnredlist.org/details/3038/0>`_ ,\n the Brown-throated Sloth.\n\n - `\"Microryzomys minutus\"\n <http://www.iucnredlist.org/details/13408/0>`_ ,\n also known as the Forest Small Rice Rat, a rodent that lives in Peru,\n Colombia, Ecuador, Peru, and Venezuela.\n\nReferences\n----------\n\n * `\"Maximum entropy modeling of species geographic distributions\"\n <http://rob.schapire.net/papers/ecolmod.pdf>`_\n S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,\n 190:231-259, 2006.\n"
+"\n# Species distribution modeling\n\n\nModeling species' geographic distributions is an important\nproblem in conservation biology. In this example we\nmodel the geographic distribution of two south american\nmammals given past observations and 14 environmental\nvariables. Since we have only positive examples (there are\nno unsuccessful observations), we cast this problem as a\ndensity estimation problem and use the :class:`~sklearn.svm.OneClassSVM`\nas our modeling tool. The dataset is provided by Phillips et. al. (2006).\nIf available, the example uses\n`basemap <https://matplotlib.org/basemap/>`_\nto plot the coast lines and national boundaries of South America.\n\nThe two species are:\n\n - `\"Bradypus variegatus\"\n <http://www.iucnredlist.org/details/3038/0>`_ ,\n the Brown-throated Sloth.\n\n - `\"Microryzomys minutus\"\n <http://www.iucnredlist.org/details/13408/0>`_ ,\n also known as the Forest Small Rice Rat, a rodent that lives in Peru,\n Colombia, Ecuador, Peru, and Venezuela.\n\nReferences\n----------\n\n * `\"Maximum entropy modeling of species geographic distributions\"\n <http://rob.schapire.net/papers/ecolmod.pdf>`_\n S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,\n 190:231-259, 2006.\n"
 ]
 },
 {

dev/_downloads/c0cf10731954dbd148230cf322eb6fd7/plot_tomography_l1_reconstruction.py

Lines changed: 2 additions & 2 deletions
@@ -21,14 +21,14 @@
 addition to the data-fidelity term corresponding to a linear regression,
 we penalize the L1 norm of the image to account for its sparsity. The
 resulting optimization problem is called the :ref:`lasso`. We use the
-class :class:`sklearn.linear_model.Lasso`, that uses the coordinate descent
+class :class:`~sklearn.linear_model.Lasso`, that uses the coordinate descent
 algorithm. Importantly, this implementation is more computationally efficient
 on a sparse matrix, than the projection operator used here.

 The reconstruction with L1 penalization gives a result with zero error
 (all pixels are successfully labeled with 0 or 1), even if noise was
 added to the projections. In comparison, an L2 penalization
-(:class:`sklearn.linear_model.Ridge`) produces a large number of labeling
+(:class:`~sklearn.linear_model.Ridge`) produces a large number of labeling
 errors for the pixels. Important artifacts are observed on the
 reconstructed image, contrary to the L1 penalization. Note in particular
 the circular artifact separating the pixels in the corners, that have
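
Again only the two role prefixes change. To make the L1-versus-L2 comparison in this docstring concrete without rebuilding the tomography projection operator, here is a small synthetic sparse-recovery sketch; the measurement matrix, sparsity level, and regularization strengths are arbitrary choices rather than the example's:

# Minimal sketch (not the example's code): recover a sparse signal from few
# noisy linear measurements with Lasso (L1) and compare against Ridge (L2).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
n_features, n_measurements = 200, 60

# Sparse ground truth: only a handful of non-zero coefficients.
w_true = np.zeros(n_features)
support = rng.choice(n_features, size=10, replace=False)
w_true[support] = rng.randn(10)

A = rng.randn(n_measurements, n_features)           # linear measurement operator
y = A @ w_true + 0.01 * rng.randn(n_measurements)   # noisy "projections"

lasso = Lasso(alpha=0.005, max_iter=10000).fit(A, y)  # L1 penalty, coordinate descent
ridge = Ridge(alpha=1.0).fit(A, y)                     # L2 penalty

print("non-zeros kept by Lasso:", int(np.sum(lasso.coef_ != 0)))
print("Lasso reconstruction error:", np.linalg.norm(lasso.coef_ - w_true))
print("Ridge reconstruction error:", np.linalg.norm(ridge.coef_ - w_true))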
dev/_downloads/scikit-learn-docs.pdf

34.3 KB (binary file, not shown)

dev/_images/iris.png

0 Bytes (binary file, not shown)
