
Commit 7d837de

Pushing the docs to dev/ for branch: master, commit 890e65289877db32c0c440cadaa2b2ef2ef7a384
1 parent: c8106be

969 files changed: +2985, -2979 lines


dev/.buildinfo

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 1399f4416aa025e3557a0b67b941a0dc
+config: 2f3607c03aecd81fb813dba1dd2f3017
 tags: 645f666f9bcd5a90fca523b33c5a78b7
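This config line is the mechanism the hunk touches: Sphinx records a digest of the build configuration in .buildinfo and, when the stored digest no longer matches, performs a full rebuild instead of an incremental one. A minimal sketch of that kind of check, assuming an MD5 digest over a serialized config dict (the hashing details are an assumption, not Sphinx's actual implementation):

    import hashlib

    def config_digest(config):
        # Hash a stable, order-independent serialization of the config.
        serialized = repr(sorted(config.items())).encode("utf-8")
        return hashlib.md5(serialized).hexdigest()

    stored = "1399f4416aa025e3557a0b67b941a0dc"              # digest from the old .buildinfo
    current = config_digest({"html_theme": "scikit-learn"})  # hypothetical config
    if current != stored:
        print("configuration changed -> full rebuild")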
(two binary files changed, 24 bytes each: not shown)

dev/_downloads/plot_topics_extraction_with_nmf_lda.ipynb

Lines changed: 2 additions & 2 deletions
@@ -15,7 +15,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n# Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation\n\n\nThis is an example of applying :class:`sklearn.decomposition.NMF` and\n:class:`sklearn.decomposition.LatentDirichletAllocation` on a corpus\nof documents and extract additive models of the topic structure of the\ncorpus. The output is a list of topics, each represented as a list of\nterms (weights are not shown).\n\nNon-negative Matrix Factorization is applied with two different objective\nfunctions: the Frobenius norm, and the generalized Kullback-Leibler divergence.\nThe latter is equivalent to Probabilistic Latent Semantic Indexing.\n\nThe default parameters (n_samples / n_features / n_topics) should make\nthe example runnable in a couple of tens of seconds. You can try to\nincrease the dimensions of the problem, but be aware that the time\ncomplexity is polynomial in NMF. In LDA, the time complexity is\nproportional to (n_samples * iterations).\n\n\n"
+"\n# Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation\n\n\nThis is an example of applying :class:`sklearn.decomposition.NMF` and\n:class:`sklearn.decomposition.LatentDirichletAllocation` on a corpus\nof documents and extract additive models of the topic structure of the\ncorpus. The output is a list of topics, each represented as a list of\nterms (weights are not shown).\n\nNon-negative Matrix Factorization is applied with two different objective\nfunctions: the Frobenius norm, and the generalized Kullback-Leibler divergence.\nThe latter is equivalent to Probabilistic Latent Semantic Indexing.\n\nThe default parameters (n_samples / n_features / n_components) should make\nthe example runnable in a couple of tens of seconds. You can try to\nincrease the dimensions of the problem, but be aware that the time\ncomplexity is polynomial in NMF. In LDA, the time complexity is\nproportional to (n_samples * iterations).\n\n\n"
 ]
 },
 {
@@ -26,7 +26,7 @@
 },
 "outputs": [],
 "source": [
-"# Author: Olivier Grisel <[email protected]>\n# Lars Buitinck\n# Chyi-Kwei Yau <[email protected]>\n# License: BSD 3 clause\n\nfrom __future__ import print_function\nfrom time import time\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\nfrom sklearn.decomposition import NMF, LatentDirichletAllocation\nfrom sklearn.datasets import fetch_20newsgroups\n\nn_samples = 2000\nn_features = 1000\nn_topics = 10\nn_top_words = 20\n\n\ndef print_top_words(model, feature_names, n_top_words):\n    for topic_idx, topic in enumerate(model.components_):\n        message = \"Topic #%d: \" % topic_idx\n        message += \" \".join([feature_names[i]\n                             for i in topic.argsort()[:-n_top_words - 1:-1]])\n        print(message)\n    print()\n\n\n# Load the 20 newsgroups dataset and vectorize it. We use a few heuristics\n# to filter out useless terms early on: the posts are stripped of headers,\n# footers and quoted replies, and common English words, words occurring in\n# only one document or in at least 95% of the documents are removed.\n\nprint(\"Loading dataset...\")\nt0 = time()\ndataset = fetch_20newsgroups(shuffle=True, random_state=1,\n                             remove=('headers', 'footers', 'quotes'))\ndata_samples = dataset.data[:n_samples]\nprint(\"done in %0.3fs.\" % (time() - t0))\n\n# Use tf-idf features for NMF.\nprint(\"Extracting tf-idf features for NMF...\")\ntfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,\n                                   max_features=n_features,\n                                   stop_words='english')\nt0 = time()\ntfidf = tfidf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\n# Use tf (raw term count) features for LDA.\nprint(\"Extracting tf features for LDA...\")\ntf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,\n                                max_features=n_features,\n                                stop_words='english')\nt0 = time()\ntf = tf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\nprint()\n\n# Fit the NMF model\nprint(\"Fitting the NMF model (Frobenius norm) with tf-idf features, \"\n      \"n_samples=%d and n_features=%d...\"\n      % (n_samples, n_features))\nt0 = time()\nnmf = NMF(n_components=n_topics, random_state=1,\n          alpha=.1, l1_ratio=.5).fit(tfidf)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\nprint(\"\\nTopics in NMF model (Frobenius norm):\")\ntfidf_feature_names = tfidf_vectorizer.get_feature_names()\nprint_top_words(nmf, tfidf_feature_names, n_top_words)\n\n# Fit the NMF model\nprint(\"Fitting the NMF model (generalized Kullback-Leibler divergence) with \"\n      \"tf-idf features, n_samples=%d and n_features=%d...\"\n      % (n_samples, n_features))\nt0 = time()\nnmf = NMF(n_components=n_topics, random_state=1, beta_loss='kullback-leibler',\n          solver='mu', max_iter=1000, alpha=.1, l1_ratio=.5).fit(tfidf)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\nprint(\"\\nTopics in NMF model (generalized Kullback-Leibler divergence):\")\ntfidf_feature_names = tfidf_vectorizer.get_feature_names()\nprint_top_words(nmf, tfidf_feature_names, n_top_words)\n\nprint(\"Fitting LDA models with tf features, \"\n      \"n_samples=%d and n_features=%d...\"\n      % (n_samples, n_features))\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,\n                                learning_method='online',\n                                learning_offset=50.,\n                                random_state=0)\nt0 = time()\nlda.fit(tf)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\nprint(\"\\nTopics in LDA model:\")\ntf_feature_names = tf_vectorizer.get_feature_names()\nprint_top_words(lda, tf_feature_names, n_top_words)"
+"# Author: Olivier Grisel <[email protected]>\n# Lars Buitinck\n# Chyi-Kwei Yau <[email protected]>\n# License: BSD 3 clause\n\nfrom __future__ import print_function\nfrom time import time\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\nfrom sklearn.decomposition import NMF, LatentDirichletAllocation\nfrom sklearn.datasets import fetch_20newsgroups\n\nn_samples = 2000\nn_features = 1000\nn_components = 10\nn_top_words = 20\n\n\ndef print_top_words(model, feature_names, n_top_words):\n    for topic_idx, topic in enumerate(model.components_):\n        message = \"Topic #%d: \" % topic_idx\n        message += \" \".join([feature_names[i]\n                             for i in topic.argsort()[:-n_top_words - 1:-1]])\n        print(message)\n    print()\n\n\n# Load the 20 newsgroups dataset and vectorize it. We use a few heuristics\n# to filter out useless terms early on: the posts are stripped of headers,\n# footers and quoted replies, and common English words, words occurring in\n# only one document or in at least 95% of the documents are removed.\n\nprint(\"Loading dataset...\")\nt0 = time()\ndataset = fetch_20newsgroups(shuffle=True, random_state=1,\n                             remove=('headers', 'footers', 'quotes'))\ndata_samples = dataset.data[:n_samples]\nprint(\"done in %0.3fs.\" % (time() - t0))\n\n# Use tf-idf features for NMF.\nprint(\"Extracting tf-idf features for NMF...\")\ntfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,\n                                   max_features=n_features,\n                                   stop_words='english')\nt0 = time()\ntfidf = tfidf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\n# Use tf (raw term count) features for LDA.\nprint(\"Extracting tf features for LDA...\")\ntf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,\n                                max_features=n_features,\n                                stop_words='english')\nt0 = time()\ntf = tf_vectorizer.fit_transform(data_samples)\nprint(\"done in %0.3fs.\" % (time() - t0))\nprint()\n\n# Fit the NMF model\nprint(\"Fitting the NMF model (Frobenius norm) with tf-idf features, \"\n      \"n_samples=%d and n_features=%d...\"\n      % (n_samples, n_features))\nt0 = time()\nnmf = NMF(n_components=n_components, random_state=1,\n          alpha=.1, l1_ratio=.5).fit(tfidf)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\nprint(\"\\nTopics in NMF model (Frobenius norm):\")\ntfidf_feature_names = tfidf_vectorizer.get_feature_names()\nprint_top_words(nmf, tfidf_feature_names, n_top_words)\n\n# Fit the NMF model\nprint(\"Fitting the NMF model (generalized Kullback-Leibler divergence) with \"\n      \"tf-idf features, n_samples=%d and n_features=%d...\"\n      % (n_samples, n_features))\nt0 = time()\nnmf = NMF(n_components=n_components, random_state=1, beta_loss='kullback-leibler',\n          solver='mu', max_iter=1000, alpha=.1, l1_ratio=.5).fit(tfidf)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\nprint(\"\\nTopics in NMF model (generalized Kullback-Leibler divergence):\")\ntfidf_feature_names = tfidf_vectorizer.get_feature_names()\nprint_top_words(nmf, tfidf_feature_names, n_top_words)\n\nprint(\"Fitting LDA models with tf features, \"\n      \"n_samples=%d and n_features=%d...\"\n      % (n_samples, n_features))\nlda = LatentDirichletAllocation(n_components=n_components, max_iter=5,\n                                learning_method='online',\n                                learning_offset=50.,\n                                random_state=0)\nt0 = time()\nlda.fit(tf)\nprint(\"done in %0.3fs.\" % (time() - t0))\n\nprint(\"\\nTopics in LDA model:\")\ntf_feature_names = tf_vectorizer.get_feature_names()\nprint_top_words(lda, tf_feature_names, n_top_words)"
 ]
 }
 ],
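The substantive change in this notebook (and in the .py script below) is the rename of the example's n_topics variable to n_components. For LatentDirichletAllocation this tracks an API change: the n_topics constructor argument was deprecated in favor of n_components in scikit-learn 0.19, so the updated example both renames its local variable and passes the new keyword. A minimal sketch of the two spellings:

    from sklearn.decomposition import LatentDirichletAllocation

    # New spelling, as used by the updated example:
    lda = LatentDirichletAllocation(n_components=10, max_iter=5,
                                    learning_method='online',
                                    learning_offset=50., random_state=0)

    # Old spelling: warns while n_topics is merely deprecated,
    # and raises once the parameter is removed entirely.
    # lda = LatentDirichletAllocation(n_topics=10)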

dev/_downloads/plot_topics_extraction_with_nmf_lda.py

Lines changed: 5 additions & 5 deletions
@@ -14,7 +14,7 @@
 functions: the Frobenius norm, and the generalized Kullback-Leibler divergence.
 The latter is equivalent to Probabilistic Latent Semantic Indexing.
 
-The default parameters (n_samples / n_features / n_topics) should make
+The default parameters (n_samples / n_features / n_components) should make
 the example runnable in a couple of tens of seconds. You can try to
 increase the dimensions of the problem, but be aware that the time
 complexity is polynomial in NMF. In LDA, the time complexity is
@@ -36,7 +36,7 @@
 
 n_samples = 2000
 n_features = 1000
-n_topics = 10
+n_components = 10
 n_top_words = 20
 
 
@@ -85,7 +85,7 @@ def print_top_words(model, feature_names, n_top_words):
       "n_samples=%d and n_features=%d..."
       % (n_samples, n_features))
 t0 = time()
-nmf = NMF(n_components=n_topics, random_state=1,
+nmf = NMF(n_components=n_components, random_state=1,
           alpha=.1, l1_ratio=.5).fit(tfidf)
 print("done in %0.3fs." % (time() - t0))
 
@@ -98,7 +98,7 @@ def print_top_words(model, feature_names, n_top_words):
       "tf-idf features, n_samples=%d and n_features=%d..."
       % (n_samples, n_features))
 t0 = time()
-nmf = NMF(n_components=n_topics, random_state=1, beta_loss='kullback-leibler',
+nmf = NMF(n_components=n_components, random_state=1, beta_loss='kullback-leibler',
           solver='mu', max_iter=1000, alpha=.1, l1_ratio=.5).fit(tfidf)
 print("done in %0.3fs." % (time() - t0))
 
@@ -109,7 +109,7 @@ def print_top_words(model, feature_names, n_top_words):
 print("Fitting LDA models with tf features, "
       "n_samples=%d and n_features=%d..."
       % (n_samples, n_features))
-lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
+lda = LatentDirichletAllocation(n_components=n_components, max_iter=5,
                                 learning_method='online',
                                 learning_offset=50.,
                                 random_state=0)
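One line in print_top_words that the hunk headers reference and that is easy to misread is the reversed slice topic.argsort()[:-n_top_words - 1:-1]. argsort sorts ascending, so the negative-step slice walks backwards from the end of the sorted index array, yielding the indices of the n_top_words largest weights in descending order. A small standalone check:

    import numpy as np

    topic = np.array([0.1, 0.9, 0.3, 0.7, 0.2])
    n_top_words = 3

    # argsort() gives [0, 4, 2, 3, 1] (ascending by weight); the slice
    # reverses the tail, keeping the last n_top_words entries.
    top = topic.argsort()[:-n_top_words - 1:-1]
    print(top)         # [1 3 2]
    print(topic[top])  # [0.9 0.7 0.3]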

dev/_downloads/scikit-learn-docs.pdf

-5.48 KB
Binary file not shown.
