Commit 7679268

Pushing the docs to dev/ for branch: master, commit b2723582f206bb1f0d6847a478ddc5295a1ca8b1

1 parent b045706 commit 7679268
File tree

1,014 files changed

+3046
-3046
lines changed


dev/_downloads/scikit-learn-docs.pdf

-1.2 KB (binary file not shown)

dev/_downloads/wikipedia_principal_eigenvector.ipynb

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-      "\n# Wikipedia principal eigenvector\n\n\nA classical way to assert the relative importance of vertices in a\ngraph is to compute the principal eigenvector of the adjacency matrix\nso as to assign to each vertex the values of the components of the first\neigenvector as a centrality score:\n\n https://en.wikipedia.org/wiki/Eigenvector_centrality\n\nOn the graph of webpages and links those values are called the PageRank\nscores by Google.\n\nThe goal of this example is to analyze the graph of links inside\nwikipedia articles to rank articles by relative importance according to\nthis eigenvector centrality.\n\nThe traditional way to compute the principal eigenvector is to use the\npower iteration method:\n\n https://en.wikipedia.org/wiki/Power_iteration\n\nHere the computation is achieved thanks to Martinsson's Randomized SVD\nalgorithm implemented in the scikit.\n\nThe graph data is fetched from the DBpedia dumps. DBpedia is an extraction\nof the latent structured data of the Wikipedia content.\n\n"
+      "\n# Wikipedia principal eigenvector\n\n\nA classical way to assert the relative importance of vertices in a\ngraph is to compute the principal eigenvector of the adjacency matrix\nso as to assign to each vertex the values of the components of the first\neigenvector as a centrality score:\n\n https://en.wikipedia.org/wiki/Eigenvector_centrality\n\nOn the graph of webpages and links those values are called the PageRank\nscores by Google.\n\nThe goal of this example is to analyze the graph of links inside\nwikipedia articles to rank articles by relative importance according to\nthis eigenvector centrality.\n\nThe traditional way to compute the principal eigenvector is to use the\npower iteration method:\n\n https://en.wikipedia.org/wiki/Power_iteration\n\nHere the computation is achieved thanks to Martinsson's Randomized SVD\nalgorithm implemented in scikit-learn.\n\nThe graph data is fetched from the DBpedia dumps. DBpedia is an extraction\nof the latent structured data of the Wikipedia content.\n\n"
     ]
    },
    {
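The docstring being corrected notes that the computation uses Martinsson's randomized SVD algorithm as implemented in scikit-learn. As an illustrative sketch only (not part of this commit; the 4-vertex adjacency matrix is made up, not the DBpedia graph), `sklearn.utils.extmath.randomized_svd` can recover the centrality scores, since for a symmetric adjacency matrix of a non-bipartite connected graph the leading left singular vector coincides with the principal eigenvector up to sign:

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

# Hypothetical 4-vertex undirected graph (not the DBpedia data):
# vertex 0 is connected to every other vertex.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

# One component of the randomized SVD; the leading left singular
# vector gives the eigenvector centrality scores up to sign.
U, s, Vt = randomized_svd(A, n_components=1, n_iter=10, random_state=0)
centrality = np.abs(U[:, 0])
print(centrality.argmax())  # vertex 0, the highest-degree vertex
```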

dev/_downloads/wikipedia_principal_eigenvector.py

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@
     https://en.wikipedia.org/wiki/Power_iteration
 
 Here the computation is achieved thanks to Martinsson's Randomized SVD
-algorithm implemented in the scikit.
+algorithm implemented in scikit-learn.
 
 The graph data is fetched from the DBpedia dumps. DBpedia is an extraction
 of the latent structured data of the Wikipedia content.
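The same docstring points to the power iteration method as the traditional way to compute the principal eigenvector. A minimal NumPy sketch of that method, on a hypothetical 4-vertex graph (not the DBpedia data used by the example):

```python
import numpy as np

def power_iteration(A, max_iter=100, tol=1e-10):
    """Approximate the principal eigenvector of A by repeated
    matrix-vector products, renormalizing at each step."""
    rng = np.random.default_rng(0)
    v = rng.random(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

# Adjacency matrix of a tiny undirected graph: vertex 0 is connected
# to everyone else, so it should receive the highest centrality score.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

scores = power_iteration(A)
print(scores.argmax())  # vertex 0, the highest-degree vertex
```

For a connected graph the adjacency matrix has a dominant positive eigenvalue (Perron-Frobenius), so the iteration converges to its eigenvector; the randomized SVD route used by the example trades this simple loop for faster convergence on large sparse matrices.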
