
Commit 084b756

Write README for K-Means Algorithm
1 parent 7d370e1 commit 084b756

File tree

5 files changed (+85, -2 lines)


K-Means/Images/k_means_bad1.png (15.9 KB)

K-Means/Images/k_means_bad2.png (15.5 KB)

K-Means/Images/k_means_good.png (15.6 KB)

K-Means/README.md

Lines changed: 81 additions & 1 deletion
@@ -1,3 +1,83 @@
# K-Means

Goal: Partition data into k clusters based on nearest means

The idea behind K-Means is to take data that has no formal classification and determine whether there are any natural clusters within it.

K-Means assumes that there are **k** centers within the data. The data closest to each of these *centroids* is then classified, or grouped, together. K-Means doesn't tell you what each group represents, but it helps you discover which clusters potentially exist.
## Algorithm

The k-means algorithm is really quite simple at its core:

1. Choose k random points to be the initial centers
2. Repeat the following two steps until the *centroids* reach convergence:
    1. Assign each point to its nearest *centroid*
    2. Update each *centroid* to the mean of the points assigned to it

Convergence is said to be reached when none of the *centroids* moves between two update steps.
This brings about a few of the parameters that are required for k-means:

- **k** - The number of *centroids* to attempt to locate
- **convergence distance** - The minimum total distance the centers are allowed to move during an update step before iteration stops
- **distance function** - A number of distance functions can be used, but most commonly the Euclidean distance function is adequate (a sketch follows this list); in higher dimensions, however, it can lead to convergence not being reached

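The repository's own distance helper isn't reproduced in this README, so purely as an illustration, here is a minimal sketch of what a Euclidean distance function could look like. It uses plain `[Double]` arrays as a stand-in for the `VectorND` type that appears in the listing below.

```swift
import Foundation

// Illustrative sketch only: Euclidean distance between two equal-length
// vectors, with [Double] standing in for the repository's VectorND type.
func euclidean(v1: [Double], v2: [Double]) -> Double {
    var sum = 0.0
    for i in 0..<v1.count {
        let diff = v1[i] - v2[i]
        sum += diff * diff      // accumulate squared component differences
    }
    return sqrt(sum)            // the square root of the sum is the distance
}
```
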
This is what the algorithm would look like in Swift:
```swift
func kMeans(numCenters: Int, convergeDist: Double, points: [VectorND]) -> [VectorND] {
    var centerMoveDist = 0.0
    let zeros = [Double](count: points[0].getLength(), repeatedValue: 0.0)

    // Pick the initial centroids by randomly sampling k of the input points
    var kCenters = reservoirSample(points, k: numCenters)

    repeat {
        var cnts = [Double](count: numCenters, repeatedValue: 0.0)
        var newCenters = [VectorND](count: numCenters, repeatedValue: VectorND(d: zeros))

        // Assign each point to its nearest centroid and accumulate the sums
        for p in points {
            let c = nearestCenter(p, Centers: kCenters)
            cnts[c]++
            newCenters[c] += p
        }

        // Each new centroid becomes the mean of the points assigned to it
        for idx in 0..<numCenters {
            newCenters[idx] /= cnts[idx]
        }

        // Measure how far the centroids moved during this iteration
        centerMoveDist = 0.0
        for idx in 0..<numCenters {
            centerMoveDist += euclidean(kCenters[idx], v2: newCenters[idx])
        }

        kCenters = newCenters
    } while centerMoveDist > convergeDist

    return kCenters
}
```
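
The listing relies on a few helpers from the repository that aren't reproduced in this README: `reservoirSample`, `nearestCenter`, `euclidean`, and the `VectorND` type. As a rough, hypothetical illustration of one of those missing pieces, `nearestCenter` could look something like the sketch below, written in the same pre-Swift-3 style and again using `[Double]` in place of `VectorND`.

```swift
// Illustrative sketch only: return the index of the centroid closest to p,
// reusing the euclidean function sketched earlier in this README.
func nearestCenter(p: [Double], Centers: [[Double]]) -> Int {
    var nearestIndex = 0
    var nearestDist = Double.infinity
    for idx in 0..<Centers.count {
        let dist = euclidean(p, v2: Centers[idx])
        if dist < nearestDist {
            nearestDist = dist      // remember the closest centroid seen so far
            nearestIndex = idx
        }
    }
    return nearestIndex
}
```

With those pieces in place, a hypothetical call such as `kMeans(3, convergeDist: 0.01, points: points)` would return the three estimated centroid vectors for `points`.
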
## Example

These examples are contrived to show the exact nature of K-Means and how it finds clusters. The clusters are very easily identified by the human eye: there is one in the lower left corner, one in the upper right corner, and maybe one in the middle.

In all of these examples the stars represent the *centroids* and the squares are the points.

#### Good clustering

This first example shows K-Means finding all three clusters:

![Good Clustering](Images/k_means_good.png)

The selection of initial centroids found the lower left cluster (indicated by red) and did pretty well on the center and upper left clusters.

#### Bad clustering

The next two examples highlight the unpredictability of k-Means and how it does not always find the best clustering.

![Bad Clustering 1](Images/k_means_bad1.png)

As you can see, in this one the initial *centroids* were all a little too close together, and the 'blue' centroid didn't quite get to a good place. Adjusting the convergence distance should produce a better result.

![Bad Clustering 2](Images/k_means_bad2.png)

In this one the blue cluster never really managed to separate from the red cluster, and as such it got stuck in the lower region.
## Performance

The first thing to recognize is that finding an optimal k-Means clustering is an NP-hard problem. The selection of the initial *centroids* has a big effect on how the resulting clusters end up, which means that finding an exact solution is not likely - even in 2-dimensional space.

As seen from the steps above, the complexity of a single run really isn't that bad: it is often considered to be on the order of O(kndi), where **k** is the number of *centroids*, **n** is the number of **d**-dimensional vectors, and **i** is the number of iterations until convergence.
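
As a made-up illustration of that bound: clustering n = 10,000 points in d = 2 dimensions into k = 3 clusters, converging after i = 20 iterations, costs on the order of 3 × 10,000 × 2 × 20 = 1,200,000 basic distance operations, which is cheap on modern hardware, but each factor scales the cost linearly.
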
The amount of data has a linear effect on the running time of k-means, but tuning how far you want the *centroids* to converge can have a big impact on how many iterations are done. As a rule, **k** should be relatively small compared to the number of vectors.

Oftentimes, as more data is added, certain points may lie on the boundary between two *centroids*, causing those centroids to bounce back and forth; the **convergence distance** then needs to be tuned to prevent that.

## See Also

See also [Wikipedia](https://en.wikipedia.org/wiki/K-means_clustering)

*Written by John Gill*

README.markdown

Lines changed: 4 additions & 1 deletion
@@ -95,13 +95,16 @@ Bad sorting algorithms (don't use these!):
### Machine learning

##### Supervised learning
- k-Nearest Neighbors
- Linear Regression
- Logistic Regression
- Neural Networks
- PageRank

##### Clustering
- [k-Means](K-Means/). Unsupervised classifier that partitions data into k clusters.

## Data structures

The choice of data structure for a particular task depends on a few things.
