
Commit 3d03374

Author: Jake Romer (committed)
URL encode links in Markdown files
There are a ton of broken Markdown links in the READMEs throughout this repo because the files and directories they refer to have spaces in their names, but those spaces aren't URL-encoded in the links. This commit simply replaces the spaces with %20 to fix them.
1 parent ef1c33b commit 3d03374

File tree: 21 files changed (+49, -49 lines changed)

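The fix itself is mechanical: every space inside a Markdown link target becomes `%20`. As a rough illustration only (this script is not part of the commit), the sketch below shows one way such a pass could be automated in Swift; the regular expression, the `.markdown` filter, and the in-place rewrite are all assumptions.

```swift
import Foundation

// Sketch only: the pattern and traversal below are assumptions, not the tooling
// actually used for this commit.
let repoRoot = "."                      // assumed: run from the repository root
let fileManager = FileManager.default

// Matches an inline Markdown link (or image) and captures its target,
// e.g. "../Binary Search Tree/". A pattern this naive would also touch
// link-like text inside code blocks.
let linkTarget = try! NSRegularExpression(pattern: "\\]\\(([^)]+)\\)")

guard let files = fileManager.enumerator(atPath: repoRoot) else { exit(1) }

for case let path as String in files where path.hasSuffix(".markdown") {
    let fullPath = (repoRoot as NSString).appendingPathComponent(path)
    guard let original = try? String(contentsOfFile: fullPath, encoding: .utf8) else { continue }

    var result = ""
    var lastEnd = original.startIndex

    linkTarget.enumerateMatches(in: original,
                                range: NSRange(original.startIndex..., in: original)) { match, _, _ in
        guard let match = match,
              let whole = Range(match.range, in: original),
              let target = Range(match.range(at: 1), in: original) else { return }
        result += original[lastEnd..<target.lowerBound]                            // text before the target, plus "]("
        result += original[target].replacingOccurrences(of: " ", with: "%20")      // encode only the spaces in the target
        result += original[target.upperBound..<whole.upperBound]                   // the closing ")"
        lastEnd = whole.upperBound
    }
    result += original[lastEnd...]                                                 // remainder of the file

    if result != original {
        try? result.write(toFile: fullPath, atomically: true, encoding: .utf8)
    }
}
```

Run from the repository root, a pass like this would produce substitutions of the kind shown in the per-file hunks below.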

AVL Tree/README.markdown

Lines changed: 4 additions & 4 deletions
@@ -1,14 +1,14 @@
 # AVL Tree

-An AVL tree is a self-balancing form of a [binary search tree](../Binary Search Tree/), in which the height of subtrees differ at most by only 1.
+An AVL tree is a self-balancing form of a [binary search tree](../Binary%20Search%20Tree/), in which the height of subtrees differ at most by only 1.

 A binary tree is *balanced* when its left and right subtrees contain roughly the same number of nodes. That is what makes searching the tree really fast. But if a binary search tree is unbalanced, searching can become really slow.

 This is an example of an unbalanced tree:

 ![Unbalanced tree](Images/Unbalanced.png)

-All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked List/). As a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.
+All the children are in the left branch and none are in the right. This is essentially the same as a [linked list](../Linked%20List/). As a result, searching takes **O(n)** time instead of the much faster **O(log n)** that you'd expect from a binary search tree.

 A balanced version of that tree would look like this:

@@ -78,14 +78,14 @@ Insertion never needs more than 2 rotations. Removal might require up to __log(n

 Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary Search Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes.

-> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary Search Tree/). It will make the rest of the AVL tree easier to understand.
+> **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary%20Search%20Tree/). It will make the rest of the AVL tree easier to understand.

 The interesting bits are in the `balance()` method which is called after inserting or deleting a node.

 ## See also

 [AVL tree on Wikipedia](https://en.wikipedia.org/wiki/AVL_tree)

-AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black Tree/) seems to be more popular.
+AVL tree was the first self-balancing binary tree. These days, the [red-black tree](../Red-Black%20Tree/) seems to be more popular.

 *Written for Swift Algorithm Club by Mike Taghavi and Matthijs Hollemans*

Big-O Notation.markdown

Lines changed: 1 addition & 1 deletion
@@ -19,6 +19,6 @@ Big-O | Name | Description

 Often you don't need math to figure out what the Big-O of an algorithm is but you can simply use your intuition. If your code uses a single loop that looks at all **n** elements of your input, the algorithm is **O(n)**. If the code has two nested loops, it is **O(n^2)**. Three nested loops gives **O(n^3)**, and so on.

-Note that Big-O notation is an estimate and is only really useful for large values of **n**. For example, the worst-case running time for the [insertion sort](Insertion Sort/) algorithm is **O(n^2)**. In theory that is worse than the running time for [merge sort](Merge Sort/), which is **O(n log n)**. But for small amounts of data, insertion sort is actually faster, especially if the array is partially sorted already!
+Note that Big-O notation is an estimate and is only really useful for large values of **n**. For example, the worst-case running time for the [insertion sort](Insertion%20Sort/) algorithm is **O(n^2)**. In theory that is worse than the running time for [merge sort](Merge Sort/), which is **O(n log n)**. But for small amounts of data, insertion sort is actually faster, especially if the array is partially sorted already!

 If you find this confusing, don't let this Big-O stuff bother you too much. It's mostly useful when comparing two algorithms to figure out which one is better. But in the end you still want to test in practice which one really is the best. And if the amount of data is relatively small, then even a slow algorithm will be fast enough for practical use.

Binary Search Tree/README.markdown

Lines changed: 5 additions & 5 deletions
@@ -1,6 +1,6 @@
 # Binary Search Tree (BST)

-A binary search tree is a special kind of [binary tree](../Binary Tree/) (a tree in which each node has at most two children) that performs insertions and deletions such that the tree is always sorted.
+A binary search tree is a special kind of [binary tree](../Binary%20Tree/) (a tree in which each node has at most two children) that performs insertions and deletions such that the tree is always sorted.

 If you don't know what a tree is or what it is for, then [read this first](../Tree/).

@@ -49,7 +49,7 @@ If we were looking for the value `5` in the example, it would go as follows:

 ![Searching the tree](Images/Searching.png)

-Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a well-balanced tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary Search) in an array.)
+Thanks to the structure of the tree, searching is really fast. It runs in **O(h)** time. If you have a well-balanced tree with a million nodes, it only takes about 20 steps to find anything in this tree. (The idea is very similar to [binary search](../Binary%20Search) in an array.)

 ## Traversing the tree

@@ -535,7 +535,7 @@ The code for `successor()` works the exact same way but mirrored:

 Both these methods run in **O(h)** time.

-> **Note:** There is a cool variation called a ["threaded" binary tree](../Threaded Binary Tree) where "unused" left and right pointers are repurposed to make direct links between predecessor and successor nodes. Very clever!
+> **Note:** There is a cool variation called a ["threaded" binary tree](../Threaded%20Binary%20Tree) where "unused" left and right pointers are repurposed to make direct links between predecessor and successor nodes. Very clever!

 ### Is the search tree valid?

@@ -713,11 +713,11 @@ The root node is in the middle; a dot means there is no child at that position.

 A binary search tree is *balanced* when its left and right subtrees contain roughly the same number of nodes. In that case, the height of the tree is *log(n)*, where *n* is the number of nodes. That's the ideal situation.

-However, if one branch is significantly longer than the other, searching becomes very slow. We end up checking way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked List/) than a binary search tree, with performance degrading to **O(n)**. Not good!
+However, if one branch is significantly longer than the other, searching becomes very slow. We end up checking way more values than we'd ideally have to. In the worst case, the height of the tree can become *n*. Such a tree acts more like a [linked list](../Linked%20List/) than a binary search tree, with performance degrading to **O(n)**. Not good!

 One way to make the binary search tree balanced is to insert the nodes in a totally random order. On average that should balance out the tree quite nicely. But it doesn't guarantee success, nor is it always practical.

-The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. See [AVL tree](../AVL Tree) and [red-black tree](../Red-Black Tree) for examples.
+The other solution is to use a *self-balancing* binary tree. This type of data structure adjusts the tree to keep it balanced after you insert or delete nodes. See [AVL tree](../AVL%20Tree) and [red-black tree](../Red-Black%20Tree) for examples.

 ## See also

Binary Search/README.markdown

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ let numbers = [11, 59, 3, 2, 53, 17, 31, 7, 19, 67, 47, 13, 37, 61, 29, 43, 5, 4
 numbers.indexOf(43) // returns 15
 ```

-The built-in `indexOf()` function performs a [linear search](../Linear Search/). In code that looks something like this:
+The built-in `indexOf()` function performs a [linear search](../Linear%20Search/). In code that looks something like this:

 ```swift
 func linearSearch<T: Equatable>(_ a: [T], _ key: T) -> Int? {

Binary Tree/README.markdown

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ The child nodes are usually called the *left* child and the *right* child. If a

 Often nodes will have a link back to their parent but this is not strictly necessary.

-Binary trees are often used as [binary search trees](../Binary Search Tree/). In that case, the nodes must be in a specific order (smaller values on the left, larger values on the right). But this is not a requirement for all binary trees.
+Binary trees are often used as [binary search trees](../Binary%20Search%20Tree/). In that case, the nodes must be in a specific order (smaller values on the left, larger values on the right). But this is not a requirement for all binary trees.

 For example, here is a binary tree that represents a sequence of arithmetical operations, `(5 * (a - 10)) + (-4 * (3 / b))`:

Bloom Filter/README.markdown

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ An advantage of the Bloom Filter over a hash table is that the former maintains

 ## Inserting objects into the set

-A Bloom Filter is essentially a fixed-length [bit vector](../Bit Set/), an array of bits. When we insert objects, we set some of these bits to `1`, and when we query for objects we check if certain bits are `0` or `1`. Both operations use hash functions.
+A Bloom Filter is essentially a fixed-length [bit vector](../Bit%20Set/), an array of bits. When we insert objects, we set some of these bits to `1`, and when we query for objects we check if certain bits are `0` or `1`. Both operations use hash functions.

 To insert an element in the filter, the element is hashed with several different hash functions. Each hash function returns a value that we map to an index in the array. We then set the bits at these indices to `1` or true.

Bounded Priority Queue/README.markdown

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Bounded Priority queue

-A bounded priority queue is similar to a regular [priority queue](../Priority Queue/), except that there is a fixed upper bound on the number of elements that can be stored. When a new element is added to the queue while the queue is at capacity, the element with the highest priority value is ejected from the queue.
+A bounded priority queue is similar to a regular [priority queue](../Priority%20Queue/), except that there is a fixed upper bound on the number of elements that can be stored. When a new element is added to the queue while the queue is at capacity, the element with the highest priority value is ejected from the queue.

 ## Example

Boyer-Moore/README.markdown

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ animals.indexOf(pattern: "🐮")

 > **Note:** The index of the cow is 6, not 3 as you might expect, because the string uses more storage per character for emoji. The actual value of the `String.Index` is not so important, just that it points at the right character in the string.

-The [brute-force approach](../Brute-Force String Search/) works OK, but it's not very efficient, especially on large chunks of text. As it turns out, you don't need to look at _every_ character from the source string -- you can often skip ahead multiple characters.
+The [brute-force approach](../Brute-Force%20String%20Search/) works OK, but it's not very efficient, especially on large chunks of text. As it turns out, you don't need to look at _every_ character from the source string -- you can often skip ahead multiple characters.

 The skip-ahead algorithm is called [Boyer-Moore](https://en.wikipedia.org/wiki/Boyer–Moore_string_search_algorithm) and it has been around for a long time. It is considered the benchmark for all string search algorithms.

Breadth-First Search/README.markdown

Lines changed: 2 additions & 2 deletions
@@ -148,7 +148,7 @@ This will output: `["a", "b", "c", "d", "e", "f", "g", "h"]`

 Breadth-first search can be used to solve many problems. A small selection:

-* Computing the [shortest path](../Shortest Path/) between a source node and each of the other nodes (only for unweighted graphs).
-* Calculating the [minimum spanning tree](../Minimum Spanning Tree (Unweighted)/) on an unweighted graph.
+* Computing the [shortest path](../Shortest%20Path/) between a source node and each of the other nodes (only for unweighted graphs).
+* Calculating the [minimum spanning tree](../Minimum%20Spanning%20Tree%20(Unweighted)/) on an unweighted graph.

 *Written by [Chris Pilcher](https://github.com/chris-pilcher) and Matthijs Hollemans*

Count Occurrences/README.markdown

Lines changed: 2 additions & 2 deletions
@@ -2,9 +2,9 @@

 Goal: Count how often a certain value appears in an array.

-The obvious way to do this is with a [linear search](../Linear Search/) from the beginning of the array until the end, keeping count of how often you come across the value. That is an **O(n)** algorithm.
+The obvious way to do this is with a [linear search](../Linear%20Search/) from the beginning of the array until the end, keeping count of how often you come across the value. That is an **O(n)** algorithm.

-However, if the array is sorted you can do it much faster, in **O(log n)** time, by using a modification of [binary search](../Binary Search/).
+However, if the array is sorted you can do it much faster, in **O(log n)** time, by using a modification of [binary search](../Binary%20Search/).

 Let's say we have the following array:
