<html>
<head>
<link href="style.css" rel="stylesheet" type="text/css"/>
<title>
Introduction to Algorithms, Lecture 2
</title>
</head>
<body>
<h1>Review of Big-O, Summation and Algebra
<a href="#note1">*</a>
</h1>
<h3>Why have different notations?</h3>
<p>
Why not just Big-Θ?
</p>
<p>
From the Khan Academy site:<a href="#note2">**</a><br>
<blockquote>
We use big-Θ notation to asymptotically bound the
growth of a running time to
within constant factors above and below.
Sometimes we want to bound from only
above. For example, although the worst-case
running time of binary search is Θ(lg n),
it would be incorrect to say that binary search runs in
Θ(lg n) time in all cases.
What if we find the target value upon the first
guess? Then it runs in Θ(1) time.
The running time of binary search is never worse than
Θ(lg n), but it's sometimes better.
It would be convenient to have a form of
asymptotic notation that means
"the running time grows at most this much, but
it could grow more slowly."
We use "big-O" notation for just such occasions.
</blockquote>
</p>
<h3>Highest-order term</h3>
<p>
More graphics on why we care only about the highest-order term:
</p>
<p>
<img
src="https://cdn.kastatic.org/ka-cs-algorithms/6n2_vs_100n%2B300.png"
height="300" width="450">
<br>
<br>
<img
src="https://cdn.kastatic.org/ka-cs-algorithms/0.6n2_vs_1000n%2B3000.png"
height="300" width="450">
</p>
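<p>
For instance, a quick numeric check in Python of the first comparison shows where
<em>6n<sup>2</sup></em> overtakes <em>100n + 300</em> (at <em>n = 20</em>) and then pulls away:
</p>
<pre>
# Compare the two running times from the first plot at a few input sizes.
for n in [1, 10, 20, 100, 1000]:
    print(n, 6 * n**2, 100 * n + 300)
# At n = 1000: 6,000,000 vs. 100,300 -- the highest-order term dominates.
</pre>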
<h3>Meaning of constants in O, Θ and Ω expressions</h3>
<p>
Let's say we posit a search that takes <em>2n + 3</em> steps.<br>
We want to show that this has Θ(n) complexity.<br>
We must find <em>k<sub>1</sub></em>, <em>k<sub>2</sub></em>,
and <em>n<sub>0</sub></em>, so that for all <em>n >= n<sub>0</sub></em>:
<em>k<sub>1</sub> * n < 2n + 3 < k<sub>2</sub> * n</em>
<br>
One solution is <em>k<sub>1</sub> = 1</em>, <em>k<sub>2</sub> = 3</em>,
and <em>n<sub>0</sub> = 4</em>.
<br>
Let's demonstrate!
</p>
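<p>
As a demonstration, here is a quick Python spot check (not a proof!) that the
bounds hold from <em>n<sub>0</sub></em> onward:
</p>
<pre>
# Spot-check k1 * n < 2n + 3 < k2 * n for n = n0 .. n0 + 100.
k1, k2, n0 = 1, 3, 4
for n in range(n0, n0 + 101):
    assert k1 * n < 2 * n + 3 < k2 * n
print("Bounds hold for every n tested.")
</pre>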
<h2>Algorithmic design in action:</h2>
<h3>Fibonacci numbers</h3>
<p>
The problem of computing, given <em>n >= 0</em>,
the <em>n</em><sup>th</sup> Fibonacci number <em>F<sub>n</sub></em>.
</p>
<p>
The Fibonacci discussion is not in the textbook.
One place where it is presented nicely, in a way similar to what I will do in class,
is section 0.2 of the Dasgupta, Papadimitriou, Vazirani <em>Algorithms</em> book.
</p>
<p>
Talked about the obvious recursive algorithm and how it is very slow, intuitively
because it keeps recomputing the same subproblems.
Proved that the recursion tree grows at least as fast as the Fibonacci sequence itself.
</p>
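<p>
For concreteness, here is a minimal Python sketch of that obvious recursive
algorithm (the repository linked below has fuller versions):
</p>
<pre>
def fib(n):
    """Naive recursive Fibonacci: exponential time, because the calls
    for n - 1 and n - 2 keep recomputing the same subproblems."""
    if n < 2:       # base cases: F(0) = 0, F(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)
</pre>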
<h4>Sketch of Proof</h4>
<p>
Note that every leaf of the recursion tree returns 0 or 1, and the sum of the
values of all of these leaves is the Fibonacci number we return.
Therefore the tree must have at least <em>F<sub>n</sub></em> leaves, and so the
algorithm performs at least <em>F<sub>n</sub></em> operations.
</p>
<p>
Showed how to speed the recursive algorithm up by "memoization":
using a table to store answers to subproblems already computed.
Analyzed the running time and space of the memoized recursive version and of
the iterative algorithm that fills the table in the forward direction.
Pointed out how to reduce the space to constant.
</p>
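<p>
As a sketch of that last point: the iterative version only ever needs the two
most recent values, so the whole table can be replaced by two variables:
</p>
<pre>
def fib_iter(n):
    """Iterative Fibonacci: O(n) additions, O(1) space, keeping only
    the two most recent values instead of a full table."""
    a, b = 0, 1              # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b      # slide the window forward one position
    return a
</pre>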
<p>
<a href="https://github.com/gcallah/algorithms/blob/master/python/fibonacci.py">
Naive and faster Fibonacci code.</a>
</p>
<p>
By the way, this is an open-source project: you may contribute!
</p>
<p>
There is a closed-form formula (Binet's formula) for computing Fibonacci numbers:
<em>F<sub>n</sub> = (φ<sup>n</sup> - ψ<sup>n</sup>) / √5</em>,
where <em>φ = (1 + √5)/2</em> and <em>ψ = (1 - √5)/2</em>.
</p>
<p>
So why can't we claim we can compute the <em>n</em>th Fibonacci number in constant time?
We did not go into details, but this is related to our inability to represent
irrational numbers such as square roots exactly, to precision and rounding issues,
and, more generally, to which operations are and are not allowed in an algorithm.
</p>
<p>
Using some tricks, one can compute the <em>n</em>th Fibonacci number
in <em>O(log n)</em> arithmetic operations, where we count additions, subtractions,
and multiplications, and do not worry about the fact that the numbers eventually
get too large to manipulate in constant time.
</p>
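<p>
The notes do not name the tricks; one standard approach (an assumption here, not
necessarily the one meant in lecture) is "fast doubling", which computes
<em>F<sub>2k</sub></em> and <em>F<sub>2k+1</sub></em> from <em>F<sub>k</sub></em>
and <em>F<sub>k+1</sub></em>, halving <em>n</em> at each step:
</p>
<pre>
def fib_fast(n):
    """Fibonacci via fast doubling: O(log n) arithmetic operations, using
    F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2."""
    def pair(k):                    # returns (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)
        c = a * (2 * b - a)         # F(2m), where m = k // 2
        d = a * a + b * b           # F(2m + 1)
        return (c, d) if k % 2 == 0 else (d, c + d)
    return pair(n)[0]
</pre>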
<h2>Data structures</h2>
<p>
Data structures form a very important part of algorithmic research.
</p>
<p>
However, this course does not focus on data structures.
We will only devote a couple of lectures, total, to this subject.
(There are algorithms courses that spend their entire time on data structures.
We have an advanced data structures expert in the department, Prof. John Iacono.)
</p>
<ul><li>A quick reminder of basic data structures:
<ul>
<li>Arrays
<li>Linked lists
<br><img
src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/6d/Singly-linked-list.svg/408px-Singly-linked-list.svg.png">
<li>Doubly-linked lists
<br>
<img
src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Doubly-linked-list.svg/610px-Doubly-linked-list.svg.png">
<li>Stacks
<br>
<img
src="https://upload.wikimedia.org/wikipedia/commons/b/b4/Lifo_stack.png">
<li>Queues
<br>
<img
src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/52/Data_Queue.svg/405px-Data_Queue.svg.png">
<li>Double-ended queues
<li>ADT (<i>abstract data type</i>):
<br>the set of operations a data structure promises to provide
and the rules that they have to obey.
</ul>
<h3>
Detailed case study: Iterables as an ADT.
</h3>
<h4>
Basic operations.
</h4>
<ul>
<li>Get iterator.
<li>Get next value.
<li>Detect the end of the iteration.
</ul>
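<p>
In Python, for instance, these three operations map onto the iterator protocol:
<em>iter()</em> is "get iterator", <em>next()</em> is "get next value", and
<em>StopIteration</em> signals the end. A minimal array-backed sketch:
</p>
<pre>
class ArrayIterator:
    """A minimal array-backed iterator (a sketch of the basic operations)."""
    def __init__(self, items):
        self.items = items
        self.pos = 0

    def __iter__(self):                  # "get iterator"
        return self

    def __next__(self):                  # "get next value"
        if self.pos >= len(self.items):  # "detect end"
            raise StopIteration
        value = self.items[self.pos]
        self.pos += 1
        return value

for x in ArrayIterator([1, 2, 3]):
    print(x)                             # prints 1, 2, 3
</pre>
<p>
Note that once this iterator is exhausted, iterating it again yields nothing;
that is exactly the "rewind" question below.
</p>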
<h4>
Extended operations.
</h4>
<ul>
<li>
Rewind? We may not be able to, if iterating through the iterator exhausts it.
<li>Iterate in reverse?
<li>Access an arbitrary element? (By index, say.)
</ul>
<h4>
Implementations.
</h4>
<ul>
<li>Straightforward: an array.
<li>Complex:
<a href="https://github.com/gcallah/Indra/blob/master/indra/agent_pop.py">
A population that can be iterated in many ways.
</a>
</ul>
</ul>
<h2>Homework</h2>
<ul>
<li>
Write the recursive "memoized" version of Fibonacci that I did not write.
You may use pseudo-code or write real code in Python, Perl,
Java, C++... or suggest a language!
<li>Write a queue.
<li>What is the Big-Θ order of the naive Fibonacci calculation?
</ul>
<a name="note1">* Based on Prof. Boris Aronov's lecture notes. </a>
<br>
<a name="note2">** Material drawn from Khan Academy.</a>
</body>
</html>