Lecture 1 Notes
Adapted from Virginia Williams’ lecture notes. Additional credits: J. Su, W. Yang, Gregory
Valiant, Mary Wootters, Aviad Rubinstein.
Please direct all typos and mistakes to Nima Anari and Moses Charikar.
Introduction
1 Logistics
The class website is at https://cs161.stanford.edu. All course information is available
on the website.
One reason the study of algorithms is so much fun is that it requires both creativity and mathematical precision. It is both an art and a science, and hopefully at
least some of you will come to love this combination. One other reason it is so much
fun is that algorithmic surprises abound. Hopefully this class will make you re-think what
you thought was algorithmically possible, and cause you to constantly ask “is there a
better algorithm for this task?”. Part of the fun is that Algorithms is still a young area,
and there are still many mysteries, and many problems for which (we suspect that) we
still do not know the best algorithms. This is what makes research in Algorithms so fun
and exciting, and hopefully some of you will decide to continue in this direction.
For example, we could store the products of all pairs of n-digit numbers, and then just look up the
pair we need. This would make each multiplication a single table lookup; however, it also leads to exponential
storage costs. (For example, if n = 100, we would need to store a table of 10^(2n) = 10^200
products. Note that the number of atoms in the universe is only ≈ 10^80...) So we’ll have to
do something more clever.
Writing x = 10^(n/2)·a + b and y = 10^(n/2)·c + d, where a, b, c, d are (n/2)-digit numbers, we get
x·y = 10^n·(a·c) + 10^(n/2)·(a·d + b·c) + b·d. Now we can split this problem into four subproblems
(computing a·c, a·d, b·c, and b·d), where each subproblem is similar to the original problem,
but with half the digits. This gives rise to a recursive algorithm.
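To make the recursion concrete, here is a minimal Python sketch of this four-subproblem algorithm (the function name and the use of Python integers are our illustration, not part of the original notes; we assume n is a power of 2):

    def multiply(x, y, n):
        # Multiply x and y, each with at most n digits, where n is a power of 2.
        if n == 1:
            return x * y  # one basic 1-digit multiplication
        half = 10 ** (n // 2)
        a, b = divmod(x, half)  # x = a * 10^(n/2) + b
        c, d = divmod(y, half)  # y = c * 10^(n/2) + d
        # Four recursive subproblems, each on (n/2)-digit numbers:
        ac = multiply(a, c, n // 2)
        ad = multiply(a, d, n // 2)
        bc = multiply(b, c, n // 2)
        bd = multiply(b, d, n // 2)
        # Recombine: x*y = 10^n * ac + 10^(n/2) * (ad + bc) + bd.
        return ac * half * half + (ad + bc) * half + bd

For instance, multiply(1234, 5678, 4) returns 7006652 = 1234 · 5678.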
Interestingly enough, this algorithm isn’t actually better! Intuitively this is because if we
expand the recursion, we still have to multiply every pair of digits, just like we did before. But
in order to prove this, we need to formally define the runtime of an algorithm and
show that the runtimes of the two algorithms are essentially the same.
Since T(1) is the time it takes to multiply two 1-digit numbers, we see that the above suggestion does
not reduce the number of 1-digit operations.
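To spell out the step (the recurrence itself is implicit in the notes): if T(n) counts the 1-digit multiplications performed on n-digit inputs, the four-subproblem algorithm satisfies T(n) = 4·T(n/2). Assuming n = 2^s and unrolling,

    T(n) = 4·T(n/2) = 4^2·T(n/4) = ... = 4^s·T(1) = 4^(log2 n) = n^2,

which is exactly the number of 1-digit multiplications the grade-school method performs.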
Note: In the lecture slides, we’ll consider a slightly different argument, which analyzes a
recursion tree. It’s a good exercise to understand both arguments! Again, we’ll discuss both
techniques more in coming lectures.
Suppose instead that the algorithm makes only three recursive calls on (n/2)-digit numbers
(this is Karatsuba’s trick, illustrated below), so that T(n) = 3·T(n/2) when we count 1-digit
multiplications. Since we assumed n = 2^s, we have that T(n/2^s) = T(1) = 1, since multiplying two 1-digit
numbers counts as 1 basic operation. Hence T(n) = 3^s, where n = 2^s. Solving for s yields
s = log2(n), and hence we get

    T(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585.
We were pretty sloppy with the above argument in a lot of ways. However, we’ll see
a much more principled way of analyzing the runtime of recursive algorithms in the coming
classes, so we won’t sweat about it too much now. The point is that (even if you do it
correctly) the running time of this algorithm scales like n^1.6. This is much better than the n^2
algorithm that we learned in grade school!
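For completeness, here is a minimal Python sketch of the three-multiplication version (Karatsuba’s algorithm). The identity it relies on, a·d + b·c = (a + b)·(c + d) − a·c − b·d, is standard; the code itself is our illustration. We split on the decimal length of the smaller factor rather than passing n explicitly, which keeps the sketch correct even when a + b gains an extra digit:

    def karatsuba(x, y):
        # Base case: single-digit factors can be multiplied directly.
        if x < 10 or y < 10:
            return x * y
        # Split both numbers on half the digits of the smaller one.
        m = min(len(str(x)), len(str(y))) // 2
        half = 10 ** m
        a, b = divmod(x, half)  # x = a * 10^m + b
        c, d = divmod(y, half)  # y = c * 10^m + d
        ac = karatsuba(a, c)
        bd = karatsuba(b, d)
        # One extra multiplication recovers the middle term:
        # (a + b)(c + d) - ac - bd = ad + bc.
        middle = karatsuba(a + b, c + d) - ac - bd
        return ac * half * half + middle * half + bd

Because each call spawns only three half-size subproblems, the same unrolling as before gives T(n) = 3^s instead of 4^s, which is where the n^(log2 3) bound comes from.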
In 1971, Schönhage and Strassen gave an algorithm that runs in time O(n log(n) log log(n)). More than 35 years later, Fürer (2007)
developed an algorithm that ran in time n log(n)·2^(log*(n)). In case you are wondering what that
weird function log*(n) (read “log star”) is, it is the number of times you have to apply the
logarithm function log(·) iteratively to n in order to get down to something ≤ 1. For all values
of n less than the estimated number of atoms in the universe, the value of log*(n) (with
base 2) is at most 5. So log*(n) is a really, really, really slowly growing function of n.
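To make this concrete, here is a tiny Python sketch (the function name is our own) that computes log*(n) with base 2:

    from math import log2

    def log_star(n):
        # Count how many times log2 must be applied to n
        # before the value drops to 1 or below.
        count = 0
        while n > 1:
            n = log2(n)
            count += 1
        return count

For instance, log_star(10**80) returns 5, matching the claim above.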
Finally, Harvey and van der Hoeven (2019) gave an algorithm that runs in time O(n log n). This is
conjectured to be optimal. It is quite amazing that the seemingly simple (and old) question of
multiplying two numbers has proved to be so mysterious and has seen new research advances
as recently as 2019. This is what makes the study of algorithms so exciting!