Source code for all of the implementations in my video *One second to compute the largest Fibonacci number I can*. The video is a hard prerequisite to coming here and making fun of my code.
After cloning the repo, make sure to run

```sh
make init
```

to initialise the output folders before trying to build anything specific.
To compute a specific Fibonacci number using one of the given implementations, use

```sh
make bin/one_$(algo).O3.out
./bin/one_$(algo).O3.out
```
If you are only interested in the computation time, run

```sh
make clean-bin  # or just ensure that ./bin/one_$(algo) doesn't exist
make FLAGS="-DPERF" one_$(algo)
```
By default, the output will be in scientific notation (with an option to fully expand).
However, since Fibonacci numbers are encoded in a dyadic base, their expansion in decimal is achieved with a very lazily-implemented conversion.
If you would rather sidestep this conversion, build with the `CHECK` flag:

```sh
make clean-bin  # or just ensure that ./bin/one_$(algo) doesn't exist
make FLAGS="-DCHECK" one_$(algo)
```

Then, the outputs will be in hexadecimal, and fully expanded.
Note. The runtime generator (in particular, its attempt to find the maximum Fibonacci number computable by a given algorithm within 1 second) is highly nonscientific and inaccurate. I made no effort to ensure that the data from the video is perfectly replicable!
To generate the data for all algorithms, simply run

```sh
make all-data
```
Large numbers are encoded as base-$2^d$ vectors (i.e. little-endian vectors of $d$-bit digits), where the digit size $d$ depends on the algorithm:

Algorithm | Runtime | Digit size ($d$) |
---|---|---|
Naive | $O(\varphi^n)$ | |
"Linear" | $O(n^2)$ | |
Simple matrix multiplication | $O(n^2)$ | |
Fast exponentiation | $O(n^2)$ | |
Strassen matrix multiplication | $O(n^2)$ | |
Karatsuba multiplication | $O(n^{\log_2 3})$ | |
DFT | $O(n^2)$ | |
FFT | $O(n\log n)$ | |
Binet formula | $O(n\log n)$ | |
The "naïve" implementation is just the simple (non-memoised) recursive algorithm, and can be found in `impl/naive.cpp`.
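As a rough sketch (illustrative code, not the repo's: the real implementation works over big-integer digit vectors, while this uses `uint64_t`), the naive recursion looks like this:

```cpp
#include <cstdint>

// Doubly-recursive definition: exponentially many calls in n.
// (Sketch on uint64_t; the real code computes with arbitrary-precision integers.)
uint64_t fib_naive(unsigned n) {
    if (n < 2) return n;  // base cases F(0) = 0, F(1) = 1
    return fib_naive(n - 1) + fib_naive(n - 2);
}
```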
The "linear" implementation is the direct non-recursive implementation.
Of course, the algorithm is not actually linear: it performs only $n$ additions, but each addition involves numbers with $O(n)$ bits, so the runtime is $O(n^2)$.
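A minimal sketch of the idea on machine integers (illustrative only; the actual implementation uses arbitrary-precision digits):

```cpp
#include <cstdint>

// Bottom-up loop: n additions. With big integers, each addition costs
// O(n) bit operations, which is where the overall O(n^2) comes from.
uint64_t fib_linear(unsigned n) {
    uint64_t a = 0, b = 1;   // the pair (F(i), F(i+1)), starting at i = 0
    for (unsigned i = 0; i < n; ++i) {
        uint64_t next = a + b;
        a = b;               // slide the window forward by one
        b = next;
    }
    return a;                // F(n)
}
```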
Naïve implementation based on the identity

$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n} = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix},$$

computing the matrix power by successive multiplications.
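A sketch of the repeated-multiplication approach over `uint64_t` (the names `Mat`, `mul`, and `fib_matrix` are illustrative, not the repo's; the real implementation uses big-integer entries):

```cpp
#include <array>
#include <cstdint>

using Mat = std::array<uint64_t, 4>;   // row-major 2x2 matrix

Mat mul(const Mat& x, const Mat& y) {
    return { x[0]*y[0] + x[1]*y[2], x[0]*y[1] + x[1]*y[3],
             x[2]*y[0] + x[3]*y[2], x[2]*y[1] + x[3]*y[3] };
}

uint64_t fib_matrix(unsigned n) {
    Mat m = {1, 0, 0, 1};              // identity matrix
    const Mat q = {1, 1, 1, 0};        // the Fibonacci matrix
    for (unsigned i = 0; i < n; ++i)
        m = mul(m, q);                 // n successive multiplications
    return m[1];                       // off-diagonal entry is F(n)
}
```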
This implementation improves on the simpler variant above by using fast (binary) exponentiation: the matrix power is computed with $O(\log n)$ matrix multiplications by repeated squaring.
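The repeated-squaring trick, again sketched on machine integers rather than big integers (illustrative names, not the repo's code):

```cpp
#include <array>
#include <cstdint>

using Mat = std::array<uint64_t, 4>;   // row-major 2x2 matrix

Mat mul(const Mat& x, const Mat& y) {
    return { x[0]*y[0] + x[1]*y[2], x[0]*y[1] + x[1]*y[3],
             x[2]*y[0] + x[3]*y[2], x[2]*y[1] + x[3]*y[3] };
}

// Binary exponentiation: O(log n) matrix multiplications instead of n.
uint64_t fib_fastexp(unsigned n) {
    Mat result = {1, 0, 0, 1};         // identity
    Mat base = {1, 1, 1, 0};           // the Fibonacci matrix
    while (n) {
        if (n & 1) result = mul(result, base);  // fold in this bit's power
        base = mul(base, base);                 // square at every step
        n >>= 1;
    }
    return result[1];                  // off-diagonal entry is F(n)
}
```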
This implementation modifies the fast exponentiation algorithm to use Strassen's matrix multiplication algorithm, which reduces the number of integer multiplications per matrix product down from eight to seven.
This modification was not mentioned in the video, since it leads to minimal improvement over naïve matrix multiplication (it would be a different story if the matrices were larger than $2\times2$).
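For reference, Strassen's seven-product scheme for a $2\times2$ matrix product looks like this (a sketch with `int64_t` entries, since the intermediate differences can be negative; the repo's version multiplies big integers instead):

```cpp
#include <array>
#include <cstdint>

using Mat = std::array<int64_t, 4>;   // row-major 2x2; signed, because the
                                      // scheme forms differences of entries

// Strassen's scheme: 7 multiplications (m1..m7) instead of 8.
Mat strassen(const Mat& x, const Mat& y) {
    int64_t m1 = (x[0] + x[3]) * (y[0] + y[3]);
    int64_t m2 = (x[2] + x[3]) * y[0];
    int64_t m3 = x[0] * (y[1] - y[3]);
    int64_t m4 = x[3] * (y[2] - y[0]);
    int64_t m5 = (x[0] + x[1]) * y[3];
    int64_t m6 = (x[2] - x[0]) * (y[0] + y[1]);
    int64_t m7 = (x[1] - x[3]) * (y[2] + y[3]);
    return { m1 + m4 - m5 + m7, m3 + m5,
             m2 + m4,           m1 - m2 + m3 + m6 };
}
```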
This implementation enhances the fast exponentiation algorithm by replacing the naïve grade-school integer multiplication algorithm with Karatsuba's divide-and-conquer algorithm, which multiplies $n$-digit numbers in $O(n^{\log_2 3})$ time instead of $O(n^2)$.
Note, however, that my implementation doesn't lead to any noticeable results until the numbers involved become quite large.
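The core identity, shown here as a single level of Karatsuba splitting on 64-bit operands cut into 32-bit halves: three half-width multiplications replace four. (This sketch relies on the non-standard but widely supported `unsigned __int128`; the repo's version recurses on digit vectors instead.)

```cpp
#include <cstdint>

unsigned __int128 karatsuba64(uint64_t x, uint64_t y) {
    const uint64_t xl = x & 0xffffffffULL, xh = x >> 32;
    const uint64_t yl = y & 0xffffffffULL, yh = y >> 32;
    const unsigned __int128 lo = (unsigned __int128)xl * yl;  // low halves
    const unsigned __int128 hi = (unsigned __int128)xh * yh;  // high halves
    // (xl + xh)(yl + yh) - lo - hi = xl*yh + xh*yl: both cross terms
    // recovered with a single extra multiplication.
    const unsigned __int128 mid =
        (unsigned __int128)(xl + xh) * (yl + yh) - lo - hi;
    return (hi << 64) + (mid << 32) + lo;
}
```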
This implementation takes the fast exponentiation algorithm and replaces its grade-school integer multiplication with integer multiplication based on the discrete Fourier transform.
The transform is over the field of complex numbers, which are represented as pairs of `double`-precision floating-point numbers, so the algorithm's correctness is limited by floating-point precision.
(I was too lazy to implement custom-precision floats.)
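A toy version of DFT-based multiplication (base-10 digits and a quadratic-time DFT for clarity; the repo's version differs in digit size and layout):

```cpp
#include <cmath>
#include <complex>
#include <cstdint>
#include <vector>

using cd = std::complex<double>;

// Quadratic-time DFT (and inverse) by direct evaluation of the sums.
std::vector<cd> dft(const std::vector<cd>& a, bool inverse) {
    const std::size_t n = a.size();
    const double pi = std::acos(-1.0);
    const double sign = inverse ? 1.0 : -1.0;
    std::vector<cd> out(n);
    for (std::size_t k = 0; k < n; ++k) {
        cd s = 0;
        for (std::size_t j = 0; j < n; ++j)
            s += a[j] * std::polar(1.0, sign * 2.0 * pi * double(j * k) / double(n));
        out[k] = inverse ? s / double(n) : s;
    }
    return out;
}

// Multiply two little-endian base-10 digit vectors by pointwise-multiplying
// their DFTs, then rounding the inverse transform and propagating carries.
std::vector<uint32_t> mul_dft(const std::vector<uint32_t>& x,
                              const std::vector<uint32_t>& y) {
    const std::size_t n = x.size() + y.size();
    std::vector<cd> fx(n), fy(n);
    for (std::size_t i = 0; i < x.size(); ++i) fx[i] = double(x[i]);
    for (std::size_t i = 0; i < y.size(); ++i) fy[i] = double(y[i]);
    fx = dft(fx, false);
    fy = dft(fy, false);
    for (std::size_t i = 0; i < n; ++i) fx[i] *= fy[i];
    fx = dft(fx, true);
    std::vector<uint32_t> out(n);
    uint64_t carry = 0;
    for (std::size_t i = 0; i < n; ++i) {
        uint64_t v = carry + uint64_t(std::llround(fx[i].real()));
        out[i] = uint32_t(v % 10);
        carry = v / 10;
    }
    return out;   // little-endian digits; may include trailing zeros
}
```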
This improves the DFT algorithm with the Cooley-Tukey Fast Fourier Transform. Of course, this suffers from the same precision limitation.
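A minimal recursive radix-2 Cooley-Tukey FFT as a sketch (allocating temporaries at each level for clarity; the input length must be a power of two, and the inverse transform leaves the final division by $n$ to the caller):

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;

// Recursive radix-2 Cooley-Tukey FFT; a.size() must be a power of two.
void fft(std::vector<cd>& a, bool inverse) {
    const std::size_t n = a.size();
    if (n == 1) return;
    std::vector<cd> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fft(even, inverse);   // transform of the even-indexed samples
    fft(odd, inverse);    // transform of the odd-indexed samples
    const double pi = std::acos(-1.0);
    const double sign = inverse ? 1.0 : -1.0;
    for (std::size_t k = 0; k < n / 2; ++k) {
        cd t = std::polar(1.0, sign * 2.0 * pi * double(k) / double(n)) * odd[k];
        a[k] = even[k] + t;          // butterfly: combine the two halves
        a[k + n / 2] = even[k] - t;
    }
}
```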
Finally deviating from the matrix multiplication algorithms above, this algorithm is based on Binet's formula

$$F_n = \frac{\varphi^n - \psi^n}{\sqrt5}$$

(where $\varphi = \frac{1+\sqrt5}{2}$ and $\psi = \frac{1-\sqrt5}{2}$).
More precisely, we compute the coefficient of $\sqrt5$ in $\varphi^n$: since $\varphi^n = \frac{L_n + F_n\sqrt5}{2}$ (with $L_n$ the $n$-th Lucas number), exponentiating in $\mathbb{Z}[\sqrt5]$ (up to a power of two) recovers $F_n$ exactly.
Large integer multiplication is achieved with FFTs, so this algorithm suffers from the same precision limitation as the previous two.
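To illustrate the algebra (not the repo's code, which uses big integers and FFT multiplication), here is exact exponentiation in $\mathbb{Z}[\sqrt5]$ on machine words, using $(1+\sqrt5)^n = 2^{n-1}(L_n + F_n\sqrt5)$:

```cpp
#include <cstdint>

// Elements a + b*sqrt(5) of Z[sqrt(5)], on machine words.
// Overflows for moderately large n; illustration only.
struct Zr5 { uint64_t a, b; };

Zr5 mul(Zr5 x, Zr5 y) {
    // (a + b√5)(c + d√5) = (ac + 5bd) + (ad + bc)√5
    return { x.a * y.a + 5 * x.b * y.b, x.a * y.b + x.b * y.a };
}

// Since (1 + √5)^n = 2^(n-1) (L_n + F_n √5), the √5-coefficient of
// (1 + √5)^n, shifted down by n-1 bits, is exactly F_n.
uint64_t fib_binet(unsigned n) {
    if (n == 0) return 0;
    Zr5 result = {1, 0};               // the unit 1 + 0·√5
    Zr5 base = {1, 1};                 // 1 + √5
    for (unsigned e = n; e; e >>= 1) {
        if (e & 1) result = mul(result, base);
        base = mul(base, base);        // repeated squaring
    }
    return result.b >> (n - 1);
}
```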