Category: computer science

Posts

16 March Paul Hankin / computer science / mathematics

This post provides a quick derivation of the fast Fibonacci doubling formulas, using the correspondence between Fibonacci numbers and the number of ways to climb $n$ steps taking 1 or 2 steps at a time.

The Fibonacci numbers are a sequence $\mathrm{Fib}(i)$ defined by $\mathrm{Fib}(1)=\mathrm{Fib}(2)=1$ and $\mathrm{Fib}(n+2)=\mathrm{Fib}(n+1)+\mathrm{Fib}(n)$.

The Fibonacci doubling formulas are:

$$\begin{eqnarray} \mathrm{Fib}(2n) &=& 2\mathrm{Fib}(n)\mathrm{Fib}(n+1) - \mathrm{Fib}(n)^2 \\ \mathrm{Fib}(2n+1) &=& \mathrm{Fib}(n+1)^2 + \mathrm{Fib}(n)^2 \end{eqnarray}$$

These formulas can be used to efficiently compute Fibonacci numbers (see the end of the post for how). They are usually derived from a matrix power representation of Fibonacci numbers (or see one of my earlier posts for an alternative). This blog post gives a direct combinatorial derivation.
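As a concrete sketch of how (my own illustration, not code from the post): compute the pair $(\mathrm{Fib}(n), \mathrm{Fib}(n+1))$ recursively from the pair at $\lfloor n/2 \rfloor$ using the two formulas above, which takes only $O(\log n)$ arithmetic steps. The base case uses $\mathrm{Fib}(0)=0$, which the recurrence implies, and the function name fib_pair is mine.

def fib_pair(n):
    # Returns (Fib(n), Fib(n+1)) via the doubling formulas, in O(log n) arithmetic steps.
    if n == 0:
        return (0, 1)                  # Fib(0) = 0, Fib(1) = 1
    a, b = fib_pair(n // 2)            # a = Fib(k), b = Fib(k+1), with k = n // 2
    c = 2*a*b - a*a                    # Fib(2k)   = 2 Fib(k) Fib(k+1) - Fib(k)^2
    d = a*a + b*b                      # Fib(2k+1) = Fib(k+1)^2 + Fib(k)^2
    return (c, d) if n % 2 == 0 else (d, c + d)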

14 May Paul Hankin / computer science / mathematics

An earlier post described how to compute Fibonacci numbers in a single arithmetic expression.

Faré Rideau, the author of a page of Fibonacci computations in Lisp, suggested in a private email a simple and efficient variant that I believe is novel.

For $X$ large enough, $\mathrm{Fib}_n = (X^{n+1}\ \mathrm{mod}\ (X^2-X-1))\ \mathrm{mod}\ X$.

That means you can compute Fibonacci numbers efficiently with a simple program:

for n in range(1, 21):
    X = 1 << (n + 2)                       # X = 2**(n+2) is comfortably bigger than Fib(n+1)
    print(pow(X, n + 1, X*X - X - 1) % X)  # prints Fib(1) through Fib(20)

This blog post describes how this method works, gives a few ways to think about it, shows how it yields the fast Fibonacci doubling formulas, provides a nice alternative to Binet’s formula relating the golden ratio and Fibonacci numbers, and extends the method to generalized Fibonacci recurrences, including a near one-line solution to the problem of counting how many ways there are to reach the end square of a 100-square game using a single six-sided die.
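As a taste of that generalization (a sketch under my own assumptions, not the post’s code): the number of ways to advance $n$ squares with rolls of 1 to 6 satisfies $a(n)=a(n-1)+\cdots+a(n-6)$ with $a(0)=1$, and the same trick works with $X^6-X^5-X^4-X^3-X^2-X-1$ in place of $X^2-X-1$. The exponent $n+6$ and the choice $X=2^{n+3}$ for "large enough" are my working assumptions, checked against a direct dynamic program below.

def ways(n):
    # Ways to advance n squares with steps of 1..6, via modular exponentiation (a sketch).
    X = 1 << (n + 3)      # comfortably larger than any of the counts involved
    P = X**6 - X**5 - X**4 - X**3 - X**2 - X - 1
    return pow(X, n + 6, P) % X

def ways_slow(n):
    # The same count by a direct dynamic program, for checking.
    a = [1] + [0] * n
    for i in range(1, n + 1):
        a[i] = sum(a[max(0, i - 6):i])
    return a[n]

assert all(ways(n) == ways_slow(n) for n in range(60))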

21 May Paul Hankin / game theory / computer science

This blog post looks at closed-hand Chinese Poker, and describes a near-optimal strategy for it which is readily implementable on a computer.

06 May Paul Hankin / computer science

Who would disagree that the run-time of mergesort is $O(n \log n)$ and that it’s asymptotically optimal? Not many programmers, I reckon, except perhaps to question whether it’s talking about a model of computation that’s not sufficiently close to a real computer, for example a quantum computer or one that performs arbitrary operations in parallel (possibly involving sticks of spaghetti).

However, if you try to understand how to formalize what it means for a sort to run in $O(n \log n)$ and for it to be optimal, it’s surprisingly difficult to find a suitable computational model, that is, an abstraction of a computer which elides all but the important details of the computer: the operations it can perform, and how the memory works.

In this post, I’ll look at some of the most common computational models used in both practice and theory, and find out that they’re all flawed in one way or another: in all of them, either mergesort doesn’t run in $O(n \log n)$ or there are asymptotically faster sorts.
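For reference, here is the algorithm whose cost is at issue: a standard top-down mergesort (a sketch of mine, not code from the post), which performs $O(n \log n)$ comparisons to sort a list of length $n$.

def merge_sort(xs):
    # Split the list in half, sort each half recursively, then merge the sorted halves.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]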

27 April Paul Hankin / mathematics / computer science

This code, somewhat surprisingly, generates Fibonacci numbers.

def fib(n):
    # With X = 2**(n+1): the quotient X**(n+2) // (X*X - X - 1) has consecutive
    # Fibonacci numbers as its base-X digits; '& ((2 << n) - 1)' reads the lowest digit.
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)

In this blog post, I’ll explain where it comes from and how it works.