06 May 2015 / Paul Hankin / computer science

Who would disagree that the run-time of mergesort is \(O(n\log n)\), and that it's asymptotically optimal? Not many programmers, I reckon, except perhaps to question whether it's talking about a model of computation that's not sufficiently close to a real computer, for example a quantum computer or one that performs arbitrary operations in parallel (possibly involving sticks of spaghetti).

However, if you try to formalize what it means for a sort to run in \(O(n\log n)\) and to be optimal, it's surprisingly difficult to find a suitable computational model, that is, an abstraction of a computer that elides all but the important details: the operations it can perform, and how its memory works.

In this post, I'll look at some of the most common computational models used in both practice and theory, and find that they're all flawed in one way or another: in every one of them, either mergesort doesn't run in \(O(n\log n)\) or there are asymptotically faster sorts.
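As a concrete point of reference (my sketch, not code from the post), here's a minimal mergesort in Python that counts element comparisons, illustrating the \(O(n\log n)\) bound in the comparison model. The function and variable names are my own.

```python
import math
import random

# Minimal mergesort that counts element comparisons (a sketch, not the
# post's code). In the comparison model, mergesort on n items makes
# O(n log n) comparisons.
def merge_sort(xs, stats):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left = merge_sort(xs[:mid], stats)
    right = merge_sort(xs[mid:], stats)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        stats["comparisons"] += 1  # one element comparison
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

stats = {"comparisons": 0}
n = 1024
data = random.sample(range(10 * n), n)
assert merge_sort(data, stats) == sorted(data)
# For n a power of two, mergesort makes at most n*log2(n) - n + 1
# comparisons, so this bound always holds.
assert stats["comparisons"] <= n * int(math.log2(n))
```

Note that counting only comparisons is itself a modelling choice: it ignores the cost of moving elements and of the index arithmetic, which is exactly the kind of detail the post goes on to scrutinise.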

27 April 2015 / Paul Hankin / mathematics / computer science

This code, somewhat surprisingly, generates Fibonacci numbers.

def fib(n):
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)

In this blog post, I'll explain where it comes from and how it works.
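A quick sanity check (my addition, not part of the post): successive outputs of `fib` obey the Fibonacci recurrence, which you can verify directly.

```python
# Sanity check (my addition, not from the post): consecutive outputs of
# the bit-twiddling formula satisfy fib(n) == fib(n-1) + fib(n-2).
def fib(n):
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)

values = [fib(n) for n in range(1, 31)]
for a, b, c in zip(values, values[1:], values[2:]):
    assert c == a + b
```

The check only confirms the recurrence over a small range; the formula itself involves no iteration at all, which is what makes it surprising.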