Posts

16 March 2020 / Paul Hankin / computer science / mathematics

This post provides a quick derivation of the fast Fibonacci doubling formulas, using the correspondence between Fibonacci numbers and the number of ways to climb $n$ steps taking 1 or 2 steps at a time.

The Fibonacci numbers are a sequence $\mathrm{Fib}(i)$ defined by $\mathrm{Fib}(1)=\mathrm{Fib}(2)=1$ and $\mathrm{Fib}(n+2)=\mathrm{Fib}(n+1)+\mathrm{Fib}(n)$.

The Fibonacci doubling formulas are:

$$\begin{eqnarray} \mathrm{Fib}(2n) &=& 2\mathrm{Fib}(n)\mathrm{Fib}(n+1) - \mathrm{Fib}(n)^2 \\ \mathrm{Fib}(2n+1) &=& \mathrm{Fib}(n+1)^2 + \mathrm{Fib}(n)^2 \end{eqnarray}$$

These formulas can be used to efficiently compute Fibonacci numbers (see the end of the post for how). They are usually derived from a matrix power representation of Fibonacci numbers (or see one of my earlier posts for an alternative). This blog post gives a direct combinatorial derivation.
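For concreteness, here is a minimal sketch (mine, not code from the post) of the fast-doubling algorithm these formulas enable; it uses the standard convention $\mathrm{Fib}(0)=0$, which is consistent with the recurrence, to give the recursion a base case:

def fib(n):
    # Computes Fib(n) in O(log n) arithmetic operations by repeatedly
    # halving n and applying the doubling formulas.
    def pair(k):
        # Returns the pair (Fib(k), Fib(k+1)).
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)   # a = Fib(m), b = Fib(m+1), where m = k // 2
        c = 2*a*b - a*a       # Fib(2m)   = 2 Fib(m) Fib(m+1) - Fib(m)^2
        d = a*a + b*b         # Fib(2m+1) = Fib(m+1)^2 + Fib(m)^2
        return (c, d) if k % 2 == 0 else (d, c + d)
    return pair(n)[0]

print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]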

07 February 2020 / Paul Hankin / programming

I’ve recently been programming seriously in C, after around 10 years in higher level languages (Go, Python, C++, and others). I’ve been using C11, the latest standard, whereas previously I was working in C89.

I like programming in C. It's not an easy language to write fluently because it doesn't provide many conveniences, it's full of traps, and I'd avoid it if I were writing something that needed to be safe, but I still find it fun.

This post describes my recent experience with C and the new standard, and my observations about how things have and haven’t changed. It’s not a critique of C, and some of the obvious problems with writing C (such as lack of bounds checking of arrays) aren’t discussed.

16 June 2019 / Paul Hankin / programming

Computing large integer powers modulo some number is a somewhat common operation. For example, it's used in RSA encryption. Usually, this is done using exponentiation by squaring, but this Go program correctly prints the results of $n^{2^{64}}\ (\mathrm{mod}\ 115763)$ for $n$ from 1 to 20, seemingly naively:

package main

import "fmt"

func main() {
	for n := 1; n <= 20; n++ {
		result := 1
		for i := 0; i < 2^64; i++ {
			result = (result * n) % 115763
		}
		fmt.Printf("pow(%d, pow(2, 64)) mod 115763 = %d\n", n, result)
	}
}

It runs, unoptimized, in a few milliseconds on my desktop. You can run it yourself online using the Go playground. Feel free to edit the code a little before running it to convince yourself it's not just fast because the playground is caching the results or something.

How can it be that fast? Is Go's optimizing compiler that clever? It's not, and there's a trick in the code. Can you see it?

14 June 2019 / Paul Hankin / programming
17 June 2018 / Paul Hankin / programming

As I was growing up in England in the 80s, there was a boom in home microcomputers, with the Commodore 64, the ZX Spectrum, and the BBC Micro being three popular choices. These provided an excellent and approachable introduction to programming, with many of my friends learning programming in BASIC and assembler. We taught ourselves the fundamentals of computing while we were playing, and at a relatively early age.

These days the computing environment is complex, and it's much harder for a beginner to get started, or even to know how to get started. Mostly, programming is learnt at university or in other formal education. While there is definitely more to learn now than before, it seems like the fundamentals of coding should still be easier to pick up than they currently are.

This post takes a look at what made home micros effective learning environments, and considers what a modern equivalent might look like.

10 June 2018 / Paul Hankin / mathematics / game theory

This article describes how to use the Kelly criterion to make rational choices when confronted with a risky financial decision, and suggests a way to estimate the most you should be willing to pay for any particular sort of insurance.

The Kelly criterion (which at its core is the idea that the logarithm of your wealth is a better measure of money’s value to you than its absolute value) is well understood by the informed gambling community, and should be more widely known.
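To make the log-utility idea concrete, here is a toy sketch (my illustration with made-up numbers, not the post's worked example): full insurance at premium $p$ is worth buying exactly when the expected logarithm of your wealth is higher with it than without.

import math

# Toy numbers (assumptions, not from the post): wealth 100, and a 1%
# chance of losing 50. With full insurance at premium p, wealth is
# 100 - p whatever happens; without it, it's 100 or 50.
wealth, loss, p_loss = 100.0, 50.0, 0.01

def expected_log_without():
    return p_loss * math.log(wealth - loss) + (1 - p_loss) * math.log(wealth)

def expected_log_with(premium):
    return math.log(wealth - premium)  # the loss is covered either way

# Scan for the break-even premium, the most a log-utility agent should pay.
premium = 0.0
while expected_log_with(premium) > expected_log_without():
    premium += 0.001
print(f"break-even premium ~ {premium:.3f}")

The break-even premium here comes out around 0.69, noticeably above the expected loss of 0.5, which is why an insurer charging more than the expected loss can still have rational customers.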

If you decide to apply the knowledge in this post, also consult with a financial professional (which, as we'll see later, excludes most finance or economics students and most young financial professionals), and read the disclaimer at the end.

14 May 2018 / Paul Hankin / computer science / mathematics

An earlier post described how to compute Fibonacci numbers in a single arithmetic expression.

Faré Rideau, the author of a page of Fibonacci computations in Lisp, suggested in a private email a simple and efficient variant that I believe is novel.

For $X$ large enough, $\mathrm{Fib}(n) = (X^{n+1}\ \mathrm{mod}\ (X^2-X-1))\ \mathrm{mod}\ X$.

That means you can compute Fibonacci numbers efficiently with a simple program:

for n in range(1, 21):
    X = 1 << (n + 2)  # X must exceed Fib(n+1), so the coefficients below can't collide
    print(pow(X, n + 1, X*X - X - 1) % X)  # X^(n+1) mod (X^2-X-1) = Fib(n+1)*X + Fib(n)

This blog post describes how this method works, gives a few ways to think about it, derives the fast Fibonacci doubling formulas from it, provides a nice alternative to Binet's formula relating the golden ratio and Fibonacci numbers, and extends the method to generalized Fibonacci recurrences, including a near one-line solution to the problem of counting how many ways to reach the end square of a 100-square game using a single six-sided die.
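As a taste of that counting problem, here is a plain dynamic program (my sketch, not the post's near one-liner, and assuming you start on square 1 and must finish exactly on square 100):

# Count paths from square 1 to square 100, where each die roll
# advances you 1 to 6 squares.
ways = [0] * 101
ways[1] = 1
for sq in range(2, 101):
    ways[sq] = sum(ways[max(sq - 6, 1):sq])
print(ways[100])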

20 April 2016 / Paul Hankin / programming
21 May 2015 / Paul Hankin / game theory / computer science

This blog post looks at closed-hand Chinese Poker, and describes a near-optimal strategy for it which is readily implementable on a computer.

06 May 2015 / Paul Hankin / computer science

Who would disagree that the run-time of mergesort is $O(n\log n)$ and that it's asymptotically optimal? Not many programmers, I reckon, except perhaps to question whether it's talking about a model of computation that's not sufficiently close to a real computer, for example a quantum computer or one that performs arbitrary operations in parallel (possibly involving sticks of spaghetti).

However, if you try to understand how to formalize what it means for a sort to run in $O(n\log n)$ and for it to be optimal, it's surprisingly difficult to find a suitable computational model, that is, an abstraction of a computer which elides all but its important details: the operations it can perform, and how its memory works.

In this post, I'll look at some of the most common computational models used in both practice and theory, and find that they're all flawed in one way or another: in every one of them, either mergesort doesn't run in $O(n\log n)$ or there are asymptotically faster sorts.
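For a concrete starting point, here is a small sketch (mine, not from the post) that counts the comparisons mergesort makes, the resource the $O(n\log n)$ claim is usually about in the comparison model:

import random

def mergesort(xs, counter):
    # Sorts xs, incrementing counter[0] once per element comparison.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = mergesort(xs[:mid], counter)
    right = mergesort(xs[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

counter = [0]
n = 1 << 16
mergesort([random.random() for _ in range(n)], counter)
print(counter[0], n * 16)  # the count lands close to n * log2(n) = n * 16

Of course, as the post goes on to argue, deciding which operations to count, and at what cost, is exactly where the trouble starts.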