Posts

16 March 2020 / Paul Hankin / computer science / mathematics

This post provides a quick derivation of the fast Fibonacci doubling formulas, using the correspondence between Fibonacci numbers and the number of ways to climb \(n\) steps taking 1 or 2 steps at a time.

The Fibonacci numbers are a sequence \(\mathrm{Fib}(i)\) defined by \(\mathrm{Fib}(1)=\mathrm{Fib}(2)=1\) and \(\mathrm{Fib}(n+2)=\mathrm{Fib}(n+1)+\mathrm{Fib}(n)\).

The Fibonacci doubling formulas are:

\[\begin{eqnarray} \mathrm{Fib}(2n) &=& 2\mathrm{Fib}(n)\mathrm{Fib}(n+1) - \mathrm{Fib}(n)^2 \\ \mathrm{Fib}(2n+1) &=& \mathrm{Fib}(n+1)^2 + \mathrm{Fib}(n)^2 \end{eqnarray}\]

These formulas can be used to efficiently compute Fibonacci numbers (see the end of the post for how). They are usually derived from a matrix power representation of Fibonacci numbers (or see one of my earlier posts for an alternative). This blog post gives a direct combinatorial derivation.
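As a hedged sketch of how the doubling formulas lead to fast computation (the function and variable names here are my own, not from the post): computing \(\mathrm{Fib}(n)\) and \(\mathrm{Fib}(n+1)\) together lets each recursive step halve \(n\), giving \(O(\log n)\) arithmetic operations.

```go
package main

import "fmt"

// fib returns (Fib(n), Fib(n+1)), using the doubling formulas:
//   Fib(2k)   = 2*Fib(k)*Fib(k+1) - Fib(k)^2
//   Fib(2k+1) = Fib(k+1)^2 + Fib(k)^2
// with the convention Fib(0) = 0, Fib(1) = 1.
func fib(n uint) (uint64, uint64) {
	if n == 0 {
		return 0, 1
	}
	a, b := fib(n / 2) // a = Fib(k), b = Fib(k+1), where k = n/2
	c := 2*a*b - a*a   // Fib(2k)
	d := a*a + b*b     // Fib(2k+1)
	if n%2 == 0 {
		return c, d // n = 2k
	}
	return d, c + d // n = 2k+1, and Fib(2k+2) = Fib(2k) + Fib(2k+1)
}

func main() {
	f, _ := fib(10)
	fmt.Println(f) // prints 55
}
```

Note the subtraction in `c` is safe for unsigned arithmetic, since \(\mathrm{Fib}(k+1) \ge \mathrm{Fib}(k)\) implies \(2\,\mathrm{Fib}(k)\mathrm{Fib}(k+1) \ge \mathrm{Fib}(k)^2\); overflow of `uint64` limits this sketch to roughly \(n \le 93\).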

07 February 2020 / Paul Hankin / programming

I've recently been programming seriously in C, after around 10 years in higher level languages (Go, Python, C++, and others). I've been using C11, the latest standard, whereas previously I was working in C89.

I like programming in C. It's not an easy language to write fluently because it doesn't provide many conveniences, it's full of traps, and I'd avoid it if I were writing something that needed to be safe, but I still find it fun.

This post describes my recent experience with C and the new standard, and my observations about how things have and haven't changed. It's not a critique of C, and some of the obvious problems with writing C (such as lack of bounds checking of arrays) aren't discussed.

16 June 2019 / Paul Hankin / programming

Computing large integer powers modulo some number is a somewhat common operation. For example, it's used in RSA encryption. Usually, this is done using exponentiation by squaring, but this go program correctly prints the results of \(n^{2^{64}}\ (\mathrm{mod}\ 115763)\) for \(n\) from 1 to 20, seemingly naively:

package main

import "fmt"

func main() {
	for n := 1; n <= 20; n++ {
		result := 1
		for i := 0; i < 2^64; i++ {
			result = (result * n) % 115763
		}
		fmt.Printf("pow(%d, pow(2, 64)) mod 115763 = %d\n", n, result)
	}
}

It runs, unoptimized, in a few milliseconds on my desktop. You can run it yourself online using the go playground. Feel free to edit the code a little before running it to convince yourself it's not just fast because the playground is caching the results or something.

How can it be that fast? Is go's optimizing compiler that clever? It's not, and there's a trick in the code. Can you see it?
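For comparison, the standard technique the post mentions, exponentiation by squaring, really does compute huge powers quickly. This is a minimal sketch (my own function name, not from the post), which keeps intermediate values reduced modulo m:

```go
package main

import "fmt"

// powMod computes base^exp (mod m) by repeated squaring,
// using O(log exp) multiplications instead of exp of them.
// It assumes m is small enough that (m-1)*(m-1) fits in a uint64.
func powMod(base, exp, m uint64) uint64 {
	result := uint64(1)
	base %= m
	for exp > 0 {
		if exp&1 == 1 { // current lowest bit of the exponent is set
			result = result * base % m
		}
		base = base * base % m // square for the next bit
		exp >>= 1
	}
	return result
}

func main() {
	fmt.Println(powMod(3, 4, 100)) // 3^4 = 81, so prints 81
}
```

With a modulus like 115763, every intermediate product stays well within `uint64`, so no big-integer arithmetic is needed.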

14 June 2019 / Paul Hankin / programming

17 June 2018 / Paul Hankin / programming

As I was growing up in England in the 80s, there was a boom in home microcomputers, with the Commodore 64, the ZX Spectrum, and the BBC Micro being three popular choices. These provided an excellent and approachable introduction to programming, with many of my friends learning programming in BASIC and assembler. We taught ourselves the fundamentals of computing while we were playing, and at a relatively early age.

These days the computing environment is complex, and it's much harder for a beginner to get started, or even know how to get started. Mostly, programming is learnt at university or in other formal education. While there is definitely more to learn now than before, it seems like the fundamentals of coding should still be easier to pick up than they currently are.

This post takes a look at what made home micros effective learning environments, and considers what a modern equivalent might look like.