16 June 2019 / Paul Hankin / programming

Computing large integer powers modulo some number is a fairly common operation. For example, it's used in RSA encryption. Usually this is done using exponentiation by squaring, but this Go program correctly prints the results of \(n^{2^{64}}\ (\mathrm{mod}\ 115763)\) for \(n\) from 1 to 20, seemingly naively:

package main

import "fmt"

func main() {
	for n := 1; n <= 20; n++ {
		result := 1
		for i := 0; i < 2^64; i++ {
			result = (result * n) % 115763
		}
		fmt.Printf("pow(%d, pow(2, 64)) mod 115763 = %d\n", n, result)
	}
}

It runs, unoptimized, in a few milliseconds on my desktop. You can run it yourself online using the Go Playground. Feel free to edit the code a little before running it to convince yourself it's not just fast because the playground is caching the results or something.

How can it be that fast? Is Go's optimizing compiler that clever? It's not, and there's a trick in the code. Can you see it?
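For reference, exponentiation by squaring, the standard technique mentioned above, needs only a number of multiplications proportional to the bit-length of the exponent. A minimal sketch (the function name `pow_mod` is my own):

```python
# Exponentiation by squaring: walk the bits of the exponent,
# squaring the base at each step, so n**e mod m takes O(log e) steps.
def pow_mod(base, exp, mod):
    result = 1
    base %= mod
    while exp:
        if exp & 1:               # this bit of the exponent is set
            result = result * base % mod
        base = base * base % mod  # square for the next bit
        exp >>= 1
    return result
```

Python's built-in three-argument `pow` does the same thing, which makes it easy to check the sketch against.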

14 June 2019 / Paul Hankin / programming
17 June 2018 / Paul Hankin / programming

As I was growing up in England in the 80s, there was a boom in home microcomputers, with the Commodore 64, the ZX Spectrum, and the BBC Micro being three popular choices. These provided an excellent and approachable introduction to programming, with many of my friends learning programming in BASIC and assembler. We taught ourselves the fundamentals of computing while we were playing, and at a relatively early age.

These days the computing environment is complex, and it's much harder for a beginner to get started, or even to know how to get started. Mostly, programming is learnt at university or in other formal education. While there is definitely more to learn now than before, it seems like the fundamentals of coding should still be easier to pick up than they currently are.

This post takes a look at what made home micros effective learning environments, and considers what a modern equivalent might look like.

10 June 2018 / Paul Hankin / mathematics / game theory

This article describes how to use the Kelly criterion to make rational choices when confronted with a risky financial decision, and suggests a way to estimate the most you should be willing to pay for any particular sort of insurance.

The Kelly criterion (which at its core is the idea that the logarithm of your wealth is a better measure of money's value to you than its absolute value) is well understood by the informed gambling community, and should be more widely known.
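As a toy illustration (my numbers, not the post's): for a bet paying \(b\)-to-1 that you win with probability \(p\), the fraction of your bankroll that maximizes expected log-wealth has the closed form \(f^* = p - (1-p)/b\), which is easy to sanity-check numerically:

```python
import math

# Expected log-growth of wealth when staking fraction f of bankroll
# on a bet paying b-to-1 with win probability p.
def log_growth(f, p, b):
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0           # illustrative values only
f_star = p - (1 - p) / b  # Kelly fraction: stake 20% of bankroll here
```

Plugging nearby fractions into `log_growth` confirms that `f_star` does better than betting either more or less.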

If you decide to apply the knowledge in this post, also consult with a financial professional (which as we'll see later doesn't include most finance or economics students, and most young financial professionals), and read the disclaimer at the end.

14 May 2018 / Paul Hankin / computer science / mathematics

An earlier post described how to compute Fibonacci numbers in a single arithmetic expression.

Faré Rideau, the author of a page of Fibonacci computations in Lisp, suggested in a private email a simple and efficient variant, that I believe is novel.

For \(X\) large enough, \(\mathrm{Fib}_n = (X^{n+1}\ \mathrm{mod}\ (X^2-X-1))\ \mathrm{mod}\ X\).

That means you can compute Fibonacci numbers efficiently with a simple program:

for n in range(1, 21):
    X = 1 << (n + 2)  # X large enough that Fib(n) < X
    print(pow(X, n + 1, X*X - X - 1) % X)
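For comparison, the fast Fibonacci doubling formulas the post mentions — \(\mathrm{Fib}_{2k} = \mathrm{Fib}_k(2\,\mathrm{Fib}_{k+1} - \mathrm{Fib}_k)\) and \(\mathrm{Fib}_{2k+1} = \mathrm{Fib}_k^2 + \mathrm{Fib}_{k+1}^2\) — give an \(O(\log n)\) algorithm directly. A sketch (the helper `fib_pair` is my own):

```python
# Fast doubling: returns (Fib(k), Fib(k+1)), using
#   Fib(2k)   = Fib(k) * (2*Fib(k+1) - Fib(k))
#   Fib(2k+1) = Fib(k)**2 + Fib(k+1)**2
def fib_pair(k):
    if k == 0:
        return (0, 1)
    a, b = fib_pair(k // 2)
    c = a * (2 * b - a)  # Fib at the even index 2*(k//2)
    d = a * a + b * b    # Fib at the odd index 2*(k//2) + 1
    return (c, d) if k % 2 == 0 else (d, c + d)
```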

This blog post describes how the method works, gives a few ways to think about it, derives the fast Fibonacci doubling formulas from it, provides a nice alternative to Binet's formula relating the golden ratio and Fibonacci numbers, and extends the method to generalized Fibonacci recurrences, including a near one-line solution to counting the number of ways to reach the end-square of a 100-square game using a single six-sided die.
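As a taste of the generalization (my own derivation of what such a formula can look like, not necessarily the post's exact form): the count of ways to reach square \(n\) by steps of 1 to 6 satisfies the six-term recurrence \(w_n = w_{n-1} + \cdots + w_{n-6}\) with \(w_0 = 1\), and the same powers-mod-a-polynomial trick applies with \(X^6 - X^5 - X^4 - X^3 - X^2 - X - 1\) in place of \(X^2 - X - 1\):

```python
# ways(n): number of ways to reach square n with steps of size 1..6,
# i.e. ways(n) = ways(n-1) + ... + ways(n-6), with ways(0) = 1.
def ways(n):
    X = 1 << (n + 2)  # large enough: the counts are at most 2**(n-1)
    P = X**6 - X**5 - X**4 - X**3 - X**2 - X - 1
    return pow(X, n + 6, P) % X
```

Checking against a straightforward dynamic-programming count confirms the two agree, including at `ways(100)` for the 100-square game.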