A suspiciously fast program

Computing large integer powers modulo some number is a somewhat common operation. For example, it’s used in RSA encryption. Usually, this is done using exponentiation by squaring (there’s a sketch of that towards the end of this post), but this go program correctly prints the results of $n^{2^{64}}\ (\mathrm{mod}\ 115763)$ for $n$ from 1 to 20, seemingly naively:

package main

import "fmt"

func main() {
	for n := 1; n <= 20; n++ {
		result := 1
		for i := 0; i < 2^64; i++ {
			result = (result * n) % 115763
		}
		fmt.Printf("pow(%d, pow(2, 64)) mod 115763 = %d\n", n, result)
	}
}

It runs, unoptimized, in a few milliseconds on my desktop. You can run it yourself online using the go playground. Feel free to edit the code a little before running it to convince yourself it’s not just fast because the playground is caching the results or something.

How can it be that fast? Is go’s optimizing compiler that clever? It’s not, and there’s a trick in the code. Can you see it?

First, let’s check it’s giving the correct results. We’ll use go’s big-integer package math/big to write a unit test, since it helpfully already provides a method (*big.Int).Exp that computes modular powers. The syntax of go’s big integers is a bit clumsy, but the test (which lives in a _test.go file and runs with go test) looks like this:

package main

import (
	"math/big"
	"testing"
)

func PowBig(n, mod int) int {
	x := big.NewInt(int64(n)) // x = n
	y := big.NewInt(2)
	y.Exp(y, big.NewInt(64), nil) // y = exp(2, 64)
	m := big.NewInt(int64(mod)) // m = mod
	return int(new(big.Int).Exp(x, y, m).Int64())
}

func PowFast(n, mod int) int {
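	// The same loop as the suspicious program above.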
	result := 1
	for i := 0; i < 2^64; i++ {
		result = (result * n) % mod
	}
	return result
}

func TestPowFast(t *testing.T) {
	const P = 115763
	for n := 1; n <= 20; n++ {
		got := PowFast(n, P)
		want := PowBig(n, P)
		if got != want {
			t.Errorf("PowFast(%d, %d) = %d, want %d", n, P, got, want)
		}
	}
}

The tests pass, again in a fraction of a second.

What’s going on

If you didn’t spot it, the trick is that in go (and C, and python), 2^64 is not “two to the power of 64” but rather “2 xor 64”, which is 66. So the inner loop runs just 66 times, and the program actually computes $n^{66}\ (\mathrm{mod}\ 115763)$. But then the next question is: if the code is wrong, why does it produce the correct results?
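
Before answering that, here’s a throwaway snippet you can use to convince yourself about the xor (it isn’t part of the original program):

package main

import "fmt"

func main() {
	// In go, ^ is bitwise exclusive-or, not exponentiation:
	// 2 xor 64 = 0b0000010 xor 0b1000000 = 0b1000010 = 66.
	fmt.Println(2 ^ 64)
}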

Fermat’s Little Theorem (FLT) states that if $p$ is a prime, and $a$ is any number not divisible by $p$, then $a^{p-1} \equiv 1\ (\mathrm{mod}\ p)$. For example, $3^6 = 729 = 104 \times 7 + 1 \equiv 1\ (\mathrm{mod}\ 7)$. Unlike its big brother, Fermat’s Last Theorem, this statement is relatively easy to prove, and comes up in introductory number theory classes.

Now, 115763 is a prime, which we’ll call $P$, and $2^{64} \equiv 66\ (\mathrm{mod}\ 115762)$. Note that the modulus here is $P-1$, not $P$, because $P-1$ is what FLT lets us reduce exponents by. It means we can write $2^{64}$ as $k(P-1) + 66$ for some $k$, and then $a^{2^{64}} = (a^{P-1})^k a^{66} \equiv 1^k a^{66} = a^{66}\ (\mathrm{mod}\ P)$ for any $a$ that isn’t a multiple of $P$, which covers $n$ from 1 to 20. So the 66 iterations of the loop compute exactly the right answer.
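
You can check that congruence with the big package again (a quick one-off, separate from the test above):

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Compute 2^64 mod (115763 - 1); Exp reduces modulo its third argument.
	r := new(big.Int).Exp(big.NewInt(2), big.NewInt(64), big.NewInt(115762))
	fmt.Println(r) // prints 66
}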

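Incidentally, since the exponent $2^{64}$ is itself a power of two, the exponentiation by squaring mentioned at the top degenerates into just squaring 64 times, so the honest computation is also tiny. Here’s a sketch (PowSquare is my name for it, nothing standard):

package main

import "fmt"

// PowSquare computes pow(n, pow(2, 64)) mod m by squaring 64 times,
// since squaring n repeatedly, 64 times over, gives n^(2^64).
func PowSquare(n, m int) int {
	result := n % m
	for i := 0; i < 64; i++ {
		// With m = 115763, result*result < 2^34, comfortably inside an int.
		result = (result * result) % m
	}
	return result
}

func main() {
	for n := 1; n <= 20; n++ {
		fmt.Printf("pow(%d, pow(2, 64)) mod 115763 = %d\n", n, PowSquare(n, 115763))
	}
}
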
Conclusion

It’s a silly trick, but I think it’s fun. Can you think of other programs that look superficially right, contain glaring errors (like using xor instead of pow), but still magically produce the right answers? Link to them in the comments if so: I don’t think they’re so easy to find.

If you’ve made it this far, you might like to check out this twitter thread, which inspired this meander: it finds the 2^32 typo, and variants of it, in real C code.
