Posts on Paul's blog
https://blog.paulhankin.net/post/
Recent content in Posts on Paul's blog
Hugo -- gohugo.io
en-us
Mon, 16 Mar 2020 00:00:00 +0000

Deriving the Fibonacci doubling formulas combinatorially
https://blog.paulhankin.net/fibonacci_doubling/
Mon, 16 Mar 2020 00:00:00 +0000
https://blog.paulhankin.net/fibonacci_doubling/
<p>This post provides a quick derivation of the fast Fibonacci
doubling formulas, using the correspondence between Fibonacci numbers
and the number of ways to climb $n$ steps taking 1 or 2 steps at a time.</p>
<p>The Fibonacci numbers are a sequence $\mathrm{Fib}(i)$ defined by $\mathrm{Fib}(1)=\mathrm{Fib}(2)=1$ and $\mathrm{Fib}(n+2)=\mathrm{Fib}(n+1)+\mathrm{Fib}(n)$.</p>
<p>The Fibonacci doubling formulas are:</p>
<p>$$\begin{eqnarray} \mathrm{Fib}(2n) &=& 2\mathrm{Fib}(n)\mathrm{Fib}(n+1) - \mathrm{Fib}(n)^2 \\ \mathrm{Fib}(2n+1) &=& \mathrm{Fib}(n+1)^2 + \mathrm{Fib}(n)^2 \end{eqnarray}$$</p>
<p>These formulas can be used to compute Fibonacci numbers efficiently (see the end of the post for how). They are usually derived from a matrix power representation of Fibonacci numbers (or see <a href="https://blog.paulhankin.net/fibonacci2/">one of my earlier posts</a> for an alternative). This blog post gives a direct combinatorial derivation.</p>
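<p>As a sketch of how the formulas lead to a fast algorithm (a standard fast-doubling implementation, not code from the post itself; indexing here takes $\mathrm{Fib}(0)=0$):</p>
<pre><code>def fib_pair(n):
    # Returns (Fib(n), Fib(n+1)), using the doubling formulas on n // 2.
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)   # a = Fib(k), b = Fib(k+1), where k = n // 2
    c = a * (2 * b - a)       # Fib(2k)   = 2*Fib(k)*Fib(k+1) - Fib(k)^2
    d = a * a + b * b         # Fib(2k+1) = Fib(k+1)^2 + Fib(k)^2
    if n % 2 == 0:
        return (c, d)         # n = 2k
    return (d, c + d)         # n = 2k+1

def fib(n):
    return fib_pair(n)[0]
</code></pre>
<p>Each call halves $n$, so computing $\mathrm{Fib}(n)$ takes $O(\log n)$ arithmetic operations rather than $O(n)$.</p>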

Back to C
https://blog.paulhankin.net/backtoc/
Fri, 07 Feb 2020 00:00:00 +0000
https://blog.paulhankin.net/backtoc/
<p>I’ve recently been programming seriously in C, after around 10 years in higher
level languages (Go, Python, C++, and others). I’ve been using C11, the
latest standard, whereas previously I was working in C89.</p>
<p>I <em>like</em> programming in C. It’s not an easy language to write fluently because
it doesn’t provide many conveniences, it’s full of traps, and I’d avoid it if I
were writing something that needed to be safe, but I still find it fun.</p>
<p>This post describes my recent experience with C and the new standard, and my
observations about how things have and haven’t changed. It’s not a critique of C,
and some of the obvious problems with writing C (such as lack of bounds checking
of arrays) aren’t discussed.</p>

A suspiciously fast program
https://blog.paulhankin.net/powercheat/
Sun, 16 Jun 2019 00:00:00 +0000
https://blog.paulhankin.net/powercheat/
<p>Computing large integer powers modulo some number is a somewhat common operation. For example, it’s used in RSA encryption. Usually, this is done using exponentiation by squaring, but this Go program correctly prints the results of $n^{2^{64}}\ (\mathrm{mod}\ 115763)$ for $n$ from 1 to 20, seemingly naively:</p>
<pre><code>package main

import "fmt"

func main() {
    for n := 1; n <= 20; n++ {
        result := 1
        for i := 0; i < 2^64; i++ {
            result = (result * n) % 115763
        }
        fmt.Printf("pow(%d, pow(2, 64)) mod 115763 = %d\n", n, result)
    }
}
</code></pre><p>It runs, unoptimized, in a few milliseconds on my desktop.
You can run it yourself online using the <a href="https://play.golang.org/p/y2L63tDfUMJ">go playground</a>. Feel free to edit the code a little before running it to convince yourself it’s not just fast because the playground is caching the results or something.</p>
<p>How can it be that fast? Is Go’s optimizing compiler that clever? It’s not, and there’s a trick in the code. Can you see it?</p>
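<p>For reference, the honest fast approach the post alludes to, exponentiation by squaring, looks something like this in Python (my sketch, not code from the post):</p>
<pre><code>def pow_mod(base, exp, mod):
    # Modular exponentiation by squaring: O(log exp) multiplications.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                     # low bit of the exponent is set
            result = (result * base) % mod
        base = (base * base) % mod      # square for the next bit
        exp >>= 1
    return result

for n in range(1, 21):
    print(pow_mod(n, 2**64, 115763))
</code></pre>
<p>Here <code>2**64</code> really is $2^{64}$; Python also provides this operation as the built-in three-argument <code>pow</code>.</p>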

Generating random Latin squares
https://blog.paulhankin.net/latinsquares/
Fri, 14 Jun 2019 00:00:00 +0000
https://blog.paulhankin.net/latinsquares/
<p>Generating random Latin squares (such that each Latin square of a given size is equally
likely) is a deceptively difficult problem.</p>
<p>This post describes my own implementation, loosely based on the <a href="https://www.researchgate.net/publication/308517970_Generation_of_Random_Latin_Squares_Step_By_Step_and_Graphically">Java implementation described by Ignacio Gallego Sagastume</a>, which
implements the rather ingenious method of <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/%28SICI%291520-6610%281996%294%3A6%3C405%3A%3AAID-JCD3%3E3.0.CO%3B2-J">Jacobson and Matthews</a>.</p>
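<p>For concreteness, the property being sampled uniformly (each of the $n$ symbols appearing exactly once in every row and column) can be checked in a few lines of Python (my illustration, not part of the linked implementations):</p>
<pre><code>def is_latin_square(grid):
    # True if every row and every column contains each symbol exactly once.
    n = len(grid)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in grid)
    cols_ok = all({row[c] for row in grid} == symbols for c in range(n))
    return rows_ok and cols_ok
</code></pre>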

A gentle introduction to hard programming
https://blog.paulhankin.net/learnprogramming/
Sun, 17 Jun 2018 00:00:00 +0000
https://blog.paulhankin.net/learnprogramming/
<p>As I was growing up in England in the 80s, there was a boom in
home microcomputers, with the Commodore 64, the ZX Spectrum,
and the BBC Micro being three popular choices. These
provided an excellent and approachable introduction to programming,
with many of my friends learning programming in BASIC and assembler.
We taught ourselves the fundamentals of computing while we were playing,
and at a relatively early age.</p>
<p>These days the computing environment is complex, and it’s much harder for a
beginner to get started, or even to know how to get started. Mostly,
programming is learnt at university or in other formal education.
While there is definitely more to learn now than before, it seems like
the fundamentals of coding should still be easier to pick up than
they currently are.</p>
<p>This post takes a look at what made home micros effective
learning environments, and considers what a modern equivalent might look
like.</p>

Insurance and the Kelly criterion
https://blog.paulhankin.net/kellycriterion/
Sun, 10 Jun 2018 00:00:00 +0000
https://blog.paulhankin.net/kellycriterion/
<p>This article describes how to use the Kelly criterion to make rational
choices when confronted with a risky financial decision, and suggests
a way to estimate the most you should be willing to pay for any
particular sort of insurance.</p>
<p>The Kelly criterion (which at its core is the idea that the logarithm
of your wealth is a better measure of money’s value to you than its absolute
value) is well understood by the informed gambling community, and
should be more widely known.</p>
<p>If you decide to apply the knowledge in this post, also consult with a financial
professional (which, as we’ll see later, doesn’t include most finance or economics
students, or most young financial professionals), and read the disclaimer at the end.</p>
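<p>To illustrate the core idea with made-up numbers (my sketch, not the post’s worked example): under log utility, the most you should pay for insurance is the premium at which insuring and not insuring give the same expected log wealth.</p>
<pre><code>import math

# Hypothetical numbers: wealth 100k, a possible 30k loss with probability 2%.
WEALTH, LOSS, P_LOSS = 100_000, 30_000, 0.02

def expected_log_wealth(premium, insured):
    # Expected log wealth for a single possible loss event.
    if insured:
        # The loss is covered; only the premium is paid for certain.
        return math.log(WEALTH - premium)
    return (P_LOSS * math.log(WEALTH - LOSS)
            + (1 - P_LOSS) * math.log(WEALTH))

# Bisect for the break-even premium, where the two choices are equally good.
lo, hi = 0.0, LOSS
for _ in range(60):
    mid = (lo + hi) / 2
    if expected_log_wealth(mid, True) >= expected_log_wealth(mid, False):
        lo = mid
    else:
        hi = mid

print(round(lo))  # a little more than the expected loss of 600
</code></pre>
<p>The break-even premium comes out above the expected loss of 600, which is why insurance can be worth buying even at an actuarially unfair price.</p>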

A novel and efficient way to compute Fibonacci numbers
https://blog.paulhankin.net/fibonacci2/
Mon, 14 May 2018 00:00:00 +0000
https://blog.paulhankin.net/fibonacci2/
<p><a href="https://blog.paulhankin.net/fibonacci/">An earlier post</a> described how to compute Fibonacci numbers in a single arithmetic expression.</p>
<p>Faré Rideau, the author of a <a href="http://fare.tunes.org/files/fun/fibonacci.lisp">page of Fibonacci computations in Lisp</a>, suggested in a private
email a simple and efficient variant that I believe is novel.</p>
<p>For $X$ large enough, $\mathrm{Fib}_n = (X^{n+1}\ \mathrm{mod}\ (X^2-X-1))\ \mathrm{mod}\ X$.</p>
<p>That means you can compute Fibonacci numbers efficiently with a simple program:</p>
<pre><code>for n in range(1, 21):
    X = 1 << (n+2)
    print(pow(X, n+1, X*X - X - 1) % X)
</code></pre>
<p>This blog post describes how this method works, gives a few ways to think about it, easily derives the fast Fibonacci doubling formulas, provides a nice alternative to Binet’s formula relating the golden ratio and Fibonacci numbers, and expands the method to generalized Fibonacci recurrences, including a near one-line solution to the problem of counting how many ways there are to reach the end square of a 100-square game using a single six-sided die.</p>

Little Man Computer
https://blog.paulhankin.net/littlemancomputer/
Wed, 20 Apr 2016 00:00:00 +0000
https://blog.paulhankin.net/littlemancomputer/
<p>I had never seen this mini-assembler-based educational computer before: <a href="https://en.wikipedia.org/wiki/Little_man_computer">wikipedia.org/Little_man_computer</a>.</p>
<p>I couldn’t find a good online emulator, so I wrote one: <a href="https://blog.paulhankin.net/lmc/lmc.html">Little Man Computer Emulator</a>.</p>
<p>Enter the program on the left, click “Assemble”, enter some inputs if your program needs them, and then step
through the execution.</p>
<p>It’s probably got some bugs since it was a quick hack, but it worked on the examples I tried it on.</p>

Near-optimal closed-hand Chinese Poker
https://blog.paulhankin.net/chinesepoker/
Thu, 21 May 2015 00:00:00 +0000
https://blog.paulhankin.net/chinesepoker/
<p>This blog post looks at closed-hand Chinese Poker, and describes
a near-optimal strategy for it which is readily implementable
on a computer.</p>

Everything you know about complexity is wrong
https://blog.paulhankin.net/complexityrant/
Wed, 06 May 2015 00:00:00 +0000
https://blog.paulhankin.net/complexityrant/
<p>Who would disagree that the runtime of mergesort is $O(n \log n)$ and that it’s asymptotically optimal?
Not many programmers, I reckon, except perhaps to question whether it’s talking about
a model of computation that’s not sufficiently close to a real computer, for example a quantum
computer or one that performs arbitrary operations in parallel (possibly
involving <a href="http://en.wikipedia.org/wiki/Spaghetti_sort">sticks of spaghetti</a>).</p>
<p>However, if you try to understand how to formalize what it means for a sort
to run in $O(n \log n)$ and for it to be optimal,
it’s surprisingly difficult to find a suitable computational model, that is,
an abstraction of a computer which elides all but the important
details of the computer: the operations it can perform, and how the memory
works.</p>
<p>In this post, I’ll look at some of
the most common computational models used in both practice and theory, and
find that they’re all flawed in one way or another: in all
of them, either mergesort doesn’t run in $O(n \log n)$ or there are
asymptotically faster sorts.</p>

An integer formula for Fibonacci numbers
https://blog.paulhankin.net/fibonacci/
Mon, 27 Apr 2015 00:00:00 +0000
https://blog.paulhankin.net/fibonacci/
<p>This code, somewhat surprisingly, generates Fibonacci numbers.</p>
<pre><code>def fib(n):
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)
</code></pre>
<p>In this blog post, I’ll explain where it comes from and how it works.</p>
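<p>A quick sanity check (mine, not the post’s) that the formula’s output really does obey the Fibonacci recurrence:</p>
<pre><code>def fib(n):
    # The one-liner from the post, with the subtractions written out.
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)

# Consecutive outputs satisfy the defining recurrence.
for n in range(1, 30):
    assert fib(n + 2) == fib(n + 1) + fib(n)
</code></pre>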