Posts on Paul's blog
https://blog.paulhankin.net/post/
Recent content in Posts on Paul's blog
Hugo -- gohugo.io
en-us
Sun, 17 Jun 2018 00:00:00 +0000

A gentle introduction to hard programming
https://blog.paulhankin.net/learnprogramming/
Sun, 17 Jun 2018 00:00:00 +0000
https://blog.paulhankin.net/learnprogramming/
<p>As I was growing up in England in the 80s, there was a boom in
home microcomputers, with the Commodore 64, the ZX Spectrum,
and the BBC Micro being three popular choices. These
provided an excellent and approachable introduction to programming,
with many of my friends learning programming in BASIC and assembler.
We taught ourselves the fundamentals of computing while we were playing,
and at a relatively early age.</p>
<p>These days the computing environment is complex, and it's much harder for a
beginner to get started, or even to know how to get started. Programming is now
mostly learnt at university or in other formal education.
While there is certainly more to learn now than before, it seems like
the fundamentals of coding should still be easier to pick up than
they currently are.</p>
<p>This post takes a look at what made home micros effective
learning environments, and considers what a modern equivalent might look
like.</p>

Insurance and the Kelly criterion
https://blog.paulhankin.net/kellycriterion/
Sun, 10 Jun 2018 00:00:00 +0000
https://blog.paulhankin.net/kellycriterion/
<p>This article describes how to use the Kelly criterion to make rational
choices when confronted with a risky financial decision, and suggests
a way to estimate the most you should be willing to pay for any
particular sort of insurance.</p>
<p>The Kelly criterion (which at its core is the idea that the logarithm
of your wealth is a better measure of money's value to you than its absolute
value) is well understood by the informed gambling community, and
should be more widely known.</p>
<p>If you decide to apply the knowledge in this post, also consult a financial
professional (which, as we'll see later, doesn't include most finance or economics
students, nor most young financial professionals), and read the disclaimer at the end.
</p>
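<p>The core comparison can be sketched in a few lines. This is an illustrative sketch, not code from the post: all the numbers (wealth, loss size, loss probability, premium) are made-up assumptions, and expected log-wealth stands in for the Kelly criterion as described above.</p>
<pre><code>import math

# Illustrative sketch (all numbers assumed): compare expected log-wealth
# with and without insurance, using the Kelly idea that log(wealth) is the
# right measure of money's value.
def expected_log_wealth(wealth, outcomes):
    """outcomes: list of (probability, change_in_wealth) pairs."""
    return sum(p * math.log(wealth + change) for p, change in outcomes)

wealth = 50_000   # current wealth (assumed)
loss = 20_000     # size of the insurable loss (assumed)
p_loss = 0.01     # chance the loss happens (assumed)
premium = 240     # quoted premium (assumed)

uninsured = expected_log_wealth(wealth, [(p_loss, -loss), (1 - p_loss, 0)])
insured = expected_log_wealth(wealth, [(1.0, -premium)])

# The most you should pay: the premium at which the two expected logs match.
max_premium = wealth - math.exp(uninsured)
print("buy" if insured > uninsured else "don't buy", round(max_premium))
</code></pre>
<p>With these made-up numbers the expected loss is only 200, yet buying at a premium of 240 is still rational: losing 20,000 of a 50,000 bankroll costs more in log terms than in absolute terms, and the break-even premium works out to roughly 255.</p>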

A novel and efficient way to compute Fibonacci numbers
https://blog.paulhankin.net/fibonacci2/
Mon, 14 May 2018 00:00:00 +0000
https://blog.paulhankin.net/fibonacci2/
<p><a href="https://blog.paulhankin.net/fibonacci/">An earlier post</a> described how to compute Fibonacci numbers in a single arithmetic expression.</p>
<p>Faré Rideau, the author of a <a href="http://fare.tunes.org/files/fun/fibonacci.lisp">page of Fibonacci computations in Lisp</a>, suggested in a private
email a simple and efficient variant that I believe is novel.</p>
<p>For <span class="math">\(X\)</span> large enough, <span class="math">\(\mathrm{Fib}_n = (X^{n+1}\ \mathrm{mod}\ (X^2-X-1))\ \mathrm{mod}\ X\)</span>.</p>
<p>That means you can compute Fibonacci numbers efficiently with a simple program:</p>
<pre><code>for n in range(1, 21):
    X = 1 << (n+2)
    print(pow(X, n+1, X*X - X - 1) % X)
</code></pre>
<p>This blog post describes how this method works, gives a few ways to think about it, easily infers the fast Fibonacci doubling formulas, provides a nice alternative to Binet's formula relating the golden ratio and Fibonacci numbers, and expands the method to generalized Fibonacci recurrences, including a near one-line solution to the problem of counting how many ways there are to reach the end square of a 100-square game using a single six-sided die.
</p>
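<p>That generalized version can be sketched with the same modular trick. This is my sketch of the idea, not the post's code: the polynomial <span class="math">\(X^6-X^5-X^4-X^3-X^2-X-1\)</span> is the characteristic polynomial of the step-counting recurrence, and the base size <code>1 &lt;&lt; (n + 6)</code> is an assumed bound chosen so the "digits" of the remainder don't collide.</p>
<pre><code># Sketch (assumptions noted above): count the ways to reach square n of a
# board game by die rolls of 1..6, using the same modular-exponentiation
# trick as for Fibonacci numbers.
def ways(n):
    X = 1 << (n + 6)   # base assumed big enough that digits don't overlap
    p = X**6 - X**5 - X**4 - X**3 - X**2 - X - 1
    return pow(X, n + 6, p) % X

# Cross-check against straightforward dynamic programming.
def ways_dp(n):
    w = [1] + [0] * n
    for i in range(1, n + 1):
        w[i] = sum(w[max(0, i - 6):i])
    return w[n]

assert all(ways(k) == ways_dp(k) for k in range(50))
print(ways(100))
</code></pre>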

Little Man Computer
https://blog.paulhankin.net/littlemancomputer/
Wed, 20 Apr 2016 00:00:00 +0000
https://blog.paulhankin.net/littlemancomputer/
<p>I had never seen this mini assembler-based educational computer before. <a href="https://en.wikipedia.org/wiki/Little_man_computer">wikipedia.org/Little_man_computer</a>.</p>
<p>I couldn't find a good online emulator, so I wrote one: <a href="https://blog.paulhankin.net/lmc/lmc.html">Little Man Computer Emulator</a>.</p>
<p>Enter the program on the left, click "Assemble", enter some inputs if your program needs them, and then step
through the execution.</p>
<p>It's probably got some bugs since it was a quick hack, but it worked on the examples I tried it on.</p>

Near-optimal closed-hand Chinese Poker
https://blog.paulhankin.net/chinesepoker/
Thu, 21 May 2015 00:00:00 +0000
https://blog.paulhankin.net/chinesepoker/
<p>This blog post looks at closed-hand Chinese Poker, and describes
a near-optimal strategy for it which is readily implementable
on a computer.</p>

Everything you know about complexity is wrong
https://blog.paulhankin.net/complexityrant/
Wed, 06 May 2015 00:00:00 +0000
https://blog.paulhankin.net/complexityrant/
<p>Who would disagree that the runtime of mergesort is <span class="math">\(O(n\mathrm{log}\,n)\)</span> and that it's asymptotically optimal?
Not many programmers, I reckon, except perhaps to question whether it's talking about
a model of computation that's not sufficiently close to a real computer, for example a quantum
computer or one that performs arbitrary operations in parallel (possibly
involving <a href="http://en.wikipedia.org/wiki/Spaghetti_sort">sticks of spaghetti</a>).</p>
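<p>For concreteness, here is a standard textbook mergesort (an illustration, not code from the post); the question is which model of computation justifies the <span class="math">\(O(n\mathrm{log}\,n)\)</span> count for it.</p>
<pre><code># A standard top-down mergesort; counting comparisons gives the familiar
# O(n log n), but what an "operation" or a "memory access" costs depends on
# the computational model.
def mergesort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    out, i, j = [], 0, 0
    # Merge the two sorted halves.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
</code></pre>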
<p>However, if you try to understand how to formalize what it means for a sort
to run in <span class="math">\(O(n\mathrm{log}\,n)\)</span> and for it to be optimal,
it's surprisingly difficult to find a suitable computational model, that is,
an abstraction of a computer which elides all but the important
details of the computer: the operations it can perform, and how the memory
works.</p>
<p>In this post, I'll look at some of
the most common computational models used in both practice and theory, and
find that they're all flawed in one way or another: in every one of them, either
mergesort doesn't run in <span class="math">\(O(n\mathrm{log}\,n)\)</span> or there are
asymptotically faster sorts.</p>

An integer formula for Fibonacci numbers
https://blog.paulhankin.net/fibonacci/
Mon, 27 Apr 2015 00:00:00 +0000
https://blog.paulhankin.net/fibonacci/
<p>This code, somewhat surprisingly, generates Fibonacci numbers.</p>
<pre><code>def fib(n):
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)
</code></pre>
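<p>As a quick sanity check (the function is repeated so the snippet runs standalone), consecutive outputs satisfy the Fibonacci recurrence:</p>
<pre><code># Self-contained check: consecutive outputs obey fib(n) + fib(n+1) == fib(n+2).
def fib(n):
    return (4 << n*(3+n)) // ((4 << 2*n) - (2 << n) - 1) & ((2 << n) - 1)

for n in range(1, 16):
    assert fib(n) + fib(n + 1) == fib(n + 2)
</code></pre>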
<p>In this blog post, I'll explain where it comes from and how it works.
</p>