Generating random Latin squares

Generation of random Latin squares (such that each Latin square of a given size is equally likely) is a deceptively difficult problem.

This post describes my own implementation, loosely based on the Java implementation described by Ignacio Gallego Sagastume, which implements the rather ingenious method of Jacobson and Matthews.

Basics

A Latin square is an $n$ by $n$ grid containing $n$ different symbols, arranged such that each row and each column of the grid contains exactly one of each symbol.

Here’s an example, containing the numbers 1 to 5.
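1 2 3 4 5
2 3 4 5 1
3 4 5 1 2
4 5 1 2 3
5 1 2 3 4

(This particular square is the cyclic one, with each row shifted along by one from the row above; the code later in the post uses it as a starting point.)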

It’s easy to quickly verify that each of the numbers 1 to 5 appears exactly once in each row and column.

Other example Latin squares appear in the graphical header of this post.

Jacobson and Matthews

In their paper, Jacobson and Matthews present an algorithm for generating random Latin squares. Their work relies on two main insights. First, that a Latin square is equivalent to an incidence matrix. That’s an $n$ by $n$ by $n$ array of 0’s and 1’s such that if you cut a line of $n$ elements out of the array, in any of the 3 dimensions (that is, left-to-right, down-to-up, front-to-back), you have exactly one 1 entry.

These incidence matrices correspond to Latin squares: if you start with an incidence matrix and fix two dimensions (I choose $x$ and $y$), then for each $i$ and $j$, there’s a unique $k$ such that $M_{i,j,k}$ is 1. This $k$ is the value of the corresponding Latin square at position $i,j$.
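To make the correspondence concrete, here’s a rough sketch (not part of the final program; the function names are just for illustration) that converts between a Latin square with symbols 0..n-1 and its incidence matrix:

func toIncidence(square [][]int) [][][]int {
	// M[i][j][k] is 1 exactly when square[i][j] == k.
	n := len(square)
	M := make([][][]int, n)
	for i := range M {
		M[i] = make([][]int, n)
		for j := range M[i] {
			M[i][j] = make([]int, n)
			M[i][j][square[i][j]] = 1
		}
	}
	return M
}

func toSquare(M [][][]int) [][]int {
	// For each i, j there's a unique k with M[i][j][k] == 1; that k is
	// the value of the square at position i, j.
	n := len(M)
	square := make([][]int, n)
	for i := range square {
		square[i] = make([]int, n)
		for j := 0; j < n; j++ {
			for k := 0; k < n; k++ {
				if M[i][j][k] == 1 {
					square[i][j] = k
				}
			}
		}
	}
	return square
}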

Their second insight, the ingenious step, is to generalize this idea of an incidence matrix. They reframe the condition “exactly one element in each line is 1” as “if you fix $i,j$, then $\sum_{k=1}^n M_{i,j,k} = 1$”, with similar equations for fixing $i,k$ and for fixing $j,k$. Then they allow a single element of the incidence matrix to be -1. Although these almost-incidence-matrices no longer correspond to Latin squares, this loosening of the definition allows them to define a random transition from any incidence matrix or almost-incidence-matrix to another. They prove that these random transitions allow one to eventually get from any incidence matrix to any other, and that the stationary distribution is uniform. That means that if you apply enough of these random transitions and have landed on a proper incidence matrix, you’re almost equally likely to be at any possible incidence matrix.
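Concretely, an array is an incidence matrix or an almost-incidence-matrix exactly when every entry is -1, 0 or 1, at most one entry is -1, and every line in each of the three directions sums to 1. Here’s a rough sketch of a checker for that condition (not part of the final program):

func isAlmostIncidence(M [][][]int) bool {
	n := len(M)
	minusOnes := 0
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			for k := 0; k < n; k++ {
				if M[i][j][k] < -1 || M[i][j][k] > 1 {
					return false
				}
				if M[i][j][k] == -1 {
					minusOnes++
				}
			}
		}
	}
	if minusOnes > 1 {
		return false
	}
	for a := 0; a < n; a++ {
		for b := 0; b < n; b++ {
			sumK, sumJ, sumI := 0, 0, 0
			for c := 0; c < n; c++ {
				sumK += M[a][b][c] // fix i=a, j=b, sum over k
				sumJ += M[a][c][b] // fix i=a, k=b, sum over j
				sumI += M[c][a][b] // fix j=a, k=b, sum over i
			}
			if sumK != 1 || sumJ != 1 || sumI != 1 {
				return false
			}
		}
	}
	return true
}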

The transition step

Jacobson and Matthews define the transition step based on whether you’ve currently got a perfect incidence matrix (that is, one with no -1 entry), or an imperfect one (that is, one with a single -1 entry).

In the perfect case, they pick $i,j,k$, a 0 entry of the incidence matrix. In the imperfect case, they pick $i,j,k$ to be the -1 entry of the imperfect incidence matrix.

Next, they choose $i', j', k'$ such that $M_{i,j,k'}$, $M_{i,j',k}$ and $M_{i',j,k}$ are all 1. In the perfect case, this choice is fixed (since each line contains exactly one 1), but in the imperfect case each of the three lines through the -1 entry contains two 1’s, and they pick one of the two at random in each line.

Now, $i,j,k$ and $i',j',k'$ are considered to define a cube, with the eight corners of the cube being $\{i,i'\}\times\{j,j'\}\times\{k,k'\}$. They’ve constructed this cube such that $M_{i,j,k'}$, $M_{i,j',k}$ and $M_{i',j,k}$ are all 1. Because of the definition of the incidence matrix, the corners $M_{i',j',k}$, $M_{i',j,k'}$ and $M_{i,j',k'}$ are all 0, since each of them lies on a line (containing no -1) that already contains one of those three 1’s. We know that $M_{i,j,k}$ is either 0 or -1 depending on whether we started with a perfect or imperfect incidence matrix. That just leaves $M_{i',j',k'}$, which may be 0 or 1.

The transition step is to add 1 to the element $M_{i,j,k}$, subtract 1 from the element $M_{i',j',k'}$, and toggle the remaining six corners of the cube (1’s become 0’s and 0’s become 1’s). With a little thought, one can see that this leaves a perfect incidence matrix if $M_{i',j',k'}$ was 1, and an imperfect incidence matrix if not.
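Here’s a rough sketch of the move applied to an explicit 3-dimensional array $M$ (this isn’t part of the final program, which uses a more compact representation), assuming $i,j,k$ and $i',j',k'$ have already been chosen as described above:

func move(M [][][]int, i, j, k, i2, j2, k2 int) bool {
	// (i, j, k) is the chosen 0 (or -1) entry, and (i2, j2, k2) is the
	// opposite corner of the cube. Returns whether the result is perfect.
	wasOne := M[i2][j2][k2] == 1
	// Add 1 at (i, j, k) and at the three corners that were 0...
	M[i][j][k]++
	M[i][j2][k2]++
	M[i2][j][k2]++
	M[i2][j2][k]++
	// ...and subtract 1 at (i2, j2, k2) and at the three corners that were 1.
	M[i2][j2][k2]--
	M[i][j][k2]--
	M[i][j2][k]--
	M[i2][j][k]--
	return wasOne
}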

Implementation

To implement the algorithm efficiently, we’d like the transition function to take O(1) time. The slow part, with a naive implementation, will be finding the 1 elements in the (perfect or imperfect) incidence matrix.

I take a similar, but slightly more efficient, approach to Sagastume’s Java implementation.

The matrix $M$ is represented like this:

xy [n][n]int // an array of n by n integers
yz [n][n]int
xz [n][n]int
perfect bool
m [3]int
mxy, myz, mxz int

(as a pedantic side note, the xy, yz and xz arrays are actually slices in Go, to allow n to be varied at runtime).

The correspondence to the $M$ matrix is as follows:

  • xy[i][j]==k implies $M_{i,j,k}=1$
  • yz[j][k]==i implies $M_{i,j,k}=1$
  • xz[i][k]==j implies $M_{i,j,k}=1$
  • perfect describes whether $M$ is perfect.
  • if $M$ isn’t perfect, then m contains the coordinates of the -1 entry …
  • … and the entries (m[0], m[1], mxy), (m[0], mxz, m[2]) and (myz, m[1], m[2]) of $M$ are also 1.

This representation makes finding 1’s in the incidence matrix efficient, and has the property that xy, yz and xz each correspond to a Latin square representation of the incidence matrix when the incidence matrix is proper.
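As a sanity check of that last claim, the three views have to agree with each other whenever the matrix is proper; a rough sketch of such a check (not part of the final program):

func consistent(xy, yz, xz [][]int) bool {
	// In the proper case, xy, yz and xz describe the same set of 1
	// entries, so xy[i][j] == k, yz[j][k] == i and xz[i][k] == j must agree.
	n := len(xy)
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			k := xy[i][j]
			if yz[j][k] != i || xz[i][k] != j {
				return false
			}
		}
	}
	return true
}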

The full Go source code is below, and is set up to output a random 25x25 Latin square when run. The code is small and fast enough to run on the Go playground, which you can find by clicking here. The playground version has Seed and N parameters that you can change to get different Latin squares.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rand2 returns its two arguments in a random order.
func rand2(a, b int) (int, int) {
	if rand.Intn(2) == 0 {
		return a, b
	}
	return b, a
}

// Latin returns a random n by n Latin square containing the symbols 0..n-1.
func Latin(n int) [][]int {
	xy := make([][]int, n)
	xz := make([][]int, n)
	yz := make([][]int, n)
	for i := 0; i < n; i++ {
		xy[i] = make([]int, n)
		yz[i] = make([]int, n)
		xz[i] = make([]int, n)
	}
	// Start from the cyclic Latin square xy[i][j] = (i + j) mod n.
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			k := (i + j) % n
			xy[i][j] = k
			xz[i][k] = j
			yz[j][k] = i
		}
	}

	var mxy, mxz, myz int
	var m [3]int
	proper := true
	// Perform at least n^3 random moves, and only stop once the
	// matrix is proper again.
	minIter := n * n * n
	for iter := 0; iter < minIter || !proper; iter++ {
		var i, j, k, i2, j2, k2 int
		var i2_, j2_, k2_ int

		if proper {
			// Pick a random 0 in the array
			i, j, k = rand.Intn(n), rand.Intn(n), rand.Intn(n)
			for xy[i][j] == k {
				i, j, k = rand.Intn(n), rand.Intn(n), rand.Intn(n)
			}
			// find i2 such that [i2, j, k] is 1. same for j2, k2
			i2 = yz[j][k]
			j2 = xz[i][k]
			k2 = xy[i][j]
			i2_, j2_, k2_ = i, j, k
		} else {
			i, j, k = m[0], m[1], m[2]
			// find one such that [i2, j, k] is 1, same for j2, k2.
			// each is either the value stored in the corresponding
			// slice, or one of our three temporary "extra" 1s.
			// That's because (i, j, k) is -1.
			i2, i2_ = rand2(yz[j][k], myz)
			j2, j2_ = rand2(xz[i][k], mxz)
			k2, k2_ = rand2(xy[i][j], mxy)
		}

		// If M[i2][j2][k2] was 1 the move keeps the matrix proper;
		// otherwise that entry becomes -1, and we record where it is
		// and where the "extra" 1s in its three lines are.
		proper = xy[i2][j2] == k2
		if !proper {
			m = [3]int{i2, j2, k2}
			mxy = xy[i2][j2]
			myz = yz[j2][k2]
			mxz = xz[i2][k2]
		}

		// Apply the move: in each of the three views, the four affected
		// cells are updated to point at the corners that hold a 1 after
		// the move.
		xy[i][j] = k2_
		xy[i][j2] = k2
		xy[i2][j] = k2
		xy[i2][j2] = k

		yz[j][k] = i2_
		yz[j][k2] = i2
		yz[j2][k] = i2
		yz[j2][k2] = i

		xz[i][k] = j2_
		xz[i][k2] = j2
		xz[i2][k] = j2
		xz[i2][k2] = j
	}
	return xy
}

func main() {
	rand.Seed(time.Now().UnixNano())
	const N = 25
	for _, row := range Latin(N) {
		for _, c := range row {
			fmt.Printf("%3d ", c+1)
		}
		fmt.Printf("\n")
	}
}
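If you want to convince yourself that the output really is a Latin square, a small check along these lines (not part of the program above, and easy to call from main) does the job:

// isLatin reports whether square is an n by n Latin square on the
// symbols 0..n-1: each symbol appears exactly once per row and column.
func isLatin(square [][]int) bool {
	n := len(square)
	for a := 0; a < n; a++ {
		rowSeen := make([]bool, n)
		colSeen := make([]bool, n)
		for b := 0; b < n; b++ {
			r, c := square[a][b], square[b][a]
			if r < 0 || r >= n || rowSeen[r] || c < 0 || c >= n || colSeen[c] {
				return false
			}
			rowSeen[r] = true
			colSeen[c] = true
		}
	}
	return true
}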
Written by Paul Hankin

Programming since 1981, professionally since 2000. I’ve a PhD in programming language semantics but these days I prefer programming in Go, C, and Python. I’ve worked mostly on games and large-scale server software.