### Background And Notes On Extending The N-Queens Puzzle Into 3 Dimensions

(This is an evolving document, and I'll add to it as our project matures.)
Last updated:  7th April 2010
Project Overview
The 3-dimensional N-Queens solutions shown on this website were sought and discovered in a joint project between me and my son Martin.

Martin gives his own insights into the project on his  Queens On A Chessboard  website, and on his  Beyond The 2nd Dimension  page he discusses the prospect of taking this work into 4 dimensions and higher!

Martin's notes are crisp, neat, clear and concise, in contrast to my wordy paragraphs below, so you may choose to visit his site too (or even  instead!).

Before I delve into the 3D stuff, here are some notes on the 2-dimensional beginnings of this decades-long hobby.

The Long Long 2D Background
I began mulling over the basic, original version of this 2D (2-dimensional) 8 Queens on an 8 by 8 chessboard puzzle around 1975(1), and at interludes ever since I've written faster and more efficient C and C++ programs to solve the puzzle.  It was easy to generalise the 8 by 8 board algorithm to solve boards of almost any size (limited only by CPU processing speeds) and, as I describe on other pages on this website, I have already identified (specifically) the guaranteed first possible puzzle solutions for most 2D board sizes up to 49 by 49 squares.

Although, on the spectrum of life's big problems, this puzzle ranks way down amongst million-piece jig-saw puzzles, enormously long domino stacks and proudly preserved coin collections, nevertheless - on 2-dimensional N-Queens boards, I claim that I lead the world in finding, specifically, the first possible solution to each of the largest 2-dimensional board sizes attempted under this puzzle's rules.  I will welcome any challenge to this claim.

In the 2D mode of this hobby I limited myself to finding only these first possible solutions, because from my earliest programming days I knew that finding the total of all possible solutions for larger boards, or even just counting them, was an astronomical task.  If I'd attempted it, I'd have been overtaken rapidly by someone with a better algorithm and/or a faster computer, and indeed that's already happening.  I recommend Walter Kosters' n-Queens bibliography (A), for example, which shows this puzzle's appearance in many fields of study.

2-Dimensions, Been There ... 3-Dimensions, Exciting New Territory
I wanted to take this puzzle further than just solving ever larger board sizes, and a logical and apparently achievable target was to take it up into the third dimension. So, way back in February 2002 after a few weeks' experimentation, I'd adapted my tried and trusted 2D "First Queens" C++ program to place queens one by one, into the cells of an N x N x N 'board' or cube.  And I made good progress too.  But in spite of some clever virtual rotations of my cube and the re-application of my algorithm from successive 'corners' and sides of the cube, I could never place more than a few dozen non-competing queens at any one time.  This early one-queen-at-a-time algorithm was leaving gaps and voids where I knew there should be queens, so in great frustration I abandoned the project for a while!

Then, in January 2008, in a conversation with Martin about the delicious unpredictabilities in this puzzle's 2-dimensional solutions, I described my earlier attempts to solve it in 3-dimensions by placing one queen at a time into an initially empty cube. To my delight he engaged completely with me on the project and was soon raising new questions and proposals of his own.

Martin already knew that for any large board size there were hundreds or thousands of  'fully populated' 2D solutions. (They are 'fully populated' because there is exactly one queen in every row and every column of each solved board.)  Even without seeing a single line of code he predicted that some combinations of solved 2D boards, when laid on successive layers of a cube, would yield 'maximally-populated' 3D solutions.  They would be 'maximally populated' because every vertical column of the solved cubes would also contain exactly one queen, and absolutely no further unchallenged queens could be added anywhere into such cubes!   (Drat! Why didn't I think of that!)
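Martin's 'maximally populated' condition is easy to state in code. The sketch below is only my illustration of the idea (it is not our project's code, and it assumes the usual generalisation that a 3D queen attacks along any straight line whose coordinate steps are each -1, 0 or +1): a stack of 2D solutions solves the cube when every vertical column also holds exactly one queen and no two queens attack each other.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// A layer is one 2D solution: layer[row] = column of that row's queen.
using Layer = std::vector<int>;

// Two cells attack iff every non-zero coordinate difference has the same
// magnitude (so the line between them has direction steps of -1, 0 or +1).
bool attacks(int r1, int c1, int z1, int r2, int c2, int z2) {
    int d[3] = { std::abs(r1 - r2), std::abs(c1 - c2), std::abs(z1 - z2) };
    int m = 0;
    for (int k = 0; k < 3; ++k) {
        if (d[k] == 0) continue;
        if (m == 0) m = d[k];
        else if (d[k] != m) return false;  // unequal non-zero steps: no queen line
    }
    return m != 0;                         // m == 0 would be the same cell
}

// True iff the N stacked layers give one queen in every vertical column
// and no two of the N*N queens attack each other.
bool isMaximallyPopulated(const std::vector<Layer>& cube) {
    const int n = static_cast<int>(cube.size());
    for (int r = 0; r < n; ++r) {          // vertical-column condition
        std::vector<bool> seen(n, false);
        for (int z = 0; z < n; ++z) {
            int c = cube[z][r];
            if (seen[c]) return false;     // two queens share a vertical column
            seen[c] = true;
        }
    }
    for (int z1 = 0; z1 < n; ++z1)         // pairwise attack test
        for (int r1 = 0; r1 < n; ++r1)
            for (int z2 = z1; z2 < n; ++z2)
                for (int r2 = (z2 == z1 ? r1 + 1 : 0); r2 < n; ++r2)
                    if (attacks(r1, cube[z1][r1], z1, r2, cube[z2][r2], z2))
                        return false;
    return true;
}
```

Note, for instance, that stacking several identical copies of one 2D solution always fails this test, because each queen then sits directly above another.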

So, with freshly charged enthusiasm I set off to try to implement this 2D solution-shuffling scheme in C++.

In those early talks we deliberated on the basic logic of this idea; on how to spot the worthwhile partial permutations of boards which deserved more algorithmic time, while quickly skipping over the fruitless ones. Very soon (in February 2008) I decided that this was an ideal candidate for a recursive programming technique (B).  There were distinct parallels with a Towers Of Hanoi puzzle, because our algorithm would be thrashing upwards and backwards through cube layers and through our "stack" of candidate 2D solutions.  (Ohhh! I was drooling at the prospect of writing the code to do this!!!)
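For readers who like to see the shape of such a recursion, here is a heavily simplified C++ sketch of the layer-shuffling idea (not our actual program; a real conflict test must reject every 3D queen attack, whereas this toy version only rejects shared vertical columns):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Layer = std::vector<int>;   // layer[row] = column of that row's queen

// Toy conflict test: reject a layer that would put two queens in one
// vertical column.  (The real algorithm must also reject 3D diagonal attacks.)
bool conflicts(const std::vector<Layer>& stack, const Layer& next) {
    for (const Layer& prev : stack)
        for (std::size_t r = 0; r < next.size(); ++r)
            if (prev[r] == next[r]) return true;
    return false;
}

// Recursively try every candidate 2D solution on every layer, backtracking
// as soon as a partial stack develops a conflict; returns the number of
// complete n-layer stacks.
long countStacks(const std::vector<Layer>& candidates, std::size_t n,
                 std::vector<Layer>& stack) {
    if (stack.size() == n) return 1;          // a full cube reached
    long total = 0;
    for (const Layer& cand : candidates) {
        if (conflicts(stack, cand)) continue; // prune the fruitless branch early
        stack.push_back(cand);                // descend one layer
        total += countStacks(candidates, n, stack);
        stack.pop_back();                     // backtrack and try the next layer
    }
    return total;
}
```

The pruning line is the whole point: a branch is abandoned the moment a partial stack can no longer lead anywhere, which is what makes shuffling millions of candidate boards feasible at all.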

My decision to use recursion gave us an interesting quandary for a while, because although I was familiar with recursive subroutines, Martin had not yet grappled with them in his own programming. Conversely, when it came to setting the overall strategy for solving this puzzle in 3 dimensions, especially the anticipation of some hurdles and opportunities, Martin had an amazingly clear grasp of these virtual cubes and how we should shuffle our 'stacks' of candidate 2D solutions!  His younger, uncluttered intellect was rapidly shaping the algorithm which I would need to implement in C++, and yet - even after I'd built my 'recursive' program foundations, we struggled for a while to communicate between our conceptually separate islands!

Catalytic Conversations
But we managed OK, and throughout many months of dialogue in what I call our "programmers' catalytic conversations", Martin listened patiently while I 'talked out' some of my C++ coding conundrums.  He in turn delivered many insights and nudges of his own towards our eventually-successful algorithm.  I don't mind admitting that he rescued me from one or two developmental-dead-ends in my implementation of our algorithm, even though algorithms were once my personal, programming strong point(!)  Anyway, together we gradually streamlined the core sorting and board-rejection functions.  I discarded and re-wrote many routines on the way, I built the program's User Interface elements, devised the file structures for storing and retrieving our solutions and our partial results, and naturally I added many tweaks and tricks of my own.

Yes, No, Yes, No ...!
For a long while my program was producing small numbers of solutions for the 11 x 11 x 11 cube and the 13 x 13 x 13 cube, and we settled on the faster-to-complete 11 x 11 x 11 cube as our benchmark for testing.

(I'll refer to cube sizes hereafter with the shorthand "11³ cube".)

In November 2008 I proudly announced to Martin that I'd found 74 "solutions" among the millions of possible permutations (C) of all 2,680 2D-board solutions eligible to be tested on the 11³ cube.  He diplomatically replied "Ah! 74? Yes, very good ... but no Dad, you're not there yet!  There should be many more than 74 solutions".  So I returned to my compiler, and tried again.

This continued for a few weeks (with interludes of course for other things in my life), and in spite of Martin's insistence and his attempts to persuade me, I felt certain ... absolutely certain ... that my recursive code was wringing all possible 3D solutions from these cubes.  In spite of judicious amendments to my C++ code, from small tweaks, to surgical snips and even entire subroutine transplants, that number "74" kept popping out of my recursive loops.  The durability, repeatability and damned ornery resilience of  "74" had me convinced that this was all we were going to get. Ever! (Well, really, I knew Martin was right, and I knew I still had work to do.  But I was baffled that my code kept yielding exactly 74 solutions!)

The key point here was that Martin had calculated (see next paragraph) that we should see over 100 eventual solutions to an 11³ cube, and possibly more than 200 solutions.  So, no matter how sure I felt that my code was delivering all possible solutions, he politely held to his conjecture and invited me to return again to my compiler and burn more candles!!  Like every good researcher adhering to sound scientific methodology, he simply offered his conjecture to me, and never insisted that he was right and I was wrong ... but ... I sort of knew he was right, really ... (Drat!).

Martin based his assertion of  "lots more than 74 solutions" on results from his own experiments with a trimmed down version of our algorithm in his own PHP-based program. (Interestingly, his is a non-recursive program!).  But I'll also give due credit to his ability to visualise so many aspects of this entire puzzle in his head, in advance, at one time!!  At this stage my own perception of our algorithm was ironically being hindered and constrained by the 90% successful, cleverly recursive C++ routines in which I'd invested so much time, whereas Martin's viewpoint of it had no such baggage.

I eventually accepted that although I was the author of these recursive routines, their opaque and 'hidden' complexity was to blame for blocking my 'grasp' of this core part of our algorithm - especially at the beginning and end of a complete shuffle of each cube's 'stack' of 2D solutions.  So - we did something we hadn't done before on this project - I installed my compiler and my C++ project onto a tatty but capable old 500MHz laptop (as was permitted by my compiler licence!), and we brought Martin and my code together at last!  Then, on a long Sunday afternoon in January 2009 we reviewed my code, almost function by function.  Martin soon grasped my program's overall 'recursive' structure, even though he was only slightly fluent in C++.

As always on our Sunday get-togethers we had limited time for this detailed scrutiny, so in order to prove that the key corners of our algorithm were doing what they oughta do, Martin devised a cunning stack of dummy 2D "solutions" for our smallest 4³ cube, and most importantly, he armed me with the certain knowledge that when my program (eventually) worked, back on my main development PC, I would see 24 (no more and no less) "diagnostic" solutions to this 4³ cube. (Note that there are no real solutions for the 4³ cube.)  The reader might spot that 24 is "factorial 4" (or 4!), so this ploy by Martin was able to verify that my program was in fact testing every permutation of every one of the 'dummy' N-Queens solutions that Martin devised.  It was a simple but thorough test, and because our "diagnostic cube" was a miniature 4³ one, it would only take a second or less to run.
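The arithmetic behind that figure is simply the count of orderings of the four dummy layers, which a few lines of C++ can confirm (an illustration only, using the standard library's permutation generator rather than any of my own routines):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Count the distinct orderings of k distinguishable layers by walking
// std::next_permutation through every arrangement: the answer is k!.
long countOrderings(int k) {
    std::vector<int> layers(k);
    std::iota(layers.begin(), layers.end(), 0);  // label the dummy layers 0..k-1
    long count = 0;
    do {
        ++count;
    } while (std::next_permutation(layers.begin(), layers.end()));
    return count;
}
```

With k = 4 this yields exactly 24, so any shuffler reporting fewer diagnostic "solutions" must be skipping permutations somewhere.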

Over the following week, based on my scrutiny of the program's behaviour in this diagnostic state, I built a series of OpenOffice 'Impress' (PowerPoint equivalent) slides, illustrating graphically, in excruciatingly small steps, how MY program SHOULD be behaving!  And - Ha! - Yes. Some big pennies did drop! ... I spotted exactly where I needed to modify an outer, structural part of my code, relating, no surprise, to those recursive routines.

YES! YES!
So, after a few more hours of adjustments, tests, re-adjustments and (taking no shortcuts) some further diagnostics on my program, on the 7th February 2009 I was able to declare to Martin that we had a whole pile of 264 "solutions" for the 11³ cube, and no less than 624 "solutions" for the 13³ cube.  I've placed the word "solutions" in quotes because Martin has already predicted (now with little doubt) that when we have eliminated the multitude of mirrored "solutions", plus their rotations, translations and inversions, we will see a neat set of only 11 unique solutions for the 11³ cube.  We are still assessing (OK, Martin is assessing ...) how few of the 13³ cube's 624 solutions will count as unique solutions.

For completeness, I can report that between the sizes of 4³ and 15³, only the 11³ and 13³ cubes accommodate these maximally-populated solutions of exactly N² queens. There are no maximally populated 3D solutions for cube sizes from 4³ to 10³ inclusive, and there are no maximally populated 3D solutions for the 12³, 14³ and 15³ cubes.

Martin's own  3D Solutions So Far  page summarises these figures in a neat table, but more significantly he explains why most of these cubes have zero 3D solutions of exactly N² queens.  He also presents his intriguing hypothesis on calculating, in advance, how many solutions we might find in larger cubes!

On the next cube size, 16³, my computer is still shuffling (very very slowly) through that cube's long list of 14.7 million 2D solutions.  So far there are zero 3D solutions for this size.

Update - Effective Tuesday 5th May 2009 and with my program at version 3.0.6.4
I've made some huge adaptations to my program to enable it to work on the next largest cube size, 17³, without demanding infeasible chunks of working memory (on Windows XP Pro with 3GB RAM).  To achieve this, my program now re-solves each of the 17 x 17 (2D) solutions on-the-fly, on demand, as and when required by our algorithm, instead of reading the solutions from a pre-prepared file on disk.  So now I am able to run a further instance of my program, shuffling and filtering through the 17³ cube's stack of 95.8 million 2D solutions. Of course this cube is advancing even more slowly than the 16³ cube!!
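The trade-off can be pictured with a standard 2D backtracking solver that streams each solution to a callback and then forgets it, instead of loading a giant pre-computed file (again, only my sketch of the principle, not the program itself):

```cpp
#include <cassert>
#include <cstdlib>
#include <functional>
#include <vector>

using Board = std::vector<int>;   // board[row] = column of that row's queen

// Standard 2D N-Queens backtracker: each complete solution is handed to
// `emit` the moment it is found and can be discarded immediately afterwards,
// so memory use stays tiny no matter how many solutions exist.
void solve2D(int n, int row, Board& board,
             const std::function<void(const Board&)>& emit) {
    if (row == n) { emit(board); return; }       // a complete 2D solution
    for (int col = 0; col < n; ++col) {
        bool safe = true;
        for (int r = 0; r < row; ++r)
            if (board[r] == col || std::abs(board[r] - col) == row - r) {
                safe = false;                    // same column or 2D diagonal
                break;
            }
        if (!safe) continue;
        board[row] = col;                        // place and descend a row
        solve2D(n, row + 1, board, emit);
    }
}
```

The cost, of course, is CPU time: every solution must be re-derived each time it is wanted, which is exactly why the 17³ run advances so slowly.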

Visualizing The 3D Solutions
(Please pop back later for updates here.  In this section I'll explain how I converted our 3D solutions into *.OBJ files, and how I adapted those *.OBJ files so that JavaView can display the 3-dimensional cubes that you can see on my other web page.)

Amendments:
1. April 2010:   Changed all references to the year 1976 to read 1975, after I re-discovered my old notes from that era.

External References:
A. The n-Queens bibliography - Maintained by Walter Kosters from Universiteit Leiden. Updated in 2009 by Pieter Bas Donkersteeg.
B. Recursive Functions - The University of Strathclyde, Glasgow, Scotland.
C. Fast-Factorial-Functions - The Homepage of Factorial Algorithms, Peter Luschny, 2000-2010.