OK, my heuristic-search solver for the Google+ game “Flood-It” got me a score of 8440 (and a ranking of “SuperPixie”) before the game’s utter inability to handle transient network failures ended the run. I think I’m done. (Part of the point was to ruin the game for myself, by demonstrating that a three-year-old laptop is better at the game than I’ll ever be, so I could quit playing it…)
I implemented four main search strategies: Depth-first branch and bound, dynamic move ordering, limited discrepancy search, and a transposition table. Thanks to Bart Massey for teaching me everything I know about game search—and for telling me how useful netpbm is. (I convert game screenshots to textual board descriptions using a pipeline of PPM filters, two of which I wrote specially for this.)
The branch-and-bound pruning heuristic uses the value from the transposition table if it was found with a sufficiently large number of discrepancies remaining. Barring that, it falls back to counting how many colors are left, which is admissible because it takes at least one move per color to clear the board. Unfortunately, since the maximum number of colors is 6 and the game tree is usually more than 20 moves deep, the fallback heuristic only prunes pretty close to the leaves. I think the biggest remaining improvement would come from a better heuristic that’s still fast to evaluate, but I haven’t been able to come up with one.
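In sketch form, the fallback bound is just a distinct-color count (the class and method names here are illustrative, not my actual code):

```java
import java.util.BitSet;

final class Bound {
    // remainingColors holds the color (0..5) of each cell not yet flooded.
    static int colorsLeftBound(int[] remainingColors) {
        BitSet seen = new BitSet(6);
        for (int c : remainingColors) seen.set(c);
        // Clearing the board takes at least one move per surviving color,
        // so this count never overestimates the true distance: it's admissible.
        return seen.cardinality();
    }
}
```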
The move ordering heuristic tries all moves that hit the transposition table first, in order of ascending value. (Move ordering doesn’t need an admissible heuristic, so table entries are used regardless of the number of discrepancies.) The remaining moves are then tried with a greedy heuristic, evaluating moves that flood the largest board area first.
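Something like this, where the table lookups and area estimates are stubbed out as maps rather than my real data structures:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

final class MoveOrder {
    /** Transposition-table hits first, by ascending stored value; then the
     *  remaining moves greedily, by descending area flooded. */
    static List<Integer> order(List<Integer> moves,
                               Map<Integer, Integer> tableValue,
                               Map<Integer, Integer> areaGained) {
        List<Integer> hits = new ArrayList<>(), misses = new ArrayList<>();
        for (int m : moves)
            (tableValue.containsKey(m) ? hits : misses).add(m);
        hits.sort(Comparator.comparingInt(tableValue::get));
        misses.sort(Comparator.comparingInt((Integer m) -> areaGained.get(m)).reversed());
        hits.addAll(misses);
        return hits;
    }
}
```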
One game-specific trick seems to work well no matter what search strategies I use. If there’s a move available that completely eliminates some color, don’t consider any other moves from the current position. Even if more than one color can be completely eliminated, just pick one; the others will be treated similarly when we recurse. This optimization eliminates a class of transpositions near the leaves without using space in the transposition table, and also avoids evaluating the move-ordering heuristic when it doesn’t matter. It’s a small optimization since it usually only helps near the leaves, but it makes a measurable difference.
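The check itself is cheap if you track, per color, how many cells remain and how many of those border the flood (those counts are hypothetical bookkeeping here, not necessarily how my code tracks them):

```java
import java.util.Map;
import java.util.OptionalInt;

final class Eliminate {
    /** If playing some color would absorb every remaining cell of that
     *  color, return it as the only move worth considering. */
    static OptionalInt forcedMove(Map<Integer, Integer> countByColor,
                                  Map<Integer, Integer> adjacentByColor) {
        for (Map.Entry<Integer, Integer> e : countByColor.entrySet()) {
            int total = e.getValue();
            // A move floods all adjacent cells of its color; if every
            // remaining cell of the color touches the flood, it vanishes.
            if (total > 0 && adjacentByColor.getOrDefault(e.getKey(), 0) == total)
                return OptionalInt.of(e.getKey());
        }
        return OptionalInt.empty();
    }
}
```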
When I implemented limited discrepancy search, searches started returning optimal solutions dramatically faster, but the subsequent proof that the solution is actually optimal takes noticeably longer. The trade-off seems worthwhile to me, but since I can’t quickly tell whether the last reported solution is optimal, I used sub-optimal solutions several times when I was a little too impatient.
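The core of limited discrepancy search is tiny: following the move-ordering heuristic’s first choice is free, and every deviation spends one unit of a discrepancy budget. A stripped-down sketch, with a toy Node type standing in for the real board state:

```java
import java.util.Arrays;
import java.util.List;

final class LDS {
    /** Minimal tree node: children are pre-sorted best-first by the
     *  move-ordering heuristic; 'goal' marks a solved board. */
    static final class Node {
        final boolean goal;
        final List<Node> children;
        Node(boolean goal, Node... children) {
            this.goal = goal;
            this.children = Arrays.asList(children);
        }
    }

    /** Returns a goal node reachable with at most 'discrepancies'
     *  deviations from the first-ranked child, or null if none. */
    static Node search(Node n, int discrepancies) {
        if (n.goal) return n;
        for (int i = 0; i < n.children.size(); i++) {
            int cost = (i == 0) ? 0 : 1;  // only the favorite child is free
            if (cost > discrepancies) break;
            Node hit = search(n.children.get(i), discrepancies - cost);
            if (hit != null) return hit;
        }
        return null;
    }
}
```

Iterating the budget upward from zero is what finds good solutions early: most of the heuristic’s recommendations are right, so solutions tend to lie only a few discrepancies off the principal line.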
Those who know me will be surprised that I wrote the search algorithm in Java and the PPM filters in C, instead of using Haskell. While I did make most of my Java objects immutable, these search strategies include a certain amount of single-threaded state that needs to be updated in the order in which the tree is evaluated, like the transposition table and the current best depth. Further, some of the state was easy to update using the “do-undo” strategy, which eliminates some memory allocation per evaluated move, thereby reducing time spent in garbage collection. While I could have implemented the same algorithms in the ST monad or something, I felt this task was easier in an imperative language.
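The do-undo idiom amounts to mutating one shared board in place on the way down and reversing the mutation on the way back up, instead of copying state at every node. A contrived illustration (the “move” here just recolors a single field; the real state is of course richer):

```java
import java.util.ArrayDeque;
import java.util.Deque;

final class DoUndo {
    int floodColor;
    final Deque<Integer> undo = new ArrayDeque<>();

    void doMove(int newColor) {
        undo.push(floodColor);    // remember just enough to reverse the move
        floodColor = newColor;    // mutate in place: no allocation per node
    }

    void undoMove() {
        floodColor = undo.pop();  // restore state in reverse evaluation order
    }
}
```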
Feel free to check out the code from my git repository. And if you think of a better admissible heuristic, I’d love to hear about it.
Comments from Google+
Ian Duffe - 2011-11-18 20:54:33-0500
Neat stuff. Now I want to play around with netpbm
Bart Massey - 2011-11-23 04:21:18-0500
(BTW, one thing I hate about gitweb is that it doesn’t mention a clone URL. I have the same problem with BartForge. I guessed it, though.)
pnmcrop is supposed to achieve the same effect as your crop program, but it apparently doesn’t work. Bleah.
Once I got your solver working, I had big fun with it.
It looks like most of the improvements it finds on its initial solution consist of simply omitting an unnecessary move in the middle somewhere. You should probably run each leaf through an optimizer that tries just leaving each move out in turn and seeing if the resulting solution still works; run until convergence.
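In Java, that post-pass might look something like this, with a `solves` predicate standing in for replaying the move list against the start board:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

final class Shorten {
    /** Try dropping each move in turn; keep any drop that still solves the
     *  board, and repeat until no single removal helps (convergence). */
    static List<Integer> minimize(List<Integer> moves, Predicate<List<Integer>> solves) {
        List<Integer> cur = new ArrayList<>(moves);
        boolean improved = true;
        while (improved) {
            improved = false;
            for (int i = 0; i < cur.size(); i++) {
                List<Integer> shorter = new ArrayList<>(cur);
                shorter.remove(i);
                if (solves.test(shorter)) {  // still valid without move i
                    cur = shorter;
                    improved = true;
                    break;
                }
            }
        }
        return cur;
    }
}
```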
There’s a better heuristic based on “deadlocks”, although it’s a tiny bit tricky to compute. If a region is surrounded by pixels not part of the current flood, then there is a minimum cost of 1 to reach that region, and this holds recursively. These costs accrue in parallel for disconnected regions, but clearing a color requires at least as many moves as the maximum deadlock depth leading to a pixel of that color. Pseudocoding:
    for each color c:
        for each occurrence u(c) of c:
            find the minimum distance d(u(c)) in color steps from u to the flood
        let m(c) be the maximum over u(c) of d(u(c))
    return the maximum over c of m(c) + the number of colors - 1
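On a region graph that’s one breadth-first search from the flood region, then a per-color maximum. A sketch of my reading of the pseudocode, with region 0 as the flood and illustrative array encodings:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

final class Deadlock {
    /** adj[i] lists region i's neighbors, color[i] its color; region 0 is
     *  the current flood (its color entry is ignored). */
    static int bound(int[][] adj, int[] color) {
        int n = adj.length;
        int[] dist = new int[n];
        Arrays.fill(dist, -1);
        Deque<Integer> queue = new ArrayDeque<>();
        dist[0] = 0;
        queue.add(0);
        while (!queue.isEmpty()) {          // BFS: distance in "color steps"
            int u = queue.poll();
            for (int v : adj[u])
                if (dist[v] < 0) { dist[v] = dist[u] + 1; queue.add(v); }
        }
        Map<Integer, Integer> worst = new HashMap<>();  // color -> m(c)
        for (int i = 1; i < n; i++)
            worst.merge(color[i], dist[i], Math::max);
        int best = 0;
        for (int m : worst.values()) best = Math.max(best, m);
        return best + worst.size() - 1;     // max m(c) + (#colors - 1)
    }
}
```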
Since you are working with a graph instead of pixels (obviously the right thing to do), you can modify the above appropriately. I believe that the heuristic as I have given it isn’t tight; you can probably tighten it up a bit further with some thought.
(That said, probably an argument similar to the blocks-world one will show that generalized FloodIt is NP-hard; exercise for the reader.)
I think A* or IDA* is probably the best search plan here. That said, what you have seems to work just fine.
Again, this is cool! Thanks for sharing. I’m teaching search next quarter, so maybe I’ll bug you to come in and talk to the class.
Jamey Sharp - 2011-11-25 01:26:15-0500
Yay, I was hoping for that level of review! Thanks, Bart. I’d enjoy talking with your students, but I’m done playing Flood-It now. Honest! Otherwise, in order:
My gitweb setup does show both http and git clone URIs. They’re on the summary page, below description, owner, and last change date.
I think pnmcrop doesn’t work because there’s a subtle gradient between grey and white in the background, so the corners don’t match. (I tried it first.) I just looked for un-saturated colors, which sufficiently distinguishes between board and not-board pixels.
The big win with your optimizing pass is that it tightens the pruning bound while it’s finding better solutions sooner, I guess? It sounds to me like a hybrid between local search and complete search, and that seems interesting.
I think an equivalent way to describe your admissible heuristic is to find the maximum depth a breadth-first search would reach (in the graph representation I’m using). Does that sound right? I tried to devise a heuristic like that, but since I couldn’t see a way to update it incrementally after each move, I expected it would slow down evaluating each node by too much to be worthwhile. I should probably have tested it anyway, I guess… Perhaps the iterative deepening idea would help in evaluating it efficiently.
Converting the pixel grid into a graph representation was an oddly challenging exercise all by itself, by the way. I might have been better served by making it a little less Haskell-like, just this once.
Hope your Thanksgiving went well!
Bart Massey - 2011-12-15 16:27:23-0500
Last night when I should have been sleeping, I was thinking about your netpbm code for some reason. Turns out you can replace your cropping with pnmquant -center -nofs 7 | pnmcrop, and most of your map reader with pamscale -reduce 24, both of which are shamefully hard-wired, but hey. You still need the part of your map reader that converts colors into letters, though you can use pamtopnm -plain to get an ASCII-encoded PPM at the end, which makes it possible to write the rest as an awk script.
Having meditated on all this, I wasted a couple of hours today: the result is at http://github.com/BartMassey/floodit-convert — it now handles small, medium and large boards correctly by relaxing some of the hard constants. Have fun. :-)