Thursday, December 29, 2011

The time to be spent as a measure of optimal play

The rules in my previous post turn out to lack substance when played out. If the players set up the two opposing nests close enough to each other, the first player always wins by simply rushing forward. Otherwise, if the nests are set up far enough apart, both players have time to secure their nests by surrounding them with two layers of their own stones, and I see no way then for either of them to reach their winning goal. We'd better forget about those rules.

More generally, avoid rules or positions where one player may force the game to last endlessly.

Even more generally, try to win in as few moves as possible, or else lose in as many moves as possible. Renowned researchers in game theory and developers of game engines, such as Jonathan Schaeffer and his colleagues at the University of Alberta GAMES Group, are well aware that time is part of optimal play.

I don't know, however, how to justify it. Is it an axiom we can hardly legitimate? Or can it be derived from some other principle that would be more widely accepted?

What I do know is that it rarely shows up in discussions about developing basic AI and game engines.


Most chess-playing engines start evaluating positions with a materialistic count of the pieces on the board: add 1 for every White pawn, subtract 1 for every Black pawn, add about 3 for every White knight, subtract the same amount for every Black knight, and so on. Positional and dynamic advantages and weaknesses are also carefully weighted and added or subtracted, yielding a final score in the form of a real number. The closer the score is to some large positive number, the closer White is to an easy win. Symmetrically, the closer the score is to some large negative number, the closer Black is to an easy win.
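The materialistic part of that evaluation can be sketched in a few lines. This is only an illustration of the counting scheme described above, not any particular engine's code; the function name and piece encoding are my own.

```python
# Conventional material values; kings are priceless and omitted.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(white_pieces, black_pieces):
    """Sum White's piece values and subtract Black's.

    Positive favors White, negative favors Black, zero is material balance.
    Pieces are given as one-letter codes, e.g. ["Q", "R", "P", "P"].
    """
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black
```

For instance, a queen and two pawns against a rook and a pawn gives `material_score(["Q", "P", "P"], ["R", "P"])`, i.e. 11 − 6 = 5 in White's favor. Real engines then pile positional terms on top of this number.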

In Go, the basic unit of measure might be the stone and its equivalent territory point, all the more so as they directly contribute to the winning condition. In Arimaa, it might be the rabbit, the elephant, or anything else -- there is no consensus yet. And so on.

While this heterogeneous approach has made it fairly easy for every engine to compare positions within the same game, it has general flaws:
  • You can't easily compare any Chess position with any Go position.
  • You can't easily compare what two different Arimaa engines think of the same position.
  • You must invent specific formulae to combine the score of a leaf with its depth in the tree search.
  • Winning easily is not an explicit and well-defined part of the score definition.
A better approach would be to measure the number of moves you expect the players to need before one of them wins, or, even better, the time you expect they'll spend.
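Here is a minimal sketch of what such a move-counting score could look like. The `Outcome` and `preference_key` names are hypothetical, and the sketch assumes the engine can already label a position with its expected winner and game length; the point is only that this ordering needs no game-specific units and no ad hoc depth formula, since "win in 3 moves" naturally outranks "win in 7", and a slow loss outranks a fast one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    winner: str   # "White", "Black", or "Draw" -- illustrative labels
    moves: int    # expected number of moves until the game ends

def preference_key(outcome, side):
    """Ordering key for `side`: a fast win > any draw > a slow loss.

    Returns a tuple so that Python's default tuple comparison ranks
    outcomes: wins first (fewer moves better), then draws, then losses
    (more moves better, since we want to lose as late as possible).
    """
    if outcome.winner == side:
        return (2, -outcome.moves)
    if outcome.winner == "Draw":
        return (1, 0)
    return (0, outcome.moves)

# Among several solved continuations, pick the one White prefers:
best = max(
    [Outcome("Black", 4), Outcome("White", 7), Outcome("White", 3)],
    key=lambda o: preference_key(o, "White"),
)
```

Here `best` is the win in 3 moves. Two engines using this convention could compare their opinions of the same position directly, and even positions from different games become comparable, which the heterogeneous material scores above never allowed.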

We'll keep that in mind while exploring future games.