Last week I took my car in for an oil change and I got a chance to peruse a book that I received for Christmas: Artificial Intelligence: A Modern Approach by Russell and Norvig. It is a hefty 900 pages but each chapter can be read and understood separately, much like good code.
Like any thorough textbook it starts out with a definition of AI and divides all AI into four categories.
1. Systems that think like humans.
2. Systems that act like humans.
3. Systems that think rationally.
4. Systems that act rationally.
If you think about it, thinking rationally and acting rationally are two very different concepts. The book calmly says that it is concerned with the fourth point, "Systems that act rationally."
Now that I've covered the boring stuff, I can talk about the alpha-beta (AB) algorithm, which can be used to program an AI for two-player games. AB "should only be applied to positions that are quiescent, that is, unlikely to exhibit wild swings in value in the near future." Obviously in Magic there are often many wild swings over the course of a normal game, so the AB algorithm might have trouble evaluating some of Magic's game states, especially when a player is trying to maximize the use of cards like Glorious Charge or Overrun, which pump up all of the player's creatures. I often maximize cards like these by choosing not to trade creatures when attacking or blocking.
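To make the AB algorithm concrete, here is a minimal sketch in Python. It searches a small hand-built game tree where leaves are scores from the maximizing player's point of view; a real Magic AI would of course generate moves from the game state instead of walking a fixed tree.

```python
# Minimal alpha-beta sketch over a hand-built game tree.
# Internal nodes are lists of children; leaves are numeric scores
# from the maximizing player's point of view.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):      # leaf: just return its value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune: opponent won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:               # prune: we won't allow this line
                break
        return value

# The classic textbook tree; its minimax value is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 3
```

Note how the pruning works: once the first branch guarantees the maximizer a 3, the second branch is abandoned as soon as a 2 shows up, because the minimizer would never let the maximizer do better than 2 there.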
Another AB deficiency is the "horizon problem," which occurs when the AI "is facing a move by the opponent that causes serious damage and is ultimately unavoidable." My guess is that this applies more to chess than Magic.
Most AB implementations let you specify the depth, or the number of moves ahead to search. The book suggests "iterative deepening search," a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits within the time allowed. First depth 0 is searched, then depth 1, then depth 2, and so on. Combining this search with AB can be very successful because the "effectiveness of AB depends on the ordering in which the successors are examined." So if it is possible, it would be sensible to give AB a good move to begin with so AB can remove (prune) a large number of lesser, unsuccessful moves.
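Here is one way the two ideas fit together, sketched on a trivial two-player game (a pile of stones where each turn you take 1 or 2, and taking the last stone wins) rather than Magic. The depth-limited search is written in negamax form, and each deepening pass feeds its best move back in so the next pass examines it first; the game, function names, and the crude "call the horizon a draw" evaluation are all my own illustration, not from the book.

```python
import time

# Toy game: a pile of n stones; each turn remove 1 or 2; taking the last wins.

def ordered_moves(n, first=None):
    ms = [m for m in (1, 2) if m <= n]
    if first in ms:                     # try last iteration's best move first,
        ms.remove(first)                # so AB can prune the rest more aggressively
        ms.insert(0, first)
    return ms

def search(n, depth, alpha=-2, beta=2, first=None):
    """Depth-limited negamax with alpha-beta; returns (score, best_move)."""
    if n == 0:
        return -1, None                 # opponent took the last stone: we lost
    if depth == 0:
        return 0, None                  # horizon reached: crudely call it even
    best_score, best_move = -2, None
    for m in ordered_moves(n, first):
        score = -search(n - m, depth - 1, -beta, -alpha)[0]
        if score > best_score:
            best_score, best_move = score, m
        alpha = max(alpha, best_score)
        if alpha >= beta:               # prune the remaining moves
            break
    return best_score, best_move

def iterative_deepening(n, time_limit=0.05):
    """Search depth 1, then 2, ... until the time budget runs out."""
    deadline = time.monotonic() + time_limit
    best = None
    depth = 1
    while time.monotonic() < deadline and depth <= n:
        _, best = search(n, depth, first=best)
        depth += 1
    return best

print(iterative_deepening(7))  # with 7 stones, taking 1 leaves a losing pile of 6
```

The payoff of the `first` parameter only shows up in bigger games: when the previous iteration's best move really is best, AB establishes a strong bound immediately and prunes most of the remaining branches.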
Finally, I often say that it is hard to encode a Magic card so the computer can understand it. The more formal name for this problem is "knowledge representation," the study of how to put knowledge into a form that a computer can use.
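For a small taste of what that means, here is one possible encoding of a pump spell like Glorious Charge: the rules text becomes structured data that an engine can apply to a game state. The field names and the tiny `apply_pump` helper are hypothetical, just for illustration; a real rules engine has to handle far messier cards than this.

```python
# Hypothetical structured encoding of Glorious Charge
# ("Creatures you control get +1/+1 until end of turn").
glorious_charge = {
    "name": "Glorious Charge",
    "type": "Instant",
    "effect": {
        "kind": "pump",
        "power": 1,
        "toughness": 1,
        "targets": "creatures you control",
        "duration": "end of turn",
    },
}

def apply_pump(creatures, effect):
    """Return a copy of the creatures with the pump effect applied."""
    return [
        {**c,
         "power": c["power"] + effect["power"],
         "toughness": c["toughness"] + effect["toughness"]}
        for c in creatures
    ]

board = [{"name": "Soldier", "power": 1, "toughness": 1}]
print(apply_pump(board, glorious_charge["effect"]))  # the Soldier becomes a 2/2
```

The hard part of knowledge representation is everything this sketch dodges: "until end of turn" implies the engine must track when effects expire, and plenty of cards have conditions and choices that don't reduce to a handful of numeric fields.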
Next time I’ll talk about other AI algorithms that can be used for two-player games, and about AB's evaluation function, which looks at the game state and returns a number representing the chance that the AI will win.