Horizon effect

The horizon effect, also known as the horizon problem, is a problem in artificial intelligence whereby, in many games, the number of possible states or positions is immense, so computers can only feasibly search a small portion of them, typically a few plies down the game tree. Thus, a computer searching to a fixed depth may make a detrimental move whose harm is not visible, because the computer does not search deep enough for its evaluation function to reveal the true evaluation of the line (i.e., the consequence lies beyond its "horizon").

When evaluating a large game tree using techniques such as minimax with alpha-beta pruning, search depth is limited for feasibility reasons. However, evaluating a partial tree may give a misleading result. When a significant change exists just over the horizon of the search depth, the computational device falls victim to the horizon effect.
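
The mechanism can be illustrated with a short sketch of depth-limited minimax with alpha-beta pruning. The helpers legal_moves, apply_move, and evaluate are hypothetical hooks a concrete game implementation would supply; they are not from any particular engine or library.

    from typing import Any, Callable, List

    def alphabeta(state: Any, depth: int, alpha: float, beta: float,
                  maximizing: bool,
                  legal_moves: Callable[[Any], List[Any]],
                  apply_move: Callable[[Any, Any], Any],
                  evaluate: Callable[[Any], float]) -> float:
        moves = legal_moves(state)
        # The fixed-depth cutoff is the "horizon": anything one ply
        # deeper is never examined, however drastic its consequences.
        if depth == 0 or not moves:
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for move in moves:
                value = max(value, alphabeta(apply_move(state, move),
                                             depth - 1, alpha, beta, False,
                                             legal_moves, apply_move, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # beta cutoff: the opponent avoids this branch
            return value
        value = float("inf")
        for move in moves:
            value = min(value, alphabeta(apply_move(state, move),
                                         depth - 1, alpha, beta, True,
                                         legal_moves, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

Whatever evaluate returns at depth 0 is taken as the truth about that position, which is precisely where the horizon effect enters.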

In 1973 Hans Berliner named this phenomenon, which he and other researchers had observed, the "Horizon Effect."[1] He split the effect into two: the Negative Horizon Effect "results in creating diversions which ineffectively delay an unavoidable consequence or make an unachievable one appear achievable." For the "largely overlooked" Positive Horizon Effect, "the program grabs much too soon at a consequence that can be imposed on an opponent at leisure, frequently in a more effective form."

Greedy algorithms tend to suffer from the horizon effect.

The horizon effect can be somewhat mitigated by quiescence search. This technique extends the effort spent searching volatile positions and allocates less effort to easier-to-assess, quiet ones. For example, "scoring" the worth of a chess position often involves a material value count, but such a count is misleading if there are hanging pieces or an imminent checkmate. A position in which the white queen has just captured a protected black knight appears advantageous to a naive material count, since White is now up a knight, but it is probably disastrous because the queen will be recaptured one ply later. A quiescence search therefore plays out captures and checks before scoring leaf nodes in volatile positions.
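
A minimal sketch of the idea, in negamax form: evaluate is assumed to score the position from the side to move's perspective, and noisy_moves is a hypothetical generator of captures and checks. Neither name comes from a real engine.

    def quiescence(state, alpha: float, beta: float,
                   evaluate, noisy_moves, apply_move) -> float:
        # "Stand pat": the side to move may decline every capture, so
        # the static evaluation is a lower bound on the node's value.
        stand_pat = evaluate(state)
        if stand_pat >= beta:
            return beta
        alpha = max(alpha, stand_pat)
        # Rather than stopping dead at the horizon, keep playing out
        # volatile moves (captures, checks) until the position is quiet.
        for move in noisy_moves(state):
            score = -quiescence(apply_move(state, move), -beta, -alpha,
                                evaluate, noisy_moves, apply_move)
            if score >= beta:
                return beta
            alpha = max(alpha, score)
        return alpha

In a full engine this routine would replace the plain static evaluation at the depth-0 leaves of the main search, so that the queen-takes-knight position above is scored only after the recapture has been played out.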

Examples

In chess, suppose a computer searches the game tree only to six plies and, from the current position, determines that the queen is lost on the sixth ply. Suppose also that within the search depth there is a move that sacrifices a rook, pushing the loss of the queen to the eighth ply. This is, of course, a worse move than simply losing the queen, because it loses both the queen and a rook. However, because the loss of the queen has been pushed over the horizon of the search, it is never discovered and evaluated. Losing the rook appears better than losing the queen, so the sacrifice is returned as the best move, even though delaying the loss of the queen has in fact additionally weakened the computer's position.
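
A toy calculation reproduces this preference reversal. The numbers are invented for illustration (queen = 9, rook = 5, with the losses occurring at the plies described above); no real chess search is involved.

    # Line A: accept the loss; the queen (9) falls at ply 6.
    # Line B: sacrifice a rook (5) at ply 2, pushing the queen loss to ply 8.
    line_a = {6: 9}
    line_b = {2: 5, 8: 9}

    def score(losses_by_ply: dict, horizon: int) -> int:
        # Sum only the material losses the search can actually see.
        return -sum(v for ply, v in losses_by_ply.items() if ply <= horizon)

    for horizon in (6, 8):
        a, b = score(line_a, horizon), score(line_b, horizon)
        best = "B (delay)" if b > a else "A (accept loss)"
        print(f"horizon={horizon}: A={a}, B={b} -> prefers {best}")

    # horizon=6: A=-9, B=-5 -> prefers B (delay)
    # horizon=8: A=-9, B=-14 -> prefers A (accept loss)

At six plies the search cannot see the queen loss in line B, so the delaying sacrifice looks cheaper; two plies deeper, the same search correctly prefers line A.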

In Go, the horizon effect is a major obstacle to writing an AI capable of even beginner-level play, and part of why alpha-beta search was a weak approach to computer Go compared to later machine learning and pattern recognition approaches. It is very common for certain stones to be "dead" yet require many moves to actually capture if fought over. The horizon effect may cause a naive algorithm to misjudge the situation and believe the stones are savable, because it finds a line of play that keeps the doomed stones alive within its search depth. While the death of the group can indeed be delayed, it cannot be prevented, and contesting it only allows more stones to be captured. A classic example that beginners learn is the Go ladder, but the same general idea occurs even in situations that are not strictly ladders.[2]

Diagram caption: Black to play. Playing on the X spot gets the stones briefly out of atari, and thus appears a useful move to shallow searches, but Black loses far more than three stones if the ladder is foolishly played to completion.

References

  1. ^ Berliner, Hans J. (1973). "Some Necessary Conditions for a Master Chess Program". Proceedings of the 3rd International Joint Conference on Artificial Intelligence, Stanford, CA, August 20–23, 1973. pp. 77–85.
  2. ^ Burmeister, Jay; Wiles, Janet (1995). "The Challenge of Go as a Domain for AI Research: A Comparison Between Go and Chess" (PDF). pp. 181–186. doi:10.1109/ANZIIS.1995.705737.