So far we have seen that a state space representation consists of the following components: a set of states, an initial state, a set of operators that transform one state into another, and one or more goal states.
Also, we said that to solve a problem we do not need to search the entire state
space but only that part which leads to a solution, i.e., we need to search for
an appropriate operator sequence, transforming the initial state through a
number of intermediate states into the goal state. To perform this search
systematically we need a control strategy that decides which operator to apply
next during the search. These strategies are commonly represented using
trees: construct a tree with the initial state as its root, then generate all the offspring of the root by applying all of the applicable operators to the initial state, then for every leaf node generate its successors by applying all of the applicable operators, and so on. When these steps are repeated, a structure like the one displayed in the accompanying figure will arise.
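As an illustration, the sketch below expands such a tree level by level. The functions applicable_operators and apply_operator are assumed, problem-specific helpers (they are not defined in the text), and the expansion is cut off at a fixed depth so that it terminates.

```python
from collections import deque

def expand_tree(initial_state, applicable_operators, apply_operator, max_depth=2):
    """Build a search tree breadth-first: the initial state is the root, and
    every leaf is expanded by applying all operators applicable to it."""
    root = {"state": initial_state, "parent": None, "operator": None}
    nodes = [root]
    frontier = deque([(root, 0)])              # leaves still to be expanded
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:                 # stop expanding at the cut-off depth
            continue
        for op in applicable_operators(node["state"]):
            child = {"state": apply_operator(node["state"], op),
                     "parent": node, "operator": op}
            nodes.append(child)
            frontier.append((child, depth + 1))
    return nodes                               # all nodes of the (partial) tree
```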
Trees are a special case of a more general structure called a graph.
A tree is a graph each of whose nodes has a unique parent (except for the root
node, which has no parent). Searching a tree is easier than searching a graph,
because when a new node is generated in the tree we can be sure it has not
been generated before. This is true because every node has only one parent, so
there cannot be two or more different paths leading to the same node. In a
graph, however, nodes usually have more than one parent. Therefore, when
searching a graph one should make provisions to deal with these situations.
Saying that a node has more than one parent means that the same node can be reached by more than one operator sequence. That is, the same node may lie on several paths, and processing each of these occurrences separately (although they are really the same node) would be redundant and a waste of effort. This can be avoided at the price of some additional bookkeeping. Instead of traversing a tree we traverse a directed graph: every time a node is generated we examine the set of nodes generated so far to see whether this node already exists in the graph. If it does, we throw it away (note: exceptions to this rule are discussed elsewhere); if not, we add it to
the graph.
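A minimal sketch of this bookkeeping is given below. The problem-specific functions applicable_operators, apply_operator and is_goal are assumptions, not part of the text, and states are assumed to be hashable so that they can be kept in a set of generated nodes.

```python
from collections import deque

def graph_search(initial_state, applicable_operators, apply_operator, is_goal):
    """Search the state space as a directed graph rather than a tree,
    discarding every successor that has already been generated before."""
    generated = {initial_state}                # every node seen so far
    frontier = deque([(initial_state, [])])    # (state, operator sequence leading to it)
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path                        # operator sequence solving the problem
        for op in applicable_operators(state):
            successor = apply_operator(state, op)
            if successor in generated:         # already in the graph: throw it away
                continue
            generated.add(successor)
            frontier.append((successor, path + [op]))
    return None                                # goal not reachable from the initial state
```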
This way we also avoid a related problem: if we did not check whether a node had been generated before, the search process would very likely end up in a cycle, in which the same set of nodes is generated over and over again. For instance, if we apply the operator 'move empty tile left', then 'move empty tile right', then 'move empty tile left' again, and so on, to the 8-puzzle, the problem-solving process would go on producing the same nodes without end. With the search procedure modified as described above, this situation can never arise, because every node that is generated is looked up before it is added to the graph.
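To make the 8-puzzle example concrete, the sketch below defines a hypothetical 'move empty tile' operator on a tuple representation of the board and shows that the duplicate check rejects the state regenerated by moving the empty tile left and then right again.

```python
def move_empty(state, delta):
    """Hypothetical 8-puzzle operator: `state` is a tuple of 9 tiles
    (0 marks the empty tile), `delta` is -1 for 'left', +1 for 'right'."""
    i = state.index(0)
    j = i + delta
    if j < 0 or j > 8 or i // 3 != j // 3:    # would move off the board or wrap a row
        return None
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
after_left = move_empty(start, -1)
after_left_right = move_empty(after_left, +1)
assert after_left_right == start              # left followed by right regenerates the start

generated = {start, after_left}
print(after_left_right in generated)          # True: the duplicate check breaks the cycle
```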
A special kind of graph is the AND/OR graph, which is used in problem-solving methods involving problem reduction. In a normal graph, each node represents a different alternative state to be explored next, and the search process may continue along any one of these nodes arbitrarily. In a problem-reduction representation, however, we also need to deal with operators that divide the original problem into a set of sub-problems, all of which need to be solved rather than just any one of them. For example, suppose problem A can be solved either by solving problems B and C, or by solving problems D and E, or by solving problem F. This situation is depicted in the accompanying figure.
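The example of problem A can be written down as a small AND/OR structure. The sketch below is illustrative only: the AndOrNode class and the assumption that leaf problems are trivially solvable are not part of the text.

```python
from dataclasses import dataclass, field

@dataclass
class AndOrNode:
    """A node in an AND/OR graph. Each entry in `alternatives` is one way
    (an OR-choice) to solve the node; the entry itself lists the sub-problems
    that must ALL be solved (an AND-group)."""
    name: str
    alternatives: list = field(default_factory=list)   # list of lists of AndOrNode

B, C, D, E, F = (AndOrNode(n) for n in "BCDEF")
A = AndOrNode("A", alternatives=[[B, C], [D, E], [F]])

def solvable(node):
    """A leaf is taken to be trivially solvable here; an inner node is solvable
    if at least one alternative has all of its sub-problems solvable."""
    if not node.alternatives:
        return True
    return any(all(solvable(sub) for sub in group) for group in node.alternatives)

print(solvable(A))   # True: for instance, solving F alone solves A
```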