2. Goal-based agents
Knowledge of the current state of the world is not always sufficient to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to go. In other words, as well as a description of the current state, the agent needs some sort of goal information that describes situations that are desirable, for example, arriving at the destination requested by the passenger. The agent program can combine this with information about the results of possible actions (the same information that was used to update the internal state in the reflex agent) in order to choose actions that achieve the goal. The structure of the goal-based agent is as follows:
[Figure: diagram of a goal-based agent. Percepts arrive through the agent's sensors and update its internal state ("What is the world like now?"), using knowledge of how the world evolves and what effects the agent's actions have. This prediction is compared against the agent's goals to answer "What action should I do now?", and the chosen action is carried out through the effectors.]
A goal-based agent with an internal model: it keeps track of the state of the world as well as the set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of those goals.
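The structure just described can be sketched in code. The following is a minimal illustration, assuming a simple grid world in which percepts are full state observations; all names here (GoalBasedAgent, transition, and so on) are hypothetical and chosen for this sketch, not taken from the text.

```python
class GoalBasedAgent:
    """Minimal goal-based agent: keeps an internal state, a goal,
    and a model of what its actions do (illustrative sketch)."""

    def __init__(self, goal, transition, actions):
        self.goal = goal              # desired world state
        self.transition = transition  # model: (state, action) -> predicted state
        self.actions = actions        # available actions
        self.state = None             # current belief about the world

    def perceive(self, percept):
        # Update the internal state from the latest percept
        # (here percepts are assumed to be complete state observations).
        self.state = percept

    def act(self):
        # Choose an action whose predicted outcome matches the goal;
        # fall back to the first available action otherwise.
        for action in self.actions:
            if self.transition(self.state, action) == self.goal:
                return action
        return self.actions[0]


def transition(state, action):
    # Toy model of the junction example: each action moves the taxi
    # one cell on a grid.
    moves = {"left": (-1, 0), "right": (1, 0), "straight": (0, 1)}
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy)


agent = GoalBasedAgent(goal=(1, 0), transition=transition,
                       actions=["left", "right", "straight"])
agent.perceive((0, 0))
```

Here `agent.act()` returns `"right"`, because the model predicts that turning right leads to the goal state. The point of the design is that the goal is explicit data, not baked into the action-selection rules.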
Sometimes goal-based action selection is straightforward, when goal satisfaction results immediately from a single action. Sometimes it is more complicated, when the agent has to consider long sequences of actions in order to find a way to achieve the goal. Search and planning are the subfields of Artificial Intelligence devoted to finding action sequences that allow an agent to achieve its goals.
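When a single action does not suffice, the agent can search for a sequence of actions. A breadth-first search over action sequences is one of the simplest such methods; the sketch below reuses the toy grid-world model from above (the names and the world model are illustrative assumptions, not part of the original text).

```python
from collections import deque


def plan(start, goal, transition, actions):
    # Breadth-first search over action sequences: returns a shortest
    # sequence of actions transforming `start` into `goal`, or None.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action in actions:
            nxt = transition(state, action)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None  # goal unreachable


def transition(state, action):
    # Same toy model as before: actions move the taxi one grid cell.
    moves = {"left": (-1, 0), "right": (1, 0), "straight": (0, 1)}
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy)


route = plan((0, 0), (2, 1), transition, ["left", "right", "straight"])
```

Any shortest route to (2, 1) from the origin needs three steps (two moves right and one straight), and breadth-first search is guaranteed to find one of minimal length. Real planners use far more compact action representations, but the principle is the same: the goal drives the search.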
Notice that decision-making of this kind is fundamentally different from the condition-action rules described earlier, in that it involves consideration of the future (such as "What will happen if I do such-and-such?" and "Will that make me happy?"). In reflex agent designs, this information is not represented explicitly, because the built-in rules map percepts directly onto actions. The reflex agent brakes when it sees brake lights. A goal-based agent, in principle, would reason that if the car in front has its brake lights on, it will slow down. Given the way the world usually evolves, the only action that achieves the goal of not hitting other cars is to brake.
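The contrast can be made concrete with the brake-light example. Both designs below end up braking, but for different reasons; the function and dictionary names are hypothetical, chosen only to illustrate the two styles.

```python
def reflex_action(percept):
    # Reflex design: a condition-action rule maps the percept
    # directly onto an action. The reasoning is implicit in the rule.
    if percept == "brake_lights_ahead":
        return "brake"
    return "drive"


def goal_based_action(percept, goal="avoid_collision"):
    # Goal-based design: first predict how the world will evolve,
    # then pick the action that achieves the goal given that prediction.
    predictions = {
        "brake_lights_ahead": "car_ahead_slowing",
        "clear_road": "car_ahead_steady",
    }
    predicted = predictions.get(percept, "unknown")
    if goal == "avoid_collision" and predicted == "car_ahead_slowing":
        return "brake"
    return "drive"
```

For the percept `"brake_lights_ahead"` both functions return `"brake"`, but in the goal-based version the prediction and the goal are explicit, separately modifiable pieces of knowledge rather than a single hard-wired rule.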
Although the goal-based agent may appear less efficient, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified. If it starts to rain, the agent can update its knowledge of how effectively its brakes will operate; this will automatically cause all of the relevant behaviors to be altered to suit the new conditions. For the reflex agent, on the other hand, we would have to rewrite many condition-action rules. The goal-based agent's behavior can also easily be changed to head for a different destination, simply by specifying that destination as the goal. The reflex agent's rules for when to turn and when to go straight are valid only for a single destination, and must all be replaced each time the agent is to go somewhere new.
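This flexibility argument can be illustrated in miniature. In the sketch below (all names and the tiny road map are invented for illustration), the reflex agent's rules hard-code one destination, while the goal-based agent reuses the same map knowledge for any destination.

```python
# Reflex design: each rule is written for one fixed destination.
reflex_rules = {
    ("junction_A", "to_airport"): "turn_left",
    ("junction_B", "to_airport"): "go_straight",
}


def reflex_turn(junction, trip):
    # Works only for the destination the rules were written for;
    # a new destination requires writing a whole new rule set.
    return reflex_rules[(junction, trip)]


# Goal-based design: the same knowledge (the road map) serves any goal.
road_map = {
    "junction_A": {"airport": "turn_left", "station": "turn_right"},
    "junction_B": {"airport": "go_straight", "station": "turn_left"},
}


def goal_based_turn(junction, destination, road_map):
    # Changing destination means changing one argument, not the rules.
    return road_map[junction][destination]
```

Switching the goal-based agent from the airport to the station is a one-word change to the goal; switching the reflex agent requires a new `reflex_rules` table for every junction.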