I.2.5.
Elements of an expert system
Figure 5.4 shows the elements of a typical expert system.
In a rule-based system, the knowledge base contains the domain knowledge
needed to solve problems, encoded in the form of rules. While rules are a
popular paradigm for representing knowledge, other types of expert systems
use different representations.
An expert system includes the following elements:
1. User interface: the mechanism through which
the user and the expert system communicate.
2. Explanation facility: explains the system's
reasoning to the user.
3. Working memory: a global database of the
facts used by the rules.
4. Inference engine: makes inferences by
deciding which rules are satisfied by the facts or objects, prioritizes
the satisfied rules, and executes the rule with the highest priority.
5. Agenda: a prioritized list, created by the
inference engine, of the rules whose patterns are satisfied by the facts
or objects in working memory.
6. Knowledge acquisition facility: provides an
automatic way for the user to enter knowledge into the system, rather than
having a knowledge engineer explicitly code the knowledge.
Depending on how the system is implemented, the user
interface may be a simple text display or a very sophisticated
high-resolution bit-mapped display, which is generally used to simulate a
control panel with buttons and windows.
In a rule-based expert system, the knowledge base
is also called production memory. Consider
the example problem of deciding whether to cross a street. The two
productions, or rules, are as follows, where the arrow means that the
system will perform the actions to the right of the arrow if the
conditions to the left of it are true:
the light is red → stop
the light is green → cross
Production rules can be expressed in an equivalent
pseudocode format of the form IF... THEN, as follows:
Rule: red light
IF
   the light is red
THEN
   stop
Rule: green light
IF
   the light is green
THEN
   cross
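The two productions above can be sketched as simple condition-action pairs; a minimal illustration in Python (the data structures and function name here are invented for this sketch, not taken from any actual shell):

```python
# Each rule pairs a conditional element (the IF part) with an action (the THEN part).
RULES = {
    "red light": ("the light is red", "stop"),
    "green light": ("the light is green", "cross"),
}

def matching_actions(fact):
    """Return the actions of every rule whose condition matches the given fact."""
    return [action for condition, action in RULES.values() if condition == fact]

print(matching_actions("the light is green"))  # -> ['cross']
```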
Each rule is identified by a name, followed by the IF part of
the rule. The section between the IF and the THEN goes by several names, such
as the antecedent, conditional part, pattern part, or left-hand side.
An individual condition
such as "the light is green" is called a conditional
element or pattern.
Some examples of rules from real expert systems follow.
The MYCIN system, for diagnosing meningitis and bacterial
infections:
IF
   the site of the culture is blood, and
   the identity of the organism is not known with certainty, and
   the stain of the organism is gram-negative, and
   the morphology of the organism is rod, and
   the patient has an elevated temperature
THEN
   there is weakly suggestive evidence (0.4) that the identity of
   the organism is Pseudomonas
The XCON/R1 system, for configuring DEC VAX computer
systems:
IF
   the current context is assigning devices to Unibus
   modules, and
   there is an unassigned dual-port disk drive,
   and
   the type of controller it requires is known and there are two
   such controllers, neither of which has any devices assigned to it, and
   the number of devices that these controllers can support is
   known
THEN
   assign the disk drive to each controller, and
   note that the two controllers have been associated and that
   each supports one drive
In a rule-based system, the inference engine
determines which rule antecedents, if any, are satisfied by the facts.
Two general inference methods frequently used as problem-solving
strategies in expert systems are forward
chaining and backward chaining. Other methods
used for more specific needs include means-ends analysis,
problem reduction, backtracking, plan-generate-test,
hierarchical planning, and so on.
Forward chaining is reasoning from the facts to the
conclusions that follow from them. For example, if you see that it is
raining before leaving the house (fact), then you should take an umbrella
(conclusion).
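Forward chaining can be sketched as repeatedly applying rules whose conditions are all satisfied, adding each conclusion to the set of facts until nothing new can be derived; a minimal illustration (the rules here are invented for the sketch):

```python
# Forward chaining: start from known facts and repeatedly apply rules
# whose conditions are satisfied, adding each conclusion as a new fact.
RULES = [
    ({"it is raining"}, "take an umbrella"),
    ({"take an umbrella"}, "ready to leave the house"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the conclusion becomes a new fact
                changed = True
    return facts

print(forward_chain({"it is raining"}))
```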
Backward chaining involves reasoning in reverse, from a
hypothesis, that is, a potential conclusion to be proved, back to the facts
that support it. For example, if you have not looked outside and someone
enters with wet shoes and an umbrella, your hypothesis is that it is
raining; to support it, you could ask the person whether it is in fact
raining. If the answer is yes, the hypothesis is true and becomes a fact.
As mentioned earlier, a hypothesis can be viewed as a fact whose truth is
in doubt and needs to be established. The hypothesis can then be
interpreted as a goal to be proved.
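Treating the hypothesis as a goal to be proved can be sketched as a recursive check: a goal is established if it is a known fact, or if some rule concludes it and all of that rule's conditions can themselves be established. A minimal illustration (the rule and fact sets are invented for the sketch):

```python
# Backward chaining: start from a hypothesis (goal) and reason in reverse,
# checking whether it is a known fact or can be derived from a rule
# whose conditions can themselves be established.
RULES = [
    ({"wet shoes", "carrying an umbrella"}, "it is raining"),
]
FACTS = {"wet shoes", "carrying an umbrella"}

def prove(goal):
    if goal in FACTS:
        return True
    # Try every rule that concludes the goal; prove each of its conditions in turn.
    return any(all(prove(c) for c in conditions)
               for conditions, conclusion in RULES if conclusion == goal)

print(prove("it is raining"))  # the hypothesis is established -> True
```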
Depending on its design, an inference engine performs
forward or backward chaining. For example, OPS5 and CLIPS are
designed to perform forward chaining, whereas MYCIN performs backward
chaining, and other inference engines, such as ART and KEE, support
both. The choice of inference engine depends on the type of
problem: diagnostic problems are better solved with backward
chaining, while monitoring and control are better handled with
forward chaining.
Working memory may contain facts describing the
current state of the light, such as "the light is green" or "the light is
red". Either of these facts, or both, may be in working memory at the same
time. If the traffic light is functioning normally, only one of these facts
will be in memory; however, both facts may be in memory if the light is
malfunctioning. Note that there is a difference between the knowledge base
and working memory. The facts do not interact with one another; the
fact "the light is green" has no effect on the fact "the light is
red". Rather, our knowledge of traffic lights tells us that if the two
facts are present simultaneously, then there is a fault in the
light.
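The distinction can be sketched directly: working memory is just a set of facts, while the knowledge about the light lives in a rule that checks for the inconsistent combination. A minimal illustration (the function name is invented for the sketch):

```python
# Working memory is a plain set of facts; the knowledge that both facts
# together indicate a fault lives outside the facts themselves, in a rule.
working_memory = {"the light is red", "the light is green"}

def light_malfunction(memory):
    """Knowledge rule: both light facts present at once means a fault."""
    return {"the light is red", "the light is green"} <= memory

print(light_malfunction(working_memory))  # both facts present -> True
```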
If the fact "the light is green" is in working memory,
the inference engine will notice that this fact satisfies the
conditional part of the green-light rule and will put this rule on the
agenda. If a rule has several patterns, then all of them must be satisfied
simultaneously for the rule to be placed on the agenda. Some patterns may be
satisfied by specifying the absence of certain facts from working memory.
When all the patterns of a rule are satisfied, the rule is
said to be activated or instantiated. Several
activated rules may be on the agenda at the same time, in which case the
inference engine must choose one rule to fire. The term "fire"
comes from neurophysiology, the study of the nervous system. An individual
nerve cell, or neuron, emits an electrical signal when it is stimulated;
after firing, the neuron will not fire again for a short period even if
stimulated; this phenomenon is called refraction. Rule-based expert
systems are built using refraction to prevent trivial loops:
if the green-light rule kept firing
over and over on the same fact, the expert system would never accomplish
any useful work.
Several methods have been devised to achieve refraction. In an
expert system language called OPS5, each fact receives a unique
identifier, known as a timetag, when it is entered into
working memory. After a rule has fired on a fact, the inference engine will
not fire it again on the same fact, because that timetag has already been used.
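The timetag mechanism can be sketched as follows; this is an illustrative reconstruction of the idea, not actual OPS5 (the function names and bookkeeping structures are invented for the sketch):

```python
# OPS5-style refraction sketch: each fact gets a unique timetag when asserted,
# and a rule never fires twice on the same (rule, timetag) combination.
next_timetag = 0
working_memory = {}   # timetag -> fact
fired = set()         # (rule name, timetag) pairs already used

def assert_fact(fact):
    global next_timetag
    next_timetag += 1
    working_memory[next_timetag] = fact
    return next_timetag

def fire(rule_name, condition, action):
    """Fire the rule at most once per matching fact, thanks to refraction."""
    results = []
    for tag, fact in working_memory.items():
        if fact == condition and (rule_name, tag) not in fired:
            fired.add((rule_name, tag))  # use up this timetag for this rule
            results.append(action)
    return results

assert_fact("the light is green")
print(fire("green light", "the light is green", "cross"))  # -> ['cross']
print(fire("green light", "the light is green", "cross"))  # refraction: -> []
```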
Following the THEN part of a rule is a series of
actions that will be performed when the rule fires.
This part of the rule is known as the consequent or
right-hand side. When the red-light rule fires, its
action is to stop. In general, typical actions include adding or
removing facts in working memory, or printing
results. The format of these actions depends on the syntax of the language
used; for example, in OPS5, ART, and CLIPS, adding a new fact called
"stop" to working memory would be written (assert stop). Owing to their
LISP ancestry, these languages were designed to require parentheses around
patterns and actions.
The inference engine operates in cycles. Various names
have been given to the cycle, such as the recognize-act cycle, the
select-execute cycle, the situation-response cycle, and the
situation-action cycle. Whatever the name, the inference
engine repeatedly performs a group of tasks until certain criteria
cause execution to halt. The tasks in one cycle of OPS5, a typical expert
system shell, are shown in the following pseudocode as
conflict resolution, act, match, and
check for halt.
WHILE not done
   Conflict resolution: if there are
activations, then select the one with the highest priority; otherwise done.
   Act: sequentially perform the actions
on the right-hand side of the selected activation. Those that change working
memory take immediate effect in this cycle. Remove the
activation that has just fired from the agenda.
   Match: update the agenda by
checking whether the left-hand side of any rule is now satisfied. If so,
activate the rule. Remove activations whose left-hand sides are
no longer satisfied.
   Check for halt: if a halt action
was performed or a break command was given, then
done.
END-WHILE
Accept new user commands.
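The cycle above can be sketched in a few lines of Python; the rule names, priorities, and facts here are invented for the sketch, and conflict resolution is reduced to picking the highest priority:

```python
# A minimal recognize-act cycle: match rules against working memory,
# resolve conflicts by priority, act, and repeat until the agenda is empty.
rules = [
    # (name, priority, condition fact, facts added when the rule fires)
    ("detect-fault", 10, "the light is red", {"check the controller"}),
    ("log-state",     1, "the light is red", {"state logged"}),
]

def run(working_memory):
    fired = set()  # refraction: never fire the same rule on the same fact twice
    while True:
        # Match: build the agenda from rules whose condition is satisfied.
        agenda = [r for r in rules
                  if r[2] in working_memory and (r[0], r[2]) not in fired]
        if not agenda:
            break  # nothing left to do: halt
        # Conflict resolution: select the activation with the highest priority.
        name, _, condition, additions = max(agenda, key=lambda r: r[1])
        # Act: perform the right-hand side, which may change working memory.
        fired.add((name, condition))
        working_memory |= additions
    return working_memory

print(run({"the light is red"}))
```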
Several rules may be activated and placed on the agenda during a
cycle. Activations from previous cycles also remain on the agenda unless
they are deactivated because their left-hand sides are no longer satisfied.
Thus, the number of activations on the agenda varies during execution.
Depending on the program, an activation may remain on the agenda but never
be selected to fire; likewise, some rules may never be activated at all.
In these cases, the purpose of those rules should be re-examined, because
either they are unnecessary or their patterns were poorly designed.
The inference engine executes the actions of the
activation with the highest priority on the agenda, then those of the
next highest priority, and so on until no activations remain. Various
priority schemes have been devised in expert system shells. In
general, all shells allow the knowledge engineer to define the
priority of the rules.
Conflicts on the agenda arise when several activations
have the same priority and the inference engine must
decide which rule to fire. Different shells handle this problem in
different ways: in the original paradigm of Newell and Simon, the rules
that entered the system first have the highest default priority
(Newell 72, p. 33); in OPS5, rules with more complex patterns have
higher priority; in ART and CLIPS, all rules have the same default
priority unless the knowledge engineer assigns them distinct priorities.
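Two of these tie-breaking strategies can be sketched over the same agenda; the tuples below are invented for the illustration (entry order and pattern count standing in for the shell's bookkeeping):

```python
# Two conflict-resolution strategies for activations of equal priority.
agenda = [
    # (entry order, number of patterns on the left-hand side, rule name)
    (1, 1, "simple-early"),
    (2, 3, "complex-late"),
]

def newell_simon(agenda):
    """Earliest-entered rule wins (highest default priority)."""
    return min(agenda)[2]

def ops5_specificity(agenda):
    """Rule with the most complex left-hand side (most patterns) wins."""
    return max(agenda, key=lambda a: a[1])[2]

print(newell_simon(agenda))      # -> 'simple-early'
print(ops5_specificity(agenda))  # -> 'complex-late'
```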
At this point, control returns to the top
level of the command interpreter so that the user can give further
instructions to the expert system shell. The top level is the
default mode in which the user communicates with the expert
system, and it is indicated by the task "Accept new user commands".
The top level is the user interface to the shell while an
expert system application is being developed. Increasingly sophisticated
user interfaces are designed to facilitate the operation of expert systems.
For example, an expert system for controlling a manufacturing plant may have
a user interface that shows a block diagram of the plant on a
high-resolution color bit-mapped display. Warnings and status messages may
appear in bright colors, with simulated buttons and dials. In fact, the main
effort may be devoted to the design and implementation of the user
interface rather than to the expert system's knowledge base, especially
when building a prototype. Depending on the capabilities of the shell, the
user interface may be implemented with rules or in another language
called by the expert system.
An explanation facility should allow the user to ask
how the system arrived at a certain conclusion, or why certain
information is needed. For a rule-based expert system, the question
of how the system arrived at a certain conclusion is easy to answer,
because a history of the activated rules and of the contents of working
memory can be kept on a stack. Sophisticated explanation facilities also
let the user ask "what would happen if...?" questions, exploring
alternative paths of reasoning through hypothetical
reasoning.
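Answering a "how" question from such a history can be sketched as follows (the function names and record format are invented for the sketch):

```python
# A "how" explanation sketch: record every fired rule in a history list
# so the system can replay the reasoning behind any conclusion.
history = []  # (rule name, fact matched, conclusion) in firing order

def fire(rule, fact, conclusion):
    history.append((rule, fact, conclusion))
    return conclusion

def explain_how(conclusion):
    """Answer 'how was this concluded?' from the recorded history."""
    for rule, fact, concl in history:
        if concl == conclusion:
            return f"Rule '{rule}' fired on fact '{fact}'."
    return "No record of that conclusion."

fire("green light", "the light is green", "cross")
print(explain_how("cross"))
```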