This makefile is for Borland C++ Version 3. Those for other compilers
will be similar. One of the major differences is the inclusion of
different run-time libraries:
rtx_bc3.lib Borland C++ version 3
rtx_mc7.lib Microsoft C/C++ version 7
rtx_tc3.lib Borland Turbo C version 3
The library appropriate for the compiler being used must be linked with
the code generated by the RTX compiler in order for the generated code
to work.
Please note that these libraries have all been compiled using the Large
memory model. The code generated by the RTX compiler is intended to be
compiled using this same memory model.
An explanation of the makefile follows:
Line 2: Specifies that .dsl and .afs are standard file
extensions.
Lines 3-5: Specifies that to make a .c file from a .dsl file you
run rtx1, pass 1 of the RTX compiler, with the
.dsl file name as argument, and then run rtx2, pass 2.
Lines 6-8: Specifies how the files such as temp_ctl.h get created
from the .dsl files.
Lines 9-11: Subroutine prototype and record type definitions can be placed
in a .afs specification file. These are akin to .h files
in C. These lines specify how a .h file can be
automatically generated from a .afs file.
A specification file can be included in any RTSUB by a
statement such as:
SPEC xxx.afs;
To avoid the need for the user to maintain compatible .h
and .afs files for C and DSL, the user can simply
specify the record types and subroutine prototypes in
the .afs file and have the program afs2h convert these
into a compatible .h file.
Line 14: CFLAGS specifies compiler options, in this example for
Borland C++. It is important to set the flags to tell
the compiler where to find the RT-Expert include and
library files.
Line 18: Specifies all the component code objects in the program.
Lines 20-21: Specifies how to create the program my_main.exe. If
there are too many RTSUBs, then the command line created
by Make may be longer than the 127 bytes allowed by DOS
and will be truncated. This can be
overcome in the Borland Make utility by replacing lines
20 and 21 with:
my_main.exe: $(OBJS)
$(CC) @&&!
$(CFLAGS) $(OBJS) rtx_bc3.lib
!
This will cause the expanded form of the items between
the ! marks to be placed in a file and to have the
compiler take its input from there.
A similar facility exists in the Microsoft Make utility
but with different syntax.
$(CC), the compiler name, is predefined by Make unless
changed.
Line 23: The .h files in this dependency statement are derived by
running the RTX compiler which also creates the C files
from the .dsl files.
Lines 25-39: Remaining application specific dependencies.
Note that the generation of the .h files from the .dsl files is
explicitly stated. This allows for the re-generation of the .h files if
the .dsl file changes.
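The makefile itself is not reproduced in this section. As a rough
sketch of the structure the notes above describe (Unix-style suffix
rule syntax is used here; Borland MAKE's directives differ slightly,
the install paths are assumptions, and temp_ctl/my_main are the
example names from the text):

```make
# Sketch only -- not the original makefile; line numbers will not match.
.SUFFIXES: .dsl .afs .c .h       # register .dsl and .afs as extensions

.dsl.c:                          # .dsl -> .c via the two RTX passes
	rtx1 $*.dsl
	rtx2

.afs.h:                          # .afs -> .h via afs2h
	afs2h $*.afs

temp_ctl.h: temp_ctl.dsl         # .h regenerated when the .dsl changes
	rtx1 temp_ctl.dsl
	rtx2

# Large memory model (-ml) plus RT-Expert include/library paths
CFLAGS = -ml -Ic:\rtx\include -Lc:\rtx\lib

OBJS = my_main.obj temp_ctl.obj  # all component code objects

my_main.exe: $(OBJS)
	$(CC) $(CFLAGS) $(OBJS) rtx_bc3.lib
```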
5. Other Topics
Variables can be declared as numeric data types such as floating point
or integer numbers. They can be declared as boolean data types, which
are true or false, or as characters or strings of characters. They can
also be declared as symbolic data types, which have values such as ON
or OFF.
Variable declarations for numbers can be of the form
temperature is FLOAT;
or they can use the form:
temperature: FLOAT;
Other permissible formats are:
x,y: LONG;
x,y are LONG;
x,y is LONG;
(Also allowed with plural variables, although we slaughter the Queen's
English.)
DSL requires the declaration of all variables before use. This is so
that the DSL compiler can detect as many user errors as possible. For
example, this strong data typing enables DSL to check that a floating
point number is not added to a string and that the arguments to a
function or procedure are of the correct type.
Some of the types that a variable can be declared as being are:
SHORT - 16 bit integer*
LONG - 32 bit integer*
INTEGER - 16 bit integer*
FLOAT - 32 bit floating point*
DOUBLE - 64 bit floating point*
BOOLEAN - TRUE or FALSE
STRING - null terminated string of characters
CHAR - single ASCII character
SYMBOLIC - symbolic variable
GOAL - goal variable for goal-directed chaining
*(Note that these are operating system and hardware dependent - Typical
values are shown.)
Record variable types can be declared as in:
TYPE xxx IS
BEGIN
yy:FLOAT;
xx:SHORT;
END;
These declarations must precede the use of a type in a .dsl file. If a
parameter of the RTSUB is of a user-defined type then the type
declaration must precede the RTSUB itself, as in:
TYPE atype IS
BEGIN
yy:FLOAT;
xx:SHORT;
END;
RTSUB xxx(a: ATTRIB atype) IS
.........
In such a case, the C equivalent of the type definition will be placed
in the generated .h file. If a record type is internal to the
RTSUB, then it can be declared in the declaration section of the RTSUB,
before its use, as in:
RTSUB yyy(.......) IS
DECLARE
TYPE btype IS
BEGIN
yy:FLOAT;
xx:SHORT;
END;
bb IS btype;
In this case the C type declaration is not generated into the .h file.
In the case where the record is a parameter, the RTX compiler will also
generate a record type declaration:
typedef struct
{
atype dsl_data; /* record structure */
dsl_attr_type dsl_attr; /* attributes of variable (time
* last modified and importance).
*/
} dsl_atype;
into the .h file. This declaration can then be used to declare a static
variable to hold the record and its attributes.
Variables within a record are referred to using a dot convention. Thus
the field xx of record aa is referenced by aa.xx. Records can be nested
and as many dot fields as necessary can be used.
When rules refer to a specific field of a record in their antecedent,
they are only triggered by a change to that field, not to any other
field in that record. Thus:
IF aa.xx > 10.5 THEN aa.yy := 2;
will only be triggered when the field aa.xx is changed and will only
trigger rules that have aa.yy in their antecedent. If a record is an OUT
parameter from a subroutine call or is returned from a function then it
is assumed that all fields have changed. Similarly, if the whole record
is set in an assignment statement, then it is assumed that all fields
have changed.
If yy.zz has subfields, then it is assumed that yy.zz has changed if any
of its subfields change. This applies at any level of record nesting.
If an ATTRIB variable is a parameter to an RTSUB and it is determined to
have changed, all fields are assumed to have changed.
All variables can be in one of two states: defined or undefined. If a
variable has been assigned a value, either through the execution of a
rule or through the arrival of a new value for the variable in a
parameter, then it becomes defined. A variable is undefined before it is
set. It can become undefined in a statement such as:
if y = defined then x := undefined;
Note that defined and undefined are keywords. Also note that the above
rule will be evaluated whenever a new value is assigned to y and its
antecedent will always be true in such a case.
If a consequent statement such as:
a := b + c;
is executed, then a will be set to undefined if either b or c is
undefined. In general, the left hand side of any consequent statement
will be set to undefined if any variable on the right hand side is
undefined. The exceptions to this are procedure or function calls which
are called with undefined variables set equal to the appropriate "magic"
numbers. It is up to the subroutine to determine how to handle undefined
variables.
If the subroutine being called cannot handle undefined numbers, then the
rule should be constructed in such a way as to preclude this such as in:
IF x = defined THEN y := sin(x) ELSE y := undefined;
A top level rule such as:
IF b HAS CHANGED OR c HAS CHANGED THEN a := b + c;
will recompute a whenever b or c change and will set a to undefined if
either b or c become undefined.
The only form of expression where this propagation of defined and
undefined is not followed is in boolean expressions. If x, y, and z are
booleans then:
z := x OR y;
will give z a value of true if either x or y is defined and true. In
this case the other variable can be undefined. If both x and y are
undefined then z is undefined. DSL correctly handles the evaluation of
booleans that may take undefined states.
We may also want to execute a statement if a variable changes, that is,
takes on a new value or becomes undefined. We can test for this by using
the keyword CHANGED as in:
IF aa HAS CHANGED THEN .....
This same keyword can be used to cause a rule to be re-evaluated
whenever the RTSUB is called as in:
IF TIME CHANGED THEN .....
An antecedent clause is assumed to be false if there is not enough data
to evaluate the clause. For example:
IF x > y OR c > d THEN f := TRUE;
This rule will be evaluated if x, y, c, or d changes. The antecedent
will be true if x and y are defined and x is greater than y, or if c
and d are defined and c is greater than d.
Antecedent clauses are only evaluated when all their requisite variables
contain valid data. For efficiency, they are only re-evaluated whenever
a variable in the clause changes. Also the same clause appearing in more
than one antecedent is only evaluated once.
A top level rule can be written in the form of a statement. For example:
a := b + c;
This is equivalent to writing:
IF b HAS CHANGED OR c HAS CHANGED THEN a := b + c;
These 'statement' rules are triggered by the availability of new data
values just like regular rules.
Statements can appear in nested blocks as the consequent of a rule, as
in
IF aa = TRUE THEN
BEGIN
a := b + c;
z := 2 * a + 3;
c := undefined;
END;
These statements are executed sequentially if the predicate clause of
their top level rule is true. They are not triggered by changes to their
right hand side variables.
Rules in DSL can be nested as in:
IF w > x THEN
IF y > z THEN
f := TRUE;
They can also be a part of a statement block as in:
IF w > x THEN
BEGIN
a := b * c;
IF y > z THEN f := TRUE;
END;
DSL processes nested rules and statements differently from top-level
rules. Statements within a nested BEGIN/END block are processed
sequentially. Nested rules are treated much like an IF/THEN statement
is treated in a procedural language. That means that the antecedent is
evaluated. If it is true, the consequent is executed. If it is false or
undefined then the alternate consequent is executed (if present). This
is different from a top-level rule where the rule is only evaluated if
the data has changed. In a nested rule, the rule is evaluated based
only on the value of the data, not the newness of the data.
Statement blocks and rules can be nested to any arbitrary level but only
the top level rules are triggered by changes to their antecedent. It is
important to remember to include "trigger" variables in the antecedent
to a top level rule. For example:
IF state = state_1 THEN
BEGIN
........
zz := aa + bb;
END;
This rule will be evaluated only when state changes to state_1. This may
be what we want, but usually we require that, if we are in state_1 and
aa or bb changes, then zz is re-evaluated. In this case we would state:
IF state = state_1 AND (aa HAS CHANGED OR bb HAS CHANGED) THEN
BEGIN
........
zz := aa + bb;
END;
If we only wanted the nested block to be evaluated if aa and bb had
values then we would state:
IF state = state_1 AND aa = defined AND bb = defined THEN
BEGIN
........
zz := aa + bb;
END;
Note that keywords in DSL can be in mixed upper and lower case.
Variables are case sensitive as are subroutine names and data types.
Users familiar with the Ada language will note the syntactic similarity
with Ada. While DSL is functionally very different from an Ada program
in the way it executes, Ada syntax has been used for DSL expressions
wherever possible. This is because the Ada syntax has been carefully
designed to be usable and logically consistent.
Some of the actions of the rule execution mechanism are set by PRAGMA
statements. These usually appear in the INIT section of the RTSUB. They
control such things as the order of rule execution.
Rule execution order is determined by the dynamic importance the rules
hold within an RTSUB. This importance can be determined by a number of
methods set by PRAGMA ORDER. These are:
LEXICAL: The rule's importance is based on lexical order
(the earlier rules in the RCO are more important
than the later). This is the default.
DATA: The rule's importance is inherited from the
importance of its most important data item. This
allows important data items to pre-empt less
important data items in flowing through the
rules.
RECENCY: Rules whose antecedent variables have been most
recently changed are given precedence. This
gives priority to reactive chains.
The use of DSL rules has considerable advantages over coding the
if...then...else... blocks in a procedural language such as C. The
problems with using a procedural language for decision making include:
a) Typically only a small percentage of the rules are affected
by an input data variable (such as 'temperature' in the
example). Using a conventional language, a programmer has to
insert additional code so that only those rules for which
there is valid data are executed. For example, the rule:
IF x > y THEN c := a + b;
can only be executed if valid data values are available for
x and y. Also, if x is greater than y then c becomes
undefined if a or b is undefined. This means that the
programmer in a conventional language has to insert
considerable code to track which variables contain valid
data and which if...then...else... statements can be
executed as a result. This is all taken care of
automatically by the DSL rules execution mechanism.
Further, it is desirable, from an efficiency viewpoint, to
execute a rule only when new values are available for
variables which affect the left hand side condition of the
rule. As soon as there are more than a few rules, the
complexity of the code, using conventional methods,
increases exponentially with the number of rules. This is
made worse by the fact that variables set on the right hand
assignment side of one rule often appear on the left hand
condition side of a number of other rules. This requires
that the programmer add code to force execution of rules
which can be executed as a result of the execution of other
rules. Also the programmer has to provide code to decide
which of a number of possible rules should be executed if
the setting of a variable results in the left hand side
condition of a number of rules becoming true. All this
results in very complex code which is hard to modify and
maintain.
b) The rules in the decision code modules are the most
frequently changed part of most systems as they contain the
knowledge of the system which evolves over time as the
system's usage changes. For example, it may be desirable to
change the temperature at which to maintain a room or to
make this a function of both the temperature and the
humidity. If the rules are written in a procedural language
then it is very difficult to make changes as the changes to
one rule may affect the code linking this rule to many other
rules, and these to many more rules.
It should be noted that the complexity problem is especially severe when
the system has to function in real-time. That is, it has to respond in a
timely manner to randomly arriving data from multiple sources. DSL
solves this problem by generating the code to automatically sequence the
execution of the rules.
DSL rules follow similar principles to expert systems rules except that:
1. They are oriented towards the building of real-time or on-line
systems.
2. They are compiled and not interpreted.
3. They are trigger-driven in a data flow manner.
4. They are strongly data typed to allow automatic verification
by the compiler.
5. They are oriented towards reasoning about the times at which
events occur.
6. They are able to perform computations with variables which are
defined only over a limited time.
7. They can be automatically re-triggered by the passage of time.
8. They have alternate action (else) clauses which are needed for
efficient interpretation of data.
Unlike expert systems which use large monolithic sets of rules,
RT-Expert encourages programmers to break their rules down into objects
which represent limited decision domains. These objects are then coded
as RTSUBs which can be developed, tested, and maintained as separate
entities. Typically the RTSUBs will contain 10 to 30 rules, which is a
good compromise between ease of development and efficiency of execution.
This also allows users to develop libraries of re-usable knowledge