
10.4 SUM.PAR

adding 89.01 to 339.6 getting 428.61

adding 78.9 to 428.61 getting 507.51

adding 67.89 to 507.51 getting 575.4

adding 56.78 to 575.4 getting 632.18

adding 45.67 to 632.18 getting 677.85

adding 34.56 to 677.85 getting 712.41

adding 23.45 to 712.41 getting 735.86

adding 12.34 to 735.86 getting 748.2

total of all is 748.2

SUM.PAR finished.

As before, at the end of the run we enter history;, and get this list of fired rule

instances.

>>history;

Fired Rule History, first (stack top) to most recent

(stack bottom)

******** STACK TOP **********

rule r0 Time Tags 17 16 pconf 1000

rule r0 Time Tags 17 15 pconf 1000

rule r0 Time Tags 17 14 pconf 1000

rule r0 Time Tags 17 13 pconf 1000

rule r0 Time Tags 17 12 pconf 1000

rule r0 Time Tags 17 11 pconf 1000

rule r0 Time Tags 17 10 pconf 1000

rule r0 Time Tags 17 9 pconf 1000

rule r0 Time Tags 17 8 pconf 1000

rule r0 Time Tags 17 7 pconf 1000

rule r0 Time Tags 17 6 pconf 1000

rule r0 Time Tags 17 5 pconf 1000

rule r0 Time Tags 17 4 pconf 1000

rule r0 Time Tags 17 3 pconf 1000

rule r0 Time Tags 17 2 pconf 1000

rule r0 Time Tags 17 1 pconf 1000

rule r1 Time Tags 33 pconf 1000

******** STACK BOTTOM **********

So far, there appears to be no advantage to parallel mode.

10.4.3 Running sum.par with prstack; and run 1; Commands

We now run sum.par by repeating the prstack; and run 1; command sequence, with

this output:


Program SUM.PAR computes the sum of s recursively in one

parallel step.

Compiling program SUM.PAR

ready to run SUM.PAR

>>prstack;

LOCAL STACK (unordered)

******** STACK TOP **********

rule r0 Time Tags 17 16 pconf 1000

rule r0 Time Tags 17 15 pconf 1000

rule r0 Time Tags 17 14 pconf 1000

rule r0 Time Tags 17 13 pconf 1000

rule r0 Time Tags 17 12 pconf 1000

rule r0 Time Tags 17 11 pconf 1000

rule r0 Time Tags 17 10 pconf 1000

rule r0 Time Tags 17 9 pconf 1000

rule r0 Time Tags 17 8 pconf 1000

rule r0 Time Tags 17 7 pconf 1000

rule r0 Time Tags 17 6 pconf 1000

rule r0 Time Tags 17 5 pconf 1000

rule r0 Time Tags 17 4 pconf 1000

rule r0 Time Tags 17 3 pconf 1000

rule r0 Time Tags 17 2 pconf 1000

rule r0 Time Tags 17 1 pconf 1000

******** STACK BOTTOM **********

>>run 1;

adding 65.43 to 0 getting 65.43

adding 54.32 to 65.43 getting 119.75

adding 43.21 to 119.75 getting 162.96

adding 32.1 to 162.96 getting 195.06

adding 21.09 to 195.06 getting 216.15

adding 32.1 to 216.15 getting 248.25

adding 1.23 to 248.25 getting 249.48

adding 90.12 to 249.48 getting 339.6

adding 89.01 to 339.6 getting 428.61

adding 78.9 to 428.61 getting 507.51

adding 67.89 to 507.51 getting 575.4

adding 56.78 to 575.4 getting 632.18

adding 45.67 to 632.18 getting 677.85

adding 34.56 to 677.85 getting 712.41

adding 23.45 to 712.41 getting 735.86

adding 12.34 to 735.86 getting 748.2

>>prstack;

LOCAL STACK (unordered)

******** STACK TOP **********

rule r1 Time Tags 33 pconf 1000


******** STACK BOTTOM **********

>>run 1;

total of all is 748.2

>>prstack;

LOCAL STACK (unordered)

******** STACK TOP **********

******** STACK BOTTOM **********
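The running totals printed in the trace can be checked with a short script; the addends below are copied directly from the run output (the script itself is our verification aid, not part of FLOPS):

```python
# Addends exactly as printed in the sum.par run trace above.
addends = [65.43, 54.32, 43.21, 32.1, 21.09, 32.1, 1.23, 90.12,
           89.01, 78.9, 67.89, 56.78, 45.67, 34.56, 23.45, 12.34]

# Rounding compensates for binary floating-point accumulation error.
total = round(sum(addends), 2)
print(f"total of all is {total}")  # total of all is 748.2
```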

In the entire program sum.par, only 17 rule instances have been found fireable, as

compared to hundreds of rules in sum.fps. It is very well known that in production

systems such as FLOPS, the bulk of the program run time (outside of I/O) is used up

by the system in determining which rules are fireable. The improvement in running

time achieved by parallel FLOPS over serial FLOPS can be dramatic.

10.5 COMPARISON OF SERIAL AND PARALLEL FLOPS

Conceptually, serial FLOPS amounts to a depth-first search of a decision tree, and

parallel FLOPS amounts to a breadth-first search. In practice, if information must

be elicited from a user, when the next question to be asked depends on the

answer to the previous question, serial FLOPS is appropriate; if the information

comes in automatically, or is present at the beginning of the program run, parallel

FLOPS is appropriate. In the sum problem, all the information is available at the

beginning of the run, so the parallel program sum.par is to be preferred. If a

problem can be solved with either serial or parallel FLOPS, the lower overhead

of parallel FLOPS is usually the way to go.
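The depth-first versus breadth-first contrast can be sketched in a few lines of Python. This is a toy model of the recognize-act cycle, not FLOPS code, and the names are ours:

```python
def run_cycles(instances, parallel):
    """Return how many recognize-act cycles it takes to fire every
    rule instance on the agenda.  In serial (depth-first) mode one
    instance fires per cycle; in parallel (breadth-first) mode all
    currently fireable instances fire in a single cycle."""
    agenda = list(instances)
    cycles = 0
    while agenda:
        cycles += 1
        if parallel:
            agenda.clear()   # fire every fireable instance at once
        else:
            agenda.pop()     # fire one instance, then re-scan
    return cycles

# 16 fireable instances of rule r0, as in the sum.par run:
print(run_cycles(range(16), parallel=False))  # 16 serial cycles
print(run_cycles(range(16), parallel=True))   # 1 parallel cycle
```

The cost of each cycle is dominated by deciding which rules are fireable, which is why collapsing 16 cycles into one pays off so well.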

10.6 MEMBERSHIP FUNCTIONS, FUZZIFICATION AND

DEFUZZIFICATION

10.6.1 Membership Functions in FLOPS

Membership functions in FLOPS are specified by the memf command. This permits

the programmer to specify the point at which the membership function first begins to

rise from zero; the point at which it first reaches 1; the point at which it first starts

down from 1; and the point at which the function reaches zero again. If the function

stays at 1 from -infinity until it starts to drop, the first parameter is set to -1e6; if the

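The four memf breakpoints describe a trapezoid. A minimal Python sketch of such a membership function follows; FLOPS itself uses its own memf syntax, so this code and its names are illustrative only:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership grade: rises from 0 at a, first
    reaches 1 at b, starts down from 1 at c, and returns to 0 at d.
    Per the text above, a is set to -1e6 when the function should
    stay at (effectively) 1 from minus infinity until it drops."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)    # rising edge
    return (d - x) / (d - c)        # falling edge

print(trapezoid(5, 0, 2, 8, 10))   # 1.0 (on the plateau)
print(trapezoid(1, 0, 2, 8, 10))   # 0.5 (halfway up the rising edge)
```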