Test Results

In this section we present the figures obtained from our test runs for the various Prolog systems. We used the same machine as for our internal tests, and the same Jekejeke Prolog system, run in default mode, which means that all of its optimizations are switched on.

We compared the Jekejeke Prolog system to other available Prolog systems: four freely available ones and one commercially available one. In particular we compared the following Prolog systems: ECLiPSe Prolog, SWI-Prolog, GNU Prolog, Ciao Prolog and B-Prolog.

We ran the test suite twice in a row and measured only the second run. The first run is a cold start with incomplete caching of the predicate references; the second run is the warm start that we wanted to measure.
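Assuming a statistics/2 predicate that reports the accumulated runtime in milliseconds (available in most of the tested systems), the two-run protocol can be sketched as follows; the predicate name measure/1 is our own and not part of any harness:

```prolog
% Two-run protocol sketch: the first call warms the predicate
% caches, only the second call is timed. measure/1 is hypothetical.
measure(Suite) :-
    call(Suite),                       % cold run, timing discarded
    statistics(runtime, [T0|_]),
    call(Suite),                       % warm run, the one we measure
    statistics(runtime, [T1|_]),
    T is T1 - T0,
    write(T), write(' ms'), nl.
```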

For the Jekejeke Prolog system we invoked the Harness Java class. For the other Prolog systems we opened their console and manually started the test harness. B-Prolog and Ciao Prolog provide a DOS console, whereas the other Prolog systems have a custom console.

The manual start consisted of first consulting the Prolog system specific test harness file and then invoking the test suite predicate. The test harness file for a Prolog system includes its predicate for measuring the elapsed time.
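In a console such a manual start would look roughly as follows; the file name and the suite predicate name are placeholders, not the actual harness names:

```prolog
?- consult('harness_xyz.pl').   % system-specific harness file
true.

?- suite.                       % runs all benchmarks, prints timings
```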

The test harness file for a Prolog system also includes the statements to read the common file and the test programs. This was not possible via the consult/1 predicate in all Prolog systems: for the GNU Prolog system we had to resort to the include/1 directive.
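As an illustration (the file names are placeholders), a GNU Prolog harness would pull in the common file and a test program with directives such as:

```prolog
% GNU Prolog: include/1 splices the text of the file at the point
% of the directive, instead of loading it via consult/1.
:- include('common.pl').
:- include('nrev.pl').
```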

The absolute raw results measured in milliseconds are displayed in the table below:

Table 4: Absolute Detailed Interpreter Results (ms)

Test      ECLiPSe     SWI  gprolog    Ciao  B-Prolog  Jekejeke
nrev           59     156      269     405       655       720
crypt          94     515      483     141       905       624
deriv         105     249      312   1'841       842       391
poly           94     297      312     109       780       502
qsort          94     296      375     437     1'030       779
tictac        171     375      670     312     1'076       969
queens         94     405      530     124     1'186       739
query         265     967      797     468     1'919       985
mtak          109     281      405      94     1'435       676
perfect       658     328      359     234       967       527
calc          140     343      422   1'245     1'108       492
Total       1'883   4'212    4'934   5'410    11'903     7'404

The picture below shows the total results relative to the Jekejeke Prolog system:


Picture 8: Relative Interpreter Results
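The relative figures behind such a chart can be recomputed from the totals in Table 4. The following sketch, with predicate names of our own choosing, divides each system total by the Jekejeke total:

```prolog
% Totals from Table 4 in milliseconds.
total(eclipse,   1883).
total(swi,       4212).
total(gprolog,   4934).
total(ciao,      5410).
total(bprolog,  11903).
total(jekejeke,  7404).

% relative(System, Factor): total of System relative to Jekejeke,
% so a factor below 1.0 means the system was faster than Jekejeke.
relative(System, Factor) :-
    total(System, T),
    total(jekejeke, J),
    Factor is T / J.
```

For instance, relative(eclipse, F) yields a factor of roughly 0.25, while relative(bprolog, F) yields roughly 1.61.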

Some of the tested Prolog systems do not distinguish between compiled and interpreted mode; SWI-Prolog and Jekejeke Prolog fall into this category. Some of the tested Prolog systems are declared as compilers; ECLiPSe Prolog and GNU Prolog fall into this category. For these systems there was not much choice in how to execute the benchmarks.

Some of the tested Prolog systems provide both a compiled and an interpreted mode; B-Prolog and Ciao Prolog fall into this category. For these systems we chose the interpreted mode to run our benchmarks. As the bar chart shows, there is no clear divide between compiled and interpreted systems, but the fastest Prolog system here is a compiled one.

Comments