[Commit] mint-2004b mint.tex,1.6,1.7 translators.tex,1.5,1.6

Bart Massey commit at keithp.com
Wed Dec 1 22:00:36 PST 2004


Committed by: bart

Update of /local/src/CVS/mint-2004b
In directory home.keithp.com:/tmp/cvs-serv1581

Modified Files:
	mint.tex translators.tex 
Log Message:
Finished evaluation section, etc.



Index: mint.tex
===================================================================
RCS file: /local/src/CVS/mint-2004b/mint.tex,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -d -r1.6 -r1.7
--- mint.tex	1 Dec 2004 23:20:49 -0000	1.6
+++ mint.tex	2 Dec 2004 06:00:33 -0000	1.7
@@ -7,6 +7,8 @@
 \usepackage{prentcsmacro}
 \usepackage{epic,eepic}
 
+\renewcommand{\baselinestretch}{0.98}
+
 \newcommand{\mysep}{\par\noindent\rule{0.75in}{0.5pt}\\}
 
 \newcommand{\id}[1]{{\tt #1}}

Index: translators.tex
===================================================================
RCS file: /local/src/CVS/mint-2004b/translators.tex,v
retrieving revision 1.5
retrieving revision 1.6
diff -u -d -r1.5 -r1.6
--- translators.tex	2 Dec 2004 05:19:14 -0000	1.5
+++ translators.tex	2 Dec 2004 06:00:33 -0000	1.6
@@ -50,17 +50,43 @@
 
 \subsection{Evaluation}
 
-For some grammars, particularly ambiguous ones involving precedence, the 
-implementation of parser generator performed extremely poorly even for
-a modestly-sized grammar.  It seems the structure of the grammar had more
-to do with the degraded performance than size did, suggesting that an
-opportunity for further optimization exists.
+Although the MINT tool is still in the late stages of
+development, it is complete enough to support some
+preliminary analysis of its performance and usability.  All
+of the sample grammars in the Appendix were timed over 5
+runs on a 3GHz Pentium laptop; the median runtime is
+reported here.  The results are summarized in
+Table~\ref{tab-perf}.  The length reported for each grammar
+is the average production-rule length, in tokens, for that
+grammar.  Lexer and parser table sizes are measured in
+tokens, to account for the fact that the ASCII serialization
+is much larger than the expected in-memory set size.  By
+this measure, the sizes appear quite reasonable.
 
-The parse table size, however, did seem to correspond with the generation
-time fairly closely, suggesting a complexity endemic to the task at hand.
-Some good results were achieved by the careful application of memoization
-to the inner loop of the LR(1) closure algorithm.  Various statistics
-on the performance of the generator with respect to the examples grammars
-given in this paper can be found (wherever).
+\begin{table}
+\begin{center}\small
+\begin{tabular}{r|r|r|r|r|r|r}
+Grammar&Runtime&Tokens&Rules&Rule length&Lexer size&Parser size\\
+\hline
+expgram.mnt&0.468s&6&4&2.5&237&1022\\
+regram.mnt&1.280s&12&14&2.07&397&6555\\
+bnfgram.mnt&2.520s&23&44&1.97&3965&3737\\
+nickle.mnt&70.037s&38&45&2.66&4213&61158
+\end{tabular}
+\end{center}
+\caption{MINT Performance}\label{tab-perf}
+\end{table}
+
+The MINT lexer generator is quite fast and did not
+contribute significantly to the reported MINT runtimes.  The
+MINT parser generator performs somewhat disappointingly on
+grammars involving ambiguity resolved by precedence; the
+structure of the grammar appears to have more to do with the
+degraded performance than its size does.  Known
+opportunities for further optimization exist, so the
+performance of a production version of MINT is expected to
+be much better.  Good speedups have already been achieved by
+careful application of memoization to the inner loop of the
+LR(1) closure algorithm.
 
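
The memoization mentioned in the last paragraph of the diff above
typically targets the per-item lookahead computation FIRST(beta a)
inside the LR(1) closure inner loop, since the same (sequence,
lookahead) pairs recur constantly during table construction.  Here is a
minimal Python sketch under an assumed grammar and item representation;
none of the names come from the MINT sources, which are not part of
this commit.

from functools import lru_cache

# Toy grammar: nonterminal -> list of right-hand sides (tuples of symbols).
# The representation is an assumption for illustration only.
PRODUCTIONS = {
    "E": [("E", "+", "T"), ("T",)],
    "T": [("id",)],
}
NONTERMINALS = set(PRODUCTIONS)

# Precomputed FIRST sets for single symbols; terminals map to themselves.
FIRST = {"E": {"id"}, "T": {"id"}, "+": {"+"}, "id": {"id"}}
NULLABLE = set()    # nonterminals that can derive the empty string

@lru_cache(maxsize=None)
def first_of_seq(beta, lookahead):
    # FIRST(beta . lookahead) for a tuple of symbols beta and a terminal
    # lookahead.  This call dominates the closure inner loop and its
    # arguments repeat constantly, so caching it is where the speedup
    # comes from.
    result = set()
    for sym in beta:
        result |= FIRST[sym]
        if sym not in NULLABLE:
            return frozenset(result)
    result.add(lookahead)
    return frozenset(result)

def closure(items):
    # LR(1) closure over items of the form (lhs, rhs, dot, lookahead).
    items = set(items)
    work = list(items)
    while work:
        lhs, rhs, dot, la = work.pop()
        if dot >= len(rhs) or rhs[dot] not in NONTERMINALS:
            continue
        nt, beta = rhs[dot], rhs[dot + 1:]
        for gamma in PRODUCTIONS[nt]:
            for b in first_of_seq(beta, la):    # memoized inner loop
                item = (nt, gamma, 0, b)
                if item not in items:
                    items.add(item)
                    work.append(item)
    return items

if __name__ == "__main__":
    print(len(closure({("E", ("E", "+", "T"), 0, "$")})), "items in closure")

In a real generator the cache would key on whatever interned symbol
representation the tool already uses; a table over (sequence, lookahead)
pairs is simply the shape of the optimization the log message describes.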
 



