Stackless & String-processing
Tim Peters
tim_one at email.msn.com
Wed Jul 21 00:57:32 EDT 1999
More information about the Python-list mailing list
[Fernando Pereira]
> ...
> But from my readings (those mentioned in the thread and others) as well
> as your comments, the whole point of monadic combinators for parsing is
> to interweave the grammatical structure with other computations,

That was at least part of the point, the rest being the perceived
advantages of using the same notation (language) for both parsing and
semantic processing.  Based on your work with weighted automata, I'd guess
you'd be the last person on earth to dismiss the advantages of a unifying
framework <wink>.

> thus making the underlying grammatical structure less accessible to
> optimization.  Whereas your approaches have factored things in such a
> way as to make them more accessible to optimization.

If it isn't clear yet <wink>, efficiency isn't driving Graham here.  He's
found a cool new approach, and is having fun with it.  Different strokes.

Most of my parsing life has been concerned with artificial languages, and
there I've got a different complaint about intermixing syntax with
semantics: the grammar is interesting for its own sake, and it's a
positive burden at times to clutter it up with semantic annotations!

Case in point: Python's grammar is formally defined by the file
Grammar/Grammar in the source distribution.  That's the input to a parser
generator.  At 60 non-noise lines, it's quite possible to digest the whole
thing, and minor changes in syntax can be effected without touching
anything else (how minor?  exactly what doesn't require changing anything
else <wink>).  This is a joy as it is; even extending it to a conventional
attributed grammar would get in the way of the purposes to which the
grammar file is *usually* put.

I haven't tried using parser combinators, but in reading the papers the
question I had over and over was whether they scale well in practice,
whether measured by speed or maintainability (they eventually answered
the "speed" one: no).
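For concreteness, here's a toy sketch of the combinator style rendered in
Python (my own minimal rendition, not code from the papers under
discussion; the names lit, seq, alt and opt are invented for illustration):

```python
# A parser here is a function taking (text, pos) and returning
# (value, new_pos) on success, or None on failure.

def lit(s):
    """Match a literal string."""
    def parse(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None
    return parse

def seq(*parsers):
    """Sequencing combinator: run parsers in order, collecting values."""
    def parse(text, pos):
        values = []
        for p in parsers:
            result = p(text, pos)
            if result is None:
                return None
            value, pos = result
            values.append(value)
        return values, pos
    return parse

def alt(*parsers):
    """Alternation combinator: return the first parser that succeeds."""
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

def opt(p):
    """Optional match -- note that even this requires a new combinator."""
    def parse(text, pos):
        return p(text, pos) or (None, pos)
    return parse

greeting = seq(lit("hello"), opt(lit(",")), lit(" world"))
print(greeting("hello, world", 0))  # (['hello', ',', ' world'], 12)
print(greeting("hello world", 0))   # (['hello', None, ' world'], 11)
```

Each new sequencing discipline (repetition, separated lists, longest-match
alternation, ...) means writing and debugging another higher-order
function like these, which is the maintenance question raised below.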
It didn't bode well that each new example seemed to require creating a new
combinator, and then you've got the implementation of *that* to worry
about on top of the target grammar and the semantics.  It seemed to carry
the flavor of programming in an assembly language for "a real parsing
engine" (as Icon often feels to me in contrast to SNOBOL4 as grammars get
large).  "You can have any sequencing you want, but you have to build it
yourself."

> As I suggested above, it would be interesting to define classes of
> monadic programs that still allow (an appropriate extension of) those
> optimizations.  But AFAIK that has not been done.

It would.  Not for me, though <wink>.

lex/yacc-looking-better-all-the-time-ly y'rs  - tim