On Pitman's „Special Forms in Lisp“
19th November 2011
It appears that during the last few years the controversial concept of fexprs has been revived in the Lisp community. Newlisp and Picolisp, two small, dynamically scoped Lisps that support fexprs, are actively developed, and their communities grow slowly but steadily. Fexprs are frequently discussed on the authoritative Lambda the Ultimate web site, passionately advocated by the Schemers Thomas Lord and Ray Dillinger. Related links and discussions appear in other Internet forums and blogs. Recently, John Shutt published a Ph.D. thesis on the programming language Kernel, his attempt to extend Scheme with fexprs while keeping lexical scope. Shutt's ideas have attracted significant attention, and a few implementation efforts have been reported.
At first sight, fexprs are an elegant and powerful feature, and the reason for their abandonment in the most important Lisp dialects is not obvious. According to Christian Queinnec, fexprs were „put to death“ by Kent Pitman, who in his 1980 conference presentation „Special Forms in Lisp“ recommended that fexprs be abandoned in favour of macros.
Pitman's opinion was representative:
The presentation “Special Forms in Lisp” covers many fexpr-related issues. Unfortunately, some relevant properties of fexprs appear to have been misunderstood or omitted.
2. Importance of Fexprs
It appears that Pitman's conclusion cited above is more the result of counting small, practical pros and cons of fexprs than of looking at “the big picture.” And that is exactly where fexprs shine. That argument is expressed in a particularly strong and inspiring way by Smalltalk designer Alan Kay:
Fexprs really do add to the generality of the language, in a particularly interesting way – by exposing the most important elements of the Lisp language to processing as first-class objects at runtime, just as functions and other data are processed.
Furthermore, fexprs can replace both functions and macros, making Lisp not only more general, but also conceptually simpler, with more regular semantics. The implementation of Lisp can be, at least theoretically, smaller. Even the number of basic, built-in fexprs can be reduced, since quote is equivalent to (lambda-fexpr (x) x), if lambda-fexpr is, conveniently, the fexpr version of lambda.
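This equivalence can be illustrated in Newlisp, where define-macro in fact defines fexprs; my-quote is an illustrative name, not part of the language:

```lisp
;; A fexpr receives its argument unevaluated and here simply returns it –
;; which is exactly what quote does.
(define-macro (my-quote x) x)

(my-quote (+ 1 2))   ; returns the unevaluated list (+ 1 2), like (quote (+ 1 2))
```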
3. Expandability of Fexprs
Second, the comparison of macros and fexprs is done in a “macros in general” vs. “fexprs in general” fashion. In such a comparison, macros as a class have some desirable properties that fexprs as a class cannot have. Pitman wrote:
That means macros are expandable at compile time. In particular, expansion allows "code walking" and various compile-time optimizations. Unlike macros, fexprs in the general case cannot be expanded at all, let alone before runtime. However, the comparison is not fair, because fexprs in general are more expressive than macros in general. Macros should instead be compared with the fexprs that could be used in their place.
For a given macro M, defined with
(define-macro (M v1 ... vn) _ _ _)
we can easily define an associated macro-like fexpr with
(define-fexpr (F v1 ... vn) (eval (begin _ _ _))).
The fexpr F is equivalent to the macro M in the following sense: every program P that uses M and the program P[F/M], obtained by replacing M with F, evaluate to the same result. Furthermore, if F is a macro-like fexpr equivalent to a macro M, we can define the expansion of the fexpr call (F ...) as the expansion of the macro call (M ...). Macro-like fexpr calls can be expanded during compilation, and the expansion can be used by “code walkers” as well. It could be said that macros are equivalent to one class of fexprs: those that can be inlined and optimized at compile time.
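The transformation can be made concrete with a small sketch, using the hypothetical define-macro/define-fexpr notation above; swap is an illustrative example, not taken from Pitman's presentation:

```lisp
;; A macro: its body computes the expansion, which the caller evaluates.
;; (swap a b) expands into code that exchanges the values of a and b.
(define-macro (swap a b)
  (list 'let (list (list 'tmp a))
        (list 'setq a b)
        (list 'setq b 'tmp)))

;; The associated macro-like fexpr computes the same code,
;; but evaluates it itself with eval.
(define-fexpr (swap-f a b)
  (eval (list 'let (list (list 'tmp a))
              (list 'setq a b)
              (list 'setq b 'tmp))))
```

Note that both versions introduce the variable tmp and could accidentally shadow a caller's variable of the same name; that problem is common to macros and fexprs alike.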
A possible objection is that the compiler cannot know that a particular fexpr is never used as a first-class object. But the programmer can know that; he only needs to recognize that the fexpr could have been implemented as a macro. It is a sound approach: the programmer generally knows more than can be deduced from the code he wrote. Another possible objection is that, if fexprs are used only as macros, the advantage of fexprs is lost. That is true if the programmer limits himself to macro-like fexprs. But he can also use more general fexprs – if the loss of "transparency of functionality" is acceptable.
Assuming the compiler can optimize fexprs in the described – indeed, very simple – way, then for every program that uses macros there is an equivalent and equally fast program that uses fexprs instead of macros. The converse is not true: no macro can replace a fexpr in programs that use the fexpr as a first-class value. A few Lisp dialects support first-class macros (not fexprs!), but these are not discussed here.
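What "fexpr as a first-class value" means can be sketched as follows; all names here (strict-mode, all-of, at-least, the tests) are hypothetical, chosen only to illustrate the idea:

```lisp
;; A fexpr can be stored in a variable, chosen at runtime, and then
;; applied like any other value – something no macro can do, because
;; macro calls must be visible as such to the compiler.
(setq combinator (if strict-mode all-of at-least))   ; all-of, at-least: fexprs

;; The chosen fexpr receives its argument expressions unevaluated:
(combinator 2 (file-exists path) (readable path) (writable path))
```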
4. The Price of Macro Expansion
Third, although Pitman warned that macro expansion is space-demanding, the possibility that macro expansion can be time-demanding was not discussed. Usually, the time required for macro expansion is unimportant, because expansion is done only once, before compilation, and after that the program is used in executable form only. However, Lisp, perhaps more than other languages, is designed to be used for generating code at runtime. Generated code can be evaluated using eval; in that case, macro calls are expanded at runtime. Slightly less obviously, if a program generates lambda expressions and compiles (converts, coerces) them into a form that can be “applied” or “funcalled”, then expansion at runtime is unavoidable, even if eval isn't explicitly used.
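A minimal sketch of the situation, assuming a Lisp in which and is implemented as a macro (as it is in many dialects); make-range-test is an illustrative name:

```lisp
;; Code generated at runtime contains a macro call, so every eval of
;; freshly generated code pays the price of macro expansion again.
(define (make-range-test n)
  (list 'lambda (list 'x)
        (list 'and (list '< 0 'x) (list '< 'x n))))   ; and: a macro

;; The macro call (and ...) inside the generated lambda is expanded
;; here, at runtime – once for every generated test:
(define test10 (eval (make-range-test 10)))
```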
The problem of macro expansion during runtime was known in the Lisp community, and some effort was invested in solving it; see “The Evolution of Lisp” for discussion. The problem can be avoided if non-expanding fexprs are used instead of macros. Surprisingly, that comparative advantage of fexprs is not well described in the literature. Pitman does not discuss it. A few years later, Z. Lichtman reported a moderate slowdown (15%) when macros were used instead of fexprs. As shown in some earlier posts, the price can be higher.
It may seem confusing that I first wrote about expandable fexprs and now claim that fexprs benefit from not expanding. But there is no single fexpr alternative to a given macro; there are many, and some are expandable while others are not.
For instance, consider the fexpr at-least, a generalized or, such that
(at-least e0 e1 ... en)
is true if and only if, well, at least e0 of the expressions e1, ..., en evaluate to true. There are many ways this fexpr can be defined – some of them expand, and others do not. For instance, the first of the following two fexprs (slightly changed Newlisp) doesn't expand, and the second expands – and the expansion can be done before runtime:
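Sketches along those lines, in Newlisp (where define-macro defines fexprs; the names at-least1 and at-least2 are illustrative), might look like this:

```lisp
;; Non-expanding variant: interprets its arguments directly and stops
;; evaluating them as soon as enough of them were true.
(define-macro (at-least1 n)
  (setq n (eval n))
  (doargs (e (<= n 0))          ; (args) holds e1 ... en unevaluated
    (if (eval e) (dec n)))
  (<= n 0))

;; Expanding variant: builds the equivalent expression
;;   (<= e0 (+ (if e1 1 0) ... (if en 1 0)))
;; and evaluates it. The built code depends only on the call form, so
;; the same transformation could be performed before runtime – at the
;; price of always evaluating all of e1 ... en.
(define-macro (at-least2 n)
  (eval (list '<= n
              (cons '+ (map (fn (e) (list 'if e 1 0)) (args))))))
```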
As a side note (because that issue is not discussed in “Special Forms”), careless use of fexprs, just like careless use of macros, can result in accidental shadowing of variables. The solutions are similar (i.e., using gensyms or some kind of predefined “hygiene”). Shutt's approach is novel.
In “Special Forms” the technique of wrapping a macro around a function, to reduce the size of the expanded code, is described:
The same technique can improve the speed of macro expansion of code during runtime; it alleviates the problem, but does not solve it completely.
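The wrapping technique can be sketched in Newlisp-flavoured pseudocode; at-least-fn is a hypothetical helper name, and at-least here plays the role of the wrapping macro:

```lisp
;; The bulk of the logic lives in an ordinary function, compiled once;
;; the argument expressions are delayed by wrapping them in lambdas.
(define (at-least-fn n thunks)
  (cond ((<= n 0) true)
        ((empty? thunks) nil)
        (true (at-least-fn (if ((first thunks)) (- n 1) n)
                           (rest thunks)))))

;; The macro expands only into one small call:
;; (at-least 2 a b c) =>
;; (at-least-fn 2 (list (lambda () a) (lambda () b) (lambda () c)))
(define-macro (at-least n)
  (list 'at-least-fn n
        (cons 'list (map (fn (e) (list 'lambda '() e)) (args)))))
```

The expanded code stays small regardless of how complicated the helper becomes, which is exactly why the technique also reduces the cost of expanding such calls at runtime.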
Although Pitman's article contains a number of valid arguments, including some in favour of fexprs, three important arguments seem to be omitted:
These claims constitute a strong case for fexprs, particularly because one of the main arguments against fexprs was their influence on the speed of programs.