The syntax definition uses an extended BNF where all symbols are terminals, except that nonterminal symbols are capitalized; ::=, |, *, +, [, and ] are metasymbols with the usual meaning (definition, alternative, zero or more repetitions, one or more repetitions, begin of optional part, end of optional part).
The type system is the system of simple types, extended with a dynamic type and with recursion. The type language comprises the types
basic, for every expression that never evaluates to a function or to an element of an algebraic datatype as defined by define-data;
[τ1,...,τn]→τ0, for every expression that always evaluates to a function;
TC[τ1,...,τn], for every expression that always evaluates to an element of the algebraic datatype named TC (the τi are the types of the arguments of the constructors in some unspecified fixed order);
the dynamic type, for every expression that cannot be given one of the other types, in particular for an expression that may evaluate
to a function and also to some non-function value, or
to an element of datatype TC and also to an element of a different datatype TC’ or to an element of a non-algebraic type.
The PGG system performs type inference for this system using Henglein’s algorithm [14]. Due to the presence of recursive types, the result of the type inference is a graph where each node is annotated with a type constructor.
The binding-time analysis assigns a binding time to each node in the type graph and ensures that the binding-time assignment is well-formed. Well-formedness of such a binding-time assignment B means that the annotation on a type node is always less than or equal to the annotations on the direct descendants of that node. A node of the dynamic type is only well-formed if it assumes the maximum possible binding time (it is kept dynamic throughout all stages of specialization). That means that potential type clashes are postponed to the last stage of running the program.
The binding-time analysis inserts a lift-expression on top of each expression of basic type that occurs as
the argument of a primitive operation;
the argument of a function;
the “then” or “else” arm of a conditional; or
the argument of a data constructor.
A lift-expression injects a value computed at specialization time into the residual program.
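For illustration, here is a hypothetical sketch (not actual PGG output) of where a lift ends up. The procedure name inc is made up.

```scheme
;; Source, with x dynamic and the constant 1 static:
(define (inc x) (+ x 1))
;; Since 1 occurs as the argument of the primitive +, which is
;; forced dynamic by x, the analysis conceptually wraps it:
;; (define (inc x) (+ x (lift 1)))
```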
For primitive operations, the binding-time analysis imposes a second set S of binding-time annotations on the nodes of the type graph. The well-formedness criterion for them has two aspects. First, the S annotation is always greater than or equal to the B annotation. Second, the S annotation of a node is greater than or equal to the S annotations of each direct descendant node in the type graph. For each expression that performs a primitive operation, the specializer requires that all arguments are leveled. That is, the binding-time analysis enforces that the S and B annotations of all arguments are equal. In consequence, all argument computations take place at the same binding time (see [31]).
This restriction makes it safe to allow primitive operations to have functions as arguments.
PGG performs a representation analysis that assigns to each node in the type graph yet another binding time M. The M annotation of a type reflects the maximum level + 1 at which expressions of that type are subject to memoization, or 0 if they are never memoized. For example, an expression of a type with an M value of 0 is never memoized; all such expressions use the standard representation of values of that type. If the B annotation is zero and the M value is greater than zero, the memoized representation is used, which incurs a runtime overhead. The general condition is that expressions with B < M use the memoized representation and the others use the standard representation.
Unfortunately, there is a catch: if a type is memoized and at the same time required to be leveled, then the type must assume the maximum binding time. This is because primitive operations cannot deal with the memoized representation. The catch is that we now have a cyclic dependency: B implies the placement of memoization points, which implies a setting of M, which implies a deteriorated setting of B, which implies the placement of more memoization points, and so on.
PGG automatically inserts memoization points on top of dynamic conditionals and on top of dynamic lambdas. However, this only happens if the branches of the dynamic conditional or the body of the dynamic lambda contains a control transfer at specialization time, i.e., a static function call. Furthermore, if several of these are nested in an expression, only the outermost receives a memoization point. This is a slight refinement of the standard strategy [3, 2].
It is possible to turn this feature off for a given function by defining it using define-without-memoization (see 4.9.1). In any case, memoization points can be defined and inserted manually (see 4.9.5).
The binding-time analysis and the specializer treat eval specially. The binding-time analysis enforces that the argument of eval is leveled. The result type is also leveled and it may have any binding time that is greater than or equal to the argument’s binding time.
If the binding times are equal then the eval function is called at the specified level. If the binding times differ by one then the specializer simply drops the static argument value into the residual program. Otherwise, the specializer preserves the static argument for the next level and decrements the binding-time difference.
If a function is declared as the apply function (see 4.9.4) then the specializer uses a special postprocessor that transforms expressions of the form
(apply f (cons x1 (cons x2 (cons x3 ...))))
into
(f x1 x2 x3 ...)
To this end, it is necessary to declare cons as pure (see 4.9.4).
In addition to the standard lambda abstraction there is a polyvariant memoizing abstraction operator.
E ::= (lambda-poly (V*) E*)
The specializer treats a dynamic lambda-poly just like an ordinary lambda. A static lambda-poly specializes to a vector of all required specializations of the abstraction. The specializer constructs a partially static value consisting of a memoization map and a reference to the vector. Both, the vector and the memoization map, are initially empty. Whenever the specializer applies a lambda-poly it looks up the static skeleton of the arguments in the memoization map. If a specialization for this skeleton is already present in the map, it constructs a reference to the corresponding position in the vector. Otherwise, it extends the memoization map, constructs a new specialization of the lambda-poly, and inserts it into the vector.
This construct implements a first-class memoization mechanism and it could be used to replace the usual memoization.
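A sketch of its use (the procedure make-matcher and its helper match are hypothetical names for this example): the static pattern selects the specialization, while the subject remains dynamic.

```scheme
(define (make-matcher pattern)      ; pattern: static
  (lambda-poly (subject)            ; subject: dynamic
    (match pattern subject)))       ; match: hypothetical helper
;; Each distinct static skeleton of pattern gives rise to one
;; entry in the specialization vector of the lambda-poly.
```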
These operators can be used in source programs.
The module cogen-boxops exports the following operators that manipulate references (boxed values):
(make-cell exp)
allocates a new mutable reference cell which initially contains the value of exp. Returns the reference to the new cell.
(cell-ref exp)
returns the value stored in the referenced cell if exp evaluates to a reference.
(cell-set! exp1 exp2)
stores the value of exp2 in the cell referenced by exp1 provided this value is a reference. The return value is unspecified.
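A minimal usage sketch of the three operators:

```scheme
;; Allocate a cell, update it, and read it back.
(define c (make-cell 0))
(cell-set! c 42)
(cell-ref c)   ; => 42
```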
The directives are only allowed at the top-level of a source program.
D ::= (define-without-memoization (P V*) D0* E*) | (define-without-memoization P E)
Define procedure P. The specializer does not automatically insert memoization points in the body of P.
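Both forms of the directive, sketched with hypothetical procedures:

```scheme
;; First form: procedure definition with body expressions.
(define-without-memoization (length* xs)
  (if (null? xs) 0 (+ 1 (length* (cdr xs)))))
;; Second form: name bound to a single expression.
(define-without-memoization identity (lambda (x) x))
```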
D ::= (define-data TC (C Ci*)+) | (define-data TC hidden (C Ci*)+)
Define the algebraic datatype TC with constructors C and selectors Ci. In addition, constructor test operations C? are defined. The name TC is used during type checking to check for equality of types.
For example, the declaration
(define-data list (nil) (cons car cdr))
defines the constructors nil (nullary) and cons (binary), the constructor tests nil? and cons?, and the selectors car and cdr.
The binding-time analysis considers algebraic datatypes as partially static, i.e., the arguments of a constructor can have a different (higher) binding time than the constructor itself. In such cases, the specializer performs arity raising when appropriate. The constructors, selectors, and test operations are binding-time polyvariant, i.e., each use of such an operation may have a different binding-time pattern.
The second form of define-data declares a datatype whose elements are ignored by the memoization mechanism. This is a potentially dangerous feature because it can change the meaning of a program during specialization by cheating the memoization mechanism.
D ::= (define-type (P B*) B)
Declares the arity of primitive operation P. The actual values of the Bs are currently ignored.
Example:
(define-type (cons * *) *)
declares the operator cons of arity 2.
A variable which is declared as an operator but does not occur at operator position in an expression is eta-expanded by the frontend according to its declaration.
D ::= (define-primitive O T [dynamic|error|opaque|pure|apply|Number])
Declares the operator O of type T with an optional property.
The parameter T declares the type of the operator. It can be either -, indicating that the type of O is not restricted, or it can specify a polymorphic type for O. The grammar is as follows:
T ::= - | T0
T0 ::= (all TV T0) | (rec TV T0) | (TC T0*) | TV
Here, TV stands for a type variable (an arbitrary symbol) and TC stands for a type constructor (an arbitrary symbol). The syntax, (all TV T0), declares that variable TV is all-quantified in T0, like ∀α.τ0. The syntax, (rec TV T0), declares a recursive type, like μα.τ0. The remaining cases are type constructor application and occurrence of a type variable. For convenience, the function type constructor, ->, is treated specially. Writing (-> t1 ... tn t0) declares a Scheme function that takes n parameters (n ≥ 0) and delivers a result of type t0.
The properties dynamic and opaque are synonyms. Each of them forces the binding time of O’s result to be dynamic. The property error advises the binding-time analysis that the result of the operation can assume any type whatsoever (because an error primitive raises an exception and never returns a value) and that its binding time is determined from the binding times of the arguments as with any primitive operation. The remaining properties advise the specializer what to do when it residualizes the operator. The pure property states that the operator does not have side effects. Instead of creating a let-binding for the expression (O V*), the specializer will treat it as a value, potentially discarding or duplicating the expression. The apply property states that the operator O is the apply function as defined in the Scheme standard.
If the property is a Number it declares the least binding time of the operator.
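Hypothetical declarations illustrating the grammar (the operator names and the type constructor pair are made up for this sketch):

```scheme
;; Unrestricted type, result forced dynamic:
(define-primitive my-read - dynamic)
;; Polymorphic pure pairing operator, using the made-up type
;; constructor pair:
(define-primitive my-cons (all a (all b (-> a b (pair a b)))) pure)
```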
D ::= (define-memo M Number [Active]) | (define-memo M Number 'deferred)
Defines M as a unary operator indicating a memoization point at level Number. This is also the binding time of the memoization point. For two-level specialization this number is 1. Useful in connection with define-without-memoization. The optional parameter Active defines the minimum level of specialization at which the specialization point is active. The default is 0, that is, the specialization point is always active. Useful in connection with multi-level specialization if the same program is specialized with different levels.
Both parameters, Number and Active may be integer expressions using the free variable max-level, which is bound to the maximum binding-time presently in use.
Example: Similix defines the operator _sim-memoize as an indicator for memoization points. To achieve the same behavior in PGG requires the following declaration.
(define-memo _sim-memoize 1)
The second form of the directive declares an operator to construct deferred memoization points. An applied occurrence of a deferred memoization point has the form
(M V E)
When specialization hits upon a deferred memoization point, it extracts the static skeleton and looks it up in a secondary cache. Just as with standard memoization points, it creates a function call to a specialized version of E. The difference is that the specialization of E depends on a future value of V, so it must be deferred until the value of V becomes available.
D ::= (load Filename)
Includes the contents of the file named by the string Filename into the program. The included file may contain additional loads, without limit on the nesting level. The Filename argument is always interpreted relative to the current directory that Scheme48 is running in.
D ::= (begin D*)
As in Scheme, top-level definitions may appear nested inside a begin. This is handy if a macro is to expand to more than one definition.
This section summarizes the available top-level commands of the PGG system.
The main entry point of the system is the function cogen-driver. It takes the following parameters
(cogen-driver InputSpec BindingTimeSkeleton)
where
InputSpec is either a single string specifying the name of a “jobfile” that contains a list of filenames, or a list of strings each of which specifies a filename: the source program is the content of all these files concatenated together.
BindingTimeSkeleton is a list where the first element is a symbol denoting the main function of the source program and the remaining elements are binding times for the parameters of the main function. The main function must have exactly as many parameters as the BindingTimeSkeleton indicates.
A binding time is a non-negative integer, with 0 denoting static. The PGG system assumes that the binding time of the result of the main function is the maximum of 1 and the binding times of the function’s parameters, i.e., the BindingTimeSkeleton.
The result of calling cogen-driver is a generating extension, i.e., a list of Scheme definitions that can be loaded and run.
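A hypothetical invocation, assuming a source file power.scm whose main function power takes a static exponent and a dynamic base (both names are made up for this sketch):

```scheme
;; First parameter static (0), second dynamic (1):
(cogen-driver '("power.scm") '(power 0 1))
;; The result is a list of Scheme definitions: the generating
;; extension.
```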
This function is defined in module pgg. If it is not accessible try
> ,open pgg
at the top-level Scheme48 prompt.
A number of optional parameters may be specified after the BindingTimeSkeleton argument. They allow the generation of the generating extension as a Scheme48 module. The following options are recognized.
(goal SYMBOL) specifies that SYMBOL will be the name of the specialization entry point (i.e., the first parameter to specialize) of the generating extension.
(export SYMBOL ...) adds the listed SYMBOLs to the export list of the generated module.
(open SYMBOL ...) considers the listed SYMBOLs as module names that are to be opened to run the generating extension.
(files SYMBOL ...) considers the listed SYMBOLs as names of files that will be included in the generating module.
(SYMBOL ...) is included as an option line in the structure declaration for the generating module.
STRING specifies the name of the file where the module should be written. It must be the last option; subsequent options are ignored. PGG writes the files STRING.scm and STRING.config.scm, after stripping any extension from STRING.
The function specialize is the human interface to running a generating extension. It has two forms.
(specialize GenextMain BindingTimeSkeleton ARGS)
and
(specialize BindingTimeSkeleton ARGS NEW-GOAL)
GenextMain is the main function of the generating extension,
BindingTimeSkeleton is a list where the first element is a symbol denoting the name of the main function of the generating extension, and the remaining elements are binding times for its parameters. The list of binding times must be identical to the one given to cogen-driver when creating this generating extension.
ARGS is the list of arguments to the generating extension. Its length must be equal to the number of binding times supplied in the BindingTimeSkeleton. The positions corresponding to 0 entries in the skeleton contain the currently static arguments. The positions corresponding to other entries in the skeleton must contain symbols; they are used as stubs for generating identifiers.
(optional argument) NEW-GOAL is the name of the entry point for the specialized program. If it is not specified, PGG invents one for you, but admittedly not a very original one.
The result is a call template for the specialized function, a list consisting of the name of the function and of the names of the arguments. The residual program can be retrieved via (get-residual-program). It is a list of Scheme definitions. Furthermore, the variable *support-code* contains additional code, for example data definitions that are necessary to run the specialized program.
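As a sketch, continuing the hypothetical power example (a generating extension produced with binding-time skeleton (power 0 1), its main function bound to power):

```scheme
;; Static exponent 3; n is a name stub for the dynamic base.
(specialize power '(power 0 1) '(3 n))
;; Afterwards, retrieve the residual program as a list of
;; definitions:
(get-residual-program)
```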
If the returned call template has the form
'(multi-memo Level 'Goal Goal Bts Args)
then we have done one step of a multi-level specialization [13]. It means that the residual program is again a generating extension, where Level is a number, Goal is the name of the entry point, Bts are the binding times of the arguments, and Args is a list of symbols of the same length. It can be loaded as usual and specialized again with specialize by constructing the BindingTimeSkeleton from Goal and Bts.
This function is defined in module pgg-residual. If it is not accessible try
> ,open pgg-residual
at the top-level Scheme48 prompt.
The function continue continues a specialization that has been suspended at a deferred memoization point.
(continue Name Arg)
The parameter Name identifies the index that the specialization waits for. It is used to match the index value in pending deferred memoization points. The example in Section 3.6 uses the symbols 'mod1 and 'mod2 for this purpose.
The parameter Arg is the indexed value. This is a value of base type and its representation is completely up to the programmer. The example in Section 3.6 uses the empty list to indicate an empty world.
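For instance, to resume the pending deferred memoization points indexed by 'mod1, passing the empty list as the indexed value (as in the example of Section 3.6):

```scheme
(continue 'mod1 '())
```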
The function suspend writes the current memoization cache and the cache of deferred specializations to a file.
(suspend Filename)
The Filename parameter indicates the name of the file in which the memoization cache and the deferred cache are stored.
The function resurrect installs a memoization cache and deferred cache from a file. It sets up the system to continue a previously suspended specialization.
(resurrect Filename)
The Filename parameter indicates the name of the file in which the memoization cache and the deferred cache are stored.
Returns #t if the file was successfully read; otherwise, it returns #f.
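A hypothetical round trip (the file name is made up):

```scheme
;; In one session: save both caches to a file.
(suspend "power-spec.cache")
;; In a later session: restore them and continue.
(resurrect "power-spec.cache")   ; => #t if the file could be read
```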
These options are accessible in module cogen-globals except where otherwise noted. Some of them only make sense for programmers who want to use the frontend separately.
(set-bta-display-level! n)
Default: 1.
Display output from the binding-time analysis, 0 ≤ n ≤ 4. 0 means no output.
(set-effect-display-level! n)
Default: 1.
Display output from the effect analysis. 0 means no output.
(set-scheme->abssyn-let-insertion! v)
Default:
#f.
Instruct the frontend to insert let expressions for the bound variables of lambdas and definitions.
Useful if the frontend is to be used in other projects.
(set-memo-optimize! v)
Default: #t.
Optimize the representation of functions and algebraic datatypes. Use expensive memoized representation only for data that actually passes a memoization point.
(set-abssyn-maybe-coerce! v)
Default: #t.
Instructs the frontend to insert provisional lift expressions at certain places. The backend eliminates these later on if they are useless. Can be turned off when using the frontend separately.
(set-generate-flat-program! b)
Default: #f.
Instructs the generating extension to produce flat programs. By default, the residual programs have exactly one top-level definition; all others are nested inside and invisible to the outside.
(gensym-ignore-name-stubs!)
(module cogen-gensym)
Instructs the generating extension to ignore name stubs when generating fresh symbols.
(gensym-use-name-stubs!)
(module cogen-gensym) Default.
Instructs the generating extension to use provided name stubs wherever possible.
(set-memolist-stages! n)
Default: 0.
Set optimization level for memoization table. If set to n then memoization uses n cascaded association lists, indexed by the first n elements of the static projection at a memoization point.
(set-lambda-is-pure! v)
Default: #t.
The code generator considers lambda abstractions as pure values if this flag is set.
(set-lambda-is-toplevel! v)
Default: #f.
Generate a toplevel function for each memoized lambda abstraction if set. Required for suspend and resurrect to work properly.
Pretty printing is available through function p in module pretty-print. The function takes one parameter, the expression that is to be pretty-printed.
The function (writelpp LIST FILE) writes the list LIST to file FILE applying the pretty printer to each element of the list. It is defined in module auxiliary.
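For example, the residual program can be written to a file, pretty-printed, like this (the file name is made up):

```scheme
;; Each definition of the residual program is pretty-printed to
;; the named file.
(writelpp (get-residual-program) "residual.scm")
```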
The contents of modules are generally available by typing
> ,open Modulename
to the top-level command line interpreter of Scheme48.
PGG assumes a declarative semantics: the order of a sequence of top-level definitions does not matter, even if they are spread over several files.
PGG implements R5RS macros with the restriction that macros defined by let-syntax and letrec-syntax cannot expand to macro definitions.
For debugging purposes it is sometimes helpful to read the generating extension, because it is a representation of the binding-time annotated program. Besides standard Scheme constructs it contains the following kinds of expressions, most of which are implemented as macros in module pgg-library. The semantics of the ellipsis ... is the same as in the syntax-rules patterns of Scheme [19], zero or more repetitions of the preceding item. The list is sorted alphabetically.
(multi-memo level fname fct bts args)
denotes a memoization point at level level; fname is a symbol specifying the name of the generating function to run, fct is the function itself, bts is a list of binding times describing the arguments of the function, and args is the list of arguments (must have the same length as bts).
(_app lv f a ...)
application of a non-memoized function f to arguments a ... at level lv.
(_app_memo lv f a ...)
application of a memoized function f to arguments a ... at level lv.
(_begin lv bl e1 e2)
a begin at level lv; bl is the binding time of e1.
(_cell-set!_memo lv ref arg)
updates a memoized reference cell ref at level lv with value arg.
(_cell-eq?_memo lv ref1 ref2)
tests two memoized reference cells ref1 and ref2 for equality at level lv.
(_ctor_memo lv (bt ...) ctor arg ...)
creates a memoized object with constructor ctor; lv is the binding time of the structure, (bt ...) are the binding times of the arguments arg ....
(_eval lv diff body)
the body becomes available at level lv, then it is delayed for diff levels.
(_freevar lv arg)
a free variable arg at level lv.
(_if lv bl e1 e2 e3)
the conditional at level lv; bl is the binding time of the branches e2 and e3, e1 is the condition.
(_lift0 lv val)
delays the value val to level lv.
(_lift lv diff value)
the value becomes available at level lv, then it is delayed for another diff levels.
(_lambda lv v* e)
non-memoized lambda abstraction at level lv, with formals v* and body e.
(_lambda_memo lv arity label fvs bts f)
memoized lambda abstraction at level lv; arity is a list of symbols serving as stubs for variable names, label is the unique label of the lambda, fvs is a list of the (values of the) free variables, and f is a function that maps the values of the free variables and the variable names generated from arity to a new body.
(_make-cell_memo lv lab bt arg)
creates a memoized reference cell at level lv, with unique label lab; bt is the binding time of the argument arg.
(_op lv op arg ...)
the operator op applied to arg ... at level lv.
(_op_pure lv op arg ...)
the pure operator op applied to arg ... at level lv.
(_s_t_memo lv sel v)
a selector or test for a memoized data structure at level lv; sel is the selector or test function, v is the argument.
(_vlambda lv (fixed-var ...) var body)
same as _lambda, but for variable arity; the list of fixed-var names the obligatory arguments and var names the optional argument list.
(_vlambda_memo lv fixed-vars var label vvs bts f)
memoized lambda abstraction with variable arity, see _lambda_memo and _vlambda for explanation.
Good starting points for the study of partial evaluation are Jones, Gomard, and Sestoft’s textbook [18], Consel and Danvy’s tutorial notes [6], Mogensen and Sestoft’s encyclopedia chapter [23], and Gallagher’s tutorial notes on partial deduction [12]. Further material can be found in the proceedings of the Gammel Avernæs meeting (PEMC) [1, 11], in the proceedings of the ACM conferences and workshops on Partial Evaluation and Semantics-Based Program Manipulation (PEPM) [15, 4, 25, 27, 24, 5, 8], and in special issues of various journals [16, 17, 21, 28]. A comprehensive volume on partial evaluation appeared in the Lecture Notes of Computer Science series [9]. Sestoft maintains an online bibliography [26].
The above paragraph is taken from the introduction to the 1998 Symposium on Partial Evaluation [10] which is a collection of concise articles characterizing the state of the art, stating challenging problems, and outlining promising directions for future work in partial evaluation.
The following publications explain various parts of the PGG system.
Cogen in Six Lines [29] explains how to derive a handwritten multi-level cogen from a multi-level specializer and applies this to the construction of a continuation-based handwritten multi-level cogen. This is done in continuation-passing style and in direct style with control operators.
Towards Specialization of Full Scheme [31] explains the specialization of eval, apply, and call/cc. It relies on a binding-time analysis that allows for higher-order primitive operations.
Implementing Memoization for Partial Evaluation [30] gives details about the implementation strategy for partially static values in PGG.
Correctness of a Region-Based Binding-Time Analysis [32] defines and proves correct a binding-time analysis for a lambda calculus with side-effects.
Sound Specialization in the Presence of Computational Effects [22] defines a specialization calculus based on Moggi’s computational lambda calculus and shows how to implement it. This calculus is the basis of PGG’s specialization algorithm.
The frontend of PGG is similar to the frontend of a Scheme compiler.
The first pass renames all variables, expands macros, expands backquotes, transforms named lets to letrecs, and collects mutable variables. The second pass performs assignment conversion, eliminating all (set! v e) operations in favor of (cell-set! v e’), replacing all uses of v by (cell-ref v), and changing the definition of v accordingly. The third pass performs lambda lifting. The fourth pass transforms to abstract syntax and performs eta expansion.
The next phase is binding-time analysis. It consists of type inference, effect analysis (if cell-set! and friends have been used), construction of the binding-time constraints, solution of the constraints, and the introduction of memoization points.
Finally, the backend produces the generating extension.
Syntax errors are not dealt with gracefully.
Do not use identifiers that end with ([-_][0-9]+)+ (interpreted as a regular expression for the regexp library) for procedures and global variables.
Most parts of the system have been developed while the author was at Tübingen University. Special thanks to Michael Sperber for unwaveringly testing the system, pushing it to its limits, suggesting new features, finding many problems (as well as some surprising features), and supplying some bug fixes. Thanks also to Simon Helsen and Frank Knoll who suffered through various versions of the system.
[1] Dines Bjørner, Andrei P. Ershov, and Neil D. Jones, editors. Partial Evaluation and Mixed Computation, Amsterdam, 1988. North-Holland.
[2] Anders Bondorf. Automatic autoprojection of higher order recursive equations. Science of Computer Programming, 17:3–34, 1991.
[3] Anders Bondorf and Olivier Danvy. Automatic autoprojection of recursive equations with global variables and abstract data types. Science of Computer Programming, 16(2):151–195, 1991.
[4] Charles Consel, editor. Proceedings of the 1992 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation, San Francisco, CA, June 1992. Yale University. Report YALEU/DCS/RR-909.
[5] Charles Consel, editor. Proceedings of the 1997 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, Amsterdam, The Netherlands, June 1997. ACM Press.
[6] Charles Consel and Olivier Danvy. Tutorial notes on partial evaluation. In Proceedings of the 1993 ACM SIGPLAN Symposium on Principles of Programming Languages, pages 493–501, Charleston, South Carolina, January 1993. ACM Press.
[7] Olivier Danvy. Semantics-directed compilation of nonlinear patterns. Information Processing Letters, 37(6):315–322, March 1991.
[8] Olivier Danvy, editor. Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation PEPM ’99, San Antonio, Texas, USA, January 1999. BRICS Notes Series NS-99-1.
[9] Olivier Danvy, Robert Glück, and Peter Thiemann, editors. Dagstuhl Seminar on Partial Evaluation 1996, number 1110 in Lecture Notes in Computer Science, Schloß Dagstuhl, Germany, February 1996. Springer-Verlag.
[10] Olivier Danvy, Robert Glück, and Peter Thiemann, editors. 1998 Symposium on Partial Evaluation, volume 30 of ACM Computing Surveys. ACM Press, September 1998.
[11] Andrei P. Ershov, Dines Bjørner, Yoshihiko Futamura, K. Furukawa, Anders Haraldsson, and William Scherlis, editors. Special Issue: Selected Papers from the Workshop on Partial Evaluation and Mixed Computation, 1987 (New Generation Computing, vol. 6, nos. 2,3). Ohmsha Ltd. and Springer-Verlag, 1988.
[12] John Gallagher. Tutorial on specialisation of logic programs. In Schmidt [25], pages 88–98.
[13] Robert Glück and Jesper Jørgensen. Efficient multi-level generating extensions for program specialization. In Doaitse Swierstra and Manuel Hermenegildo, editors, International Symposium on Programming Languages, Implementations, Logics and Programs (PLILP ’95), number 982 in Lecture Notes in Computer Science, pages 259–278, Utrecht, The Netherlands, September 1995. Springer-Verlag.
[14] Fritz Henglein. Efficient type inference for higher-order binding-time analysis. In John Hughes, editor, Proc. Functional Programming Languages and Computer Architecture 1991, number 523 in Lecture Notes in Computer Science, pages 448–472, Cambridge, MA, 1991. Springer-Verlag.
[15] Paul Hudak and Neil D. Jones, editors. Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation PEPM ’91, New Haven, CT, USA, June 1991. ACM. SIGPLAN Notices 26(9).
[16] Journal of Functional Programming 3(3), special issue on partial evaluation, July 1993. Neil D. Jones, editor.
[17] Journal of Logic Programming 16 (1,2), special issue on partial deduction, 1993. Jan Komorowski, editor.
[18] Neil Jones, Carsten Gomard, and Peter Sestoft. Partial Evaluation and Automatic Program Generation. Prentice-Hall, 1993.
[19] Richard Kelsey, William Clinger, and Jonathan Rees. Revised5 report on the algorithmic language Scheme. Higher-Order and Symbolic Computation, 11(1):7–105, 1998.
[20] Richard A. Kelsey and Jonathan A. Rees. A tractable Scheme implementation. Lisp and Symbolic Computation, 7(4):315–335, 1995.
[21] Lisp and Symbolic Computation 8 (3), special issue on partial evaluation, 1995. Peter Sestoft and Harald Søndergaard, editors.
[22] Julia Lawall and Peter Thiemann. Sound specialization in the presence of computational effects. In Proceedings of the Theoretical Aspects of Computer Software, number 1281 in Lecture Notes in Computer Science, pages 165–190, Sendai, Japan, September 1997. Springer-Verlag.
[23] Torben Æ. Mogensen and Peter Sestoft. Partial evaluation. In Allen Kent and James G. Williams, editors, Encyclopedia of Computer Science and Technology, volume 37, pages 247–279. Marcel Dekker, 270 Madison Avenue, New York, New York 10016, 1997.
[24] William Scherlis, editor. Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation PEPM ’95, La Jolla, CA, USA, June 1995. ACM Press.
[25] David Schmidt, editor. Proceedings of the 1993 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation, Copenhagen, Denmark, June 1993. ACM Press.
[26] Peter Sestoft. Bibliography on partial evaluation. Available through URL ftp://ftp.diku.dk/pub/diku/dists/jones-book/partial-eval.bib.Z.
[27] Peter Sestoft and Harald Søndergaard, editors. Proceedings of the 1994 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation, Orlando, Fla., June 1994. University of Melbourne, Australia. Technical Report 94/9, Department of Computer Science.
[28] Theoretical Computer Science, special issue on partial evaluation, 1998. Charles Consel, editor.
[29] Peter Thiemann. Cogen in six lines. In Kent Dybvig, editor, Proceedings of the 1996 International Conference on Functional Programming, pages 180–189, Philadelphia, PA, May 1996. ACM Press, New York.
[30] Peter Thiemann. Implementing memoization for partial evaluation. In Herbert Kuchen and Doaitse Swierstra, editors, International Symposium on Programming Languages, Implementations, Logics and Programs (PLILP ’96), number 1140 in Lecture Notes in Computer Science, pages 198–212, Aachen, Germany, September 1996. Springer-Verlag.
[31] Peter Thiemann. Towards partial evaluation of full Scheme. In Gregor Kiczales, editor, Reflection ’96, pages 95–106, San Francisco, CA, USA, April 1996.
[32] Peter Thiemann. Correctness of a region-based binding-time analysis. In Proceedings of the 1997 Conference on Mathematical Foundations of Programming Semantics, volume 6 of Electronic Notes in Theoretical Computer Science, page 26, Pittsburgh, PA, March 1997. Carnegie Mellon University, Elsevier Science BV. URL: http://www.elsevier.nl/locate/entcs/volume6.html.