We began with a short discussion of what this paper is about and the motivation behind it. A new approach to macros had not been written up in quite some time (1992/3), and since then Racket had slowly evolved and improved its macro system. That growing system had never been formalized or verified, so this paper describes how it works in a manner more descriptive than the C implementation itself, in an effort to clarify the work that has gone into the macro system so far.
We talked about the structure definitions and the use of syntax-local-value here. This was a classic instance of the need for it, since it allows the constructor and the other pieces of a structure to be discovered during compilation.
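As a rough illustration of that idea (my own minimal sketch, not the actual struct machinery; the names my-info and show-fields are made up for the example), a macro can look up a compile-time value bound to an identifier with syntax-local-value:

    #lang racket
    (require (for-syntax racket/base))

    ;; Bind a compile-time value to an identifier, much as a struct
    ;; definition binds static information to the struct name.
    (define-syntax my-info '(field-a field-b))

    ;; A macro that retrieves that compile-time value with syntax-local-value.
    (define-syntax (show-fields stx)
      (syntax-case stx ()
        [(_ id)
         (with-syntax ([fields (syntax-local-value #'id)])
           #''fields)]))

    (show-fields my-info)  ; => '(field-a field-b)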
We talked some about how Racket is set up to expose the compiler's abilities to the user and make them reusable, allowing things that were previously impossible or very difficult to be done (I believe make-pattern-variable was an example, or related to this). One of the things this allows, for instance, is the creation of structures that can themselves be called like functions.
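As a quick sketch of the callable-struct idea (my own example, using Racket's prop:procedure property, which makes instances of a struct applicable like functions):

    #lang racket

    ;; A struct whose instances can be applied like functions,
    ;; via the prop:procedure structure type property.
    (struct adder (n)
      #:property prop:procedure
      (lambda (self x) (+ (adder-n self) x)))

    (define add5 (adder 5))
    (add5 10)  ; => 15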
We then spent some time talking specifically about define and the various ways it is manipulated, represented, or reasoned about depending on the definition context in which it appears. The traditional use of define is simple enough; however, when define is used inside, say, a lambda, it acts as if it were a let binding (I think we showed how it is expanded to a letrec), and several defines can be used to make elegant, clear definitions of a series of identifiers within a function. The same thing can be done without this context-dependent expansion of define, but it requires a much more verbose style of nested lets and... is gross by comparison. Even this form is slightly limited, though (or perhaps the way it is implemented and interpreted is). We saw this by examining the define* macro, which is able to redefine already-bound identifiers, and noting that it only works within a package scope wrapped around the area where you wish to use it. One reason this behavior is not simply built in is that there is some overhead associated with the package machinery that may hurt performance more than is acceptable.
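A minimal sketch of that expansion (my own example, not the one from class): internal defines in a function body behave roughly like a letrec over the same names.

    #lang racket

    ;; Internal defines inside a function body...
    (define (f x)
      (define y (* x 2))
      (define z (+ y 1))
      (+ y z))

    ;; ...behave roughly like a letrec over the same names:
    (define (g x)
      (letrec ([y (* x 2)]
               [z (+ y 1)])
        (+ y z)))

    (f 3)  ; => 13
    (g 3)  ; => 13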
We had a slight detour and talked about an idea expressed in Peter Van Roy's book "Concepts, Techniques, and Models of Computer Programming": that people often write imperative code and then try to convert it into a functional style, when really we should strive to write in the more understandable, functional patterns first and then convert that into imperative code optimized for the hardware it runs on. We discussed how imperative programming is traditionally thought to be faster, and although historically this may have been true, it was largely because the only computing model commonly targeted was the single-processor model. That model of course favors imperative design, but it is less common (perhaps even scarce by now) given the proliferation of multiprocessor machines with numerous cores and hyperthreads. These more modern architectures are better suited to a functional paradigm, where work can more easily be divided among workers since the imperative ties and constraints that normally accompany mutation are all but gone (when possible).
We continued on and got into more of the details of how the expansion, parsing, and other aspects work. One of the things that stood out was the point in this paper that there should be a layer between reading and parsing where expansion can take place (and allegedly later another layer is added in here as well).
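As a small concrete illustration of that extra layer (my own sketch using Racket's read-syntax, expand, and eval, not the paper's formal model):

    #lang racket

    ;; Read source text into a syntax object...
    (define stx
      (read-syntax 'src (open-input-string "(let ([x 1]) (+ x 2))")))

    ;; ...then expansion sits between reading and evaluation:
    (parameterize ([current-namespace (make-base-namespace)])
      (eval (expand stx)))  ; expand, then evaluate => 3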
Lingering Questions / Thoughts
Will the Racket Macro system (the core parts that make it work) ever be done evolving, do you think? Will it reach some state where it can express all it needs to?
I took CS 484 here (parallel processing) - which was great - but after this exposure to more functional programming and seeing its strengths for parallelizability, I'm amazed we didn't discuss any functional programming paradigms in that class (okay, I'm not surprised; functional programming seems to be more polarizing than sushi or politics, but still). At other universities, is there more of an emphasis on the relationship between functional programming and teaching parallelization? Or is the parallelizability of functional code just a fact most people choose to ignore because they think functional programming "is dumb"?
Macros that Work Together
The last time new macro features were introduced into the Scheme and Lisp world was back in 1992, so Racket represents a large and significant change for the macro community. We went through the first section pretty quickly since we had discussed much of it in previous classes. We talked briefly about structures, as we had in a previous discussion, but it was made clear this time that they use syntax-local-value. We then discussed patterns and templates. Patterns are not just matched; they are actually compiled to code, and each pattern variable is compiled into its own macro. We then went over the example from the reading with the g and e variables. These variables were passed in as inputs, but the let-syntax and make-pattern-variable macros turned each of them into their own macros. It was also mentioned here that the structs use similar features to return macros for each variable. Another key idea behind all of this work is that with Racket, they are attempting to make the compiler reusable by macro writers. An impromptu example was then given to show that runtime and compile-time structures can operate as macros. In the example, the prop:procedure property allowed the structs to be called as macros, and thus use all of the features allotted to macros in Racket.
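A rough sketch of pattern variables being compile-time bindings (my own small example, not the g/e one from the reading):

    #lang racket

    ;; In syntax-case, the pattern variable e is a compile-time binding:
    ;; it is only legal inside syntax templates (under #'), which reflects
    ;; the idea that each pattern variable is compiled into its own macro.
    (define-syntax (twice stx)
      (syntax-case stx ()
        [(_ e)
         #'(list e e)]))

    (twice (+ 1 2))  ; => '(3 3)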
The class system was then briefly discussed, since we had seen examples of it in previous discussions. In it, the define and lambda forms are reused, and the public and private sections of the code are each put onto the stop list that is later used by the expander. A thorough example was then given of the different ways of defining variables using define, let, let*, and packages. The define operator simply assigns everything at the same time and within the same context (if the defines are in the same context). The let operator performs a temporary binding, within the scope of the parentheses, with all bindings made at the same time. The let* operator is similar to let, except it binds everything in order. It was stated that define should generally be used, because trying to get everything correct with let and let* is too much to worry about. The only problem with define is that it won't let you redefine variables or introduce new contexts. The way around this is to use packages and define*, which Jay implied that he implemented himself. This allows variables to be redefined (shadowed) and new contexts to be introduced, but it is tedious to have to add the additional package information. In other words, the package-begin that was used allowed the user to encapsulate the scope and expose it again. Jay said that they plan on building into the Racket language the ability to simply write define* to redefine variables and introduce new contexts. This define* would essentially extend the language tower to implement packages around a define statement behind the scenes. An additional comment was made that the let operator can shadow previous bindings, but define cannot. Also, internal defines actually compile to a letrec. While discussing these different operators we also discussed using internal definitions, as mentioned in the paper. Internal definitions make it so that you don't have to worry about which of the operators should be used. They also make it much easier to move internal variables around within macros.
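A minimal sketch of that pattern, assuming the racket/package library's package-begin and define* behave as discussed in class (my own example):

    #lang racket
    (require racket/package)

    ;; Inside package-begin, define* can shadow an earlier definition of
    ;; the same name; later body forms see the newest binding.
    (package-begin
      (define* x 1)
      (define* x (+ x 1))  ; shadows the previous x rather than mutating it
      x)                   ; => 2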
After this example we had a discussion on imperative code versus functional code. Kimball quoted the author of a book who said something like: the ideal programming language would be completely functional to the user and completely imperative to the computer. This started the debate, which essentially led to the conclusion that functional code is better in many cases for the user, since the user knows what is going on in the background and can manipulate it in the best way possible.
The rest of the discussion was spent on the formal model of the core Racket language. We discussed the distinction between names, values, symbols, and variables. Names are like natural numbers (or ASCII codes): they simply provide an easier way for the user to interact with the program. The example given for this was "the map is not the territory." That is, a name is a map to the thing that was previously defined, not the thing itself. The primitives of the core language are not represented by names, though; they are actual pointers to operations. When the eval operation is called, the names used are substituted with what they actually represent. We then moved on to talk about syntax. The make-syntax operator takes in a value and returns syntax. The stx-e operator strips one layer of wrapping from a syntax object, exposing the value inside. The quote operator is actually a form of the stx-e operator. The parser was then discussed. It was mentioned that the idea of the parser acting as its own separate thing is unusual; it should really be an integrated part of the entire macro transformation, which consists of read, expand, parse, and eval. The parser implemented in Racket is an identifier-driven parser, which adds additional information to the identifiers as it goes along. The parser relies on the resolve operator, which acts as a single point of extraction that hides symbols.
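A small concrete illustration (my own sketch), using Racket's actual syntax-e operation, which plays the role of the paper's stx-e:

    #lang racket

    ;; A syntax object pairs a datum with lexical-context information;
    ;; syntax-e peels off one layer of that wrapper.
    (define stx #'(+ 1 2))
    (syntax? stx)        ; => #t
    (syntax-e stx)       ; => a list of three syntax objects, one per element
    (syntax->datum stx)  ; => '(+ 1 2), with every layer removed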
The expander implements the hygiene condition as we have seen in previous versions of Scheme. As the expander expands operators such as lambda, new names are given to the identifiers. As discussed previously, the expansion happens recursively. We then went over an example of a lambda-lambda-lambda expression. In this example, (1 2 3) was returned as '(1 2 3). I think this was the example we stepped through to the end, and there were time-stamps attached to identifiers that went as high as 81, meaning there were 81 transcription steps, which is quite impressive. Is it normal to have that many transcription steps, or do they normally go much higher than that in large, complex macros? The binding features of Racket were then discussed. It was mentioned that transformers can actually be bound to values, which was kind of surprising to me. We talked about the problems with renaming, and determined that we have to mark everything to eliminate any chance of making mistakes with bindings. The compile-time bindings were then discussed. I thought it was interesting that everything previous to this point in the paper was available in Scheme, only in different ways. The last part we discussed was definition contexts, specifically the new-def, def-bind, lexpand, and lvalue operators that were introduced by Racket. It was funny that this paper ended with no justification for why Racket is better than any other language. It just cited some related work, which we didn't discuss because we had already read most of it, and then ended. There wasn't even a conclusion.
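As a tiny sketch of the hygiene condition (my own example): an identifier introduced by a macro's template is effectively renamed by the expander, so it cannot capture an identifier of the same name at the use site.

    #lang racket

    ;; The x bound in the macro's template is renamed during expansion,
    ;; so it does not capture the x at the use site.
    (define-syntax-rule (add-one e)
      (let ([x 1]) (+ x e)))

    (let ([x 100])
      (add-one x))  ; => 101, not 2: the macro's x and the user's x stay distinct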