BYU - Spring 2013 - CS 630 - 5/30 - Discussion


Matthew Ashcraft

May 31, 2013, 5:51:06 PM
to byu-cs-630-spring-2013

Taming Macros

Starting off the discussion, we talked a little about eager languages. Eager languages are those in which the arguments passed to a function are evaluated before the function body runs, which is the case for most languages in use today. If I understand that correctly, it means macros are not eager: the expressions handed to a macro are never evaluated when the macro is applied, only manipulated as syntax. We then went over one of the statements in the paper, which basically summed up the power of macros: programmers should build an embedded programming language for any given problem domain and express their solution to the problem in it. The rest of the paper's introduction covered topics we had already seen in class, so we basically skipped ahead to type systems. Type systems are essentially automated verifiers of program properties. They run at compile time and should have little effect on the speed of compilation. Because they need to be fast, type systems reject the more complicated programs that would take too long to check. This can actually severely limit the set of runnable programs a language will accept.
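
To make the eager-vs-macro distinction concrete, here is a minimal Racket sketch (my own example, not from the paper or the discussion):

    (define (f x) 42)              ; a function: its argument is evaluated eagerly
    (f (error "boom"))             ; the error fires before f's body ever runs

    (define-syntax-rule (m x) 42)  ; a macro: its argument is just syntax
    (m (error "boom"))             ; expands to 42; the error expression never runs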

We then moved on to discussing the power of macros. As it turns out, macros are too powerful for ordinary type systems to handle. The trouble shows up either in the use or in the definition of a macro. First, if the user of a macro supplies syntactic forms the macro is not expecting, the macro most likely won't even report a sensible error. This comes back to the problem that there is no way to declare what input any given macro takes. We talked briefly about the incr macro example, where an ill-formed input is not caught until the set! operation inside the expansion is reached. In that case the error message only says that the wrong kind of argument was passed to set!, and makes no mention of the macro.
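
Reconstructing the incr example as a sketch (the exact definition used in the paper may differ):

    (define-syntax-rule (incr x) (set! x (add1 x)))

    (define n 5)
    (incr n)   ; fine: n is now 6
    (incr 7)   ; expands to (set! 7 (add1 7)); the error blames
               ; set!, not incr, because nothing ever recorded
               ; that incr needs an identifier, not a number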

Contracts were then brought up. It was stated that one of the problems with Racket is that not everything uses contracts, so some of the features still have bugs. Because of my lack of understanding regarding contracts, an example was given in Racket. Contracts essentially specify the inputs and outputs of macros, but they are more powerful than a type system: a contract can check any desired condition on the values flowing in and out. However, contracts must be inserted by hand at the inputs and outputs of the macros they are to guard. This is tedious, and is too much to ask of macro writers. Additionally, contracts only find errors in code that actually runs, the opposite of a type system, which checks everything at compile time. Another downside is that contracts can increase the runtime by a factor of three.
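
For reference, this is what a contract looks like at the function level in Racket (my own example; the point in class was about inserting similar runtime checks into macro expansions):

    #lang racket

    (define/contract (halve n)
      (-> even? integer?)   ; the input must be even, the output an integer
      (quotient n 2))

    (halve 4)  ; => 2
    (halve 3)  ; contract violation at runtime, blaming the caller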

After discussing the contracts, we moved on to the paper's model of Scheme macros. Macros form the surface language, which expands into the core language. The model tags each application with app so that an ordinary pair in the program is never confused with the name and use of a macro. After some additional discussion we veered off on a tangent about why more people don't use Racket, which already has many safety features built in. Our conclusion was that Lisp macro writers never write macros complicated enough to use most of the additional features offered by Racket, while Scheme users simply don't write macros that take longer than a weekend to write. Upon returning from the tangent we went on to discuss the semantics of the model. The entire expansion of the program is represented as one big step; within it, the terms are expanded one at a time until the core language is reached. Every pair is either tagged with the appropriate app or expanded as a macro use. The only problems with the model appear either when a term can't be expanded into the core language or when the right-hand side of the macro doesn't exist. The match operator goes through the patterns in order, searching each one for a valid match, and returns as soon as the clause with the smallest index k matches. Transcribing a list means transcribing each element of the list one at a time. We then went over a match-and-transcribe example in Racket. I got lost pretty soon after we started, but did get the general idea that matching and transcription are done with search algorithms, similar to depth-first search, with substitution of the matched pieces.
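
A small syntax-rules macro makes the match-then-transcribe loop concrete (my own example, not the one worked through in class):

    (define-syntax my-or
      (syntax-rules ()
        ((_) #f)                ; patterns are tried in order;
        ((_ e) e)               ; the first clause that matches wins
        ((_ e1 e2 ...)
         (let ((t e1)) (if t t (my-or e2 ...))))))

    ; (my-or a b c) matches the third clause and transcribes to
    ; (let ((t a)) (if t t (my-or b c))), which is expanded again,
    ; one step at a time, until only core forms remain.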

The next topic of our discussion was stuck terms. Stuck terms are uses of macros or core forms that don't match any clause of the given semantic rules. The examples given in this paper included the quote operator and the lambda operator: if either of these were passed two bare identifiers, the expander wouldn't know what to do and would thus be stuck. The core of this paper was then discussed, which revolves around shapes. Shapes generalize types to describe the syntactic structure a macro accepts. Shape checking works from the outside in, recursively checking everything as it goes. Regarding shapes, it was mentioned that you don't have to test an entire macro if you know that the input being given to it is within the range of the macro's valid inputs. The shape type system works by building a macro environment from each macro definition, then type checking the macro templates against it.
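
Roughly what a stuck term looks like (my own rendering; the paper's core language differs in its details):

    (quote a b)   ; quote given two subforms: no expansion rule matches
    (lambda (x))  ; lambda with no body: no rule matches either
    ; real Racket reports these as syntax errors rather than
    ; silently getting stuck
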
Lastly, the soundness of their model was discussed. Soundness was proved using the standard reduction theorems of progress and preservation. The preservation theorem states that if x expands to x′ in one step and x has type t, then x′ also has type t. The progress theorem is proved by induction on the type checker's judgment. We reviewed some of the lemmas as well, but came to the conclusion that Lemma 2 is the only one that really matters. All of this work was done simply to ensure that a term that type checks can never get stuck.
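
For reference, the usual shape of these two theorems (standard notation, not the paper's exact statement):

    Preservation: if  ⊢ x : t  and  x → x′  (one expansion step),
                  then  ⊢ x′ : t.
    Progress:     if  ⊢ x : t,  then either x is fully expanded
                  (pure core language) or there is some x′ with
                  x → x′, so a well-typed term is never stuck.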

After we finished up the paper we had a short discussion on programming languages. It was mentioned that XML came about from s-expression users moving to the web and bringing some of their programming styles with them. The advantages of type systems were also discussed. Though type checking might seem important, few widely used programming languages today actually implement it properly. Anything whose values depend on the current runtime environment defeats a type checker; this includes even simple constructs such as untagged unions. The entire class discussion was interesting. Toward the end it seemed to emphasize the restrictiveness of type checking systems more than their actual usefulness. I think they have their place in error checking, but to me it feels like they are too restrictive to use everywhere. As was said in class, type checking systems exist because language designers have little faith in programmers.

Andrew Kent

May 31, 2013, 7:00:31 PM
to byu-cs-630-...@googlegroups.com

We began the discussion by going over some of the simple cases that show why macros can be difficult to debug and what kinds of situations might bring about strange issues for the macro user. We then covered the basic assumptions being made for this system and which parts of Scheme the model omits.

During this introduction, as we were getting to why a type system is the tool you would use to verify the properties about macros we were interested in, we took a small detour to talk more broadly about what a type system offers and what it costs. We discussed how, in the formal, logically proven world, there is a very small set of problems that can be attacked effectively (in the big picture); a larger category up from there is the realm in which typed environments can handle problems; and outside of that is an even larger realm where non-strictly-typed languages are able to deal with problems. This idea of limitations, of pros and cons for type systems, besides being broadly interesting, tied into one of the points introduced early in the paper: because the paper introduces a type system to try to “tame macros,” it is in the same step invariably limiting the macros it can type to some degree. I still think this is a very interesting area for discussion or thought - what the costs of typing are and why you would want it (if at all). In a previous discussion I had with Jay, he described a type system as a tool you may wish to utilize to help solve some particular problem (or perhaps to enable better optimizations, etc...), but, just like any other tool available while programming, you don’t want to be forced to use it when it doesn’t fit the situation (I may have embellished a little, but close enough). Anyway, we talked about this idea of costs and gains associated with type systems in general and then bored down into the details of how the type system in this paper works.

We reviewed all the Greek-filled diagrams and discussed the various details the paper uses to describe its macro type system. Slight side note: I’m always amazed at how confusing some of these figures can be while reading a paper initially, and how much more sense they make while being described in class. Perhaps that just comes with knowing what kinds of things to be on the lookout for (for instance, I have seen logical judgment trees before, but in this paper, for some reason, I didn’t realize that’s what some of the initial diagrams were... and now it’s so obvious). Anyway, there were a lot of little details in how it worked, but it really boiled down to a rather simple and elegant system. It utilizes only two types (definitions and expressions) and then has a similarly simple description of shapes to define how a particular s-expression can be fitted to a potential shape.
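
As a loose illustration of what a shape buys you (my own informal rendering; this is not the paper's notation):

    ; hypothetically, the shape a checker might assign to my-let:
    ;   my-let : ((var expr) ...) expr -> expr
    (define-syntax my-let
      (syntax-rules ()
        ((_ ((x e) ...) body)
         ((lambda (x ...) body) e ...))))

    (my-let ((a 1) (b 2)) (+ a b))  ; => 3, fits the shape
    (my-let ((5 1)) 5)              ; matches the pattern but expands to
                                    ; ((lambda (5) 5) 1): the error blames
                                    ; lambda, whereas the shape above
                                    ; would let the checker blame my-let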

At some point during all of this we also had a discussion regarding s-expressions and their similarity to XML. I keep being blown away by the amount of computer science history I’ve been missing out on without this PLT-lensed insight into it all. Seeing XML as a creation of those who once ruled the AI research realm with Lisp was a fascinating discovery - I’m curious to see what other languages, features, and trends have been started by a now surprisingly forgotten family of programming languages.

Anyway, all in all, the discussion and review of this paper helped me understand some of the more technical details (both implementational and notational) of the macro system and gave me a little more insight into how a type system can be used to verify numerous kinds of properties about programs. If I remember right, Jay said that this exact system isn’t used, but a similar concept is used today in Racket (syntax vs shapes?). Is the newer way of looking at the issue somewhat derived from this paper or its line of thinking? Or did it just turn out to be similar to a solution that was discovered separately altogether?


Jay McCarthy

Jun 1, 2013, 6:20:36 AM
to Andrew Kent, byu-cs-630-...@googlegroups.com
We'll talk about this in a few papers. But basically, it was seen as
impractical to actually enforce the type system on the whole language,
so instead you opt-in on a per-macro basis and the enforcement is more
contract-like.


--
Jay McCarthy <j...@cs.byu.edu>
Assistant Professor / Brigham Young University
http://faculty.cs.byu.edu/~jay

"The glory of God is Intelligence" - D&C 93