One thing I've been debating with regard to the design of the Gentzen typeface is whether to keep most symbols strictly geometric or to introduce some »calligraphic« flair. For example, consider this test I just tried for the design of a box symbol intended to serve as the symbol for modal necessity:
The weights probably do not match AMS Euler at the moment, but I'll adjust that later. At a very large size, the third version (on the right) does seem slightly plausible, but at smaller sizes on screen the differences are negligible. I'll try seeing how it prints tomorrow. Additionally, I worry that straying from geometric shapes will hinder recognition. For example, I also experimented briefly with some variations on a circle to be used as the lax modality, and found that anything other than a perfect circle risked being mistaken for a badly drawn »o«.
So perhaps some symbols in Gentzen shouldn't try to be too fancy; reader input is welcome. Still, it isn't clear offhand whether symbols like the turnstile are safe to make calligraphic or not. More experimentation will be required.
»Comrade JX8P« informed me that this image came from the satirical British science show »Look Around You«, in particular the episode on the element calcium.
I received my first image-based spam today that was clearly designed to thwart optical character recognition. This is just getting out of hand.
Okay, I'm being a little sarcastic, but I was just thinking: while parser generators frequently make life easier for those of us who implement languages, are they really the best choice for »production« language implementations? Mostly I'm thinking about this in terms of error reporting to the user rather than in terms of performance. In particular, during a discussion in PLClub on Friday, the question came up whether the new breed of parser generators becoming available (elkhound, frown, ml-antlr, sugar, etc.), specifically the ones that offer or implicitly provide arbitrary lookahead, can actually produce comprehensible error messages. At worst, you could be facing O(n^k) possible ways to resolve a rule (where n is the number of tokens in your language and k the lookahead). Maybe it doesn't turn out to be that bad in practice. Still, it seems one could perhaps provide the most informative error messages by writing the parser by hand.
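To make the hand-written-parser point concrete, here is a minimal sketch (in OCaml rather than SML, and over a made-up toy grammar of integer sums) of the kind of context-specific message a hand-written recursive-descent parser can emit, where a generated parser would typically say something generic like »syntax error at EOF«:

```ocaml
(* Toy grammar: expr ::= int | int "+" expr.  Hypothetical example,
   not from any real tool; it only illustrates that a hand-written
   parser knows *why* it is at a given point and can say so. *)
type token = INT of int | PLUS | EOF

exception Parse_error of string

(* Returns the value of the parsed expression and the leftover tokens. *)
let rec parse_expr tokens =
  match tokens with
  | INT n :: PLUS :: rest ->
      (match rest with
       | INT _ :: _ ->
           let (e, rest') = parse_expr rest in
           (n + e, rest')
       | _ ->
           (* We know exactly what context we are in, so the message
              can be tailored to it. *)
           raise (Parse_error "expected an operand after '+'"))
  | INT n :: rest -> (n, rest)
  | PLUS :: _ -> raise (Parse_error "an expression cannot begin with '+'")
  | _ -> raise (Parse_error "expected an expression")

let () =
  let (v, _) = parse_expr [INT 1; PLUS; INT 2; PLUS; INT 3; EOF] in
  Printf.printf "parsed: %d\n" v;
  (try ignore (parse_expr [INT 1; PLUS; EOF])
   with Parse_error msg -> print_endline msg)
```

Of course, maintaining that level of contextual awareness by hand is exactly the cost that generators are meant to spare us.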
Of course, the argument against this is quite similar to the argument against programming in assembly language even when it would give you maximal performance. Using a parser generator allows you to express the parsing process in a more abstract way, which buys you all the nice things like easier maintenance and the ability to actually perform analysis of your grammar. I guess the issue is that perhaps some more thought should be put into giving developers a way to »profile« their grammars (»is there anywhere the lookahead becomes unmanageable?«, etc.) and more sophisticated methods for customizing the error reporting process (for example, ml-yacc only reports errors in terms of its tokens, which are not necessarily meaningful to the user).
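One cheap way to get part of the customization I have in mind would be a layer that translates a generator's internal token names into user-facing descriptions before an error message is printed. A sketch (the token names here are hypothetical, merely in the style of what ml-yacc reports; this is not an actual ml-yacc interface):

```ocaml
(* Map internal token names to descriptions a user would recognize.
   The names and phrasings are invented for illustration. *)
let describe = function
  | "SEMI"   -> "a semicolon ';'"
  | "LPAREN" -> "an opening parenthesis '('"
  | "VAL"    -> "the keyword 'val'"
  | tok      -> "the token " ^ tok

(* Turn a raw list of expected tokens into a readable message. *)
let friendly_error expected =
  "syntax error: expected " ^ String.concat " or " (List.map describe expected)

let () = print_endline (friendly_error ["SEMI"; "VAL"])
```

It's a small thing, but »expected a semicolon ';' or the keyword 'val'« is a lot friendlier than »inserting SEMI«.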
Anyway, I'm about to give Aaron's new lexing and parsing tools for SML/NJ a try. I'm not sure whether I will make the leap from ml-yacc to ml-antlr just yet, but I'll probably give ml-ulex a try, just because ml-lex doesn't allow comments in the rule section of the file.
I should spend some time reading over the Dragon Book tomorrow just to remind myself about the trade-offs among traditional parsing techniques. It turns out there is actually a second edition of the Dragon Book that was just released, but I didn't find the first edition useful enough to pay 100 USD for the new one yet. I admittedly remember very little of what was covered concerning parsing algorithms in my undergraduate compilers course.
Might COMEFROM be a categorical dual of GOTO?
Lately, in terms of »research«, I've been working on a draft of the InforML manual. It is one half specification, one half tutorial. The general idea is that if I have thought through everything well enough to explain how to program in the language, I probably understand it well enough to know exactly what needs to be done to implement it (or rather, to extend/modify AspectML into InforML). So far this has worked really well: every time I sit down to rewrite bits of my tutorial, I keep coming up with things to put on my list of design questions.
Another part of the exercise has been to focus on trimming out anything in the language that isn't actually necessary to complete my dissertation. However, if a feature is already in AspectML, I'll probably leave it in, unless its interactions with the extensions for InforML are not well understood and/or verifying them would require sitting down and doing some involved proofs.
I think I'm converging, and I hope to have a more or less complete tutorial and specification ready by Friday, but we'll see.
I just noticed the other day that Linotype will be releasing a sans-serif version of Palatino in the near future, based upon designs by Zapf and Kobayashi. It is very reminiscent of Optima, but the edges are much softer and more organic. Very nice.
I added a note about this to the Palatino Wikipedia entry.