Archive for March, 2008

Gentzen Symbol released

I just finished hastily packaging up a prototype release of Gentzen Symbol (I can't avoid working on my ICFP paper all day).  I would of course be interested in any feedback or problems people might have if they try using it, or suggestions on how to improve the design of the glyphs for future releases or development.


Hands on with Fonts & Encodings (finally)

In September, I mentioned being quite excited by the new O'Reilly book Fonts & Encodings. Today, I was tempted to buy it from the campus bookstore at EPFL, but decided to do a little research because it would be nearly a 60% markup if I bought it here versus in the US. Fortunately, the EPFL library had purchased a copy which was not checked out.

Strangely enough, it is described as being translated from French by Scott Horne, but as far as I can tell there is no French edition (and if there is, I find it strange that the EPFL library and bookstore had only the English edition). Perhaps only Yannis's original manuscript was in French.

I've only just begun to delve into it (at 1017 pages it will take some time to review in depth), but if you are serious about typography I think this is a book you will definitely want to own. It really covers the entire spectrum: Unicode text and how it works, through setting up fonts to display your text in operating systems and software, all the way to designing, editing, and hinting fonts. I haven't looked at it in enough depth to be absolutely certain, but I am pretty sure it has nearly enough information on most formats that you could write software for them as well. It is truly a wondrous tome.

I will have to see what more I have to say once I've spent more time with it. One reason I thought of picking it up today was that it has quite a nice introduction to using METATYPE1, which with some luck I may use to start on a true meta-font for Gentzen Symbol. At the same time I guess I will try to package up the Type 1 PostScript version of Gentzen Symbol from my dissertation, assuming that there is anyone out there truly interested in using it in their own documents.


Best quote from a paper this week

Programming in OMeta would be very frustrating if all productions were defined in the same namespace: two grammars might unknowingly use the same name for two productions that have different purposes, and one of them would certainly stop working! (Picture one sword-wielding grammar decapitating another, Highlander-style: “There can be only one!”)

From Warth and Piumarta's OMeta: an Object-Oriented Language for Pattern Matching.


Correction on existential unpacking

(WordPress ate my first draft, grrr.)

So after thinking about it further, I was incorrect, and it is possible to explicitly unpack existentials in Scala. As with some other languages, it is done via pattern matching. The following example illustrates how:

  val x : List[T] forSome { type T } = List(42)
  val w = x match { case y : List[u] => ((z : u) => z)(y.head) }

However, in practice this functionality seems to be rather fragile. For example, the following two variations are rejected:

  val x : T forSome { type T } = 42
  val w = x match { case y : u => ((z : u) => z)(y) }

and

  val x : List[List[T]] forSome { type T } = List(List(42))
  val w = x match { case y : List[List[u]] => ((z : u) => z)(y.head.head) }

In both cases the compiler reports that it cannot find the type u. In the second case, this could be attributed to erasure, as there is no way dynamically to guarantee that the contents of the List are in turn also a List. However, the first case seems reasonable, so it should probably be reported as a bug.
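
That said, I would expect the container trick from the first, working example to carry over to the bare-existential case: wrapping the value in a parameterized container before matching gives the type variable something to attach to. A sketch of that idea (Box is just an illustrative name of my own; I have not verified this against the compiler):

  case class Box[T](value : T)
  val x : Box[T] forSome { type T } = Box(42)
  val w = x match { case y : Box[u] => ((z : u) => z)(y.value) }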


Higher-rank, impredicative polymorphism in Scala

One project that I spent some time thinking about prior to arriving in Lausanne was whether I might be able to put together a Scala library for manipulating data structures with binding, based around the use of parametricity to help enforce the adequacy of function spaces (à la Boxes Go Bananas). However, one problem is that Scala does not have higher-rank polymorphism like GHC. My initial thought was, "well, Scala has higher-rank existential types, so the dual case should be pretty straightforward to add to the typechecker". However, it turns out that it is not so straightforward to add universal quantification to the implementation of the subtyping algorithm and ensure that constraints can be solved unambiguously. It may still make sense to try to add them in some fashion.

Later, while working on Featherweight Scala, it occurred to me that subtyping is very likely to be undecidable in Scala, because its language of types is essentially as expressive as full F&lt;:. Benjamin agreed that there is a straightforward reduction from full F&lt;: to a language with only bounded existential quantification rather than bounded universal quantification, but did not know offhand of a paper that gives it.

So I spent a bit of time thinking about encoding universal quantification using existential quantification, and, as alluded to in TAPL (the answer to exercise 24.3.3), it can be done, but there is no completely local encoding of universals into existentials. To obtain the classical equivalence between universal quantification and negated existential quantification (reading ∀α.τ as ¬∃α.¬τ) in a constructive setting, you have to use the double-negation encoding, which essentially means CPS converting your program. The dual implication, that ∃α.τ implies ¬∀α.¬τ, is constructively valid, which is why existential quantification is macro-expressible in languages with universal quantification.

Since I am not aware of any papers that illustrate the encoding of universals into existentials, I figured I would explain it here. The encoding of the universal type pretty much follows from the equivalence:

|∀α.τ| = (∃α.|τ|→0)→0

Here I am using 0 for the void or empty type and using the standard constructive encoding of negation:

|¬τ| = |τ| → 0
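
For instance, instantiating the first equation at the type of the polymorphic identity function (and assuming the encoding leaves type variables and ordinary arrows unchanged, which the definitions above do not spell out) gives:

|∀α.α→α| = (∃α.((α→α)→0))→0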

The term-level encodings are not so difficult, but do require a little thought:

|Λα.e : ∀α.τ|k = k (λx:∃α.|τ|→0.let (α,k') = unpack x in |e : τ|k')

The term encoding has the form |e : τ|k, where τ is the type of the term and k is a continuation threaded through the encoding. The first thing to note is that because type abstractions are values, we always immediately pass them to the continuation. Given that, if we look at the type we need to give the encoding of a type abstraction, it is a function that takes an existentially quantified continuation and never returns. The only way to never return is to use the continuation. This tells us just about everything else we need to know about the encoding.

Because the continuation is existentially quantified, it is not possible to invoke it as is, so we must unpack it first. This is how we simulate the introduction of a type variable without using type abstraction. The next problem is that in order to call the continuation, we need to apply it to something with the correct type. The continuation takes values of type |τ|, possibly referencing a free type variable α, which is fortunately what the encoding of the type abstraction's body gives us. Therefore, we can use the unpacked continuation k' as the continuation argument of the recursive call to the encoding.

Given that we know how to encode type abstractions, encoding type applications is straightforward:

|e[τ] : τ'|k = |e : ∀α.τ'{α/τ}|(λv:(∃α.τ'{α/τ}→0)→0.v (pack(τ,k) as ∃α.(τ'{α/τ}→0)))

We encode the term to be applied and pass it a new continuation that packs up the original continuation (together with the witness type τ) and hands the package to the function to which the term evaluates. I am writing {α/τ} to mean "replace τ with α". I am using curly braces rather than the usual square brackets to avoid confusion with the square brackets commonly used for type application.
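
For concreteness (and again assuming type variables and arrows encode to themselves), instantiating this rule at a term id : ∀α.α→α applied to the type int gives:

|id[int] : int→int|k = |id : ∀α.α→α|(λv:(∃α.((α→α)→0))→0.v (pack(int,k) as ∃α.((α→α)→0)))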

While I am almost certain that the encoding above is correct, it would be good at some point to prove it. This is the sort of proof for which Twelf is very well suited, so that is probably the route I would take.

Okay, so that is the theoretical half out of the way. Now, given the above, can we implement higher-rank impredicative polymorphism in Scala? Yes, with a minor caveat.

First, I'll define an abstract class that defines the type of bounded universally quantified values, and methods for creating and destructing them.

  abstract class Universal {
    type Univ[Bound,Body[_]]

    def createUniv[Bound,Body[_]]
      (impl : ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing) :
      Univ[Bound,Body]

    def applyUniv[Bound,Body[_],A<:Bound](arg : Univ[Bound,Body]) : Body[A]
  }

This should be fairly straightforward given the encoding described above. The only tricky bit is the use of a higher-kinded type parameter (Body[_]) to give the body of the universal type. The other important point to note is that it is not necessary to completely CPS convert the implementation, because Scala provides primitives for non-local control transfers.

One possible implementation of the above abstract class is the following:

  class UniversalImpl extends Universal {
    type Univ[Bound,Body[_]] =
      ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing

    def createUniv[Bound,Body[_]]
      (impl : ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing) :
      Univ[Bound,Body] = impl

    def applyUniv[Bound,Body[_],A<:Bound](univ : Univ[Bound,Body]) : Body[A] = {
      case class Control(arg : Body[A]) extends Throwable
      val res = try {
        univ((arg:Body[A]) => throw new Control(arg))
      } catch {
        case Control(arg) => arg
        case e => throw e
      }
      res
    }
  }

The implementation of the abstract type and the code for createUniv are trivial. The implementation of applyUniv is a little more interesting. Here, we create a local case class Control, extending Throwable so that it may be thrown as an exception, that will hold a value of the desired result type. We then just pass the representation of the type abstraction a continuation that throws a new instance of Control. We immediately catch it and return the value stored within. If we happen to catch some other sort of Throwable, we just re-throw it.

And that's it. It is worth looking at a few examples of how this might be used. The first thing that is necessary is to create a concrete implementation of Universal:

  val u : Universal = new UniversalImpl

Given u, we can create an instance of the polymorphic identity function:

  type Id[T] = T => T
  val id =
    u.createUniv[AnyRef,Id](
      (k: (Id[A] => Nothing) forSome { type A }) =>
        k(x => x))
  val idString = u.applyUniv[AnyRef,Id,String](id)
  val idStringList = u.applyUniv[AnyRef,Id,List[String]](id)
  println(idString("Foo"))
  println(idStringList(List("Foo", "Bar", "Baz")))

The first thing that needs to be done is to define a type function to represent the body of the universally quantified type. For the polymorphic identity function we call it Id[T]. The rest is pretty straightforward. It ought to be possible to make the above code a little less verbose if Scala allowed type parameters to methods to be curried: it should be possible for it to infer that the first two arguments to u.applyUniv are AnyRef and Id from the argument id. However, because it will not be able to infer the last type parameter, we have to give all three. It might also be desirable in some cases not to have to give a type definition for the body of a universal type; this could be achieved by extending Scala with support for anonymous type functions.
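
In the meantime, a partial workaround is to wrap a universal value in a small helper class that fixes the bound and the body once, so that only the instantiation type has to be supplied at each use site. The following is only a sketch of that idea; UnivApp and its method at are hypothetical names of my own, not part of the code above:

  class UnivApp[Bound, Body[_]](univ : u.Univ[Bound, Body]) {
    // Bound and Body are fixed here; only the instantiation type A is given per use.
    def at[A <: Bound] : Body[A] = u.applyUniv[Bound, Body, A](univ)
  }

  val idApp = new UnivApp[AnyRef, Id](id)
  val idString2 = idApp.at[String]
  println(idString2("Foo"))

This does not give true curried type parameters, of course; it just stages the instantiation through an extra value, at the cost of naming the bound and the body one more time.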

Now for that caveat I mentioned. When we defined the type abstraction,

  u.createUniv[AnyRef,Id](
    (k: (Id[A] => Nothing) forSome { type A }) =>
      k(x => x))

a significant difference between how this code is written and how it would have been written in terms of my original explanation of the encoding is that there is no explicit unpacking of the existential. All existential unpacking in Scala is implicit. This is nice in some ways, but it means that we have no way to give a name to the type that the existential is hiding. Therefore, when constructing a function to pass to the continuation,

  k(x => x)

it must be the case that Scala can infer the domain of this function's type, because it is not possible to write a type annotation for x without having a name for the existentially hidden type. I think in most cases this should not be a problem, as Scala knows the type that k is expecting as an argument, but there may be some corner cases I have not considered.

(Does anyone know how to prevent WordPress from completely screwing up less than and greater than inside of <pre> tags?)


The Esszet

The blog at Hoefler & Frere-Jones has a nice little piece on the German glyph ß, which in turn has a link to a much more detailed article on Typefoundry.


Lazy environment variables

I was just thinking that it would be really useful if command-line shells supported lazy environment variables. Lately, because of my work on Scala I will often find myself entering a line something like

  export PATH=/home/linuxsoft/apps/java-ibm-1.5/bin:$PATH
  ...

This is because, despite the write-once promises of Java (well, JVM bytecode), Scala will fail to build or a test will fail on specific implementations of the Java runtime and VM. I have been doing this so frequently that I finally decided to write some ZSH shell scripts to make it a little less work.

Just having a short macro that does the above for all the various Java runtimes is not ideal, because then my PATH keeps getting longer and longer. ZSH might be smart about this when caching lookups, but it is inelegant. Another solution is to write something that does a search and replace on my PATH as a string. However, the most elegant solution would simply be to not perform expansion on the contents of PATH until it must be passed as part of an exec.

ZSH can do a lot, so maybe it already has some feature that approximates this, but it would be nice if I could just write something like

  lazy export PATH=$JAVA_BIN:$PATH
  export JAVA_BIN=/home/linuxsoft/apps/java-ibm-1.5/bin
  ...

And then my scripts can just operate on JAVA_BIN rather than having to modify PATH directly.

Update: I just noticed that setting the variable JAVACMD is enough for most purposes, but the above concept still seems reasonable.
