Archive for languages

Higher-rank, impredicative polymorphism in Scala

One project that I spent some time thinking about prior to arriving in Lausanne was whether I might be able to put together a Scala library for manipulating data structures with binding, based around the use of parametricity to help enforce the adequacy of function spaces (à la Boxes Go Bananas). However, one problem is that Scala does not have higher-rank polymorphism like GHC. My initial thought was, "well, Scala has higher-rank existential types, so adding the dual case to the typechecker should be pretty straightforward". However, it turns out it is not so straightforward to add universal quantification to the implementation of the subtyping algorithm and ensure that constraints can be solved unambiguously. It still may make sense to try to add them in some fashion.

Later, while working on Featherweight Scala, it occurred to me that subtyping is very likely to be undecidable in Scala because its language of types is essentially as expressive as full F<:. Benjamin agreed that there existed a straightforward reduction to languages with only bounded existential quantification rather than bounded universal quantification, but did not know offhand if there is a paper that gives it.

So I spent a bit of time thinking about encoding universal quantification using existential quantification, and as alluded to in TAPL (the answer to exercise 24.3.3), it can be done, but there is no completely local encoding of universals into existentials. To obtain the classical equivalence between universal quantification and negated existential quantification (∀α.τ if and only if ¬∃α.¬τ) in a constructive setting, you have to use the double-negation encoding, which essentially means CPS converting your program. The other classical equivalence under negation, ∃α.τ implies ¬∀α.¬τ, is constructively valid, which is why existential quantification is macro-expressible in languages with universal quantification.
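Since the constructively valid direction is exactly what justifies encoding existentials in terms of universals, here is a small sketch of that standard CPS-style encoding, ∃α.F[α] ≅ ∀R.(∀α.F[α]→R)→R. This is my own illustration in current Scala (the names ExistsK, Exists, and pack are my inventions), not part of the encoding developed below:

```scala
// Sketch: encoding an existential with universals, ∃α.F[α] ≅ ∀R.(∀α.F[α]→R)→R.
trait ExistsK[F[_], R] {
  def apply[A](fa: F[A]): R // a universally quantified continuation
}

trait Exists[F[_]] {
  def apply[R](k: ExistsK[F, R]): R // the package eliminator
}

object Exists {
  // pack hides the witness type A behind the Exists[F] interface
  def pack[F[_], A](fa: F[A]): Exists[F] = new Exists[F] {
    def apply[R](k: ExistsK[F, R]): R = k(fa)
  }
}

object ExistsDemo {
  // A client can use the package without ever naming the hidden type.
  val packed: Exists[List] = Exists.pack[List, Int](List(1, 2, 3))
  val size: Int = packed(new ExistsK[List, Int] {
    def apply[A](fa: List[A]): Int = fa.length
  })
}
```

The continuation `ExistsK` is itself universally quantified, which is exactly the macro-expansion of the existential; no control effects are needed in this direction.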

Since I am not aware of any papers that illustrate the encoding of universals into existentials, I figured I would explain it here. The encoding of the universal type pretty much follows from the equivalence:

|∀α.τ| = (∃α.|τ|→0)→0

Here I am using 0 for the void or empty type and using the standard constructive encoding of negation:

|¬τ| = |τ| → 0

The term-level encodings are not so difficult, but do require a little thought:

|Λα.e : ∀α.τ|k = k (λx:∃α.|τ|→0.let (α,k') = unpack x in |e : τ|k')

The term encoding has the form |e : τ|k, where τ is the type of the term and k is a continuation threaded through the encoding. The first thing to note is that because type abstractions are values, we always immediately pass them to the continuation. Given that, if we look at the type we need to give the encoding of a type abstraction, it is a function that takes an existentially quantified continuation and never returns. The only way to never return is to use the continuation. This tells us just about everything else we need to know about the encoding.

Because the continuation is existentially quantified, it is not possible to invoke it as is, so we must unpack it first. This is how we simulate the introduction of a type variable without using type abstraction. The next problem is that in order to call the continuation, we need to apply it to something with the correct type. The continuation takes values of type |τ|, possibly referencing a free type variable α, which is fortunately what the encoding of the type abstraction's body gives us. Therefore, we can use the unpacked continuation k' as the continuation argument of the recursive call to the encoding.

Given that we know how to encode type abstractions, encoding type applications is straightforward:

|e[τ] : τ'|k = |e : ∀α.τ'{α/τ}|(λv:(∃α.τ'{α/τ}→0)→0.v (pack(τ,k) as ∃α.(τ'{α/τ}→0)))

We encode the term to be applied and pass it a new continuation that packs up the original continuation and applies it to the function to which the term evaluates. I am writing {α/τ} to mean replace τ with α. I am using curly braces rather than the usual square brackets to avoid confusion with the square brackets commonly used for type application.

While I am almost certain that the encoding above is correct, it would be good at some point to prove it. This is the sort of proof for which Twelf is very well suited, so that is probably the route I would take.

Okay, so that is the theoretical half out of the way. Now, given the above, can we implement higher-rank, impredicative polymorphism in Scala? Yes, with a minor caveat.

First, I'll define an abstract class that defines the type of bounded universally quantified values, and methods for creating and destructing them.

abstract class Universal {
  type Univ[Bound, Body[_]]

  def createUniv[Bound, Body[_]]
    (impl : ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing) :
    Univ[Bound, Body]

  def applyUniv[Bound, Body[_], A <: Bound](arg : Univ[Bound, Body]) : Body[A]
}

This should be fairly straightforward given the encoding described above. The only tricky bit is the use of a higher-kinded type parameter (Body[_]) to give the body of the universal type. The other important point to note is that it is not necessary to completely CPS convert the implementation, because Scala provides primitives for non-local control transfers.

One possible implementation of the above abstract class is the following:

class UniversalImpl extends Universal {
  type Univ[Bound, Body[_]] =
    ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing

  def createUniv[Bound, Body[_]]
    (impl : ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing) :
    Univ[Bound, Body] = impl

  def applyUniv[Bound, Body[_], A <: Bound](univ : Univ[Bound, Body]) : Body[A] = {
    case class Control(arg : Body[A]) extends Throwable
    val res = try {
      univ((arg : Body[A]) => throw new Control(arg))
    } catch {
      case Control(arg) => arg
      case e => throw e
    }
    res
  }
}

The implementation of the abstract type and the code for createUniv are trivial. The implementation of applyUniv is a little more interesting. Here, we create a local case class Control, extending Throwable so that it may be thrown as an exception, that will hold a value of the desired result type. We then just pass the representation of the type abstraction a continuation that throws a new instance of Control. We immediately catch it and return the value stored within. If we happen to catch some other sort of Throwable, we just re-throw it.

And that's it. It is worth looking at a few examples of how this might be used. The first thing that is necessary is to create a concrete implementation of Universal:

val u : Universal = new UniversalImpl


Given u, we can create an instance of the polymorphic identity function:

type Id[T] = T => T
val id =
  u.createUniv[AnyRef, Id](
    (k : (Id[A] => Nothing) forSome { type A }) =>
      k(x => x))
val idString = u.applyUniv[AnyRef, Id, String](id)
val idStringList = u.applyUniv[AnyRef, Id, List[String]](id)
println(idString("Foo"))
println(idStringList(List("Foo", "Bar", "Baz")))

The first thing that needs to be done is to define a type function representing the body of the universally quantified type. For the polymorphic identity function we call it Id[T]. The rest is pretty straightforward. It ought to be possible to make the above code a little less verbose if Scala allowed the type parameters of methods to be curried: it should then be possible to infer the first two type arguments to u.applyUniv, AnyRef and Id, from the argument id. However, because Scala will not be able to infer the last type parameter, we have to give all three. It might also be desirable in some cases not to have to introduce a type definition for the body of a universal type; this could be achieved by extending Scala with support for anonymous type functions.
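For what it is worth, current Scala can already approximate anonymous type functions with a projection on a structural type. The following is my own illustration (the names twice and TypeLambdaSketch are invented), not part of the encoding above:

```scala
// Instead of declaring a named alias like `type Id[T] = T => T`, Scala (2.x)
// can write a type function inline as a structural type projection; a common,
// if verbose, idiom.
object TypeLambdaSketch {
  def twice[F[_]](f: F[Int] => F[Int], x: F[Int]): F[Int] = f(f(x))

  // ({ type L[A] = List[A] })#L plays the role of an "anonymous" type function.
  val result: List[Int] =
    twice[({ type L[A] = List[A] })#L](xs => xs.map(_ + 1), List(1, 2, 3))
}
```

The projection syntax is noisy enough that a dedicated type-lambda syntax would still be a clear improvement.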

Now for that caveat I mentioned. When we defined the type abstraction,

u.createUniv[AnyRef, Id](
  (k : (Id[A] => Nothing) forSome { type A }) =>
    k(x => x))

a significant difference between how this code is written and how it would have been written in terms of my original explanation of the encoding is that there is no explicit unpacking of the existential. All existential unpacking in Scala is implicit. This is nice in some ways, but it means that we have no way to give a name to the type that the existential is hiding. Therefore, when constructing the function to pass to the continuation,

  k(x => x)

it must be the case that Scala can infer the domain of this function's type, because it is not possible to write a type annotation for x without having a name for the existentially hidden type. I think in most cases this should not be a problem, as Scala knows the type that k is expecting as an argument, but there may be some corner cases I have not considered.

(Does anyone know how to prevent WordPress from completely screwing up less than and greater than inside of <pre> tags?)

Comments (4)

Lazy environment variables

I was just thinking that it would be really useful if command-line shells supported lazy environment variables. Lately, because of my work on Scala I will often find myself entering a line something like

export PATH=/home/linuxsoft/apps/java-ibm-1.5/bin:$PATH
...
This is because, despite the write-once promises of Java (well, JVM bytecode), Scala will sometimes fail to build or a test will fail on specific implementations of the Java runtime and VM. I had been doing this so frequently that I finally decided to write some ZSH shell scripts to make it a little less work.

Just having a short macro that does the above for all the various Java runtimes is not ideal, because then my PATH keeps getting longer and longer. ZSH might be smart about this when caching lookups, but it is inelegant. Another solution is to write something that does a search and replace on my PATH as a string. However, the most elegant solution would simply be to not perform expansion on the contents of PATH until it must be passed as part of an exec.

ZSH can do a lot, so maybe it already has some feature that approximates this, but it would be nice if I could just write something like

lazy export PATH=$JAVA_BIN:$PATH
export JAVA_BIN=/home/linuxsoft/apps/java-ibm-1.5/bin
...

And then my scripts could just operate on JAVA_BIN rather than having to modify PATH.

Update: I just noticed that setting the variable is enough for most purposes, but the above concept still seems reasonable.
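In the absence of such a lazy-expansion feature, the closest approximation I can sketch is to keep the runtime directory in its own variable and rebuild PATH from a fixed base whenever it changes. The names here (BASE_PATH, use_java, and the directories) are my own invented examples:

```shell
# Sketch: approximate a "lazy" PATH by rebuilding it from a fixed base,
# so switching Java runtimes replaces the entry instead of prepending.
BASE_PATH="/usr/bin:/bin"

use_java() {
  JAVA_BIN="$1"
  PATH="$JAVA_BIN:$BASE_PATH"
  export PATH
}

use_java "/opt/java-ibm-1.5/bin"
use_java "/opt/java-sun-1.6/bin"   # replaces the previous runtime entry
echo "$PATH"
```

This keeps PATH from growing, at the cost of having to route every change through the helper function.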

Comments (2)

Focusing and higher-order abstract syntax

I figured I would take a few minutes to point out that Noam Zeilberger had an excellent paper in POPL this year on focusing and higher-order abstract syntax. I've spent some time in the past trying to better understand Andreoli's material on focusing and Girard's* Ludics from the judgmental perspective that I became accustomed to in my formative years, but I never really came up with anything. I found Noam's computational interpretation of focusing quite pleasing.

One low-tech solution that has really been helping my wrists lately is that I obtained an adjustable-height desk for my office. The desks that came with my office were definitely far too tall for me (even with my chair at its maximum height).

* Girard seems to have redecorated his website since I last visited.


More notes on Featherweight Scala

As I mentioned the other day, Featherweight Scala has a typing rule that allows assigning a path (x.l1...ln) a singleton type:

Γ ⊦ p : T
Γ ⊦ p : p.type

(I should note that for some stupid reason WordPress seems to prevent MathML from working, making the rather pleasant <mfrac> tag useless). This is essentially the rule that everyone else uses, but in all the other literature I've looked at (other than νObj) the singleton is labeled with T (something like ST(p)) or T must have a specific type (int, base, etc.).
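For comparison, the rule is visible in actual Scala whenever a stable path is ascribed its own singleton type. This is my own small example, not from the Featherweight Scala paper:

```scala
// A small illustration of the singleton typing rule in Scala itself:
// the stable path `p` can be given both its ordinary type and `p.type`.
object SingletonSketch {
  object p { val l: Int = 1 }

  val q: p.type = p               // Γ ⊦ p : p.type
  val widened: AnyRef = q         // and p.type <: the ordinary type of p
  val x: Int = q.l                // members remain accessible through q
}
```
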

As I pointed out, Featherweight Scala's version of the rule makes it difficult to write meaningful inversion lemmas. Secondly, the naïve subject reduction theorem that could be obtained would be something like:

If Γ ⊦ t : T and t → t' then there exists some T' such that Γ ⊦ t' : T'.

Which feels rather unsatisfactory. Given the subsumption rule for singletons

Γ ⊦ p : T
Γ ⊦ p.type <: T

It is plausible that a variant of the theorem where T is a subtype of T' would hold, but that would run counter to the expectation that reduction will only allow us to more precisely characterize the type of the result. Thinking about it, I suppose it might be reasonable to state the theorem as

If Γ ⊦ t : T and t → t' then there exists some T' such that Γ ⊦ t' : T' and
if there exists some p such that T = p.type then Γ ⊦ p : T', otherwise Γ ⊦ T' <: T.

Which I am starting to believe is the best that could be proven. This theorem nicely captures the discontinuities in subject reduction introduced by singleton types. You keep making steps getting a more and more precise type, and then suddenly you "jump" from one type to another, followed by some period where the type becomes more refined, then another "jump", and so forth.

These discontinuities are quite a problem, because it means that after taking a step it may not be possible in some cases to put the resulting term back into an evaluation context and remain well-typed.

νObj [1], the predecessor to Featherweight Scala, has nearly the same problem, but the language has been constructed just right so that typing always becomes more precise. The same changes cannot be made to Featherweight Scala and retain the property that the language is a subset of Scala. Aspinall's original work [2] on singleton types makes use of an erasure predicate, but his singletons are labeled with a type, making its definition trivial. Stone and Harper's work on singleton types [3] solves the problem by using a subject reduction theorem that looks roughly like

If Γ ⊦ t : T and t → t' then that Γ ⊦ t = t' : T.

but I am not sure there is a pleasant notion of term equality in Featherweight Scala, because all reductions either consult the heap or extend it. Singletons are also slightly more tractable in their setting because they require any term being used as a singleton type to have the base type "b".

[1] Martin Odersky, Vincent Cremet, Christine Röckl, and Matthias Zenger. A Nominal Theory of Objects with Dependent Types. Technical report, 2002.
[2] David Aspinall. Subtyping with singleton types. CSL 1994.
[3] Christopher Stone and Robert Harper. Extensional equivalence and singleton types. ACM TOCL, 2006.

Comments (1)

On Arc

From Paul Graham's Arc release announcement:

Over the years my appreciation for lists has increased. In exploratory programming, the fact that it's unclear what a list represents is an advantage, because you yourself are unclear about what type of program you're trying to write. The most important thing is not to constrain the evolution of your ideas. So the less you commit yourself in writing to what your data structures represent, the better.

Clearly Paul has been living in a cave for the past thirty-five years and did not hear about those quaint ideas, abstract data types and representation independence.

He seems to miss the obvious point that using a list to represent a point is quite constraining: you are forced to make one coordinate the first element of the list and the other coordinate the second, and if you ever want to change that, good luck. I suppose his rebuttal would be »oh, it just takes too long or too many tokens to define an ADT for points«.
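To make the contrast concrete, here is a minimal sketch (my own example, not from Arc) of a point as an abstract data type whose representation can change without touching clients:

```scala
// A point as an ADT: clients see only the interface, so the Cartesian
// representation could be swapped for a polar one without breaking anything.
trait Point {
  def x: Double
  def y: Double
}

object Point {
  // The representation is private to this object.
  private final case class Cartesian(x: Double, y: Double) extends Point
  def apply(x: Double, y: Double): Point = Cartesian(x, y)
}

object PointDemo {
  val p = Point(1.0, 2.0)
  val sum = p.x + p.y   // independent of how Point is represented
}
```

The "too many tokens" rebuttal does not buy much here; the definition is a handful of lines.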

I really wonder how long it is going to take people to truly learn that telling the compiler about the structure of the data you work with is just as important as telling it about the control flow.

Comments (7)


There is a current thread on Lambda the Ultimate at the moment on predictions for 2008. I will have to remember to look at it again later this year.

Most of the predictions concerning Scala are positive.

I would have to agree with Sean that for many years I've heard that concurrency will be the big thing, but I do not feel there have been many big developments in the area recently. Robin Milner has recently released a draft book on his theory of bigraphs, his newer generic model for concurrency. There is actually an even newer draft of the book, but that is not presently linked from his website, so I am not sure whether I should publish the link.



I gave a mild promise of more programming-languages-related material after depositing my dissertation, but so far I have not really come through on that. I am not sure how much I will cover in this particular posting, but we'll see.

Firstly, I have been merging material on generalized parametricity that I revised and expanded as part of my dissertation back into a journal version of my LICS 2005 paper. My current plan is to submit it to LMCS.

I have just barely started on creating a Coq formalization of Featherweight Scala. It is something that could probably be done in Twelf, but I am trying to believe that Coq is more modular at the moment. However, I am currently feeling somewhat disillusioned, because I am told that defining part of an AST like the following

Inductive exp : Set :=
| exp_tuple : list exp -> exp

where list is the list datatype from Coq's standard library, will not allow one to prove the sorts of theorems one would like. Instead, I am told I should write a mutually inductive AST, something like the following

Inductive exp : Set :=
| exp_tuple : exp_list -> exp
with exp_list : Set :=
| exp_list_nil : exp_list
| exp_list_cons : exp -> exp_list -> exp_list.

Doing things this way almost negates the benefits one would assume you get from having polymorphism in Coq. Now I am told for the latter definition it is possible to make clever use of Coq's coercions to convert exp_lists back and forth between lists, but I'll have to see how well this works for Featherweight Scala. Maybe, maybe Delphin will be usable someday.

My main project so far, other than getting settled in, dealing with Swiss bureaucracy, and preparing a talk on generalized parametricity for the Tresor seminar, has been to start thinking about, and to soon start implementing, virtual classes for Scala. Virtual classes are kind of strange things; I did not really understand them until I started working on the problem here at EPFL. The idea is basically that you define some »module« class

class A {
  class Inner <: { val x : Int = 42 }
}

with an inner class, and then you may inherit from the module class and override the inner class. The idea is for that to look something like the following in Scala

class B extends A {
  class Inner <: { val y : Int = -42 }
}

Here B.Inner has both a member x and a member y. Things become more interesting when you include inheritance from the inner class

class A {
  class Inner <: { val x : Int = 42 }
  class Innie extends Inner { val z : Int = 0 }
}

class B extends A {
  class Inner <: { val y : Int = -42 }
}

Now B.Innie also has a member y because we have overridden its parent class. The idea is that for Scala, this kind of behavior can just be treated as complex syntactic sugar for abstract type members and trait composition. However, it is getting late here, so I'll talk more about that some other time.
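As a teaser, here is a rough sketch of the flavor of that desugaring using abstract type members and trait mixins. This is my own guess at an encoding in current Scala (the names AInner, BInner, and VirtualDemo are invented), not the actual scheme the implementation will use:

```scala
// Sketch: the virtual-class example expressed with an abstract type member
// whose bound is narrowed in the subclass, plus trait inheritance.
trait A {
  type Inner <: AInner
  trait AInner { val x: Int = 42 }
}

trait B extends A {
  type Inner <: BInner                       // override with a narrower bound
  trait BInner extends AInner { val y: Int = -42 }
}

object VirtualDemo extends B {
  type Inner = BInner
  val i: Inner = new BInner {}               // has both x and y
}
```

The real desugaring has to handle inner-class inheritance like Innie as well, which is where the trait-composition machinery earns its keep.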

Comments (6)

In Helvetica now

I have many things I would like to write about, but I figured I would take a few seconds to express my annoyance that, because my hostname now ends in ».ch«, Google automatically assumes that I want my interface and searches to be in German by default. This probably also really annoys the third of the country where German is not the primary language.

Comments (10)

I come bearing links

While I have been in hiding, toiling on my dissertation, a rather large number of links have been accumulating in my to-do list. I'll hopefully write more on the status of my dissertation soon.


A few weeks ago I became frustrated with the capabilities of FontBook on Mac OS X and Gnome Specimen and gucharmap under X11. I spent some time searching and came across Linotype's FontExplorer X. My best description of it would be iTunes for fonts. Not only does it let you manage, examine, and compare the fonts you do have, it also lets you preview and purchase fonts from Linotype.

I also came across the UnicodeChecker application (again, Mac OS X only), which perhaps provides a more detailed examination and exploration of the properties of Unicode glyphs.


I'm not sure where I read about it, but I recently came across, "The Online Encyclopedia of Western Signs and Ideograms". I haven't needed to consult it yet, but it is fairly interesting just to see what it gives you when asking for a random sign.

I've known about this for a while, but didn't think to mention it until now: the entirety of the fifteenth edition of the Chicago Manual of Style can be found online. Useful when you don't want to carry around the hardcopy, and searchable. Unfortunately, looking at it again now, the fact that it says »University of Pennsylvania« in the corner makes me believe that this is probably not a free service. And indeed, I just verified this with my desktop, which does not have a cached cookie from inside the Penn network.


From Lambda the Ultimate, I learned about a paper giving a functional description of the »galleys« layout algorithm used by the lout document processing system.

This also reminded me of the earlier post, also by Neel, on a functional description of TeX's formula layout algorithm.

I also went looking and found that Kingston has a nice paper on the design of lout, and a website on the eventual successor to lout, Nonpareil. lout is an interesting system, though for some reason when I first encountered it many years ago I didn't think it produced as nice output as TeX. I haven't tried to reevaluate this at the moment to see whether it has changed or whether I was simply wrong in the first place.

Also, something else on Lambda the Ultimate led to me to discover the Juno-2 structured drawing editor. I've only had a chance to read about it briefly, but it sounds quite a bit like what I have envisioned for the font editor I've been thinking about for a while, but haven't found the time to seriously start yet.

Finally, while writing this section, I came across another Lambda the Ultimate post on Skribe, a Scheme-based document formatting system. However, it seems to be more of a macro layer on top of HTML and LaTeX rather than a fully integrated system.

Comments (2)

Linkage dump

From Daring Fireball:

Typographica has some discussion of a new serifed typeface, Gina.

I wish I had known about OcaIDE, a very promising Eclipse plugin for OCaml, before starting the ICFP Programming Contest. If nothing else, it would have saved a bunch of time spent looking up standard library functions on the web.

Comments (3)
