Archive for theory

Resumption

That was a longer hiatus than I had intended. Partly because not everything went according to plan.

My original plan was that, upon returning from my vacation, I would spend my remaining time at EPFL writing a technical report explaining everything I knew about the problems with Scala Classic. Instead, on my first day back in the office, Martin came by with a draft of a new formal system, DOT (for Dependent Object Types), that he came up with while he was on vacation. After about four weeks I managed to root out pretty much all of the obvious problems with DOT, but another four weeks was not enough to get the proofs and the metatheory into a state that we were happy with. I am not really sure what will happen with DOT in the long term.

There are a few clever things about DOT that avoid some of the problems I encountered with Scala Classic. For example, Scala Classic only had intersection types while DOT has both intersection and union types, which solves some problems with computing the members of types. However, with Scala Classic I was trying to model a minimal subset of the Scala language as it exists, without adding any new features that may never be supported by the full language. I have no idea if we will see union types in the actual Scala implementation anytime soon.

The other thing DOT does is give up the goal of trying to be a minimal subset of Scala and throws out quite a few important things from a user perspective. For example, there are no methods (only λ-abstractions), no existentials, no built-in inheritance or mix-in composition (though you can encode it by hand), and object fields must be syntactically values.

This last restriction is particularly important because it solves by fiat the problems that arose in Scala Classic from using paths that have an empty type in dependent type projections. However, it also means that you may not be able to directly encode some Scala programs into DOT without an effect or termination analysis. Therefore, while DOT has the potential to be a useful intermediary step, there will still be more work to be done to provide a semantics for a more realistic subset of the Scala language.

I have been in Copenhagen for a little over a month now, but I am not really ready to comment on the research I have been doing here yet.  So in the meantime I'll probably focus more on fonts and typography.


Out with the cheese, in with the pastry

I have unfortunately been quite busy the past several months, and have not had as much time to write about what I have been doing as I would have liked.  For the most part, I have been splitting my time between teaching and research.  I will hopefully go into more detail about the latter in the near future.

However, I figured I should take some time now, before I leave for my first proper vacation in a year, to announce that I have accepted a postdoc position at ITU in Copenhagen with Carsten Schürmann.  This has been in the works for a while now, but this week I finally was able to make it official.  My plan is to start at ITU at the beginning of March, where I will be working on things relating to the LF family of logical frameworks, Twelf, and Delphin.

There are probably a vanishingly small number of you that care about what this means for Scala Classic, but I hope to write something much more detailed about it when I get back from my vacation.  I'll leave you in suspense with the short answer: despite all my efforts, it simply is not possible to prove it sound without making it a very different language.  Which is rather unfortunate.


Even better than the real thing

Scala Classic

Several weeks ago I decided that, since the name Featherweight Scala could be confusing given that there is already a different calculus with that name, I would instead call the core calculus I have been working on Scala Classic.

I had hoped to submit a paper on Scala Classic to POPL 2009, but it has been taking too long to work out the metatheory, partly because I am developing a mechanized proof of type soundness in Twelf, and on my first attempt at representing heaps I think I tried to be too clever. However, a more traditional treatment of heaps leads to the need to explicitly represent typing contexts, rather than the implicit treatment of contexts more commonly used in Twelf.

This afternoon I finally finished the proof of the theorem that is traditionally called "substitution".  However, the proof of type preservation will also require another theorem formalizing a substitution-like property for singleton types.  I am hoping that I can prove that theorem more quickly now that I've built up all the explicit context theory.  I have not thought much about whether there are any unusual theorems needed to prove progress.

In any event, I would ideally have the proofs finished by the beginning of August.  My current plan is to try submitting the paper to PLDI (which is even in Europe this year), unless some obviously better venue comes to my attention.

One of the reasons I would really like to have the proofs done by August is that I will be teaching a PhD level course here at EPFL this fall on mechanized specifications and proofs, with a strong emphasis on deductive systems. It also looks like I may fill in some for a masters level course based on TAPL. So between those two things, I am not going to have nearly as much time for research as would be ideal.


Featherweight Scala lives

With everything else I've been having to deal with the past couple months, I have not made nearly as much progress on Featherweight Scala as I would have liked. Fortunately, in the past two weeks I have finally started to get back on track.

Probably what makes working out the design of Featherweight Scala so difficult is that I am trying my best to commit to it being a valid subset of Scala. It seems like whenever I try to simplify one part of the language, it just introduces a complication elsewhere. Still, I think so far that I have been able to ensure that it is a subset with two exceptions.

The first exception is that some programs in Featherweight Scala will diverge in cases where the Scala compiler will generate a terminating program. This is because in programs that attempt to access an uninitialized field, Scala will return null, while Featherweight Scala diverges. Since the program may never attempt to use the null value, it may simply terminate without raising a NullPointerException.
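To see the difference concretely, here is a toy example of my own (not from the Featherweight Scala development): in Scala, a field initializer that reads a field declared later in the class body sees that field's default value (null for reference types) rather than diverging, so the program terminates as long as the null is never dereferenced.

```scala
// My own toy example: reading a not-yet-initialized field during
// construction yields the default value (null for reference types),
// so this program terminates without a NullPointerException.
class Demo {
  val early: String = late  // `late` is not yet initialized here
  val late: String = "ready"
}

object InitDemo {
  def main(args: Array[String]): Unit = {
    val d = new Demo
    println(d.early == null)  // true: the uninitialized read produced null
    println(d.late)           // ready
  }
}
```

In Featherweight Scala, as described above, the access to the uninitialized field would instead diverge.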

The second exception arises because type checking in Featherweight Scala's type system is most likely undecidable. Given that the design is in flux, I haven't precisely verified this, but it seems quite plausible given that its expressive power is similar to languages where type checking is known to be undecidable. Because type checking is undecidable, and the Scala language implementation attempts to have its type checking phase be terminating, there should be some programs that can be shown to be well-typed in Featherweight Scala that the Scala compiler will reject.

I think the other thing I've determined is that I do not understand F-bounded quantification as well as I thought I did. At least, I find myself worried about the cases where the typing rules presently require templates that have not yet been verified to be well-formed to be introduced as hypotheses while verifying their own correctness. So I need to do some additional reading to see how some other languages deal with this issue. Certainly I can put my mind at ease by making the type system more restrictive, but I do not want to be so restrictive that it is no longer reasonable to call the language Featherweight Scala.

Update: Okay, so after looking at Featherweight Generic Java and νObj again more closely, they both do essentially the same kind of thing. In FGJ, the class table is well-formed if all its classes are well-formed, but checking a class's well-formedness may require presupposing that the class table is well-formed. In νObj, the well-formedness of recursive record types is checked under the assumption that the record's self-name has the type of the record. Certainly the same sort of things come up when verifying the well-formedness of recursive functions and recursive types, but at least there what is being assumed is at a higher level of abstraction: when checking a recursive function you assume that the function has some well-formed type, or when checking a recursive type you assume that the type has some well-formed kind.
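The circularity shows up even in everyday uses of F-bounded quantification in Scala. In this minimal sketch of my own (the names MyComparable and Num are hypothetical), checking that Num satisfies the bound MyComparable[Num] already presupposes Num's own declaration:

```scala
// A minimal sketch of F-bounded quantification: the type parameter A
// is bounded by a type that mentions A itself, so checking that Num
// satisfies the bound requires assuming Num's own declaration.
trait MyComparable[A <: MyComparable[A]] {
  def compareTo(that: A): Int
}

class Num(val n: Int) extends MyComparable[Num] {
  def compareTo(that: Num): Int = this.n - that.n
}

object FBoundedDemo {
  def main(args: Array[String]): Unit = {
    println(new Num(3).compareTo(new Num(1)))  // 2
  }
}
```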


A new species

I spent some time last night looking more closely at Carette and Uszkay's new paper on species, as mentioned recently on Lambda the Ultimate. I mostly concentrated on the first few sections and the Haskell bit at the end. They say many tantalizing things, and I had kind of hoped the punchline would be something like "here is how you can write code in a beautifully functional style over graphs and cyclic structures" (assuming I understood the potential of species correctly). Perhaps in a follow-up paper.

I also found it interesting that their dot product operator sounds more like the traditional Cartesian product type (one of each type), and their Cartesian product type sounds much more like an intersection type (the value is a structure having both types). Perhaps it is a notational holdover from previous work on species. It also seems to make for some nice concrete examples of the "locative" aspects of Girard's Ludics.

Now back to reviewing journal and ICFP papers...


An unusual definition of total ordering

One of the things that will show up in the imminently forthcoming Scala 2.7.1 release candidate is the addition of traits for representing equivalence relations, partial orderings, and total orderings. Previously, the trait Ordered was used for representing totally ordered things:

    trait Ordered[A] {
      def compare(that: A): Int
      def < (that: A): Boolean = (this compare that) < 0
      def > (that: A): Boolean = (this compare that) > 0
      def <= (that: A): Boolean = (this compare that) <= 0
      def >= (that: A): Boolean = (this compare that) >= 0
      def compareTo(that: A): Int = compare(that)
    }

However, the Ordered trait does not provide a representation of a total ordering. Therefore, the new trait Ordering:

    trait Ordering[T] extends PartialOrdering[T] {
      def compare(x: T, y: T): Int
      override def lteq(x: T, y: T): Boolean = compare(x, y) <= 0
      override def gteq(x: T, y: T): Boolean = compare(x, y) >= 0
      override def lt(x: T, y: T): Boolean = compare(x, y) < 0
      override def gt(x: T, y: T): Boolean = compare(x, y) > 0
      override def equiv(x: T, y: T): Boolean = compare(x, y) == 0
    }

The tricky part, however, was writing a description of the properties required of something that implements the Ordering trait. When one normally thinks of a total ordering one thinks of a relation that is

  • anti-symmetric,
  • transitive,
  • and total.

The problem is that Ordering is not defined in terms of a binary relation, but a binary function producing integers (compare). If the first argument is less than the second the function returns a negative integer, if they are equal in the ordering the function returns zero, and if the second argument is less than the first the function returns a positive integer. Therefore, it is not straightforward to express these same properties. The best I could come up with was:

  • compare(x, x) == 0, for any x of type T.
  • if compare(x, y) == z and compare(y, x) == w then Math.signum(z) == -Math.signum(w), for any x and y of type T and z and w of type Int.
  • if compare(x, y) == z and compare(y, w) == v and Math.signum(z) >= 0 and Math.signum(v) >= 0 then compare(x, w) == u and Math.signum(z + v) == Math.signum(u), for any x, y, and w of type T and z, v, and u of type Int.

Where Math.signum returns -1 if its input is negative, 0 if its input is 0, and 1 if its input is positive.

The first property is clearly reflexivity. I call the third property transitivity. I am not sure what to call the second property. I do not think a notion of totality is required because it is assumed you will always get an integer back from compare rather than it throwing an exception or going into an infinite loop.

It would probably be a good exercise to prove that these properties on compare hold if and only if lteq (defined above in terms of compare) has all the normal properties of a total ordering.


Focusing and higher-order abstract syntax

I figured I would take a few minutes to point out that Noam Zeilberger had an excellent paper in POPL this year on Focusing and higher-order abstract syntax. I've spent some time in the past trying to better understand Andreoli's material on focusing and Girard's* Ludics from the judgmental perspective that I became accustomed to in my formative years, but I never really came up with anything. I found Noam's computational interpretation of focusing quite pleasing.

One low tech solution that has really been helping my wrists lately is that I obtained an adjustable height desk for my office. The desks that came with my office were definitely far too tall for me (even with my chair at its maximum height).

* Girard seems to have redecorated his website since I last visited.


Type safety and singletons

I am continuing to make progress on the proof of soundness for Featherweight Scala, but the most significant difficulty I am encountering involves the rule that allows paths, expressions of the form x.l1.l2...ln, to be given singleton types:

    Γ ⊢ p : T
    ──────────────
    Γ ⊢ p : p.type

It seems to make defining any sort of useful inversion principle impossible. For example, the term x.l1 where x has a field val l1 : Int = 3 will reduce to 3. In the proof of preservation, you would expect to be able to use inversion to show that if x.l1 is well typed with some type T then x has a field l1 with type T, or at least some subtype of T. However, with singletons it is possible that the type of x.l1 is x.l1.type. In that case, the only thing you can determine by inversion is that x has a field called l1 and that it has some type. Many similar problems arise when determining what would be seemingly straightforward facts, because it is impossible to relate two types in a meaningful way.
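The two ways of typing the same path can be seen in miniature in Scala itself, which supports singleton types p.type for stable paths. In this toy example of my own (the names x, l1, broad, and narrow are hypothetical), the same selection x.l1 is typed once with the field's declared type and once with its singleton type:

```scala
// A toy illustration using Scala's p.type singleton types: the same
// selection x.l1 can be given either the declared type of the field
// or the singleton type x.l1.type.
object x {
  val l1: String = "three"
}

object SingletonDemo {
  val broad: String = x.l1      // typed via the field's declared type
  val narrow: x.l1.type = x.l1  // typed via the singleton-type rule
  def main(args: Array[String]): Unit = {
    // From `narrow: x.l1.type` alone, inversion tells you only that
    // x has a member l1, not what its underlying type is.
    println(broad == narrow)  // true
  }
}
```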

There must be a resolution to the problem, because I know there are other type systems that allow singletons in this fashion, but I have not been able to hit upon the lemma that, once proven, would allow smoothing over the discontinuities singletons introduce. For now I think I will try proving safety with the above typing rule omitted, and see whether there are any other problematic features of the language.


Research continues

My wrists are still not doing so well, so I am trying out using handwriting recognition in the evenings. Blogging is obviously falling by the wayside. So for now, I am mostly focusing on continuing my proof of soundness for Featherweight Scala on paper, which should be helpful if I ever get back to the Coq version of the proof.

This afternoon, I wrote a follow-up to a thread on the TYPES list concerning the applicability of my research on generalizing parametricity. I hope it made some sense. At least I have not yet received a serious criticism of the idea.


Prognostication

There is a thread on Lambda the Ultimate at the moment on predictions for 2008. I will have to remember to look at it again later this year.

Most of the predictions concerning Scala are positive.

I would have to agree with Sean that there have been many years I've heard that concurrency will be the big thing, but I do not feel there have been too many big developments in the area recently. Robin Milner has recently released a draft book on his theory of bigraphs, his newer generic model for concurrency. There is actually an even newer draft of the book, but that is not presently linked from his website, so I am not sure whether I should publish the link.

