Archive for languages

Public access LogiQL

We now have a web-based tutorial for LogiQL, the descendant of Datalog we have been developing at LogicBlox. Given my high standards, I feel we still have more work to do on the language. However, as our newly revised website points out, it is already being used successfully to solve very real business problems.

Comments

Resumption

That was a longer hiatus than I had intended. Partly because not everything went according to plan.

My original plan was that, upon returning from my vacation, I would spend my remaining time at EPFL writing a technical report explaining everything I knew about the problems with Scala Classic. Instead, on my first day back in the office, Martin came by with a draft of a new formal system, DOT (for Dependent Object Types), that he came up with while he was on vacation. After about four weeks I managed to root out pretty much all of the obvious problems with DOT, but another four weeks was not enough to get the proofs and the metatheory into a state that we were happy with. I am not really sure what will happen with DOT in the long term.

There are a few clever things about DOT that avoid some of the problems I encountered with Scala Classic. For example, Scala Classic only had intersection types, while DOT has both intersection and union types, which solves some problems with computing the members of types. However, with Scala Classic I was trying to model a minimal subset of the Scala language as it exists, without adding any new features that may never be supported by the full language. I have no idea if we will see union types in the actual Scala implementation anytime soon.

The other thing DOT does is give up the goal of being a minimal subset of Scala: it throws out quite a few things that are important from a user's perspective. For example, there are no methods (only λ-abstractions), no existentials, no built-in inheritance or mix-in composition (though you can encode it by hand), and object fields must be syntactically values.

This last restriction is particularly important because it solves by fiat the problems that arose in Scala Classic from using paths that have an empty type in dependent type projections. However, it also means that you may not be able to directly encode some Scala programs into DOT without an effect or termination analysis. Therefore, while DOT has the potential to be a useful intermediate step, there will still be more work to be done to provide a semantics for a more realistic subset of the Scala language.

I have been in Copenhagen for a little over a month now, but I am not really ready to comment on the research I have been doing here yet.  So in the meantime I'll probably focus more on fonts and typography.

Comments (1)

Out with the cheese, in with the pastry

I have unfortunately been quite busy the past several months, and have not had as much time to write about what I have been doing as I would have liked.  For the most part, I have been splitting my time between teaching and research.  I will hopefully go into more detail about the latter in the near future.

However, I figured I should take some time now, before I leave for my first proper vacation in a year, to announce that I have accepted a postdoc position at ITU in Copenhagen with Carsten Schürmann.  This has been in the works for a while now, but this week I finally was able to make it official.  My plan is to start at ITU at the beginning of March, where I will be working on things relating to the LF family of logical frameworks, Twelf, and Delphin.

There is probably only a vanishingly small number of you who care about what this means for Scala Classic, but I hope to write something much more detailed about it when I get back from my vacation.  I won't leave you entirely in suspense, though; the short answer is that, despite all my efforts, it simply is not possible to prove it sound without making it a very different language.  Which is rather unfortunate.

Comments (4)

The other Scala

A post by James Iry on the Scala mailing list today reminded me of the existence of FF Scala (along with its companion FF Scala Sans). I haven't received confirmation, but I suspect that Programming in Scala does not, unfortunately,  use any members of the FF Scala family. I was sorely tempted to purchase the fonts, until I saw the prices. Maybe someday.

And there are plenty of other Scalas about. In San Francisco:

Alas, we did not succeed in getting Martin to eat there while he was at JavaOne. Photo courtesy of Robby Findler.

Comments

Modules in Scala

I just saw a thread on Lambda the Ultimate where I think the expressive power of Scala in comparison to Standard ML's module system was misrepresented. I don't want to go into all of the issues at the moment, but I figured I would point out that in Scala you can get structural typing, opaque sealing, and even the equivalent of SML's where type clause.

For example, consider the following SML signature:

    signature Nat = sig
      type t
      val z: t
      val s: t -> t
    end

This signature can be translated into Scala as:

    type Nat = {
      type T
      val z: T
      def s(arg: T): T
    }

It is then possible to create an implementation of this type, and opaquely seal it (hiding the definition of T). In SML:

    structure nat :> Nat = struct
      type t = int
      val z = 0
      fun s n = n + 1
    end

In Scala:

    val nat : Nat = new {
      type T = Int
      val z = 0
      def s(arg: Int) = arg + 1
    }

In many cases when programming with SML modules it is necessary or convenient to give a module a signature that reveals the definition of an abstract type. In the above example, this can be done by adding a where type clause to the first line:

    structure nat :> Nat where type t = int = struct
      ...

We can do the same thing in Scala using refinements:

    val nat : Nat { type T = Int } = new {
      ...

Great, right? Well, almost. The problem is that structural types are still a bit buggy in the Scala compiler at present. So, while the above typechecks, you can't quite use it yet:

    scala> nat.s(nat.z)
    java.lang.NoSuchMethodException: $anon$1.s(java.lang.Object)
            at java.lang.Class.getMethod(Class.java:1581)
            at .reflMethod$Method1(<console>:7)
            at .<init>(<console>:7)
            at .<clinit>(<console>)
            at RequestResult$.<init>(<console>:3)
            at RequestResult$.<clinit>(<console>)
            at RequestResult$result(<console>)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflec...

Some issues were also raised about how faithfully one can encode SML functors, and the well-known extensions for higher-order functors, in Scala. Off the top of my head it is not entirely clear, so I need to think about it more before I write up some proper examples.
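In the meantime, here is a rough sketch (mine, not something worked out in that thread) of how a simple first-order functor can be encoded as a class whose constructor takes the argument module as a value. The names DoubleStep and nat2 are purely illustrative, and calls through the structural type will of course run into the same reflection bug shown above:

    // Sketch of a first-order functor encoding.  The SML functor
    //
    //   functor DoubleStep (N : Nat) = struct
    //     type t = N.t
    //     val z = N.z
    //     fun s n = N.s (N.s n)
    //   end
    //
    // becomes a class parameterized by a module value, with the
    // path-dependent type N.T playing the role of N.t:
    class DoubleStep(val N: Nat) {
      type T = N.T
      val z: T = N.z
      def s(arg: T): T = N.s(N.s(arg))
    }

    // "Functor application" is then just instantiation:
    val nat2 = new DoubleStep(nat)

Whether this scales to higher-order functors and sharing constraints is exactly the part I am less sure about.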

Comments (7)

Revisiting higher-rank impredicative polymorphism in Scala

I've been meaning to write this up for a while, but it seems like there has always been something else I really ought to be doing. So I expect this will be a bit more terse than I might like. Anyway, when I wrote about encoding higher-rank universal quantification in Scala back in March, I used a rather elaborate scheme involving Scala's first-class existential quantifiers. While that was the point of the exercise, surprisingly no one called me on the fact that, if you just want higher-rank impredicative polymorphism in Scala, there is a much simpler encoding. Maybe it was obvious to everyone, or maybe no one read closely enough to think of raising the issue. So today, I'll explain the better encoding.

First we can define an infinite family of traits to represent n-ary universal quantification, much like Scala represents n-ary function types:

    trait Univ1[Bound1, Body[_]] {
      def Apply[T1 <: Bound1]: Body[T1]
    }

    trait Univ2[Bound1, Bound2, Body[_, _]] {
      def Apply[T1 <: Bound1, T2 <: Bound2]: Body[T1, T2]
    }

    // ... and so on for n > 2

Really, the key to this encoding is the use of higher-kinded quantification to encode the binding structure of the quantifier.

Now it is possible to write some examples similar to what I gave previously, but more concisely:

    object Test extends Application {
      // polymorphic identity method; named ident so that it does not
      // clash with the id value defined below
      def ident[T](x: T) = x

      type Id[T] = T => T
      val id = new Univ1[Any, Id] {
        def Apply[T <: Any]: Id[T] = ident[T] _
      }

      val idString = id.Apply[String]
      val idStringList = id.Apply[List[String]]

      println(idString("Foo"))
      println(idStringList(List("Foo", "Bar", "Baz")))

      type Double[T] = T => (T, T)
      val double = new Univ1[Any, Double] {
        def Apply[T <: Any]: Double[T] = (x: T) => (x, x)
      }

      val doubleString = double.Apply[String]
      val doubleStringList = double.Apply[List[String]]

      println(doubleString("Foo"))
      println(doubleStringList(List("Foo", "Bar", "Baz")))
    }

As I mentioned previously, this example would be much improved by support for anonymous type functions in Scala. I am pretty sure Scala will eventually support them, as they would not require any deep changes to the implementation. They could be implemented simply by desugaring to a higher-kinded type alias with a fresh name, though depending on when that desugaring is performed it might result in poor error messages. Supporting curried type functions is also quite desirable, but given my current knowledge of the internals, it seems like adding them will require more elaborate changes.
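For what it is worth, that desugaring can already be done by hand, inline, using a refinement with a named higher-kinded member and a type projection (the "type lambda" idiom). The following is only a sketch using the Double example from above; the names double2 and L are mine, and how cleanly it compiles may depend on your compiler version:

    // Sketch: the anonymous type function "[T] => T => (T, T)" written as a
    // refinement { type L[T] = T => (T, T) } and projected back out with #L.
    val double2 = new Univ1[Any, ({ type L[T] = T => (T, T) })#L] {
      def Apply[T <: Any]: T => (T, T) = (x: T) => (x, x)
    }

    val doubleString2 = double2.Apply[String]
    println(doubleString2("Foo"))

It gets unreadable quickly, which is exactly why proper anonymous type functions would be welcome.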

I think Vincent Cremet was the first person to suggest this sort of encoding, and I vaguely recall reading about it on one of the Scala mailing lists, but I could not find the message after a little bit of time spent searching.

Comments (4)

A new species

I spent some time last night looking more closely at Carette and Uszkay's new paper on species, as mentioned recently on Lambda the Ultimate. I mostly concentrated on the first few sections and the Haskell bit at the end. They say many tantalizing things, and I had kind of hoped the punchline would be something like "here is how you can write code in a beautifully functional style over graphs and cyclic structures" (assuming I understood the potential of species correctly).  Perhaps in a follow-up paper.

I also found it interesting that their dot product operator sounds more like the traditional Cartesian product type (one of each type), and their Cartesian product type sounds much more like an intersection type (the value is a structure having both types). Perhaps it is a notational holdover from previous work on species. It also seems to make for some nice concrete examples of the "locative" aspects of Girard's Ludics.

Now back to reviewing journal and ICFP papers...

Comments (2)

The problem with being a PL researcher

The problem with being a programming languages researcher is that you can wind up like a blacksmith who spends all of his time tweaking his hammer and anvil, instead of producing less esoteric artifacts.

Comments (2)

Best quote from a paper this week

Programming in OMeta would be very frustrating if all productions were defined in the same namespace: two grammars might unknowingly use the same name for two productions that have different purposes, and one of them would certainly stop working! (Picture one sword-wielding grammar decapitating another, Highlander-style: “There can be only one!”)

From Warth and Piumarta's OMeta: an Object-Oriented Language for Pattern Matching.

Comments

Correction on existential unpacking

(WordPress ate my first draft, grrr.)

So after thinking about it further, I was incorrect, and it is possible to explicitly unpack existentials in Scala. As with some other languages, it is done via pattern matching. The following example illustrates how:

    val x : List[T] forSome { type T } = List(42)
    val w = x match { case y : List[u] => ((z : u) => z)(y.head) }

However, in practice this functionality seems to be rather fragile. For example, the following two variations are rejected:

    val x : T forSome { type T } = 42
    val w = x match { case y : u => ((z : u) => z)(y) }

and

    val x : List[List[T]] forSome { type T } = List(List(42))
    val w = x match { case y : List[List[u]] => ((z : u) => z)(y.head.head) }

In both cases the compiler reports that it cannot find the type u. In the second case, this could be attributed to erasure, as there is no way to dynamically guarantee that the contents of the List are in turn also Lists. However, the first case seems reasonable, so it should probably be reported as a bug.
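As an aside, a somewhat less fragile way to get one's hands on the hidden type is to pass the existential package to a polymorphic method: the compiler skolemizes the witness at the call site, so inside the method it is an ordinary type parameter, and no runtime type test is involved. This is only a sketch, and the helper applyIdToHead is purely illustrative:

    // Sketch: unpacking via a polymorphic method rather than a pattern match.
    // Inside the method the witness type is the type parameter U; it cannot
    // escape, so outside the result is again only known existentially.
    def applyIdToHead[U](xs: List[U]): U = ((z: U) => z)(xs.head)

    val x: List[T] forSome { type T } = List(42)
    val w = applyIdToHead(x)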

Comments
