## Even more modules in Scala

In response to Rossberg's challenge, here is an attempt to encode the following:

```sml
signature S =
sig
  structure A : sig type t; val x : t end
  val f : A.t -> int
end

structure M =
struct
  structure A = struct type t = int; val x = 666 end
  fun f x = x
end

structure N = M :> S
```

In Scala, the above becomes:

```scala
type S = {
  val A: { type T; val x: T }
  def f(arg: A.T): Int
}

val M = new {
  val A = new { type T = Int; val x = 666 }
  def f(arg: Int) = arg
}

val N = M : S
```

Applying `N.f` to `N.A.x` is even well-typed, though it still runs into the same problems with evaluation that I mentioned earlier. And yes, the type `N.A.T` is hidden:

```scala
scala> val x : Int = M.A.x
x: Int = 666

scala> val x : Int = N.A.x
<console>:7: error: type mismatch;
 found   : N.A.T
 required: Int
       val x : Int = N.A.x
                         ^

scala> val x : N.A.T = 42
<console>:7: error: type mismatch;
 found   : Int(42)
 required: N.A.T
       val x : N.A.T = 42
                       ^
```
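To make the claim concrete, here is a self-contained sketch of that well-typed application, repeating the definitions above. The application is only typechecked here, since evaluating it goes through the problematic structural-type reflection:

```scala
type S = {
  val A: { type T; val x: T }
  def f(arg: A.T): Int
}

val M = new {
  val A = new { type T = Int; val x = 666 }
  def f(arg: Int) = arg
}

val N = M : S

// Typechecks: f expects an argument of the path-dependent type N.A.T,
// and N.A.x has exactly that type. (Actually running it may still hit
// the structural-type evaluation problems mentioned above.)
def wellTyped: Int = N.f(N.A.x)
```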

As for how Scala solves the double vision problem, I would look at the work on νObj and the forthcoming paper on Featherweight Scala.

Update: Looking more closely at the definition of "double vision" as given by Dreyer, the Scala implementation cannot currently handle his motivating example. I will need to think about how the example plays out in Featherweight Scala, which, to my knowledge, should at least have no problem defining the type signatures. I think the short answer is that νObj and Featherweight Scala (the non-algorithmic version) solve the problem in a vacuous fashion, as the type-checking problem for them is undecidable. Scala proper could possibly resolve the problem by using a less restrictive cycle-detection algorithm.

## Functors in Scala

Following on my earlier entry on modules in Scala, I'll give an encoding of Standard ML style functors here. You can get a pretty close approximation by using `class` constructor arguments. However, I am going to cheat a little to get the closest encoding I think is possible by using the experimental support for dependent method types. You can get this by running `scala` or `scalac` with the option `-Xexperimental`. It works okay at least some of the time, but no one has the time at the moment to commit to getting it in shape for general consumption.

So here is my example of how the encoding works. First, the SML version:

```sml
signature Eq = sig
  type t
  val eq: t -> t -> bool
end

signature RicherEq = sig
  include Eq
  val neq: t -> t -> bool
end

functor mkRicherEq(arg : Eq) :> RicherEq where type t = arg.t = struct
  type t = arg.t
  val eq = arg.eq
  fun neq x y = not (eq x y)
end
```

We can transliterate this example into Scala as:

```scala
type Eq = {
  type T
  def eq(x: T, y: T): Boolean
}

type RicherEq = {
  type T
  def eq(x: T, y: T): Boolean
  def neq(x: T, y: T): Boolean
}

def mkRicherEq(arg: Eq): RicherEq { type T = arg.T } = new {
  type T = arg.T
  def eq(x: T, y: T) = arg.eq(x, y)
  def neq(x: T, y: T) = !eq(x, y)
}
```
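Assuming dependent method types behave under `-Xexperimental`, the functor can then be applied much like its SML counterpart. A self-contained usage sketch, repeating the definitions; the names `intEq` and `richer` are my own, not from the SML original:

```scala
type Eq = {
  type T
  def eq(x: T, y: T): Boolean
}

type RicherEq = {
  type T
  def eq(x: T, y: T): Boolean
  def neq(x: T, y: T): Boolean
}

def mkRicherEq(arg: Eq): RicherEq { type T = arg.T } = new {
  type T = arg.T
  def eq(x: T, y: T) = arg.eq(x, y)
  def neq(x: T, y: T) = !eq(x, y)
}

// An Eq instance whose T is transparently Int.
val intEq: Eq { type T = Int } = new {
  type T = Int
  def eq(x: Int, y: Int) = x == y
}

// Thanks to the dependent result type, richer.T = intEq.T = Int,
// so clients may pass Int arguments to eq and neq.
val richer = mkRicherEq(intEq)
```

Invoking `richer.neq` goes through the same structural-type reflection as the examples below, so it shares their evaluation caveats.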

The only problem I discovered is that it is not possible to define `RicherEq` in terms of `Eq` as we could in SML:

```scala
scala> type RicherEq = Eq { def neq(x: T, y: T): Boolean }
<console>:5: error: Parameter type in structural refinement may
not refer to abstract type defined outside that same refinement
       type RicherEq = Eq { def neq(x: T, y: T): Boolean }
                                       ^
<console>:5: error: Parameter type in structural refinement may
not refer to abstract type defined outside that same refinement
       type RicherEq = Eq { def neq(x: T, y: T): Boolean }
                                             ^
```

Why this restriction exists I don't know. In fact, this sort of refinement should work in the current version of Featherweight Scala, so perhaps it can be lifted eventually.
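The restriction is specific to structural refinements; if one is willing to fall back to nominal traits, the extension goes through without complaint. A sketch with my own trait names, not a transliteration of the SML:

```scala
trait EqT {
  type T
  def eq(x: T, y: T): Boolean
}

// Here T is a member inherited from the trait being extended, not an
// abstract type outside a structural refinement, so the restriction
// does not apply, and neq can even be given a default implementation.
trait RicherEqT extends EqT {
  def neq(x: T, y: T): Boolean = !eq(x, y)
}
```

The cost, of course, is that implementations must now explicitly extend the trait, giving up the purely structural flavor of the SML encoding.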

I still need to think about higher-order functors, and probably spend a few minutes researching existing proposals. This is probably something that cannot easily be supported in Scala if it requires allowing method invocations to appear in paths. Offhand, that seems necessary only for applicative higher-order functors, but I definitely need to think it through.

## Modules in Scala

I just saw a thread on Lambda the Ultimate where I think the expressive power of Scala in comparison to Standard ML's module system was misrepresented. I don't want to go into all of the issues at the moment, but I figured I would point out that you can get the same structural typing, opaque sealing, and even the equivalent of SML's `where type` clause.

For example, consider the following SML signature:

```sml
signature Nat = sig
  type t
  val z: t
  val s: t -> t
end
```

This signature can be translated into Scala as:

```scala
type Nat = {
  type T
  val z: T
  def s(arg: T): T
}
```
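Client code can already be written against this type alone. The function below is my own illustration, not part of the SML example; its dependent result type `n.T` relies on the experimental dependent method types mentioned earlier:

```scala
type Nat = {
  type T
  val z: T
  def s(arg: T): T
}

// A client that works for any implementation of Nat, never learning
// what T actually is. Invoking s through the structural type shares
// the evaluation caveats discussed below.
def two(n: Nat): n.T = n.s(n.s(n.z))
```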

It is then possible to create an implementation of this type, and opaquely seal it (hiding the definition of `T`). In SML:

```sml
structure nat :> Nat = struct
  type t = int
  val z = 0
  fun s n = n + 1
end
```

In Scala:

```scala
val nat : Nat = new {
  type T = Int
  val z = 0
  def s(arg: Int) = arg + 1
}
```

In many cases when programming with SML modules, it is necessary or convenient to give a module a signature that reveals the definition of an abstract type. In the above example, this can be done by adding a `where type` clause to the first line:

```sml
structure nat :> Nat where type t = int = struct
...
```

We can do the same thing in Scala using refinements:

```scala
val nat : Nat { type T = Int } = new {
...
```
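With the refinement in place, clients may rely on `T` being `Int`. A small sketch (my own illustration) of what the transparent version permits at the type level:

```scala
type Nat = {
  type T
  val z: T
  def s(arg: T): T
}

// Transparent sealing: the refinement reveals that T = Int.
val nat: Nat { type T = Int } = new {
  type T = Int
  val z = 0
  def s(arg: Int) = arg + 1
}

// Both directions typecheck, because T is known to be Int:
val zero: Int = nat.z
val one: nat.T = 1
```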

Great, right? Well, almost. The problem is that structural types are still a bit buggy in the Scala compiler at present. So, while the above typechecks, you can't quite use it yet:

```scala
scala> nat.s(nat.z)
java.lang.NoSuchMethodException: $anon$1.s(java.lang.Object)
        at java.lang.Class.getMethod(Class.java:1581)
        at .reflMethod$Method1(<console>:7)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at RequestResult$.<init>(<console>:3)
        at RequestResult$.<clinit>(<console>)
        at RequestResult$result(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflec...
```

Some issues were raised about how faithful an encoding of SML functors, and of the well-known extensions for higher-order functors, one can get in Scala. Indeed, off the top of my head it is not entirely clear, so I need to think more about that before I write some examples.

## Featherweight Scala lives

With everything else I've been having to deal with the past couple months, I have not made nearly as much progress on Featherweight Scala as I would have liked. Fortunately, in the past two weeks I have finally started to get back on track.

Probably what makes working out the design of Featherweight Scala so difficult is that I am trying my best to commit to it being a valid subset of Scala. It seems like whenever I try to simplify one part of the language, it just introduces a complication elsewhere. Still, I think that so far I have been able to ensure that it is a subset, with two exceptions.

The first exception is that some programs in Featherweight Scala will diverge in cases where the Scala compiler will generate a terminating program. This is because in programs that attempt to access an uninitialized field, Scala will return `null`, while Featherweight Scala diverges. Since the program may never attempt to use the `null` value, it may simply terminate without raising a `NullPointerException`.

The second exception arises because type checking in Featherweight Scala's type system is most likely undecidable. Given that the design is in flux, I haven't precisely verified this, but it seems quite plausible given its expressive power is similar to languages where type checking is known to be undecidable. Because type checking is undecidable, and the Scala language implementation attempts to have its type checking phase be terminating, there should be some programs that can be shown to be well-typed in Featherweight Scala that the Scala compiler will reject.

I think the other thing I've determined is that I do not understand F-bounded quantification as well as I thought I did. At least, I find myself worried about the cases where the typing rules presently require templates that have not yet been verified to be well-formed to be introduced as hypotheses while verifying their own correctness. So I need to do some additional reading to see how other languages deal with this issue. Certainly I can put my mind at ease by making the type system more restrictive, but I do not want to be so restrictive that it is no longer reasonable to call the language Featherweight Scala.

Update: Okay, so after looking at Featherweight Generic Java and νObj again more closely, they both do essentially the same kind of thing. In FGJ, the class table is well-formed if all its classes are well-formed, but checking a class's well-formedness may require presupposing that the class table is well-formed. In νObj, the well-formedness of recursive record types is checked under the assumption that the record's self-name has the type of the record. Certainly the same sort of thing comes up when verifying the well-formedness of recursive functions and recursive types, but at least there what is being assumed is at a higher level of abstraction: when checking a recursive function you assume that the function has some well-formed type, and when checking a recursive type you assume that the type has some well-formed kind.

## Revisiting higher-rank impredicative polymorphism in Scala

I've been meaning to write this up for a while, but it seems like there has always been something else I really ought to be doing, so I expect this will be a bit more terse than I might like. Anyway, when I wrote about encoding higher-rank universal quantification in Scala back in March, I used a rather elaborate scheme involving Scala's first-class existential quantifiers. While that was the point of the exercise, surprisingly no one called me on the fact that if you just want higher-rank impredicative polymorphism in Scala, there is a much simpler encoding. Maybe it was obvious to everyone, or maybe no one read closely enough to think of raising the issue. So today I'll explain the better way to encode it.

First we can define an infinite family of traits to represent n-ary universal quantification, much like Scala represents n-ary functions types:

```scala
trait Univ1[Bound1, Body[_]] {
  def Apply[T1 <: Bound1]: Body[T1]
}

trait Univ2[Bound1, Bound2, Body[_,_]] {
  def Apply[T1 <: Bound1, T2 <: Bound2]: Body[T1, T2]
}

// ... and so on for N > 2
```

Really, the key to this encoding is the use of higher-kinded quantification to encode the binding structure of the quantifier.

Now it is possible to write some examples similar to what I gave previously, but more concisely:

```scala
object Test extends Application {
  // A plain polymorphic method, wrapped below as a first-class value.
  def idFun[T](x: T) = x

  type Id[T] = T => T
  val id = new Univ1[Any, Id] {
    def Apply[T <: Any]: Id[T] = idFun[T] _
  }

  val idString = id.Apply[String]
  val idStringList = id.Apply[List[String]]
  println(idString("Foo"))
  println(idStringList(List("Foo", "Bar", "Baz")))

  type Double[T] = T => (T, T)
  val double = new Univ1[Any, Double] {
    def Apply[T <: Any]: Double[T] = (x: T) => (x, x)
  }

  val doubleString = double.Apply[String]
  val doubleStringList = double.Apply[List[String]]
  println(doubleString("Foo"))
  println(doubleStringList(List("Foo", "Bar", "Baz")))
}
```

As I mentioned previously, this example would be much improved by support for anonymous type functions in Scala. I am pretty sure Scala will eventually support them, as they would not require any deep changes in the implementation: they could be implemented simply by desugaring to a higher-kinded type alias with a fresh name, though depending on when that desugaring is performed, it might result in poor error messages. Supporting curried type functions is also quite desirable, but given my current knowledge of the internals, adding them seems to require more elaborate changes.
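Until then, a structural type with a higher-kinded type member, projected back out, serves as a workable approximation of an anonymous type function (the so-called type-lambda idiom; the member name `L` is just a convention):

```scala
trait Univ1[Bound1, Body[_]] {
  def Apply[T1 <: Bound1]: Body[T1]
}

// The type function T => (T, T) written inline, without a named alias
// like Double above: it is the member L of an anonymous structural
// type, projected out with #L.
val double = new Univ1[Any, ({ type L[T] = T => (T, T) })#L] {
  def Apply[T <: Any]: T => (T, T) = (x: T) => (x, x)
}

double.Apply[String]("Foo")  // ("Foo", "Foo")
```

This is considerably noisier than a real anonymous type function would be, but it requires no new language support at all.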

I think Vincent Cremet was the first person to suggest this sort of encoding, and I vaguely recall reading about it on one of the Scala mailing lists, but I could not find the message after a little bit of time spent searching.