Archive for hacking

Literally dependent types

Given that the formalization of Scala Classic has ground to a halt, for reasons I may go into later, I spent part of today hacking on the Scala compiler itself to add support for singleton literals. Currently, Scala allows singleton types for stable identifiers. My modification allows literals to be used in singleton types. I can't promise that it will be in the forthcoming Scala 2.7.2 release, but I would like it to be.

Overall it was far less work than I was expecting. Scala already internally supports what it calls "constant types"; there is just no way to write them in Scala source code at present. Consequently, most of the effort was in extending the parser.

Given my modifications, it is now possible to write code like the following:

    scala> val x : "foo".type = "foo"
    x: "foo".type = foo

What I was not expecting was that out-of-the-box things like the following would work:

    scala> val x : "foobar".type = "foo" + "bar"
    x: "foobar".type = foobar
    scala> val y : 10.type = 2 * 5
    y: 10.type = 10
    scala> def frob(arg : 10.type) : 6.type = arg - 4
    frob: (10.type)6.type

Unfortunately the excitement soon passes when you realize all the things you can't do with singleton literals (yet). Even if we turn on the experimental dependent method support, you can't write things like

    def add(arg : Int) : (arg + 5).type = arg + 5

because these are exactly what they are called, singleton literals, not full-blown dependent types.

One cute example, based on a use suggested by Sean McDirmid, is that some people might do something like the following with implicits:

    implicit def stringToColor(arg : String) : java.awt.Color = java.awt.Color.getColor(arg)

However, with singleton literals you can tighten it up by ensuring that for some strings it will never fail:

    implicit def redToColor(arg : "red".type) : java.awt.Color = java.awt.Color.RED
    implicit def blueToColor(arg : "blue".type) : java.awt.Color = java.awt.Color.BLUE

Happily, the type inferencer already chooses the most specific implicit conversions.
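
For example, assuming both conversions above are in scope, a call site with a literal argument should pick the tightest conversion (a hypothetical snippet, relying on the modified compiler described earlier):

    def paint(c : java.awt.Color) = c

    paint("red")     // "red".type is more specific than String: uses redToColor
    paint("mauve")   // falls back to stringToColor, which may return null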

In any event, they will hopefully become more useful as the type system continues to grow. I am also sure someone will probably come up with a clever use for them that hasn't occurred to me yet. If so, let me know.

Comments (5)

Functors in Scala

Following on my earlier entry on modules in Scala, I'll give an encoding of Standard ML style functors here. You can get a pretty close approximation by using class constructor arguments. However, I am going to cheat a little to get the closest encoding I think is possible by using the experimental support for dependent method types. You can get this by running scala or scalac with the option -Xexperimental. It works okay at least some of the time, but no one has the time at the moment to commit to getting it in shape for general consumption.

So here is my example of how the encoding works. First, the SML version:

    signature Eq = sig
      type t
      val eq: t -> t -> bool
    end

    signature RicherEq = sig
      include Eq
      val neq: t -> t -> bool
    end

    functor mkRicherEq(arg : Eq) :> RicherEq where type t = arg.t = struct
      type t = arg.t
      val eq = arg.eq
      fun neq x y = not (eq x y)
    end

We can transliterate this example into Scala as:

    type Eq = {
      type T
      def eq(x: T, y: T): Boolean
    }

    type RicherEq = {
      type T
      def eq(x: T, y: T): Boolean
      def neq(x: T, y: T): Boolean
    }

    def mkRicherEq(arg: Eq) : RicherEq { type T = arg.T } = new {
      type T = arg.T
      def eq(x: T, y: T) = arg.eq(x, y)
      def neq(x: T, y: T) = !eq(x, y)
    }

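For what it is worth, here is a sketch of how one would use the functor (my own example, not from the post; it assumes -Xexperimental for the dependent result type, and the structural-type reflection bug discussed in the modules entry below may still interfere at runtime):

    val intEq : Eq { type T = Int } = new {
      type T = Int
      def eq(x: Int, y: Int) = x == y
    }

    val richerIntEq = mkRicherEq(intEq)   // result type: RicherEq { type T = intEq.T }
    richerIntEq.neq(1, 2)                 // true, assuming the structural call resolves
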
The only problem I discovered is that it is not possible to define RicherEq in terms of Eq as we could in SML:

    scala> type RicherEq = Eq { def neq(x: T, y: T): Boolean }
    <console>:5: error: Parameter type in structural refinement may
    not refer to abstract type defined outside that same refinement
           type RicherEq = Eq { def neq(x: T, y: T): Boolean }
                                        ^
    <console>:5: error: Parameter type in structural refinement may
    not refer to abstract type defined outside that same refinement
           type RicherEq = Eq { def neq(x: T, y: T): Boolean }
                                              ^

Why this restriction exists I don't know. In fact, this sort of refinement should work in the current version of Featherweight Scala, so perhaps it can be lifted eventually.

I still need to think about higher-order functors, and probably spend a few minutes researching existing proposals. This is probably something that cannot be easily supported in Scala if it requires allowing method invocations to appear in paths. However, offhand that only seems necessary for applicative higher-order functors; again, I definitely need to think it through.

Comments (2)

Modules in Scala

I just saw a thread on Lambda the Ultimate where I think the expressive power of Scala in comparison to Standard ML's module system was misrepresented. I don't want to go into all of the issues at the moment, but I figured I would point out that you can get the same structural typing, opaque sealing, and even the equivalent of SML's where type clause.

For example, consider the following SML signature:

    signature Nat = sig
      type t
      val z: t
      val s: t -> t
    end

This signature can be translated in to Scala as:

    type Nat = {
      type T
      val z: T
      def s(arg: T): T
    }

It is then possible to create an implementation of this type, and opaquely seal it (hiding the definition of T). In SML:

    structure nat :> Nat = struct
      type t = int
      val z = 0
      fun s n = n + 1
    end

In Scala:

    val nat : Nat = new {
      type T = Int
      val z = 0
      def s(arg: Int) = arg + 1
    }

In many cases when programming with SML modules it is necessary or convenient to give a module a signature that reveals the definition of an abstract type. In the above example, this can be done by adding a where type clause to the first line:

    structure nat :> Nat where type t = int = struct
    ...

We can do the same thing in Scala using refinements:

    val nat : Nat { type T = Int } = new {
    ...

Great, right? Well, almost. The problem is that structural types are still a bit buggy in the Scala compiler at present. So, while the above typechecks, you can't quite use it yet:

    scala> nat.s(nat.z)
    java.lang.NoSuchMethodException: $anon$1.s(java.lang.Object)
            at java.lang.Class.getMethod(Class.java:1581)
            at .reflMethod$Method1(<console>:7)
            at .<init>(<console>:7)
            at .<clinit>(<console>)
            at RequestResult$.<init>(<console>:3)
            at RequestResult$.<clinit>(<console>)
            at RequestResult$result(<console>)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflec...

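Until the structural-type support is fixed, one way to sidestep the reflective call is to use a nominal trait for the signature instead of a purely structural type. This is a sketch of mine, not from the post, and it admittedly gives up part of the point, namely SML-style structural signature matching:

    trait NatSig {
      type T
      val z: T
      def s(arg: T): T
    }

    // opaque sealing still works: T is hidden outside the definition
    val natM : NatSig = new NatSig {
      type T = Int
      val z = 0
      def s(arg: Int) = arg + 1
    }

    natM.s(natM.z)   // fine: calls on a nominal trait do not go through reflection
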
There were also some issues raised about how faithful an encoding of SML functors, and of the well-known extensions for higher-order functors, one can get in Scala. Indeed, off the top of my head it is not entirely clear, so I need to think more about that before I write some examples.

Comments (7)

Revisiting higher-rank impredicative polymorphism in Scala

I've been meaning to write this up for a while, but it seems like there has always been something else I really ought to be doing. So I expect this will be a bit more terse than I might like. Anyway, when I wrote about encoding higher-rank universal quantification in Scala back in March, I used a rather elaborate scheme involving the use of Scala's first-class existential quantifiers. While this was the point of the exercise, surprisingly, no one called me on the fact that if you just want higher-rank impredicative polymorphism in Scala, there is a much simpler encoding. Maybe it was obvious to everyone or no one read closely enough to think of raising the issue. So today, I'll explain the better way to encode them.

First we can define an infinite family of traits to represent n-ary universal quantification, much like Scala represents n-ary function types:

    trait Univ1[Bound1,Body[_]] {
      def Apply[T1<:Bound1] : Body[T1]
    }

    trait Univ2[Bound1,Bound2,Body[_,_]] {
      def Apply[T1<:Bound1,T2<:Bound2] : Body[T1,T2]
    }

    // ... and so on for N > 2

Really, the key to this encoding is the use of higher-kinded quantification to encode the binding structure of the quantifier.

Now it is possible to write some examples similar to what I gave previously, but more concisely:

    object Test extends Application {
      // polymorphic identity; named idFun so it does not clash with the val id below
      def idFun[T](x : T) = x

      type Id[T] = T => T
      val id = new Univ1[Any,Id]
        { def Apply[T <: Any] : Id[T] = idFun[T] _ }

      val idString = id.Apply[String]
      val idStringList = id.Apply[List[String]]

      println(idString("Foo"))
      println(idStringList(List("Foo", "Bar", "Baz")))

      type Double[T] = T => (T, T)
      val double = new Univ1[Any,Double]
        { def Apply[T <: Any] : Double[T] = (x : T) => (x, x) }

      val doubleString = double.Apply[String]
      val doubleStringList = double.Apply[List[String]]

      println(doubleString("Foo"))
      println(doubleStringList(List("Foo", "Bar", "Baz")))
    }

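Univ2 works analogously. For instance (a sketch of my own, not from the original post):

    object TestUniv2 extends Application {
      // the body of the quantifier: swap the components of a pair
      type Swap[A,B] = ((A, B)) => (B, A)
      val swap = new Univ2[Any,Any,Swap]
        { def Apply[T1 <: Any, T2 <: Any] : Swap[T1,T2] = p => (p._2, p._1) }

      val swapStringInt = swap.Apply[String,Int]
      println(swapStringInt(("Foo", 42)))   // prints (42,Foo)
    }
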
As I mentioned previously, this example would be much improved by support for anonymous type functions in Scala. I am pretty sure Scala will eventually support them, as they would not require any deep changes in the implementation: they could be implemented by desugaring to a higher-kinded type alias with a fresh name, though depending on when that desugaring is performed, it could result in poor error messages. Supporting curried type functions is also quite desirable, but given my current knowledge of the internals, it seems like adding them will require more elaborate changes.

I think Vincent Cremet was the first person to suggest this sort of encoding, and I vaguely recall reading about it on one of the Scala mailing lists, but I could not find the message after a little bit of time spent searching.

Comments (4)

An unusual definition of total ordering

One of the things that will show up in the imminently forthcoming Scala 2.7.1 release candidate is the addition of traits for representing equivalence relations, partial orderings, and total orderings. Previously, the trait Ordered was used for representing totally ordered things:

    trait Ordered[A] {
      def compare(that: A): Int
      def <  (that: A): Boolean = (this compare that) <  0
      def >  (that: A): Boolean = (this compare that) >  0
      def <= (that: A): Boolean = (this compare that) <= 0
      def >= (that: A): Boolean = (this compare that) >= 0
      def compareTo(that: A): Int = compare(that)
    }

However, the Ordered trait does not provide a representation of a total ordering. Therefore, the new trait Ordering:

    trait Ordering[T] extends PartialOrdering[T] {
      def compare(x: T, y: T): Int
      override def lteq(x: T, y: T): Boolean = compare(x, y) <= 0
      override def gteq(x: T, y: T): Boolean = compare(x, y) >= 0
      override def lt(x: T, y: T): Boolean = compare(x, y) < 0
      override def gt(x: T, y: T): Boolean = compare(x, y) > 0
      override def equiv(x: T, y: T): Boolean = compare(x, y) == 0
    }

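For example, a minimal instance for Int might look like this (my sketch, assuming compare is the only member left abstract, as the trait above suggests):

    val intOrdering = new Ordering[Int] {
      def compare(x: Int, y: Int) =
        if (x < y) -1 else if (x > y) 1 else 0
    }

    intOrdering.lteq(2, 3)   // true, via the default definition above
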
The tricky part, however, was writing a description of the properties required of something that implements the Ordering trait. When one normally thinks of a total ordering, one thinks of a relation that is

  • anti-symmetric,
  • transitive,
  • and total.

The problem is that Ordering is not defined in terms of a binary relation, but in terms of a binary function producing integers (compare). If the first argument is less than the second the function returns a negative integer, if they are equal in the ordering the function returns zero, and if the second argument is less than the first the function returns a positive integer. Therefore, it is not straightforward to express these same properties. The best I could come up with was:

  • compare(x, x) == 0, for any x of type T.
  • if compare(x, y) == z and compare(y, x) == w, then Math.signum(z) == -Math.signum(w), for any x and y of type T and z and w of type Int.
  • if compare(x, y) == z and compare(y, w) == v and Math.signum(z) >= 0 and Math.signum(v) >= 0, then compare(x, w) == u and Math.signum(z + v) == Math.signum(u), for any x, y, and w of type T and z, v, and u of type Int.

Where Math.signum returns -1 if its input is negative, 0 if its input is 0, and 1 if its input is positive.

The first property is clearly reflexivity. I call the third property transitivity. I am not sure what to call the second property. I do not think a notion of totality is required, because it is assumed you will always get an integer back from compare rather than it throwing an exception or going into an infinite loop.
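
To make these properties concrete, here is a quick finite sanity check one could run over a sample of values. This is my own sketch rather than anything in the release, and it writes signum out by hand to stay version-neutral:

    def sig(n: Long): Int = if (n < 0) -1 else if (n > 0) 1 else 0

    def checkOrderingLaws[T](ord: Ordering[T], xs: List[T]): Boolean = {
      // property 1: reflexivity
      val reflexive =
        xs.forall(x => ord.compare(x, x) == 0)
      // property 2: compare(x, y) and compare(y, x) have opposite signs
      val antisymmetric =
        xs.forall(x => xs.forall(y =>
          sig(ord.compare(x, y)) == -sig(ord.compare(y, x))))
      // property 3: transitivity stated via signs; widen to Long so z + v
      // cannot overflow
      val transitive =
        xs.forall(x => xs.forall(y => xs.forall(w => {
          val z = ord.compare(x, y)
          val v = ord.compare(y, w)
          !(sig(z) >= 0 && sig(v) >= 0) ||
            sig(z.toLong + v.toLong) == sig(ord.compare(x, w))
        })))
      reflexive && antisymmetric && transitive
    }

For the intOrdering defined earlier, checkOrderingLaws(intOrdering, List(-2, 0, 3)) should come out true.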

It would probably be a good exercise to prove that these properties on compare hold if and only if lteq (defined above in terms of compare) has all the normal properties of a total ordering.

Comments

Correction on existential unpacking

(WordPress ate my first draft, grrr.)

So after thinking about it further, I was incorrect, and it is possible to explicitly unpack existentials in Scala. As with some other languages, it is done via pattern matching. The following example illustrates how:

    val x : List[T] forSome { type T } = List(42)
    val w = x match { case y : List[u] => ((z : u) => z)(y.head) }

However, in practice this functionality seems to be rather fragile. For example, the following two variations are rejected:

    val x : T forSome { type T } = 42
    val w = x match { case y : u => ((z : u) => z)(y) }

and

    val x : List[List[T]] forSome { type T } = List(List(42))
    val w = x match { case y : List[List[u]] => ((z : u) => z)(y.head.head) }

In both cases it reports that it cannot find the type u. In the second case, this could be attributed to erasure, as there is no way dynamically to guarantee that the contents of a List are in turn also a List. However, the first seems reasonable, so it should probably be reported as a bug.

Comments

Higher-rank, impredicative polymorphism in Scala

One project that I spent some time thinking about prior to arriving in Lausanne was whether I might be able to put together a Scala library for manipulating data structures with binding, based around the use of parametricity to help enforce the adequacy of function spaces (à la Boxes Go Bananas). However, one problem is that Scala does not have higher-rank polymorphism like GHC. The initial thought was, "well, Scala has higher-rank existential types, so adding the dual case to the typechecker should be pretty straightforward". However, it turns out it is not so straightforward to add universal quantification to the implementation of the subtyping algorithm and ensure that constraints can be solved unambiguously. It still may make sense to try and add them in some fashion.

Later, while working on Featherweight Scala, it occurred to me that subtyping is very likely to be undecidable in Scala because its language of types is essentially as expressive as full F<:. Benjamin agreed that there existed a straightforward reduction to languages with only bounded existential quantification rather than bounded universal quantification, but did not know offhand if there is a paper that gives it.

So I spent a bit of time thinking about encoding universal quantification using existential quantification, and as alluded to in TAPL (the answer to exercise 24.3.3), it can be done, but there is no completely local encoding of universals into existentials. To obtain the classical equivalence between universal quantification and the negation of existential quantification (∀α.τ implies ¬∃α.¬τ) in a constructive setting you have to use the double-negation encoding, which essentially means CPS converting your program. The other classical equivalence under negation, ∃α.τ implies ¬∀α.¬τ, is constructively valid, which is why existential quantification is macro-expressible in languages with universal quantification.

Since I am not aware of any papers that illustrate the encoding of universals into existentials, I figured I would explain it here. The encoding of the universal type pretty much follows from the equivalence:

|∀α.τ| = (∃α.|τ|→0)→0

Here I am using 0 for the void or empty type and using the standard constructive encoding of negation:

|¬τ| = |τ| → 0

The term level encodings are not so difficult, but do require a little thought:

|Λα.e : ∀α.τ|k = k (λx:∃α.|τ|→0.let (α,k') = unpack x in |e : τ|k')

The term encoding has the form |e : τ|k, where τ is the type of the term and k is a continuation threaded through the encoding. The first thing to note is that because type abstractions are values, we always immediately pass them to the continuation. Given that, if we look at the type we need to give the encoding of a type abstraction, it is a function that takes an existentially quantified continuation and never returns. The only way to never return is to use the continuation. This tells us just about everything else we need to know about the encoding.

Because the continuation is existentially quantified, it is not possible to invoke it as is, so we must unpack it first. This is how we simulate the introduction of a type variable without using type abstraction. The next problem is that in order to call the continuation, we need to apply it to something with the correct type. The continuation takes values of type |τ|, possibly referencing a free type variable α, which is fortunately what the encoding of the type abstraction's body gives us. Therefore, we can use the unpacked continuation k' as the continuation argument of the recursive call to the encoding.

Given that we know how to encode type abstractions, encoding type applications is straightforward:

|e[τ] : τ'|k = |e : ∀α.τ'{α/τ}|(λv:(∃α.τ'{α/τ}→0)→0.v (pack(τ,k) as ∃α.(τ'{α/τ}→0)))

We encode the term to be applied and pass it a new continuation that packs up the original continuation and applies it to the function to which the term evaluates. I am writing {α/τ} to mean replace τ with α. I am using curly braces rather than the usual square brackets to avoid confusion with the square brackets commonly used for type application.

While I am almost certain that the encoding above is correct, it would be good at some point to prove it. This is the sort of proof for which Twelf is very well suited, so that is probably the route I would take.

Okay, so that is the theoretical half out of the way. Now, given the above, can we implement higher-rank impredicative polymorphism in Scala? Yes, with a minor caveat.

First, I'll define an abstract class that defines the type of bounded universally quantified values, and methods for creating and destructing them.

    abstract class Universal {
      type Univ[Bound,Body[_]]

      def createUniv[Bound,Body[_]]
        (impl : ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing) :
        Univ[Bound,Body]

      def applyUniv[Bound,Body[_],A<:Bound](arg : Univ[Bound,Body]) : Body[A]
    }

This should be fairly straightforward given the encoding described above. The only tricky bit is the use of a higher-kinded type parameter (Body[_]) to give the body of the universal type. The other important point to note is that it is not necessary to completely CPS convert the implementation, because Scala provides primitives for non-local control transfers.

One possible implementation of the above abstract class is the following:

    class UniversalImpl extends Universal {
      type Univ[Bound,Body[_]] =
        ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing

      def createUniv[Bound,Body[_]]
        (impl : ((Body[A] => Nothing) forSome { type A <: Bound }) => Nothing) :
        Univ[Bound,Body] = impl

      def applyUniv[Bound,Body[_],A<:Bound](univ : Univ[Bound,Body]) : Body[A] = {
        case class Control(arg : Body[A]) extends Throwable
        val res = try {
          univ((arg : Body[A]) => throw new Control(arg))
        } catch {
          case Control(arg) => arg
          case e => throw e
        }
        res
      }
    }

The implementation of the abstract type and the code for createUniv are trivial. The implementation of applyUniv is a little more interesting. Here, we create a local case class Control, extending Throwable so that it may be thrown as an exception, that will hold a value of the desired result type. We then just pass the representation of the type abstraction a continuation that throws a new instance of Control. We immediately catch it and return the value stored within. If we happen to catch some other sort of Throwable, we just re-throw it.

And that's it. It is worth looking at a few examples of how this might be used. The first thing that is necessary is to create a concrete implementation of Universal:

    val u : Universal = new UniversalImpl

Given u, we can create an instance of the polymorphic identity function:

    type Id[T] = T => T
    val id =
      u.createUniv[AnyRef,Id](
        (k: (Id[A] => Nothing) forSome { type A }) =>
          k(x => x))
    val idString = u.applyUniv[AnyRef,Id,String](id)
    val idStringList = u.applyUniv[AnyRef,Id,List[String]](id)
    println(idString("Foo"))
    println(idStringList(List("Foo", "Bar", "Baz")))

The first thing that needs to be done is to define a type function representing the body of the universally quantified type. For the polymorphic identity function we call it Id[T]. The rest is pretty straightforward. It ought to be possible to make the above code a little less verbose if Scala allowed type parameters to methods to be curried: it should be possible for it to infer that the first two arguments to u.applyUniv are AnyRef and Id from the argument id. However, because it will not be able to infer the last type parameter, we have to give all three. It might also be desirable in some cases not to have to give a type definition for the body of a universal type; this could be achieved by extending Scala with support for anonymous type functions.
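
A second instantiation, along the same lines as the identity example (again a sketch of mine, not from the post):

    type Dup[T] = T => (T, T)
    val dup =
      u.createUniv[AnyRef,Dup](
        (k: (Dup[A] => Nothing) forSome { type A }) =>
          k(x => (x, x)))
    val dupString = u.applyUniv[AnyRef,Dup,String](dup)
    println(dupString("Foo"))   // prints (Foo,Foo)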

Now for that caveat I mentioned. When we defined the type abstraction,

    u.createUniv[AnyRef,Id](
      (k: (Id[A] => Nothing) forSome { type A }) =>
        k(x => x))

a significant difference between how this code is written and how it would have been written in terms of my original explanation of the encoding is that there is no explicit unpacking of the existential. All existential unpacking in Scala is implicit. This is nice in some ways, but it means that we have no way to give a name to the type that the existential is hiding. Therefore, when constructing a function to pass to the continuation,

    k(x => x)

it must be the case that Scala can infer the domain of this function's type, because it is not possible to write a type annotation for x without having a name for the existentially hidden type. I think in most cases this should not be a problem, as Scala knows the type that k is expecting as an argument, but there may be some corner cases I have not considered.

(Does anyone know how to prevent WordPress from completely screwing up less than and greater than inside of <pre> tags?)

Comments (4)

Lazy environment variables

I was just thinking that it would be really useful if command-line shells supported lazy environment variables. Lately, because of my work on Scala I will often find myself entering a line something like

    export PATH=/home/linuxsoft/apps/java-ibm-1.5/bin:$PATH
    ...

This is because, despite the write-once promises of Java (well, JVM bytecode), Scala will sometimes fail to build or a test will fail on specific implementations of the Java runtime and VM. I have been doing this so frequently, I finally decided to write some ZSH shell scripts to make it a little less work.

Just having a short macro that does the above for all the various Java runtimes is not ideal, because then my PATH keeps getting longer and longer. ZSH might be smart about this when caching lookups, but it is inelegant. Another solution is to write something that does a search and replace on my PATH as a string. However, the most elegant solution would simply be to not perform expansion on the contents of PATH until it must be passed as part of an exec.

ZSH can do a lot, so maybe it already has some feature that approximates this, but it would be nice if I could just write something like

    lazy export PATH=$JAVA_BIN:$PATH
    export JAVA_BIN=/home/linuxsoft/apps/java-ibm-1.5/bin
    ...

And then my scripts can just operate on JAVA_BIN rather than having to modify PATH directly.
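
In the meantime, the effect can be approximated by rebuilding PATH from a saved original instead of prepending over and over. A rough, untested sketch (the names are mine):

    # remember PATH as it was before any Java bin directory was prepended
    : ${ORIG_PATH:=$PATH}

    # java_bin <dir>: make <dir> the active Java, rebuilding PATH each time
    java_bin() {
      export JAVA_BIN=$1
      export PATH=$JAVA_BIN:$ORIG_PATH
    }

    # usage: java_bin /home/linuxsoft/apps/java-ibm-1.5/bin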

Update: I just noticed that setting the variable JAVACMD is enough for most purposes, but the above concept still seems reasonable.

Comments (2)

Research update

I finished a draft of "Generalizing Parametricity Using Information-flow" for LMCS this past weekend. That should leave me with fewer distractions to work on something for ICFP.

My work on Featherweight Scala is moving along, sometimes more slowly than others. Other than the issues with singletons I have already discussed, one problem with the original definition of Featherweight Scala is that it did not truly define a subset of Scala proper – that is, there were many valid Featherweight Scala programs that were not valid Scala programs.

So I have been working on trying to get Featherweight Scala to be as close to a subset of Scala proper as is possible without making it far too complicated to be a useful core calculus. Some of this has involved changing the definition of Featherweight Scala. Some of this has involved lobbying for changes to Scala proper to make it more uniform (so far I've succeeded in extending traits to allow value declarations to be overridden).

Through the whole process I've also been spending a lot of time typing things into the Scala interpreter to figure out how Scala proper treats them. For better or worse, I've actually managed to uncover a fair number of bugs in Scala proper doing this. I've almost reached the point where I treat it as a bit of a game: can I from thinking about the properties of the Scala type system come up with some corner case that will exhibit an unsoundness in the language (a ClassCastException, a NoSuchMethodException, etc.), or at least crash the compiler?

Last week I came up with a pretty nice one where I used the fact that Scala should not allow inheritance from singleton types to launder types in an unsafe fashion (now fixed in trunk!). Prior to that I came up with something fun where you could trick the compiler into letting you inherit from Nothing (which is supposed to be the bottom of the subtyping hierarchy). Today I got to thinking about Scala's requirement that paths in singleton types must be stable – all components of a path must be immutable. So for example in

    var x = 3
    val y : x.type = x // ill-typed

The path x is not stable because x is a mutable variable. However, Scala does allow lazy value declarations to be stable.

    var x = 3
    lazy val y = x
    val z : y.type = y // allowed, at present

Note that y is not evaluated until it is first needed, and in this case its evaluation involves a mutable reference. Furthermore, add into the mix that Scala also provides call-by-name method arguments (lazy values are call-by-need). So I started riffing on the idea of whether I could violate stability by changing the mutable reference between the definition of the lazy value, or a call-by-name argument, and its use. In retrospect, I am leaning at present towards the belief that there is no way this should be exploitable from a theoretical standpoint. That does not mean that the implementation is necessarily in alignment with theory, however. I did manage to hit upon a combination in my experiments that resulted in a NoSuchMethodException, so the exercise was not a complete letdown.
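
The experiments had roughly this shape (my reconstruction, not the actual test case):

    var x = 3
    lazy val y = x                    // stable, but not yet forced
    val z : y.type = { x = 4; y }     // mutate x before y is first evaluated

Once y has been forced its value never changes, which is presumably why, in theory, stability should survive.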

I should point out that these things do not really reflect on the quality of the Scala language as a whole. It is a rich language that does not yet have a complete formal model, and as such, in the implementation process it can be easy to overlook some particularly perverse abuses of the language.

Comments

Research update

I have been making reasonably steady progress on my formalization of Featherweight Scala in Coq. I have just about seven hundred lines of definitions, with only a few more things to make explicit. At that point I should be ready to start actually proving progress and preservation.

I volunteered to triage issues submitted to the Scala bug-tracking system this month, so that has been taking a little bit of my time. Mostly this just involves checking that what was submitted is truly a bug and is reproducible, before assigning it to the relevant person.

I am also responsible for the interpreter and the ScalaDoc, so I have also spent time fixing a few bugs that have shown up in the interpreter. The big one of the past week is that for 2.6.2 the plan is to finally interoperate with Java generics. As a consequence, the compiler now defaults to generating class files in the format used in version 1.5 of the JVM and above. However, this broke the nightly tests run against the 1.4 JVM: the interpreter would compile and run fine until it tried to load one of the class files it had itself compiled, which had an incompatible version number. So I added a check that forces the interpreter to generate compatible class files when it is running on a pre-1.5 JVM.

An undergrad e-mailed me to ask about working on the Literate Scala semester project that I had posted. I met with him briefly last week for mostly administrative purposes. That should be interesting.

I should be moving forward on the implementation of virtual classes, but I have been avoiding it because I am not yet sure how to best modify the AST to represent them. I will hopefully have chance to discuss it with Martin this week.

I have also been spending time thinking about the next major project, an effect system for Scala. I have been thinking hard about how to add one while still maintaining reasonable backward compatibility and so that it is not exceedingly verbose. It is a very difficult problem. Furthermore, one of the most widely used effect systems that exists in programming languages, Java's throws clauses, is generally reviled by programmers. One thought I had was that maybe a whiz-bang effect system with effect polymorphism, a rich language of effects, and hooks for adding user defined effects is just overkill. Maybe programmers would be happy with being able to write an annotation like @pure on their methods and closures, and leave it at that. Enforcing purity would not be too difficult I think (ignoring the well known problem of the termination effect), and be much easier for programmers to reason about.
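
For illustration, the modest version might look something like this (entirely hypothetical syntax; no such annotation exists in Scala today):

    @pure def add(x: Int, y: Int) = x + y   // accepted: no side effects

    @pure def shout(s: String) = {
      println(s)                             // rejected: println performs I/O
      s
    }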

My wrists have also started acting up again, so I am not entirely sure how to deal with that while continuing to push my current research projects forward. It has been suggested to me to look into voice recognition software, but I am not sure how compatible that would be with my current workflow. Not to mention that I can only imagine how much it would annoy the people in the adjoining offices, given that they have already expressed displeasure at all the beeping caused by emacs on my desktop computer.

Comments (4)
