Archive for theory

Retraction on Coq and lists

Okay, so everything I said the other day about mutually defining your ASTs in Coq with custom lists, and then using Coq's implicit coercions to move back and forth between them and the lists defined in Coq's standard library, does not work. For one thing, the coercions will not always be inserted where you need them, and even inserting them manually does not work out well, because Coq can then no longer determine that recursive functions defined on your ASTs necessarily terminate.

So I have switched back to using lists from the standard library, and I am hoping that defining custom induction principles, following Yves Bertot's suggestion in the discussion here, will work for everything I need to do.

Comments (2)

Defense against the dark arts

This Thursday at 9 AM I will defend my thesis against my committee, and anyone else who shows up and attempts to ask me difficult questions. But really, who would want to get up that early to do that?

Assuming that I can successfully disarm two assistant and two full professors, I will be all set to start working on Scala-related things at EPFL. Of course, I've bought a plane ticket to Geneva, Switzerland for the 26th, so unless my rhetorical-fu is truly mediocre, that will be happening one way or another.

Comments (3)

Jean-Yves Girard’s evolving opinion on disjunction

1995: "Only a masochist would state AB  when he knows A."

2003: "Only a moron would state AB  if he has obtained A."

Comments (1)

Totally explicit

Some of you may have looked closely enough at the dump from InforML I posted a few weeks back to notice that not only was I using de Bruijn indices, as I said I was switching to in April, but that I was also using explicit substitutions. After a fair amount of debugging, and complications with how I had converted my named mixed-prefix unification algorithm over to de Bruijn indices, I concluded that it might be best to go with something closer to the literature.

So I started revising my code to work something like Dowek, Hardin, Kirchner, and Pfenning's "Unification via explicit substitutions: The case of higher-order patterns". However, I soon found that de Bruijn indices become vastly more complicated to work with in a polymorphic setting. Using explicit substitutions did make it easier to track down a number of bugs, because substitution could be traced as part of type normalization. Still, it was a long, hard process of moving forward.
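To give a concrete idea of what this style of representation looks like, here is a small OCaml sketch of a lambda-sigma-style term language with de Bruijn indices and suspended (explicit) substitutions, roughly in the spirit of that paper. The types and the lookup function below are purely illustrative; they are not the actual InforML datatypes.

    (* Terms with de Bruijn indices; a closure suspends a substitution
       instead of applying it immediately. *)
    type term =
      | Var of int                  (* de Bruijn index, counting from 1 *)
      | Lam of term                 (* the binder carries no variable name *)
      | App of term * term
      | Clos of term * sub          (* explicit, suspended substitution *)

    and sub =
      | Id                          (* identity substitution *)
      | Shift                       (* shift every index up by one *)
      | Dot of term * sub           (* t . s : index 1 goes to t, n+1 to s(n) *)
      | Comp of sub * sub           (* composition of substitutions *)

    (* What a substitution does to a single index.  Substitutions are not
       pushed eagerly; closures are built and only unfolded on demand, which
       is what makes it possible to trace substitution as part of type
       normalization. *)
    let rec lookup (s : sub) (n : int) : term =
      match s with
      | Id -> Var n
      | Shift -> Var (n + 1)
      | Dot (t, s') -> if n = 1 then t else lookup s' (n - 1)
      | Comp (s1, s2) -> Clos (lookup s1 n, s2)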

So about two weeks ago I decided that global inference is really only a bonus in the overall scheme of what my dissertation is trying to show. Not entirely coincidentally, this decision arrived together with a much firmer deadline commitment. So I scrapped inference and began working on just getting a bidirectional type-checker working for InforML, and now I have nearly the entire language type-checking. One lesson here is to always write a type-checker first. I was kind of aware of this rule already, but because I was starting from AspectML, which doesn't have a type-checker for the source language, I wound up diving directly into inference. At this point, though, I've probably rewritten the code to the point that barely anything from AspectML remains anyway.
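For anyone who hasn't seen the bidirectional style, the idea is to split the checker into an inference mode, where the term determines its type, and a checking mode, where an expected type is pushed into the term, with type annotations mediating between the two. Here is a toy OCaml version for a simply typed fragment; the real InforML checker is of course much larger, and everything here is illustrative rather than actual code from it.

    type ty = Base of string | Arrow of ty * ty

    type tm =
      | Var of string
      | App of tm * tm
      | Lam of string * tm          (* unannotated lambdas are checked, not inferred *)
      | Ann of tm * ty              (* an annotation switches from checking to inference *)

    exception Type_error of string

    (* Inference mode: the term determines its type. *)
    let rec infer (ctx : (string * ty) list) (t : tm) : ty =
      match t with
      | Var x ->
          (try List.assoc x ctx
           with Not_found -> raise (Type_error ("unbound variable " ^ x)))
      | App (f, a) ->
          (match infer ctx f with
           | Arrow (dom, cod) -> check ctx a dom; cod
           | _ -> raise (Type_error "application of a non-function"))
      | Ann (e, ty) -> check ctx e ty; ty
      | Lam _ -> raise (Type_error "cannot infer the type of an unannotated lambda")

    (* Checking mode: an expected type is pushed into the term. *)
    and check (ctx : (string * ty) list) (t : tm) (expected : ty) : unit =
      match t, expected with
      | Lam (x, body), Arrow (dom, cod) -> check ((x, dom) :: ctx) body cod
      | Lam _, _ -> raise (Type_error "lambda checked against a non-arrow type")
      | _ ->
          let actual = infer ctx t in
          if actual <> expected then raise (Type_error "type mismatch")

The Ann case is exactly where the annotation burden comes from: every place where inference gives up is a place where an annotation has to be written.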

In addition to tackling the easier problem of type-checking rather than global type-inference, I was able to expedite the implementation by taking advantage of the Twelf logical framework. Specifically, a number of decision problems required by type-checking, most importantly polymorphic subsumption (which requires unification), are converted into Twelf logic programs and queries and then evaluated using Twelf's tabled logic-programming engine. This simplifies many things considerably. However, it has the drawback that ill-typed programs may very well cause Twelf to diverge searching for a proof that doesn't exist, not to mention that when the search does terminate without finding a solution, the resulting error messages have no bearing on the input program. I think I may be able to improve the termination behavior by making the specification of some parts of the InforML language more algorithmic, but that will probably have to wait.
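Just to sketch the shape of that pipeline, without getting into the actual LF signature: each decision problem becomes a goal for some tabled relation, rendered in Twelf's concrete syntax and shipped off to be solved. Something like the following OCaml fragment, where the relation name sub, the type constructors, and the syntax of the rendered goal are all made up purely for illustration.

    (* Render a polymorphic-subsumption problem over a toy type language as a
       goal for a hypothetical tabled relation `sub`; solving it is then
       Twelf's job.  None of these names correspond to the real encoding. *)
    type ty =
      | TVar of string
      | Arrow of ty * ty
      | Forall of string * ty

    let rec render_ty (t : ty) : string =
      match t with
      | TVar a -> a
      | Arrow (t1, t2) -> "(arrow " ^ render_ty t1 ^ " " ^ render_ty t2 ^ ")"
      | Forall (a, body) -> "(forall [" ^ a ^ "] " ^ render_ty body ^ ")"

    (* "t1 is at least as polymorphic as t2" becomes a single logic-programming
       goal rather than a hand-written decision procedure. *)
    let subsumption_query (t1 : ty) (t2 : ty) : string =
      "sub " ^ render_ty t1 ^ " " ^ render_ty t2 ^ "."

    (* Produces: sub (forall [a] (arrow a a)) (arrow b b). *)
    let _ = print_endline
      (subsumption_query (Forall ("a", Arrow (TVar "a", TVar "a")))
                         (Arrow (TVar "b", TVar "b")))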

The next big step is to write a simple compiler for InforML that will spit out Scheme code. This means that I can avoid writing my own evaluator, and implementing some of the primitive operations will be easier than it would be if I built them into an interpreter. I'm hoping that getting this compilation step mostly working won't take more than a few days, so that I can then focus on writing examples and dissertating.
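The pleasant thing about targeting Scheme is that the compiler mostly just has to build s-expressions and print them, and an existing Scheme system does the rest. A toy OCaml sketch of the idea, with a made-up little source language, looks like this.

    (* Compile a tiny expression language to s-expressions and print them;
       evaluation is delegated entirely to an existing Scheme implementation.
       Purely illustrative; the InforML compiler deals with a far larger language. *)
    type expr =
      | Int of int
      | Var of string
      | Lam of string * expr
      | App of expr * expr
      | Prim of string * expr list   (* primitive operation, e.g. "+" *)

    (* S-expressions as the compilation target. *)
    type sexp = Atom of string | List of sexp list

    let rec compile (e : expr) : sexp =
      match e with
      | Int n -> Atom (string_of_int n)
      | Var x -> Atom x
      | Lam (x, body) -> List [Atom "lambda"; List [Atom x]; compile body]
      | App (f, a) -> List [compile f; compile a]
      | Prim (op, args) -> List (Atom op :: List.map compile args)

    let rec print_sexp (s : sexp) : string =
      match s with
      | Atom a -> a
      | List ss -> "(" ^ String.concat " " (List.map print_sexp ss) ^ ")"

    (* Prints ((lambda (x) (+ x 1)) 41), which any Scheme evaluates to 42. *)
    let _ = print_endline
      (print_sexp (compile (App (Lam ("x", Prim ("+", [Var "x"; Int 1])), Int 41))))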

I may take a little time to try implementing some local type inference, just so that I can avoid writing a number of terribly obvious type annotations, but then again I might just live with the annotations if they don't get too far out of hand while I'm writing examples.

Comments

Practical Foundations of Mathematics

While taking a break from hacking, I discovered that Taylor's Practical Foundations of Mathematics can be read online. However, it is all HTML, so much of the nicely typeset mathematics did not translate over. I have to admit that this does make me less inclined to buy the book just yet, simply because I can worry less about it becoming extremely difficult to find if it goes out of print.

It is also fairly amusing to note that this review says that Taylor's book is the book »of which [Linderholm's] Mathematics Made Difficult was a parody«.

Comments

Technical reading

I learned this morning that the double negation interpretation of classical logic into intuitionistic logic originates with Kolmogorov, not Gödel. However, arguably this is mostly just a consequence of research communities being isolated from one another at the time.
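For reference, Kolmogorov's version of the translation simply prefixes every subformula with a double negation, whereas Gödel's and Gentzen's later variants are more economical about where the negations go:

    \begin{align*}
    P^{K} &= \lnot\lnot P && \text{for atomic } P\\
    (A \land B)^{K} &= \lnot\lnot\,(A^{K} \land B^{K})\\
    (A \lor B)^{K} &= \lnot\lnot\,(A^{K} \lor B^{K})\\
    (A \to B)^{K} &= \lnot\lnot\,(A^{K} \to B^{K})\\
    (\forall x.\, A)^{K} &= \lnot\lnot\,\forall x.\, A^{K}\\
    (\exists x.\, A)^{K} &= \lnot\lnot\,\exists x.\, A^{K}
    \end{align*}

A formula is then provable classically exactly when its translation is provable intuitionistically.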

I came across this attribution while browsing through Paul Taylor's Practical Foundations of Mathematics late last night/this morning. The reviews of it on Amazon are pretty harsh, though I can see how his presentation of the material could be distracting to readers coming at it with less context. I suppose you could say that he has a rather chatty style. One example that jumped out at me was a discussion at the beginning of Chapter 2 paralleling mathematics and programming, with a sudden parenthetical aside on the fruitfulness of linear logic. It makes perfect sense to me, and actually gives the book a kind of personable feel. But I can definitely imagine the uninitiated getting annoyed, because they don't know what linear logic is about and it isn't really covered anywhere in the book. Regardless, I am rather tempted to buy it.

I finally decided I should learn about this Knuth-Bendix algorithm I'm always hearing about, so I visited the math/physics library this morning to pick up Computational Problems in Abstract Algebra, which contains the originating paper: »Simple word problems in universal algebras«.

Comments
