Also, this morning the Journal of Functional Programming accepted our extended version of "Boxes Go Bananas" as ready for publication. Just need to track down any last typos...
Archive for September, 2006
One thing I've been thinking about a bit lately is whether it might be possible to smooth over quite a number of the problems with Twelf by providing an additional meta-language: for example, a Haskell library that can interface with the Twelf server. The library could be used to do things the way they are done now, but it could also use something like the parametricity trick from "Boxes Go Bananas" to programmatically construct some bits of proofs. I guess I'm kind of thinking of a Delphin lite, perhaps. It would then be possible to write combinators that give you "generic" sums and products and false elimination.
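For concreteness, here is a minimal sketch of the kind of representation I have in mind. All of the names (`Tm`, `prodIntro`, `countLams`) are invented for illustration, and this is just the parametric higher-order abstract syntax flavor of the "Boxes Go Bananas" trick, not an actual Twelf interface:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Object-language terms, parametric in the variable representation.
-- Keeping the variable type abstract is what rules out "exotic" terms.
data Tm v
  = Var v
  | Lam (v -> Tm v)          -- binders via HOAS
  | App (Tm v) (Tm v)
  | Pair (Tm v) (Tm v)

type Term = forall v. Tm v

-- A combinator building a "generic product" introduction form: \x. \y. (x, y)
prodIntro :: Term
prodIntro = Lam (\x -> Lam (\y -> Pair (Var x) (Var y)))

-- Analyses are ordinary folds, instantiating the variable type as needed.
countLams :: Term -> Int
countLams t = go t
  where
    go :: Tm () -> Int
    go (Var _)    = 0
    go (Lam f)    = 1 + go (f ())
    go (App a b)  = go a + go b
    go (Pair a b) = go a + go b
```

Combinators like `prodIntro` could then be elaborated into LF terms to be shipped off to the Twelf server.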
The primary problem is that while it would be possible to enforce that HOAS is used correctly, it would be difficult to ensure that only well-typed LF terms are manipulated. The other serious problem is that it would require building a considerable amount of intelligence into the system to provide comprehensible error messages. As it is, Twelf doesn't always provide particularly enlightening error messages, so it might even require adding heuristics that interrogate the twelf-server for additional information that most users don't know to consult.
However, perhaps after I've had more time to work with Coq this semester, I'll conclude that such effort is pointless. Adam Chlipala was pushing programming entirely in Coq while I was at ICFP. It is something I've certainly considered before, but I don't think I have any projects at the moment where I can really justify the learning curve for doing so.
Recently I became aware of a 2005 Intel white paper on ray tracing. It is intriguing mostly because, aside from some demoscene hacks, the impression I've gotten is that ray tracing has been far too computationally expensive for real-time rendering. Quoting somewhat selectively from Chapter 13 ("The Future") of Real-Time Rendering by Möller and Haines:
Some argue that in the very long term, rendering may best be solved by some variant of ray tracing, in which huge numbers of rays sample the environment for the eye's view of each frame. And there will also be colonies on Mars, underwater cities, and personal jet packs.
However, on the preceding page they do say
For whatever reasons [...], Moore's Law is currently being beaten in this area. This situation seems unlikely to continue, and the application-stage processors are still following Moore's Law, but it is an interesting phenomenon. At a six-month doubling rate, an average frame of ABL [A Bug's Life] would be possible at real-time rates by 2007. We can always dream...
The book was published in 1999, but 2007 isn't far off.
In any event, the white paper claims that it isn't just increasing computational power that is making ray tracing viable. They argue, without much in the way of supporting mathematics, that for a fixed resolution, ray tracing scales logarithmically as scene complexity increases, whereas raster techniques presumably scale linearly. I couldn't find a clear statement of their conjectured scaling for raster techniques, but looking at their log-plot performance curves, I would surmise that it is at least a low polynomial.
Their experiments essentially supported this: for sufficiently complex scenes, a multithreaded software ray tracer running on a multicore processor would outperform a hardware raster renderer.
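To make the scaling argument concrete, here is a toy cost model. Every constant in it is invented, and it is only meant to illustrate why a logarithmic curve eventually beats a linear one, not to reflect the white paper's actual numbers:

```haskell
import Data.List (find)

-- Hypothetical cost model: each ray does a logarithmic walk of an
-- acceleration structure over n triangles, while rasterization touches
-- all n triangles with some fixed per-triangle cost.
rayCost :: Double -> Double -> Double
rayCost rays n = rays * logBase 2 n

rasterCost :: Double -> Double
rasterCost n = 50 * n  -- invented per-triangle constant

-- Smallest power-of-two scene size at which the ray tracer pulls ahead.
crossover :: Double -> Maybe Double
crossover rays = find (\n -> rayCost rays n < rasterCost n)
                      [2 ^ k | k <- [1 .. 40 :: Int]]
```

Under these made-up constants, a million rays per frame break even at around half a million triangles; past that point the logarithm wins by an ever-growing margin.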
It makes me want to sit down and write a nice ray tracer.
Also, I did spend a few minutes trying to get my colored pretty-printing support for GHC working. I kind of succeeded: I could actually produce colors, but due to the way the Hughes-Peyton Jones pretty printer wants certain things to commute, I couldn't come up with a good way to prevent parts of the accumulated document from getting pushed inside color blocks where they don't belong.
So I'm going to read through the paper when I get back to Philly and hopefully figure out a way to integrate "rich text" into Hughes-Peyton Jones pretty printing.
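For reference, the naive approach is just to splice the ANSI escape codes in as ordinary text with the `pretty` library. This sketch shows one reason that's unsatisfying even before the commuting problem: the escapes are invisible on screen but still count toward the layout width.

```haskell
import Text.PrettyPrint (Doc, render, text, (<+>))

-- Naive colouring: wrap a Doc in ANSI escape codes. The escapes are
-- ordinary text as far as the library is concerned, so they are counted
-- toward line widths, and layout decisions can push neighbouring text
-- inside the colour span.
red :: Doc -> Doc
red d = text "\ESC[31m" <> d <> text "\ESC[0m"

example :: String
example = render (red (text "error:") <+> text "type mismatch")
-- visible width is 20 characters, but the pretty printer sees 29
```

A proper fix presumably needs the rendering machinery itself to know about zero-width annotations, which is what I want to work out from the paper.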
We're nearing the halfway point of day two of the GHC hackathon. First SimonPJ gave a run-through of what "factorial" looks like at various stages of the compiler. Then SimonM gave us a tutorial on tracking down space leaks. SimonPJ is now explaining GHC as an API.
This afternoon it looks like I am going to work on improving the quality of pretty-printing for the Core language, as well as perhaps reviving "ExternalCore", which can be used if you want to write another front-end targeting the Core language.
Air travel is unsurprisingly as miserable as ever. But Portland is beautiful.
At the GHC Hackathon at the moment. Quite jet-lagged. I'll write more soonish.
If I had known that the world was going to take note of my musings on the merits of F# versus Scala quite so quickly, I probably would have spent a bit more time trying to make them meatier and more profound. Mostly I just had a few minutes to kill before lunch 🙂 I suppose after I've had more of a chance to play with F# I should write up a more detailed comparison to make sure my current instincts are correct.
While I think Scala has much potential, I've been thinking the past few days that perhaps I might be happier with F#. There are some features that are unique to each, for example, I don't believe that F# has anything like Scala's implicit arguments yet. However, I'm still having a little trouble wrapping my head around the best way to structure some things in Scala. Additionally, writing Scala code somehow feels rather verbose, much like programming in Java. I'd probably be able to do both things better with some practice (converge on a design and write more concise code). Whereas with F# I feel much more confident that I could just sit down and get code written. And I think I'd be happier getting code done rather than using a slightly whizzier language.
So I've installed F# and will probably start doing a little hacking once I'm satisfied with my presentations.
Lately, I've been spending most of my time working on my thesis proposal presentation (Mark II) and my presentation for "Good Advice for Type-directed Programming". Currently, I think I am going to do all of my revised proposal presentation in Omnigraffle. I had a vague epiphany a week or two ago about how to explain most of my research with pictures rather than words. We'll see if that works any better. I am going to try to reuse some of my diagrams for the "Good Advice" talk, but may need to integrate that with some LaTeX as that might be a better way to typeset some of the code that will need to go in the tail end of the talk.
It's hard to believe I'll be leaving for ICFP in about a week.
The other day it occurred to me that I really kind of miss the command-line interaction in older versions of AutoCAD (though I imagine current versions probably still have it somewhere). At least it was nice to be able to give it exact lengths, angles, and positions (relative or absolute), so that you could be sure you didn't accidentally eyeball something incorrectly.
I was noticing that OmniGraffle gives you the option to specify the geometry of elements via an inspector window, but somehow it didn't quite give the same feel. I'll have to load up Illustrator when I get home and check whether there are options for exact geometry that I just haven't found or noticed.