Recently I became aware of a 2005 Intel white paper on ray tracing. It is intriguing mostly because, aside from some demoscene hacks, the impression I've gotten is that ray tracing has been far too computationally expensive for real-time rendering. Quoting somewhat selectively from Chapter 13 ("The Future") of Real-Time Rendering by Möller and Haines:
Some argue that in the very long term, rendering may best be solved by some variant of ray tracing, in which huge numbers of rays sample the environment for the eye's view of each frame. And there will also be colonies on Mars, underwater cities, and personal jet packs.
However, on the preceding page they do say
For whatever reasons [...], Moore's Law is currently being beaten in this area. This situation seems unlikely to continue, and the application-stage processors are still following Moore's Law, but it is an interesting phenomenon. At a six-month doubling rate, an average frame of ABL [A Bug's Life] would be possible at real-time rates by 2007. We can always dream...
The book was published in 1999, but 2007 isn't far off.
In any event, the white paper claims that it isn't just increasing computational power that is making ray tracing viable. They argue, without much in the way of supporting mathematics, that for a fixed resolution, ray tracing scales logarithmically as scene complexity increases, whereas raster techniques presumably scale linearly. I couldn't find a clear mention in the article of their conjectured scaling for raster techniques, but looking at their log-plot performance curves, I would surmise that it is at least a low polynomial.
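The logarithmic claim rests on spatial acceleration structures: with something like a bounding volume hierarchy, a ray can discard half the remaining scene every time it misses a node's bounding box, so per-ray cost grows roughly with the depth of the tree rather than the number of primitives. Here is a minimal sketch of that idea (my own illustration, not code from the white paper), a toy BVH over spheres split by x-coordinate:

```python
import math

def hit_sphere(ray_o, ray_d, center, radius):
    """Smallest positive t where the ray hits the sphere, or None."""
    oc = [ray_o[i] - center[i] for i in range(3)]
    b = sum(oc[i] * ray_d[i] for i in range(3))  # assumes ray_d is unit length
    disc = b * b - (sum(x * x for x in oc) - radius * radius)
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

def hit_box(ray_o, ray_d, lo, hi):
    """Slab test: does the ray intersect the axis-aligned box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if abs(ray_d[i]) < 1e-12:
            if not (lo[i] <= ray_o[i] <= hi[i]):
                return False
            continue
        t0 = (lo[i] - ray_o[i]) / ray_d[i]
        t1 = (hi[i] - ray_o[i]) / ray_d[i]
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmin > tmax:
            return False
    return True

def build_bvh(spheres):
    """Recursively split a list of (center, radius) spheres into a binary tree."""
    lo = [min(c[i] - r for c, r in spheres) for i in range(3)]
    hi = [max(c[i] + r for c, r in spheres) for i in range(3)]
    if len(spheres) <= 2:
        return {"box": (lo, hi), "leaf": spheres}
    spheres = sorted(spheres, key=lambda s: s[0][0])  # naive split on x
    mid = len(spheres) // 2
    return {"box": (lo, hi),
            "left": build_bvh(spheres[:mid]),
            "right": build_bvh(spheres[mid:])}

def trace(node, ray_o, ray_d):
    """Nearest hit; prunes whole subtrees whose bounding box the ray misses."""
    if not hit_box(ray_o, ray_d, *node["box"]):
        return None  # entire subtree skipped -- this is where the log comes from
    if "leaf" in node:
        hits = [hit_sphere(ray_o, ray_d, c, r) for c, r in node["leaf"]]
        hits = [t for t in hits if t is not None]
        return min(hits) if hits else None
    l = trace(node["left"], ray_o, ray_d)
    r = trace(node["right"], ray_o, ray_d)
    return min(t for t in (l, r) if t is not None) if l or r else None

bvh = build_bvh([((float(i), 0.0, 0.0), 0.4) for i in range(100)])
print(trace(bvh, (-5.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```

A real implementation would use surface-area heuristics for splitting and a much tighter traversal loop, but even this toy version visits only a handful of nodes per ray instead of all hundred spheres, which is the asymmetry the paper's argument leans on. A rasterizer, by contrast, has to touch every triangle submitted to it at least once per frame.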
Their experiments essentially supported this: for sufficiently complex scenes, a multithreaded software ray tracer running on a multicore processor would outperform a hardware raster renderer.
It makes me want to sit down and write a nice ray tracer.
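The core of one is surprisingly small. A self-contained starting point (all of this is my own sketch, nothing from the paper) is to fire one primary ray per pixel from a pinhole camera and test it against a single sphere via the quadratic; everything else is refinement:

```python
import math

WIDTH, HEIGHT = 16, 8
CENTER, RADIUS = (0.0, 0.0, -3.0), 1.0  # one sphere in front of the camera

def shade(px, py):
    # Map the pixel to a ray direction through a pinhole camera at the origin.
    x = 2 * (px + 0.5) / WIDTH - 1
    y = 1 - 2 * (py + 0.5) / HEIGHT
    d = (x, y, -1.0)
    n = math.sqrt(sum(v * v for v in d))
    d = tuple(v / n for v in d)
    # Ray-sphere test: solve |o + t*d - c|^2 = r^2 with o at the origin;
    # a nonnegative discriminant means a hit (ignoring hits behind the
    # camera, for brevity).
    oc = tuple(-c for c in CENTER)
    b = sum(oc[i] * d[i] for i in range(3))
    disc = b * b - (sum(v * v for v in oc) - RADIUS * RADIUS)
    return "#" if disc >= 0 else "."

image = "\n".join("".join(shade(px, py) for px in range(WIDTH))
                  for py in range(HEIGHT))
print(image)
```

Running it prints a crude ASCII disc. Swap the character for a shaded color, add secondary rays for shadows and reflections, and you're most of the way to the classic Whitted recipe.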