When Ingo Wald began publishing substantial algorithmic improvements to the efficiency of raytracing, he drew a lot of attention from researchers throughout academia and the computer graphics industry. Yet more progress has been made since then, so today the words "realtime raytracing" are no longer a joke, but a reference to specific techniques and algorithms. (I know of no good overview of the current state of the art, but you could always browse around the relocated ompf.org forum.)
The old question of whether ray tracing is the "better" rendering algorithm is still being debated. In spite of this, some brave or foolhardy adventurers have set out to see how far realtime raytracing can be pushed. And indeed it can be pushed quite far.
All of the existing realtime raytracers render models made from planar triangles, just like commonly available rasterization hardware (i.e. graphics cards) does. This is the pragmatic and sensible thing to do, given that the whole 3D graphics industry has standardized on the flat triangle as the fundamental rendering primitive. Realtime raytracing tries to differentiate itself with more physically accurate lighting, or special effects that rasterizers cannot properly handle.
The trend in realtime graphics (i.e. games, mostly) has been ever-increasing geometric detail, i.e. ever more, ever smaller triangles. On the one hand, the shrinking triangular facets are used to model ever smaller wrinkles, nooks, and crannies. On the other hand, the triangles are shrinking to better approximate smooth, but curved, surfaces. In some sense, smooth curved surfaces do not contain a whole lot of information, so requiring so many tiny triangles to represent them is wasteful.
Before raytracing went realtime, it had another conceptual strength: it could fairly easily handle all kinds of different primitive objects. This advantage had to be given up over the course of the many significant optimizations which ultimately increased rendering speed by one or two orders of magnitude. To this day, realtime raytracing is limited to models made of flat triangular facets, and cannot easily be extended to handle other kinds of surfaces well.
This is where I, a non-pragmatic and non-sensible tinkerer, come in :-). I am trying to extend realtime raytracing to handle general quadrics, the simplest of all curved surfaces.
In a mathematical sense, quadric surfaces are the smallest step up from flat planes in terms of complexity. That is not so easy to explain without introducing quite a bit of mathematical terminology. So let me just say that planes are of polynomial degree one (which is usually called linear), while quadrics are of polynomial degree two. This means that the computational operations of addition, multiplication, and division are all that's needed to handle planes. Quadrics require square roots on top of that. (Both divisions and square roots used to be slow, but modern processors compute them much faster than they once did.)
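To make the degree-one versus degree-two distinction concrete, here is a minimal sketch (my own illustration, not code from any particular raytracer) of the two intersection tests. Finding where a ray hits a plane means solving a linear equation in the ray parameter t, which takes one division; hitting a sphere (the simplest quadric) means solving a quadratic, which brings in a square root via the discriminant. Function names and the epsilon threshold are just illustrative choices:

```python
import math

def ray_plane_t(origin, direction, n, d):
    """Ray/plane intersection: solve n . (origin + t*direction) + d = 0.
    The equation is linear in t, so a single division suffices."""
    denom = sum(ni * di for ni, di in zip(n, direction))
    if abs(denom) < 1e-12:
        return None  # ray is (nearly) parallel to the plane
    return -(d + sum(ni * oi for ni, oi in zip(n, origin))) / denom

def ray_sphere_t(origin, direction, center, radius):
    """Ray/sphere intersection: substituting the ray into the sphere
    equation gives a quadratic a*t^2 + b*t + c = 0, hence a square root."""
    oc = [oi - ci for oi, ci in zip(origin, center)]
    a = sum(di * di for di in direction)
    b = 2.0 * sum(di * oci for di, oci in zip(direction, oc))
    c = sum(oci * oci for oci in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere
    return (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
```

For a ray starting at (0, 0, -5) shooting along +z, the plane z = 0 is hit at t = 5, while a unit sphere at the origin is hit at t = 4, on its near side.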
Both planes and quadrics are mathematically quite well understood; concise formulas (closed form as a mathematician would say) are well known for most anything you need to calculate. So in that sense, too, quadrics are "simple" as I stated above. As it turns out, quadrics are simple enough that they can be fitted into the established framework of realtime raytracing; at least as far as the basic algorithm is concerned (I still have more optimizations to do, but they all seem quite possible to transfer to quadrics).
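As one example of those closed-form formulas: every quadric can be written as xᵀQx = 0 with a symmetric 4×4 matrix Q and homogeneous points x, and both the ray intersection and the surface normal then fall out in closed form (the normal is proportional to the gradient, i.e. to Qx at the hit point). The sketch below is my own stdlib-only illustration of that textbook formulation, not the optimized inner loop of an actual realtime raytracer:

```python
import math

def mat_vec(Q, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(Q[i][j] * v[j] for j in range(4)) for i in range(4)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_quadric(Q, origin, direction):
    """Intersect a ray with the quadric x^T Q x = 0 (Q symmetric 4x4).
    Returns (t, normal) for the nearest hit, or None on a miss."""
    o = list(origin) + [1.0]     # homogeneous point
    d = list(direction) + [0.0]  # homogeneous direction
    Qo = mat_vec(Q, o)
    Qd = mat_vec(Q, d)
    # Substituting o + t*d into x^T Q x = 0 yields a*t^2 + b*t + c = 0.
    a = dot(d, Qd)
    b = 2.0 * dot(d, Qo)
    c = dot(o, Qo)
    if abs(a) < 1e-12:  # degenerate case: equation is linear along this ray
        if abs(b) < 1e-12:
            return None
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / (2.0 * a)
    p = [oi + t * di for oi, di in zip(origin, direction)] + [1.0]
    grad = mat_vec(Q, p)  # normal is proportional to the gradient 2*Q*p
    return t, grad[:3]
```

With Q = diag(1, 1, 1, -1), the unit sphere, a ray from (0, 0, -5) along +z hits at t = 4 with an (unnormalized) normal pointing along -z, exactly as the sphere-specific formula gives. The appeal of this form is that one routine handles spheres, cylinders, cones, paraboloids, and so on, just by changing Q.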
This simplicity has drawbacks, too. If you imagine quadrics as elastic pieces of foil, and you wanted to build some given surface from such pieces, then you would find that quadrics are rather stiff and unwieldy. This does not matter for rendering, but is still a big problem that I'll have to address again later. Quadric surfaces can also be infinite along one or more directions of space, so they have to be trimmed in some way.
The ultimate goal is to let users model objects in some high level representation, and convert those objects automatically into piecewise quadric surfaces. In a perfect world, users would never have to know about quadrics. They should simply enjoy notably lower memory footprints and higher rendering speeds, when curved objects are tessellated into a few hundred quadrics rather than a few tens of thousands of triangles (that factor is probably between 20 and 50, rather than a full 100, but these are just guesses).
This is a problem that I could not really solve in the last 18 months. There are some prior results in the literature, for example the PhD theses of Baining Guo and Claudia Bangert. But I don't think those approaches are practical for my purposes.
I collected a few not-quite-solutions of my own that I don't consider sufficient either, but which were tantalizingly close to failure or to success (depending on whether the glass is half empty or half full). There are still a few more things I could try ... but for now, I decided to focus on the rendering side. A proof of concept demo program may be useless, but it is flashier than a notebook full of proof of concept formulas. :-)
Even assuming I were able to program a nice and competitive realtime raytracer that handles general quadrics, a lot of (potential) issues would need to be investigated before it became actually useful. Here is an incomplete list:
- the modeling problem
- numerical accuracy issues
- texture coordinates (for general quadrics, i.e. with potentially weird topology)
- a real lighting model (solved by others, but might not always easily carry over to quadrics)
- features, features, features ...
- user interface, or API
- infrastructure: a model editor at least, possibly middleware
- integration into existing software suites, e.g. Blender
So don't let me fool you into investing your venture capital. I have been pursuing this project primarily out of curiosity. I will soon have to direct my attention back to the kind of work that pays the bills. But until then I want to implement and test what I have found so far and document it here.