Lazy evaluation is obviously the way to go. I am not sure, however, that I would lock the granularity at the individual face level: it is probably not flexible enough to cover both close-up and far-away cases. For some good hints I would recommend taking a look at this and these. Hbr should be all you need, but I don't think you can ever get an efficient implementation without generating your limit samples in a non-recursive manner. Even if you replaced Hbr with the most highly optimized recursive code, you would still be very slow compared to an adaptive algorithm that switches to bicubic B-spline patches as early as possible.

Put differently, if you are writing a ray-tracer, Hbr is a very good start, but you will likely have to write a layer that is functionally equivalent to our EvalLimit code, only working in a lazy-evaluation mode rather than our greedy implementation. This is obviously a back-burner problem for us, since we already have a very efficient ray-tracer that has implemented all of this for quite a while now (PRman...)
- I only have fairly naive solutions to these problems at the moment, and a lot of other priorities...
- We would most likely be targeting our implementation at texturing, articulation, or simulation requirements, which will probably not translate directly into an efficient renderer...
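To make the bicubic-patch point concrete, here is a minimal, self-contained sketch (plain C++, not OpenSubdiv code, names invented for illustration) of evaluating a limit point directly as a uniform bicubic B-spline patch. For a regular Catmull-Clark face the 4x4 control points are just the vertices of the face's one-ring neighbourhood, so no recursive subdivision is needed:

```cpp
struct Vec3 { double x, y, z; };

// r = a + b * s (tiny helper so the sketch stays dependency-free)
static Vec3 madd(Vec3 a, Vec3 b, double s) {
    Vec3 r = { a.x + b.x * s, a.y + b.y * s, a.z + b.z * s };
    return r;
}

// Uniform cubic B-spline basis weights at parameter t in [0,1].
static void bsplineWeights(double t, double w[4]) {
    double t2 = t * t, t3 = t2 * t;
    w[0] = (1.0 - 3.0 * t + 3.0 * t2 - t3) / 6.0;  // (1-t)^3 / 6
    w[1] = (4.0 - 6.0 * t2 + 3.0 * t3) / 6.0;
    w[2] = (1.0 + 3.0 * t + 3.0 * t2 - 3.0 * t3) / 6.0;
    w[3] = t3 / 6.0;
}

// Evaluate the limit point of a regular 4x4 B-spline patch at (u,v).
// In a regular region of a Catmull-Clark mesh the limit surface *is*
// this patch, so the sample can be generated non-recursively.
Vec3 evalBSplinePatch(const Vec3 cp[4][4], double u, double v) {
    double wu[4], wv[4];
    bsplineWeights(u, wu);
    bsplineWeights(v, wv);
    Vec3 p = { 0.0, 0.0, 0.0 };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p = madd(p, cp[i][j], wu[i] * wv[j]);
    return p;
}
```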
Hope this helps!
The small detail I forgot ...
I'll think about the idea of adaptive lazy refinement (and intersecting bicubic patches whenever possible). However, I have tried comparing adaptive and uniform refinement in the glViewer, and I see significant differences between the two even for very simple shapes. The catmark cube is one of the worst: some parts of the shape show very poor refinement compared to the limit shape. With the more complex shapes (chess pieces, car), Far does a better job.
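Just to check my understanding, here is a rough structural sketch of what I picture by adaptive lazy refinement (every type and helper below is a hypothetical placeholder, nothing from Hbr or Far): a face is only subdivided when a ray actually reaches its bound, and as soon as a sub-face is regular it is intersected directly as a bicubic patch instead of being refined further:

```cpp
#include <vector>

// Hypothetical types standing in for an Hbr-style control mesh and a
// ray-tracer -- placeholders only, not OpenSubdiv API.
struct Face;
struct Ray;
struct Hit;

bool rayHitsBound(const Face & f, const Ray & r);             // assumed: cheap bound test
bool isRegularBSpline(const Face & f);                        // assumed: regular, crease-free face
bool intersectPatch(const Face & f, const Ray & r, Hit & h);  // assumed: direct bicubic intersection
std::vector<Face*> & refineOnce(Face & f);                    // assumed: subdivide once, cache children

// Lazy adaptive traversal: a face is only refined when a ray reaches its
// bound, and refinement stops as soon as the sub-face can be represented
// (or, at maxDepth, approximated) by a bicubic patch.
bool intersect(Face & f, const Ray & r, Hit & h, int depth, int maxDepth) {
    if (!rayHitsBound(f, r))
        return false;                        // never refine geometry no ray touches
    if (isRegularBSpline(f) || depth == maxDepth)
        return intersectPatch(f, r, h);      // switch to a patch as early as possible
    bool found = false;
    for (Face * child : refineOnce(f))       // children created on first use, then reused
        found |= intersect(*child, r, h, depth + 1, maxDepth);
    return found;
}
```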