Does this generally work? Or do some cards not like it?
Cheers
I don't get it.
What is the desired effect?
Do you want some kind of progressive image quality? I.e. it loads
the 256x256 mips first, the image is blurry, and while the user is
watching the quality improves as the larger mip levels load?
--
wojtek
Yes. Rather than waiting for all the textures to load, the engine can
start right away.
When your textures, as DDS files, total 400 MB, loading takes a while.
Why is this unreasonable?
> What is the desired effect?
>
> Do you want some kind of progressive image quality? I.e. it loads
> the 256x256 mips first, the image is blurry, and while the user is
> watching the quality improves as the larger mip levels load?
I think he's thinking about incremental LOD.
I've never tried it, and I'm not sure if it works reliably, but one
can theoretically use the texture parameters GL_TEXTURE_BASE_LEVEL
and GL_TEXTURE_MAX_LEVEL to select the mipmap levels to actually
use. By default the base level is 0 and the max level is 1000. For
what you're trying to accomplish you must set the base level to the
most detailed mipmap level you have loaded.
However, I'm pretty sure that drivers are bug-prone in this area
(though, as I said, I've never tried it).
Nevertheless, in a real-world application incremental texture
loading done that way rarely makes sense, as textures are mostly
shared across large parts of a scene, so all mipmap levels end up
being required anyway. If it is visual detail you're after, forget
about ultra-high-resolution textures. Instead use multitexturing
and combine multiple textures at different scales.
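A completely untested sketch of what I mean (the sizes and the
mipData array are just placeholders; assume an 11-level chain for a
1024x1024 texture where only the coarse levels 4..10 have been
decoded so far):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Upload only the coarse levels 4..10 (64x64 down to 1x1).
   mipData[] is assumed to hold the decoded pixels per level. */
for (int level = 4; level <= 10; ++level)
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8,
                 1024 >> level, 1024 >> level, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, mipData[level]);

/* Restrict sampling to the levels that actually exist. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 10);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);

/* Later, when a finer level (128x128) has been decoded: */
glTexImage2D(GL_TEXTURE_2D, 3, GL_RGBA8, 128, 128, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, mipData[3]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 3);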
Wolfgang Draxinger
--
E-Mail address works, Jabber: hexa...@jabber.org, ICQ: 134682867
Once again, some confusion about what I was asking. I thought I made
it fairly clear - I want to use a lower mipmap level simply because I
do not wish to wait for the higher mipmap level to load. Having to
load it in and then use an extension to turn off the higher mipmap is
of no use whatsoever.
I've seen so many apps stutter when they dynamically load textures
that I thought there must be a way to avoid it, and one way must be
to avoid loading all the mipmaps in one go.
I thought Wolfgang's answer was on target. Load a subset of the full
set of mipmaps. Tell OpenGL the maximum level is only what you
have loaded. If your application needs to load another level, then
load it, use glTexImage2D with the new "level" value, and tell OpenGL
the maximum level has increased by 1. Effectively, you need to skip
the gluBuild*Mipmaps functions and load the levels yourself.
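In code, that step is roughly the following (untested; tex, width,
height, and the pixel pointer are placeholders):

/* Levels 0..currentMax are already uploaded.  Add the next coarser
   level and widen the range OpenGL is allowed to sample. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, currentMax + 1, GL_RGBA8,
             width >> (currentMax + 1), height >> (currentMax + 1), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, newLevelPixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, currentMax + 1);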
For that matter, if the standard "box average" for computing mipmaps
is sufficient, load level 0 and compute your own levels 1, 2, and so
on. You
can even do this by rendering level N to a framebuffer object to get
level N+1.
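A rough sketch of the CPU version of that box average (untested;
RGBA8 data assumed):

/* Produce level N+1 (dw x dh) from level N (sw x sh) by averaging
   each 2x2 block of texels. */
static void box_downsample(const unsigned char *src, int sw, int sh,
                           unsigned char *dst)
{
    int dw = sw / 2, dh = sh / 2;
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x)
            for (int c = 0; c < 4; ++c)
            {
                int sum = src[((2*y    )*sw + 2*x    )*4 + c]
                        + src[((2*y    )*sw + 2*x + 1)*4 + c]
                        + src[((2*y + 1)*sw + 2*x    )*4 + c]
                        + src[((2*y + 1)*sw + 2*x + 1)*4 + c];
                dst[(y*dw + x)*4 + c] = (unsigned char)(sum / 4);
            }
}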
--
Dave Eberly
http://www.geometrictools.com
You can pass a null pointer to glTexImage2D() and it will "allocate"
memory for the texture without uploading any data. After that you can
upload image data in your own time with glTexSubImage2D() (do it a
pixel at a time if you want to).
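Something along these lines (untested; sizes, rowOffset and
stripPixels are just placeholders):

/* Allocate storage for a 2048x2048 RGBA8 level without uploading
   any data. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Later, fill it in a strip at a time, spread over several frames. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, rowOffset, 2048, 256,
                GL_RGBA, GL_UNSIGNED_BYTE, stripPixels);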
> Could I load say a 4096x4096 texture by loading 1 mipmap at a time? ie:
> Say I have loaded all mipmaps up to the 2048x2048 mipmap, then I just
> glBindTexture the texture, and then glTexImage the last mipmap (4096x4096)?
>
If you want to "add" an extra mipmap to an existing texture then I'm
not sure what will happen. In theory it should work but I've found
it's best not to do abnormal things if you're writing code for general
release.
That's not what Wolfgang appeared to be saying. However, my question was
whether this actually works across a range of [modern] graphics cards.
The problem is whether you can create a texture, display it a few times,
add mip-maps to the texture, then display it again without any side effects.
> and tell OpenGL
> the maximum level has increased by 1.
Is this something explicit?
> Effectively, you need to skip
> using gluMipmap functions and load the levels yourself.
My DDS textures contain all the mipmaps; it's actually faster that
way. For some reason the delay is in loading the DDS texture onto
the graphics card - I would have thought it would be a simple
transfer over the PCI-X bus, but it seems to take a while.