wxAutoBufferedPaintDC can cause very slow rendering if drawing bitmaps with alpha on MSW.
The reason is that the backing store bitmap used by wxAutoBufferedPaintDC is a DDB, not a DIB. If you draw an alpha bitmap onto the DC, AlphaBlt uses wxAlphaPixelData, which, when the target is not a DIB, makes a copy of the whole backing store. It does this for every bitmap draw (which can be pretty bad with wxAuiToolBar).
I've worked around it by making my own double buffering solution that uses this:
```cpp
// Convert to DIB. This way, if transparent images are drawn then
// you do not get a DDB=>DIB=>DDB conversion via temporary
// expensive resource allocations!
store.ConvertToDIB();
```
That meant I had to derive from wxAuiToolBar and re-implement OnPaint. It would be nice to be able to avoid doing that.
Originally posted by @petebannister in #23585 (comment)
For the record, this is probably the same problem as alluded to here:
Except that using a depth of 24 doesn't work when alpha is actually used, of course. OTOH, fixing this would allow removing the workaround in the code above.
KiCad also spends 90% of time drawing bitmaps while resizing
After updating from 3.2.4 to 3.2.5, `aDc.GetAsBitmap().ConvertToDIB();` causes a (recursive) assert: "deleting bitmap still selected into wxMemoryDC".
It's normal that destroying a bitmap selected into a memory DC results in an assert; this really shouldn't be done.
The question is whether we should always create the backing store bitmap as a DIB, and I am still not sure about the answer. I think that in general using a DIB should be faster, because it is a better target for drawing into, which happens many times while composing the buffer, whereas it is only blitted to the screen once. But it's not at all impossible that there are situations in which drawing to a DDB is fast anyhow and blitting it to the screen is faster. I don't have any benchmarks to check this, however, and even if we had one (or I wrote one), there are so many different combinations of Windows version, display depth, video driver etc. that it seems impossible to test all of them.
I think we could experiment with always using a DIB here in master, but I'm really not sure about doing this in 3.2.
BTW, I forgot to mention, but the change needed to always create DIBs is trivial:
```diff
diff --git a/src/common/dcbufcmn.cpp b/src/common/dcbufcmn.cpp
index ba31002c1d..74710fe561 100644
--- a/src/common/dcbufcmn.cpp
+++ b/src/common/dcbufcmn.cpp
@@ -84,7 +84,7 @@ private:
         // we must always return a valid bitmap but creating a bitmap of
         // size 0 would fail, so create a 1*1 bitmap in this case
-        buffer->CreateWithLogicalSize(wxMax(w, 1), wxMax(h, 1), scale);
+        buffer->CreateWithLogicalSize(wxMax(w, 1), wxMax(h, 1), scale, 32);
         return buffer;
     }
```
so it's not at all a problem to do it; the problem is determining whether it's really a good idea.