I'm new to UWP app development and trying to build an image processing app. I was using the code in -us/windows/uwp/audio-video-camera/imaging, but I got an exception like this: "SoftwareBitmapSource::SetBitmapAsync only supports SoftwareBitmap with positive width/height, bgra8 pixel format and pre-multiplied or no alpha." on the call await source.SetBitmapAsync(sbitmap);. I'm wondering whether this method really has so many limitations and, if so, whether there is an alternative with fewer of them. The code snippet is as follows:
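For what it's worth, the usual way around that exception is to re-encode the bitmap into the one layout SetBitmapAsync accepts before handing it over. Below is a minimal sketch in C++/WinRT (the helper name and the coroutine wrapper are my own, not from the question; the same SoftwareBitmap::Convert call is available from C# as SoftwareBitmap.Convert):

    // Minimal sketch (names are illustrative): convert whatever the decoder
    // produced into Bgra8 with premultiplied alpha, which is the only layout
    // SoftwareBitmapSource::SetBitmapAsync accepts.
    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Graphics.Imaging.h>
    #include <winrt/Windows.UI.Xaml.Media.Imaging.h>

    using namespace winrt;
    using namespace Windows::Graphics::Imaging;
    using namespace Windows::UI::Xaml::Media::Imaging;

    Windows::Foundation::IAsyncAction ShowBitmapAsync(SoftwareBitmap bitmap,
                                                      SoftwareBitmapSource source)
    {
        if (bitmap.BitmapPixelFormat() != BitmapPixelFormat::Bgra8 ||
            bitmap.BitmapAlphaMode() == BitmapAlphaMode::Straight)
        {
            bitmap = SoftwareBitmap::Convert(bitmap,
                                             BitmapPixelFormat::Bgra8,
                                             BitmapAlphaMode::Premultiplied);
        }
        co_await source.SetBitmapAsync(bitmap);
    }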
I tried to record all topics with rosbag record -a, but it seems there is a problem with the compression of the depth images. The input formats are mono8 and bgra8 respectively, while the depth compression requires a single-channel 32-bit float image. Here is the error message:
UPDATE: Sorry for the confusion. This question is more about the situation where I set internalFormat in glTexImage2D to something like "bgra8" but the video driver internally converts the data to another format, like "rgba8".
There are two things that you might call "texture formats". The first is the internalformat parameter. This is the real format of the image as OpenGL stores it. The format parameter describes part of the format of the pixel data you are providing with the data parameter.
To put it another way, format and type define what your data looks like. internalformat is how you're telling OpenGL to store your data. Let us call format and type the "pixel transfer format", while internalformat will be the "image format".
Or to put it another way, your pixel data that you give OpenGL can be stored in BGR order. But the OpenGL implementation decides on its own how to actually store that pixel data. Maybe it stores it in little-endian order. Maybe it stores it big-endian. Maybe it just arbitrarily rearranges the bytes. You don't know, and OpenGL does not provide a way to find out.
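As a concrete illustration (width, height and pixels are placeholders), here is how the two kinds of format show up in a single glTexImage2D call:

    // internalformat (GL_RGBA8) is the image format: how OpenGL stores the texels.
    // format/type (GL_BGRA, GL_UNSIGNED_BYTE) are the pixel transfer format: how
    // the bytes you pass in through `pixels` are laid out.
    glTexImage2D(GL_TEXTURE_2D,
                 0,                 // mipmap level
                 GL_RGBA8,          // image format
                 width, height,
                 0,                 // border, must be 0
                 GL_BGRA,           // pixel transfer format
                 GL_UNSIGNED_BYTE,  // pixel transfer type
                 pixels);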
Generally most hardware doesn't support 3-component texture formats, so in this specific example you can make a reasonably safe assumption that it's converted. A 3-component OpenGL texture is actually 4-component in hardware, but with the alpha component ignored/set to 255/whatever.
For a more general case, glTexSubImage2D performance can give you a good indication of whether or not the upload must go through a software conversion path. You'd go about this during program startup - just create an empty texture (via glTexImage2D with NULL data), then issue a bunch of glTexSubImage2D calls, each followed by a glFinish to ensure it completes. Ignore the first one because caches/states/etc are being setup for it, and time the rest of them. Repeat this for a few different formats and pick the fastest.
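A rough sketch of that probe might look like this (the helper name, the GL_RGBA8 internal format and the use of std::chrono for timing are my choices, not part of the answer above; it assumes a current GL context):

    #include <chrono>
    #include <vector>
    #include <GL/gl.h>   // or whatever GL loader header you normally use

    double AverageUploadMs(GLenum format, GLenum type,
                           GLsizei width, GLsizei height, int iterations)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        // Allocate storage only; the NULL data pointer leaves the texels undefined.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     format, type, nullptr);

        std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4, 0xFF);

        double totalMs = 0.0;
        for (int i = 0; i <= iterations; ++i)
        {
            auto start = std::chrono::steady_clock::now();
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            format, type, pixels.data());
            glFinish();  // make sure the upload has actually completed
            auto end = std::chrono::steady_clock::now();

            if (i > 0)   // discard the first upload: caches/state are still warming up
                totalMs += std::chrono::duration<double, std::milli>(end - start).count();
        }

        glDeleteTextures(1, &tex);
        return totalMs / iterations;
    }

Calling this for a few candidate pixel transfer formats (GL_BGRA vs GL_RGBA, say) and comparing the averages is the "pick the fastest" step.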
Another alternative - seeing as you've tagged this question "Windows" - is to create a D3D device at startup (just after you've created your window but before you init OpenGL) then use D3D to check for format support. While OpenGL allows for software conversion to a valid internal format, D3D doesn't - if it's not supported by the hardware you can't do it. Then destroy D3D and bring up OpenGL. (This technique can also be used for querying other stuff that OpenGL doesn't let you query directly).
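A sketch of that check, assuming D3D9 (the helper name and the X8R8G8B8 adapter format are assumptions; strictly speaking the IDirect3D9 interface is enough for this query, you don't even need a full device):

    // Ask D3D9 whether the hardware can use `fmt` as a texture format; D3D gives
    // a hard yes/no with no software-conversion fallback.
    #include <d3d9.h>

    bool HardwareSupportsTextureFormat(D3DFORMAT fmt)
    {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d)
            return false;

        HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT,
                                            D3DDEVTYPE_HAL,
                                            D3DFMT_X8R8G8B8,  // assumed display/adapter format
                                            0,                // usage flags
                                            D3DRTYPE_TEXTURE,
                                            fmt);
        d3d->Release();
        return SUCCEEDED(hr);
    }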
It's actually a useful rule of thumb to cross-check what you're doing in OpenGL against the D3D documentation. If D3D can't do something, that can be a good indication that the feature in question is not supported in hardware. Not always true, but a useful starting place.
I've written a simple node which reads in a png file with transparency and publishes it to a certain topic. The topic is then subscribed to by mapviz to display the image as an overlay. While the image is shown, the background isn't rendered as transparent despite setting CvImage's encoding to "bgra8".
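For reference, a minimal version of such a node might look like the sketch below (the node, topic and file names are placeholders, not taken from the question):

    // Minimal sketch: read a PNG with its alpha channel and publish it as a
    // latched bgra8 sensor_msgs/Image.
    #include <ros/ros.h>
    #include <cv_bridge/cv_bridge.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/image_encodings.h>
    #include <opencv2/imgcodecs.hpp>

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "overlay_image_publisher");
        ros::NodeHandle nh;
        ros::Publisher pub =
            nh.advertise<sensor_msgs::Image>("overlay_image", 1, /*latch=*/true);

        // IMREAD_UNCHANGED keeps the alpha channel, so the Mat is 4-channel BGRA.
        cv::Mat image = cv::imread("overlay.png", cv::IMREAD_UNCHANGED);

        cv_bridge::CvImage msg;
        msg.header.stamp = ros::Time::now();
        msg.encoding = sensor_msgs::image_encodings::BGRA8;
        msg.image = image;

        pub.publish(msg.toImageMsg());
        ros::spin();
        return 0;
    }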
I've just had a look at the source code for the mapviz image plugin and you're right, you can see here that it doesn't currently support RGBA image rendering. There are two parts which will need to be modified in order to get this working if you want to adapt it.
A lot of Metal examples, including ones from Apple, use MTLPixelFormat.bgra8Unorm as the pixelFormat in the pipeline state. I was wondering if you had any insight as to whether or not this might be preferred over, say, MTLPixelFormat.rgba8Unorm, which has a more familiar RGBA ordering or even something like MTLPixelFormat.rgba8Unorm_srgb.
The reason is one of convenience for programmers. x86 and ARM are little-endian processors, so when you write an unsigned integer as a literal in code, its bytes end up in memory in the reverse of the order they appear in the source. Packing A, R, G, B into a 32-bit literal therefore gives you B, G, R, A in memory, which is exactly the bgra8 layout.
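A tiny demonstration of that point (the colour value is arbitrary):

    // A 32-bit ARGB literal on a little-endian machine lands in memory as
    // B, G, R, A bytes, i.e. the bgra8 layout.
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        std::uint32_t argb = 0xFF336699;  // A=0xFF, R=0x33, G=0x66, B=0x99
        const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&argb);

        // On x86/ARM this prints "99 66 33 ff", i.e. B G R A.
        std::printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
        return 0;
    }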