Frame Capture


Crisoforo Schuhmacher

Aug 5, 2024, 3:31:48 AM
to unitrewer
My application performs several rendering operations on the first frame (I am using Metal, although I think the same applies to GLES). For example, it renders to targets that are used in subsequent frames but not updated after that. I am trying to debug some of the draw calls from these rendering operations, and I would like to use the 'GPU Capture Frame' functionality to do so. I have used it in the past for on-demand GPU frame debugging, and it is very useful.

Unfortunately, I can't seem to find a way to capture the first frame. For example, the option is unavailable while stopped in the debugger (at a breakpoint set before the first frame). The Xcode behaviors also don't seem to allow capturing a frame as soon as debugging starts. There also doesn't appear to be an API for performing GPU captures, either in the Metal APIs or on CAMetalLayer.


1. To tell Xcode to begin capturing a frame, add a breakpoint in your code somewhere before the point at which you want capture to start. Right-click the breakpoint, select Edit Breakpoint... from the pop-up menu, and add a Capture GPU Frame action to the breakpoint.

2. To mark the start of the frame to capture, before the first occurrence of MTLCommandBuffer presentDrawable:, use the MTLCommandQueue insertDebugCaptureBoundary method. For example, you could invoke this method as soon as you instantiate the MTLCommandQueue, to immediately begin capturing everything submitted to the queue. Make sure the breakpoint in item 1 is triggered before this code is invoked.

3. To mark the end of the captured frame, either rely on the first normal occurrence of MTLCommandBuffer presentDrawable:, or add a second invocation of MTLCommandQueue insertDebugCaptureBoundary.

4. Note that insertDebugCaptureBoundary does not itself cause the frame to be captured. It just marks a boundary point, so you can leave it in your code for future debugging use. Wrap it in a DEBUG compilation conditional if you want it gone from production code.
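The insertDebugCaptureBoundary step above can be sketched as follows (Swift shown; the device and queue setup is illustrative, not from the original post). Note that on newer SDKs this method is deprecated in favor of MTLCaptureManager, which also provides a fully programmatic way to start and stop captures:

```swift
import Metal

// Create the queue and immediately mark a capture boundary, so that
// everything submitted from here on falls inside the captured "frame"
// once the breakpoint's Capture GPU Frame action has fired.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this machine")
}

#if DEBUG
// Marks the start of the frame for the Xcode GPU frame debugger.
// Deprecated on newer SDKs, but harmless to leave in debug builds.
queue.insertDebugCaptureBoundary()
#endif

// ... encode and commit command buffers as usual ...

#if DEBUG
// Optional: a second boundary marks the end of the captured frame,
// useful if presentDrawable: is never called (offscreen rendering).
queue.insertDebugCaptureBoundary()
#endif
```

This sketch cannot run headless, so treat it as a template rather than a drop-in snippet.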


Help! I always use this feature to capture a shot from video footage from my DJI Phantom; it saves me the time of snapping lots of still pictures. When I used the "capture frame" drop-down menu from the video playback bar in Library, the captured frame would appear right next to the video in the line of photos at the bottom of the window. Now it shows up as a stack, and even though I have tried all the options under Stacking when I right-click the selected frame, I only get a change in how the stack icon is shown; I don't see the captured file anywhere. This has only recently become a problem, so I assume an update changed something in my settings, or this is a bug. On a scale of 1-10, my knowledge of this program is between 2 and 3, so if I am missing something simple, please keep that in mind. Thanks!


It may be that you have a filter set that is preventing the captured frame from showing up in the filmstrip at the bottom. Make sure that Library > Enable Filters is unchecked and then try capturing the frame again.


Next, stacks may be collapsed by default. Do Photo > Stacking > Expand All Stacks. Now when you capture a frame, it should appear in a stack at the bottom, but with the stack already expanded.


Finally, if you are currently viewing a collection or smart collection that contains the video (that is, if the collection is selected as a source in the Collections panel in the left column of Library), then when you capture a frame, it won't show up at all, because LR doesn't automatically add the frame photo to the collection.


I have the same issue, and it's not that the video is in a collection; Filters is already unchecked. When you import, your files go into an Import "bucket" that acts like a collection. If you right-click any image and select "Go to Folder in Library", you will see the image you captured from video as expected.


Thanks, that's one more thing to check. When you import, the current source changes by default to Previous Import in the Catalog panel on the left-hand side. As you observed, with respect to stacking, Previous Import acts like a collection, and stacks are specific to the collection or folder in which the stack is created. Capture Frame creates a stack in the folder containing the video, so you can only see that stack when the current source is set to that folder or to All Photographs. (One more wart caused by LR's weird stacking design.)


The only way I could get the captured frame to appear was by creating a collection with the video inside it, capturing the frame I wanted while in the collection, then going to Library > Grid View. Then and only then did the captured frame appear.


My OS X app is successfully rendering content to the view, and that content is visible and dynamic in the app window. That must be distinctly understood, or nothing wonderful can come of this tale (with apologies to Mr. Dickens).


When I attempt to use Metal frame capture, things get messed up. The rendering commands are captured correctly, but all attachments (color, depth, etc.), displayed by clicking any draw command in the frame stack, are blank and show no content at all. The captured scene in the running app also vanishes. In addition, inspecting the vertex attribute buffers shows no content either, although the index and uniform buffers are populated.


To reiterate: the app is rendering content correctly, the frame is captured in Xcode, and I can successfully inspect all the commands, render state, and bound objects in the frame capture, but all framebuffer attachments and vertex attribute buffers are empty.


This is not my first visit to the frame capture rodeo, and many very similar apps are capturing correctly. I've been digging for hours, but can't seem to find any difference between apps that are captured correctly and this app, where the framebuffers and vertex attributes are not captured.


The issue was that the app causing trouble with Xcode Metal frame capture was using Private memory storage for its vertex attribute and index buffers. Changing the vertex buffers to Managed memory fixed the issues with frame capture. It was not immediately obvious that this was the cause, because the contents of the vertex index buffer do appear in the frame capture, even though the index buffer also uses Private storage.


The second option is the preferred choice. Changing memory storage modes within the app just to be able to use frame capture is inconvenient at best and somewhat risky at worst, as it relies on the developer remembering to toggle back, or suffering possible unexpected performance degradation. The Metal debug tooling should really handle this under the covers automatically.
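One way to reduce the risk of forgetting to toggle back is to choose the storage mode at compile time. A minimal sketch (macOS only; .storageModeManaged does not exist on iOS, where .storageModeShared would be the debug-build fallback — the variable name here is my own):

```swift
import Metal

// Pick the buffer storage mode at compile time: Private in release
// builds for performance, Managed in debug builds so the Xcode frame
// debugger can display the buffer contents.
#if DEBUG
let vertexBufferOptions: MTLResourceOptions = .storageModeManaged
#else
let vertexBufferOptions: MTLResourceOptions = .storageModePrivate
#endif

// Hypothetical usage, assuming `device` and `length` exist in your app:
// let buffer = device.makeBuffer(length: length, options: vertexBufferOptions)
```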


FrameShots was created as a very simple, powerful video image capture program. The software captures images from video files such as MPEG, DivX, Xvid, WMV, AVI, or almost any other movie file you have. You can quickly and easily step frame by frame through the video file to capture exactly the right frame for your thumbnail and save it to an image file.


A frame capture output is an output that you set up to create still images from video. You set it up similarly to a regular File group output group; however, you remove the audio component, choose No container for the container, and choose Frame capture to JPEG for the video codec.
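As an illustrative MediaConvert job-settings fragment for the setup described above (field names follow the MediaConvert API; the framerate, MaxCaptures, and Quality values here are made up, and the destination reuses the example bucket from this thread):

```json
{
  "OutputGroups": [{
    "Name": "File Group",
    "OutputGroupSettings": {
      "Type": "FILE_GROUP_SETTINGS",
      "FileGroupSettings": {
        "Destination": "s3://DOC-EXAMPLE-BUCKET/frameoutput/"
      }
    },
    "Outputs": [{
      "ContainerSettings": { "Container": "RAW" },
      "VideoDescription": {
        "CodecSettings": {
          "Codec": "FRAME_CAPTURE",
          "FrameCaptureSettings": {
            "FramerateNumerator": 1,
            "FramerateDenominator": 5,
            "MaxCaptures": 36,
            "Quality": 80
          }
        }
      }
    }]
  }]
}
```

Note the absence of any AudioDescriptions: the audio component is removed, as the description says.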


Because the service includes automatic numbering in the frame capture file names, you can infer all the image names from the final one. For example, if your outputFilePaths value is s3://DOC-EXAMPLE-BUCKET/frameoutput/file.0000036.jpg, you can infer that there are 35 other images in the same location, named file.0000001, file.0000002, and so on.
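The numbering scheme above is easy to invert in code. A small sketch, assuming the seven-digit zero padding of the example file names (the function name is my own):

```python
import os


def frame_capture_names(final_path: str) -> list[str]:
    """Given the last frame-capture output path, e.g.
    '.../file.0000036.jpg', reconstruct every name in the sequence."""
    directory, filename = os.path.split(final_path)
    base, number, ext = filename.rsplit(".", 2)  # 'file', '0000036', 'jpg'
    count = int(number)
    width = len(number)  # preserve the zero padding of the final name
    return [
        os.path.join(directory, f"{base}.{i:0{width}d}.{ext}")
        for i in range(1, count + 1)
    ]


names = frame_capture_names(
    "s3://DOC-EXAMPLE-BUCKET/frameoutput/file.0000036.jpg"
)
print(len(names))  # 36
print(names[0])    # s3://DOC-EXAMPLE-BUCKET/frameoutput/file.0000001.jpg
```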


Yes 0x00000003 is the same as 0x3.
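Leading zeros in a hex literal only pad the notation; the numeric value is unchanged, which is easy to confirm:

```python
# Leading zeros do not change the value of a hex literal.
print(0x00000003 == 0x3)  # True
print(0x00000003)         # 3
```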

The only thing you should enable is "Fix VMR9 Scaling Bug", but only if you notice scaling issues in media files.



Another solution would be to use the Haali Renderer - it supports frame grabs and should not suffer from this color issue.


I am getting used to Wireshark and am currently investigating the different frame types (control, management, and data frames) using the filters from these links ( -... or -content...). However, I don't know why I cannot find the same information in my pcap file ( ). I also tried searching through other files, but nothing I expected was found.


The reason you cannot see control/management/data frames is that they are visible only in monitor-mode 802.11 capture configurations, while the encapsulation type of the sample you provided is Ethernet.
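The management/control/data distinction comes from the Type bits of the 802.11 frame control field, which simply is not present in an Ethernet-encapsulated capture. A small sketch of how those bits are read from the first frame-control byte (the byte values are synthetic examples; in Wireshark the equivalent display filter field is wlan.fc.type):

```python
# IEEE 802.11 frame control, first byte layout (least significant first):
# bits 0-1 protocol version, bits 2-3 type, bits 4-7 subtype.
FRAME_TYPES = {0: "management", 1: "control", 2: "data", 3: "extension"}


def frame_type(fc_first_byte: int) -> str:
    """Decode the Type bits (bits 2-3) of the first frame-control byte."""
    return FRAME_TYPES[(fc_first_byte >> 2) & 0b11]


# Synthetic examples with well-known frame-control bytes:
print(frame_type(0x80))  # beacon (subtype 8, type 0)  -> management
print(frame_type(0xB4))  # RTS (subtype 11, type 1)    -> control
print(frame_type(0x08))  # data frame (subtype 0)      -> data
```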


Thank you for your reply. Via your link, I was able to switch my Wireshark capture from Ethernet to monitor-mode 802.11. However, is there any way for me to view an Ethernet capture as 802.11, i.e., to convert my captured file from Ethernet to 802.11? Capturing new files is not very practical for me. (My wording may not be exact, but I hope you know what I mean.) I searched the Internet but found nothing helpful.
