I have been experimenting with using chatbots during software development. It's hard to communicate code that is split into nodes in a tree, in typical Leo fashion, since you can only paste linear text into the chatbot's input box.
I have found that chatbots, or at least CoPilot (essentially a variety of ChatGPT), can OCR screen images very effectively. When it can see an image of both the tree and a node body, it can put the text into the context of the project. You can use a highlighting tool to draw attention to the parts of the text you want to focus on. You can draw a box around a part of the tree to group the nodes you are interested in. And you can show the body text of a second node by opening a Freewin window.
Attached is a screenshot that uses these techniques. CoPilot said the OCR works very well (highlights being better than trying to draw boxes around lines of text), and that the boxes are clear and effective for denoting areas of interest in the tree.
An added benefit, according to CoPilot, is that an image replaces many hundreds of text tokens that no longer need to be spent.
The snipping tool I'm using provides tools to highlight and draw boxes before the screenshot is saved. I save the image file to the desktop and drag it from there into the chatbot's input area.