This is a screen recording of the Xreal/Nreal Air on an M2 MacBook Pro (macOS) using Nebula. Sorry for no audio; I used QuickTime. Also ignore my code: 85% of it isn't mine and it's not complete at all.
How it works (I think): the Nebula app creates two virtual monitors and extends them to the left and right of your Mac's main display. Nebula then screen-records those three screens and projects them onto a third virtual ultra-wide monitor. This video shows what Nebula displays inside its field of view.
Use around others: I have the three-monitor setup going with the glasses. What I did was put a black square on the center screen so I can see through it, and I just use the left and right monitors for work. That way we can watch TV together (American Born Chinese on Disney Plus) and I can stay fully engaged with her, the show, and my work. If I were doing this with Nebula's floating web browser, I could still have the same setup. What's great about this is that I don't look like the dad in the Apple commercial recording his kids with a VR headset, out of the moment. Also, if I'm in public, I can keep a dedicated empty space to interact with someone right in front of me.
IT IS VERY EASY TO MESS UP THE DISPLAYS. DON'T TOUCH THE DISPLAYS. If you do mess them up, reset the PRAM/SMC and reinstall the Nebula app. The third virtual display sits in the upper right of the right-side extended monitor (or the lower-left corner of the ultra-wide); there's only a very small gap to mouse over. I wanted to see if I could change the extended monitors to be ultra-wide, but I messed things up.
Hey Ryan, I much prefer the background now. To me the nebula itself looks a little blue, and overall the image could do with more red. But really it is up to you; my eyes, screen and taste are bound to be different to yours. What I do is look up a selection of images I like using Google and try to see how an object "should look" or, more often, choose a look I like and then use that as my guide. The 'normal look' of course is quite different if you are using an unmodified DSLR (like yours) rather than a camera that is quite sensitive to the far-red end of the spectrum. Mine is unmodified, so I don't record as much Ha, and my images of Ha-rich nebulae tend to come out more magenta than red.
An 'archive' of the Sessions and Templates currently available for Nebula can be found on Acustica's web site but just to be sure we're all on the same page, I've put the latest versions that I have on my system on the SOS web site, at www.sosm.ag/nebula_nat3_media, along with various other useful bits and pieces. These include some of my own custom Templates (see the 'Tim's Template Tweaks' box later). If you plan to work through this guide with NAT in front of you, go and get these files now, because you won't be able to proceed without them.
Two correlated computer models were created based on visible-light observations from the Hubble Space Telescope and infrared-light observations from the Spitzer Space Telescope. The glowing gaseous landscape has been illuminated and carved by the high-energy radiation and strong stellar winds from the massive hot stars in the central cluster. The infrared observations generally show cooler temperature gas at a deeper layer of the nebula that extends well beyond the visible image. In addition, the infrared showcases many faint stars that shine primarily at longer wavelengths. The higher resolution visible observations show finer details including the wispy bow shocks and tadpole-shaped proplyds. In this manner, the movie illustrates the contrasting features uncovered by multi-wavelength astronomy. Credit: NASA, ESA, F. Summers, G. Bacon, Z. Levay, J. DePasquale, L. Hustak, L. Frattare, M. Robberto and M. Gennaro (STScI), and R. Hurt (Caltech/IPAC). News Release: 2018-04
Hubble has observed the nebula remnant of Supernova 1987A repeatedly, witnessing rings and knots of gas brightening around the exploded star. Watching the supernova in progress for decades has led to a greater understanding of how these events play out over thousands of years.
The hard-core astrophotographers will use a monochrome sensor and put color filters in front to take their images... e.g. shooting a set of "luminance" channel images (that's simply the full visible spectrum with the UV and IR, anything outside roughly 400-700 nm, blocked). These are basically just black & white images. Then they capture another set with "red" filters, another set with "green" filters, another set with "blue" filters, and then they combine them in software to produce the full-color image. (Incidentally, while these are "broadband" filters, you can also get "narrowband" filters which selectively allow just a single wavelength of light to pass... i.e. nebulae tend to be composed of specific gases which glow at specific wavelengths, so it's a way to capture more data of just the stuff you want, without over-exposing the rest of the image.)
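The combine step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular program's algorithm: it assumes four already-aligned monochrome frames as 2-D arrays in [0, 1], and uses a simple LRGB blend where the RGB frames supply color and the luminance frame supplies brightness. The tiny synthetic frames at the bottom are placeholders; real data would come from calibrated FITS files.

```python
import numpy as np

def combine_lrgb(lum, red, green, blue):
    """Blend a luminance frame with R/G/B frames into one color image.

    Each input is a 2-D float array in [0, 1]. The RGB frames set the
    color ratios; each pixel is then rescaled so its mean brightness
    matches the luminance frame (a very simple LRGB blend).
    """
    rgb = np.stack([red, green, blue], axis=-1)
    brightness = rgb.mean(axis=-1, keepdims=True)
    scaled = rgb * (lum[..., None] / np.clip(brightness, 1e-6, None))
    return np.clip(scaled, 0.0, 1.0)

# Synthetic 2x2 frames standing in for real captures:
lum = np.full((2, 2), 0.6)
r = np.full((2, 2), 0.4)
g = np.full((2, 2), 0.2)
b = np.full((2, 2), 0.2)
color = combine_lrgb(lum, r, g, b)  # each pixel ends up [0.9, 0.45, 0.45]
```

Note the color ratio (red twice as strong as green/blue) survives the blend; only the overall brightness is taken from the luminance frame, which is why the luminance set is worth the most exposure time.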
Here's a simulated image (using Starry Night Pro Plus 7) to show you how, say, the Trifid Nebula would fit in the frame if taken with your scope & camera combination (I just picked an 8" Meade f/10 scope to let the software calculate the correct frame dimensions). The orange box represents the size of sky that will fit in the image.
So that works... but the problem is that different things in the sky require different frame sizes. If you point the scope at, say, the Lagoon Nebula (which is right next to the Trifid Nebula -- just below it in the sky), this is what happens:
You'll need to CAREFULLY focus the telescope on a STAR (do NOT focus on Saturn... find a star nearby), put the camera in 10x zoom mode (on the screen), and carefully focus until you've minimized the size of that star. For better focus, get a Bahtinov focusing mask (I bought one from Spike-A.com). This causes the star to throw diffraction spikes; as you focus, the spikes will not necessarily all converge at a common center point, but when they DO converge, you've nailed focus. Once you've nailed focus, remove the mask and go back to Saturn. As long as you don't touch the focus, the rule is that if "anything" in space is focused, then "everything" in space is focused, so accurate focus for both Saturn and a star is the same.
Click the Telescope tab and "add" a new scope... name it for your scope model and enter your scope's focal length and diameter. An 8" f/10 SCT is probably 2080mm focal length with an aperture (diameter) of 208mm. There are some check-boxes for horizontal and vertical flip. This is because a "straight through" view in a telescope technically results in an image which is upside down & backwards (in other words, both a horizontal and a vertical flip). Your camera also captures the images upside-down and backwards... it just displays them right-side up when you look at them on the LCD screen. The 90º diagonal that you use when observing visually (with eyepieces -- not the camera attached) corrects the vertical view (so up is really up and down is really down) but it doesn't correct the horizontal flip (left is really right). This is purely cosmetic, so that the ocular view provided by Stellarium will show the object the way it will appear through the telescope.
Next click the "Sensors" tab and enter your camera. You can enter the info for your T6i. The resolution (for a T6i) can be entered as 6000 x 4000. The chip size should be entered as 22.3 x 14.9 (mm). The pixel height & width should be entered as 3.72 x 3.72 (microns). You can leave the rotation angle alone (this allows you to specify that you've rotated the camera at an angle at the back of the scope... Stellarium will rotate the orientation of the rectangular box that it draws on the screen to match). There's also some info about an off-axis guider... unless you have an off-axis guider, leave that box unchecked and don't fill in anything below. (An off-axis guider is a second camera, usually very tiny -- often not much larger than an eyepiece. There is an adapter that attaches between the camera & scope which has a "T" intersection and a very tiny pick-off mirror at the edge of the tube; it steals a little light from a tiny box at the edge of the frame and bounces it out to the guide camera. The idea is to rotate the thing to make sure you have a suitable star visible to the guide camera. The guide camera takes an image of that star every few seconds and checks to see if it's drifting... if it is drifting, the software sends a correction to the mount to put the telescope back on target.)
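The numbers above are enough to compute the same quantities Stellarium derives internally. A quick sketch, using the 2080 mm focal length and the T6i sensor figures from this guide (the formulas are the standard small-angle ones, not Stellarium's actual code):

```python
import math

def pixel_scale_arcsec(pixel_um, focal_mm):
    # 206.265 converts (microns / mm) into arcseconds per pixel.
    return 206.265 * pixel_um / focal_mm

def fov_degrees(sensor_mm, focal_mm):
    # Angular width of the sensor on the sky.
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

scale = pixel_scale_arcsec(3.72, 2080)  # ~0.37 arcsec per pixel
w = fov_degrees(22.3, 2080)             # ~0.61 degrees wide
h = fov_degrees(14.9, 2080)             # ~0.41 degrees tall
```

That ~0.61° x 0.41° box is the red rectangle Stellarium draws, and it's why large nebulae overflow the frame at f/10.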
With all of that entered, you can set the time of day in Stellarium (e.g. set it to midnight) and pick an object. Due south, near the core of the Milky Way and just above it, you'll see a red patch if you zoom in slightly... that's probably the Lagoon Nebula; above it is the Trifid, a little higher is the Omega Nebula, and slightly higher still is the Eagle Nebula. Anyway, you can select an object, and if you've set the Oculars plugin to always load, then in the upper-right corner of your screen you'll see a few icons... the rectangular icon represents your camera frame. Click it and Stellarium will draw a red box on the screen representing the area of sky it calculates will fit into your camera frame, based on the physical properties of your telescope as well as the physical properties of your camera sensor.
Whenever I do imaging, I look up the angular dimensions of the object I want to image. For example, set the date & time to midnight in early September and then look for M31 (the Andromeda Galaxy). When you select it, a bunch of info will show up in the upper-left corner of your screen. Near the bottom it will list the "size" -- about 3º wide by about 1º tall. You'll see that actually very little of it fits within the rectangular boundaries of your image sensor. If you toggle on a .63x focal reducer (such as the Meade f/6.3 reducer -- really that's just a .63x reducer, but they assume you're using a Meade f/10 scope because most Meade SCTs are f/10, though not all of them), you'll see the size of the box jump to a larger area of sky. But even then, you'd probably have to shoot at least a four-panel mosaic to image the whole thing.
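You can turn that comparison into a rough mosaic estimate. The sketch below assumes the same 2080 mm / T6i setup and the simple small-angle field-of-view formula; panel counts are a naive ceiling with no overlap between panels, so a real mosaic would need a panel or two more.

```python
import math

def fov_deg(sensor_mm, focal_mm):
    # Angular size of one sensor dimension on the sky, in degrees.
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def panels(target_w_deg, target_h_deg, focal_mm, sensor=(22.3, 14.9)):
    # How many frames tile the target, ignoring overlap.
    fw = fov_deg(sensor[0], focal_mm)
    fh = fov_deg(sensor[1], focal_mm)
    return math.ceil(target_w_deg / fw) * math.ceil(target_h_deg / fh)

# M31 is roughly 3 degrees by 1 degree:
native = panels(3.0, 1.0, 2080)          # at f/10: 5 x 3 = 15 panels
reduced = panels(3.0, 1.0, 2080 * 0.63)  # with a .63x reducer: 4 x 2 = 8
```

Even with the reducer the naive count comes out well above four panels, which is why "at least a four-panel mosaic" is the optimistic floor for M31 with an 8" SCT.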