Hi, I have a really basic question and need some help.
Let's say we have two cameras pointing at the sky: cam1 and cam2. Both cameras are at the same location, but there's a fixed offset between them (i.e., their optical axes are not perfectly parallel). I want to measure the angle between them.
How can I do that?
Here’s what I’m thinking:
If we point both cameras to the night sky and have the RA and Dec coordinates of the center of the field of view (FoV) for both cameras (which we can get by solving the images using platesolving/astrometry, for example), I can calculate the angle between them. I know the location of the cameras and the datetime, so I can easily convert the RA and Dec to AltAz coordinates. From this, I can compute the angular separation between the two pointings using something like Astropy's separation function or by calculating the haversine distance. Alternatively, I can transform both RA/Dec coordinates to AltAz and then calculate the delta in Alt and Az between them.
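For concreteness, here is roughly what I have in mind (a minimal Astropy sketch; the site, timestamp, and coordinates below are made up):

    import astropy.units as u
    from astropy.coordinates import SkyCoord, EarthLocation, AltAz
    from astropy.time import Time

    site = EarthLocation(lat=12.97 * u.deg, lon=77.59 * u.deg, height=900 * u.m)
    t = Time("2024-11-08T21:00:00")

    # Plate-solved FoV centers of the two cameras (made-up values):
    cam1 = SkyCoord(ra=150.0 * u.deg, dec=20.0 * u.deg)
    cam2 = SkyCoord(ra=153.2 * u.deg, dec=21.5 * u.deg)

    # Angular separation between the two pointings (frame-independent):
    print(cam1.separation(cam2).deg)

    # Or transform both to AltAz and take the deltas:
    aa = AltAz(obstime=t, location=site)
    a1, a2 = cam1.transform_to(aa), cam2.transform_to(aa)
    print((a2.alt - a1.alt).deg, (a2.az - a1.az).deg)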
Why do I want to do this?
Later in the night, I plan to point both cameras in the sky again. If I have the RA and Dec coordinates of the center of the FoV for one camera, could I use the previously derived angle/angles between the two cameras to calculate the RADec of the second camera's FoV? My assumption is that the angle between them won't stay the same as the sky moves, and I also assume that the delta in Alt and Az will change over time.
So my question is: Is it possible to calculate the angle/angles between the two pointings, and use that angle/angles to derive the pointing of camera 1 from camera 2 (and vice versa) at a later time?
Cheers
Thank you
On Fri, Nov 8, 2024 at 2:38 PM Falco Peregrinus <downsh...@gmail.com> wrote:
Why would the angle between the cameras change as the sky rotates?
I don't understand your final goal. The Alt/Az pointing of both cameras remains the same unless you have them on some sort of mount that rotates. You can use plate solving to find the present RA/Dec of either camera, convert that to an Alt/Az, and if you're interested in asking the question "where is my camera pointing now?" at a later time, simply convert that Alt/Az back to RA/Dec at the later time instant.
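For example, that round trip might look like this in Astropy (the site, times, and pointing are made up):

    import astropy.units as u
    from astropy.coordinates import SkyCoord, EarthLocation, AltAz
    from astropy.time import Time

    site = EarthLocation(lat=12.97 * u.deg, lon=77.59 * u.deg, height=900 * u.m)
    t0 = Time("2024-11-08T21:00:00")

    # Plate-solved RA/Dec now -> the camera's fixed Alt/Az:
    now = SkyCoord(ra=150.0 * u.deg, dec=20.0 * u.deg).transform_to(
        AltAz(obstime=t0, location=site))

    # Same physical pointing two hours later -> its new RA/Dec:
    later = SkyCoord(alt=now.alt, az=now.az,
                     frame=AltAz(obstime=t0 + 2 * u.hour, location=site))
    print(later.icrs)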
On Saturday 9 November 2024 at 07:14:08 UTC+1 Akarsh Simha wrote:

> Why would the angle between the cameras change as the sky rotates?

If we point that setup at a different point in the sky (a different RA/Dec), would the previously measured dAlt and dAz, or the haversine distance, be the same?

> I don't understand your final goal. The Alt/Az pointing of both cameras remains the same unless you have them on some sort of mount that rotates. You can use plate solving to find the present RA/Dec of either camera, convert that to an Alt/Az, and if you're interested in asking the question "where is my camera pointing now?" at a later time, simply convert that Alt/Az back to RA/Dec at the later time instant.

My goal is to determine the angle(s) between these cameras and whether that angle is fixed (meaning dAlt and dAz do not change with altitude/azimuth), or, equivalently, to fit a model that explains how dAlt and dAz depend on the two pointings.

I want to derive the pointing position of camera1 from camera2 and vice versa, for a different RA/Dec and/or a different time of night. I can get the RA/Dec (and AltAz) of the center of the FoV for camera1 and camera2 over an entire range of altitudes, if that could help.
Cheers
Thank you
So it looks like this is the setup you're considering: Camera 1 and Camera 2 are mounted rigidly on the same platform, but the platform itself can be re-oriented. So in some sense, if I move Camera 1 30° to the left, Camera 2 also moves 30° to the left, because they are on the same rigid platform.

If this is your case, the dAlt/dAz values will not, in general, remain fixed. If your camera mount can rotate along all three axes (think roll, pitch, yaw), the dAlt/dAz will not remain fixed; not only that, you will need to determine not just RA/Dec but also the roll angle along the optic axis (i.e. a position angle) from the plate solver. If your cameras can only rotate around two axes (pitch and yaw, but no roll), the dAlt/dAz will remain fixed.
That's correct. Camera1 and the telescope (actually not a second camera) are mounted on the same platform. The telescope uses an alt-az mount, so it can move left/right and up/down, which I understand correspond to yaw and pitch. I hadn't heard these terms before (I'm not an engineer), but thank you for the clarification; I'll look into it.
Since only pitch and yaw are involved, do I still need the position angle? I can obtain the position angle for Camera1 through astrometry. However, for the telescope, I only have approximate RA and Dec coordinates, as there’s no image from it.
Camera1 is tilted relative to the telescope (their optical axes aren’t parallel, so that would be dAlt and dAz), but it is also slightly rotated with respect to the horizon.
It is easiest to use quaternions if your cameras can move along all three axes. The process would be as follows:

1. Capture an exposure on both cameras simultaneously, and plate-solve to find (ra1, dec1, pa1) and (ra2, dec2, pa2).
2. Convert the equatorial coordinates into unit vectors representing the directions (x1, y1, z1) and (x2, y2, z2), where x1 = cos(dec1) cos(ra1), z1 = sin(dec1), and so on.
3. Compute and pre-store the quaternions transforming a standard frame into the two camera frames from yaw = ra, pitch = dec, roll = pa. There is some work to figure out the details; perhaps there is already a ready reference that I'm not aware of. Let us call them q1 and q2.

With the aforementioned yaw/pitch/roll convention, the standard frame we are thinking of will have the z-axis pointing to the north celestial pole and the x-axis going to the first point of Aries, and the quaternions computed will represent rotations from that frame to the camera frame.

Later, when you have to compute the position of camera 2 from the position of camera 1, you can use the following method:

1. Plate-solve camera 1 to get the new pointing (ra1', dec1', pa1').
2. Compute the new quaternion q1' in the same manner.
3. Compute the transformation quaternion R = q1' q1^{-1}, which represents the rigid rotation that ensued in going from the old attitude of camera 1 to the new attitude.
4. Apply this quaternion to the camera 2 quaternion, i.e. q2' = R q2.
5. Convert q2' into yaw-pitch-roll representation, so you can read off the new ra2' = yaw, dec2' = pitch.
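In code, a sketch of that procedure using scipy's Rotation class (the 'ZYX' Euler convention and the sign of the pitch angle here are assumptions to be verified against your data, and all numbers are made up):

    from scipy.spatial.transform import Rotation as R

    def attitude(ra, dec, pa):
        # Rotation from the standard frame (x toward the first point of Aries,
        # z toward the north celestial pole) into the camera frame:
        # yaw = RA about z, pitch = Dec about y (negated, because a positive
        # 'ZYX' pitch tips the boresight below the x-y plane), roll = PA.
        return R.from_euler("ZYX", [ra, -dec, pa], degrees=True)

    def pointing(q):
        # Read (RA, Dec, PA) in degrees back out of an attitude rotation.
        yaw, pitch, roll = q.as_euler("ZYX", degrees=True)
        return yaw % 360.0, -pitch, roll

    # Calibration: simultaneous plate solves of both instruments (made up).
    q1 = attitude(120.0, 35.0, 10.0)   # camera 1: (ra1, dec1, pa1)
    q2 = attitude(123.5, 36.2, 12.0)   # camera 2 / telescope: (ra2, dec2, pa2)

    # Later: camera 1 plate-solves to a new pointing (ra1', dec1', pa1').
    q1_new = attitude(200.0, 50.0, 30.0)

    rigid = q1_new * q1.inv()   # R = q1' q1^{-1}, the platform's rigid rotation
    q2_new = rigid * q2         # q2' = R q2
    print(pointing(q2_new))     # predicted (ra2', dec2', pa2')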
Actually, I have a similar setup. I have a telescope and a camera on the same platform, with the telescope mounted on an alt-az mount (as I explained in my response above). The issue is that I can’t see where the telescope is pointing through my camera, as it’s outside the camera's field of view.
You're absolutely correct about lens distortion: it's more pronounced toward the edges of the image, and I've been using astrometry with SIP coefficients to correct these distortions. While I can extend the WCS solution beyond the image, higher-order polynomial extrapolations aren't very reliable. What I mean is: I can use the WCS from plate-solving camera1 to transform the RA/Dec of the telescope to (x, y) coordinates (camera1 image coordinates). I then have the optical axis (OA) of camera1 in pixel coordinates (it is actually the center of the image) and the pointing of the telescope in pixel coordinates, and I can calculate the distance in x and y, for example as below.
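A sketch of that calculation (the file name and telescope coordinates are placeholders):

    from astropy.io import fits
    from astropy.wcs import WCS

    hdr = fits.getheader("cam1_solved.fits")   # hypothetical plate-solved frame
    w = WCS(hdr)                               # picks up SIP terms if present

    ra_tel, dec_tel = 184.2, -5.7              # telescope pointing (made up)
    # SIP-aware world -> pixel; may fail to converge far outside the frame:
    x_tel, y_tel = w.all_world2pix(ra_tel, dec_tel, 0)

    # Camera-1 optical axis, taken as the image center:
    x0, y0 = hdr["NAXIS1"] / 2.0, hdr["NAXIS2"] / 2.0
    print(x_tel - x0, y_tel - y0)              # pixel offsets dx, dy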
I see two problems there:
1. The distance would change with changing altitude/azimuth.
2. As we already said, the transformation from RA/Dec (or AltAz) to pixel coordinates outside of the image is not reliable, due to the polynomial extrapolation of the distortion.
This is why I'm trying to determine the offset angle(s) between the pointing of Camera1 and the telescope, so I can use the solution from the plate-solved Camera1, together with the derived offset angles, to estimate where the telescope is pointing.
Thank you very much for sharing your Git project! At first glance, it looks fantastic. I'll need some time to understand exactly what you've done and to see whether I can reuse some of your code; I'll probably have some questions about it soon.
Cheers
Thank you very much for your detailed response; I really appreciate it.
I tried to go through your calculations and even implemented them in the code, but I’m not sure I completely understood everything. I’m getting some unexpected results.
If you have some spare time, I would truly appreciate it if you could take a look at my implementation. I've attached a sample dataset (in CSV format, around 20 points with altitudes from 20° to 90°), the code you can run ("code.py"), and the output I'm currently getting; the data are all together in "data.csv", and I explain below which columns are which.
Cam1 refers to the camera whose image I plate-solve to determine RA and Dec.
Cam2 refers to the telescope, for which I also have RA and Dec coordinates.
Next, I attempt to "go back" to the coordinates of Cam2 by using the previously calculated haversine distance, angle12, and the RA and Dec of Cam1. I call these reconstructed coordinates "predicted" (see the sketch below).
Finally, I compare the actual Cam2 coordinates with the predicted ones, and the differences are also included in the output.
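For reference, that reconstruction step is essentially a great-circle offset; in Astropy it might look like this (all numbers made up):

    import astropy.units as u
    from astropy.coordinates import SkyCoord

    cam1 = SkyCoord(ra=150.0 * u.deg, dec=20.0 * u.deg)  # plate-solved Cam1
    sep = 3.2 * u.deg        # previously measured haversine distance
    angle12 = 45.0 * u.deg   # previously measured position angle Cam1 -> Cam2

    # Offset Cam1 by (angle12, sep) along a great circle to predict Cam2:
    cam2_pred = cam1.directional_offset_by(angle12, sep)
    print(cam2_pred.to_string("hmsdms"))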
If you could clarify this part and verify my implementation, it would be immensely helpful.
Here is the link to my dropbox with code.py and data.csv.
Thank you again for your time and expertise!
Cheers
Why? I aim to obtain an independent estimate of the telescope's pointing by combining the plate-solved coordinates from the camera with the previously measured angular offsets (or possibly a pointing model, if the offsets vary).
Does this answer your question?
Cheers
Here's some more context on the data:
Cam1 refers to the camera whose image I plate-solve to determine RA and Dec.
- I use a tweak order of 4 for distortion correction.
- From the plate-solving process, I also retrieve the orientation (pa1) using the following calculation (output from astrometry.net):
    import numpy as np

    # CD-matrix elements are used directly: they are the local linear
    # transform in degrees per pixel, so no np.tan() is needed here.
    det = header["CD1_1"] * header["CD2_2"] - header["CD1_2"] * header["CD2_1"]
    parity = 1.0 if det >= 0 else -1.0
    T = parity * header["CD1_1"] + header["CD2_2"]
    A = parity * header["CD2_1"] - header["CD1_2"]
    orient = -np.rad2deg(np.arctan2(A, T))

However, I'm not entirely sure if this part is correct.
Cam2 refers to the telescope, for which I also have RA and Dec coordinates.
- Using these coordinates, I calculate the haversine distance and angle12, and then reconstruct the "predicted" Cam2 coordinates as described above.

I am not sure I completely understood your point 2:

> Given the scope's known (RA2, Dec2) corresponding to (RA1, Dec1), i.e. the alignment point, identify the pixel (x, y) that it would correspond to, assuming a perfect gnomonic projection (or really any projection of the sphere to a plane should work; I'm picking gnomonic because that's a simple model of a pinhole camera).
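Here is my attempt at what I think that projection means (the standard gnomonic formulae; the pixel scale is a made-up placeholder):

    import numpy as np

    def gnomonic_xy(ra, dec, ra0, dec0, scale_px_per_rad=1.0e5):
        # Standard coordinates (xi, eta) of (ra, dec) in a gnomonic (TAN)
        # projection centered on (ra0, dec0); all angles in degrees.
        ra, dec, ra0, dec0 = np.radians([ra, dec, ra0, dec0])
        cos_c = (np.sin(dec0) * np.sin(dec)
                 + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
        xi = np.cos(dec) * np.sin(ra - ra0) / cos_c
        eta = (np.cos(dec0) * np.sin(dec)
               - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
        # Scale to pixels with a placeholder plate scale:
        return xi * scale_px_per_rad, eta * scale_px_per_rad

    print(gnomonic_xy(123.5, 36.2, 120.0, 35.0))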