Feeding my own sim data, questions

Steve B

Sep 23, 2020, 8:29:21 PM
to OpenStartracker
Hello all, here's what I've done so far:

Generated a small batch of my own simulated .PNG images using wcstools and the SAO catalog to feed into the Open Star Tracker test, just the unit_test.sh script.
Each star is a simple circle with a diameter based on the cataloged magnitude, so the stars vary in apparent brightness. No background noise in the images for now.
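For concreteness, a circle-per-star generator like the one described can be sketched as follows. This is a hypothetical reconstruction (the function name, reference magnitude, and scale factor are made up, not the actual script):

```python
import numpy as np

def render_star(img, x, y, mag, mag_ref=6.0, scale=1.5):
    """Draw a filled circle whose radius grows as magnitude decreases
    (smaller magnitude = brighter star = bigger circle).
    Hypothetical sketch of the approach described, not the actual generator."""
    r = max(1.0, scale * (mag_ref - mag))
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    img[(xx - x) ** 2 + (yy - y) ** 2 <= r ** 2] = 255
    return img

frame = np.zeros((64, 64), dtype=np.uint8)
render_star(frame, 32, 32, mag=2.0)  # mag-2 star -> radius-6 circle
```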

Looking at the .solved file generated for each of my images, I noticed that it does not get the same answer for the image center every time, and the center it finds is not exactly where I put it. It seems to be within a couple of degrees, so at least the stars did get ID'd correctly.
Is this normal that it's off by that much at this stage? Is there some way to get the actual image center nailed down, and other imager calibrations like distortion? Or is it not necessary to actually do this?

At the end, the scripts returned this:

wcsinfo my_sim_3/calibration_data/257_53_25.wcs  | tr [:lower:] [:upper:] | tr " " "=" | grep "=[0-9.-]*$" > my_sim_3/calibration_data/257_53_25.solved
/home/user/.local/lib/python2.7/site-packages/numpy/ma/extras.py:608: RuntimeWarning: invalid value encountered in double_scalars
  avg = np.multiply(a, wgt, dtype=result_dtype).sum(axis)/scl
Traceback (most recent call last):
  File "calibrate.py", line 137, in <module>
    dimmest_match = astrometry_results_all[np.argmax(astrometry_results_all[:,1]),:]
  File "/home/user/.local/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 1103, in argmax
    return _wrapfunc(a, 'argmax', axis=axis, out=out)
  File "/home/user/.local/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 56, in _wrapfunc
    return getattr(obj, method)(*args, **kwds)
ValueError: attempt to get argmax of an empty sequence

Is this because of my test data somehow?
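For what it's worth, numpy raises this exact ValueError whenever argmax is handed a zero-length array, which suggests no astrometry matches survived the earlier filtering step. A minimal reproduction with a defensive guard (hypothetical variable names, not the actual calibrate.py code):

```python
import numpy as np

# if the matching stage filters out every candidate, the results
# array ends up empty, and np.argmax then raises exactly this error
matches = np.empty((0, 2))

failed = False
try:
    np.argmax(matches[:, 1])
except ValueError:
    failed = True  # "attempt to get argmax of an empty sequence"

# a defensive guard before picking the dimmest match:
dimmest = matches[np.argmax(matches[:, 1]), :] if matches.shape[0] else None
```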

Stephen Bialkowski

Sep 23, 2020, 9:06:24 PM
to Steve B, OpenStartracker
Hi Steve. 
I'm not exactly an authority here, but I took a similar approach. I found that I had to add noise to get accurate detection. Unless something has changed, you do not have to account for camera distortion; it's easy enough to check in the camera class. But I never saw a 2-degree variance, so the way the stars are projected onto your images is suspect in my mind. Perhaps make the star size uniform and observe the results?



Steve B

Sep 23, 2020, 9:48:52 PM
to OpenStartracker
Thanks Stephen for the quick reply! It's definitely easy to add background noise and give my stars a uniform diameter. I could find some other way to vary the brightness, but I was hoping a bigger circle would count; my reason for trying it was that the SmallSat paper mentioned this system uses brightness ordering in its star-ID algorithm. Though based on where the script gave up, I believe I haven't even gotten that far anyway.
I was surprised that the astrometry part works at all if the image center was varying by that much -- in my past experience a good measurement of the optics including distortion made a critical difference in success rates.

Does anybody check/post here who understands the code base?
I am somewhat interested in eventually trying this on a hardware platform with no OS, no Python, etc., but all the object-oriented and database stuff is like a foreign language to me, or at least a heavy dialect that takes some labor to understand.

Stephen Bialkowski

Sep 23, 2020, 10:20:48 PM
to Steve B, OpenStartracker
Hi Steve. 
The star catalog that comes with this star tracker is the Hipparcos catalog, somewhat updated with respect to time. The most accurate results come from modeling the CCD; there is a 200-page document that can clarify. Note that single-pixel stars did not work in my experience. I placed star positions with sub-pixel accuracy and distributed brightness according to the erf function. Perhaps the lack of this resolution could be part of the puzzle.
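The erf-based rendering mentioned above amounts to integrating a Gaussian PSF exactly over each pixel, which supports sub-pixel star centers. A minimal sketch under assumed parameters (sigma, grid size, and function names are illustrative, not the poster's code):

```python
import math

def pixel_flux(x0, sigma, px):
    """Fraction of a 1-D Gaussian centered at x0 falling in pixel [px, px+1),
    computed exactly with the error function."""
    a = (px - x0) / (math.sqrt(2) * sigma)
    b = (px + 1 - x0) / (math.sqrt(2) * sigma)
    return 0.5 * (math.erf(b) - math.erf(a))

def render_star(w, h, x0, y0, flux, sigma=0.7):
    """Distribute `flux` over a w x h image with a separable Gaussian PSF,
    so the star center (x0, y0) can sit anywhere between pixel centers.
    Hypothetical sketch of the described approach."""
    img = [[0.0] * w for _ in range(h)]
    for py in range(h):
        fy = pixel_flux(y0, sigma, py)
        for px in range(w):
            img[py][px] = flux * pixel_flux(x0, sigma, px) * fy
    return img

img = render_star(9, 9, 4.3, 4.7, flux=1000.0)
```

Because the per-pixel integral is exact, total flux is conserved up to the tails that fall off the frame, and the intensity-weighted centroid of the rendered star recovers the sub-pixel position.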

I ported this star tracker to C++ and gave it to Andrew. I can dig it up if need be, but it's probably better to get it from Andrew, since I have made some modifications that likely do not apply.


Sep 23, 2020, 11:08:06 PM
to OpenStartracker
Hi Steve - I've got a rough understanding of what the code does and I check/post here :-P

The startracker assumes that you are inputting an image from a CCD/CMOS sensor with Poisson-distributed shot noise, and a small number of fairly dim stars (i.e. not bright enough to saturate the sensor) with a Gaussian point spread function.
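The shot-noise assumption is easy to reproduce in a simulator: each pixel's measured count is a Poisson draw around the ideal value. A minimal sketch (the background and star levels are made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# idealized noiseless frame: uniform background plus one dim star,
# both expressed as expected photoelectron counts (illustrative values)
ideal = np.full((32, 32), 20.0)
ideal[15:18, 15:18] += 200.0  # star kept well below saturation

# shot noise: each pixel's measured count is Poisson-distributed
# around the ideal mean, matching the assumption described above
noisy = rng.poisson(ideal).astype(np.float64)
```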

startracker.py is the frontend code which converts images to lists of stars + brightnesses. As currently configured, it uses the value of the star's center pixel as its "brightness". Based on this, it picks a subset of the n brightest stars in the image and tries to ID them (where n < 10).

beast.h is the backend which performs the star ID. This is the part you probably care about if you are trying to use it in your embedded project (see test.c for an example of how to use it without Python).

There are some Linux-specific system calls in config.h which may have to be replaced, but other than that it should work just fine on any platform with C++11.

Stephen took a pass at making things more cross-platform in his version. I've been working on openstartracker2 and Stephen's changes slipped off my radar, but I can try to merge those in this weekend.

Out of curiosity what platform are you trying to run openstartracker on?

Steve B

Sep 24, 2020, 12:25:23 AM
to OpenStartracker
Thanks for chiming in!
I'm actually considering eventually doing nearly everything in VHDL on an FPGA, if the star catalog and k-vector can fit into a memory chip that's sensible for that. Even better if I can afford to put redundant chips on!
So I'm especially interested to understand how the partial memory loading works and how big a chunk of memory is needed. If it's small enough for a reasonable amount of FPGA block RAM that would be amazingly convenient!
And actually there could be a way of taking advantage of some parallelism, even if it's just running checks on multiple sets of four stars to see which one finishes first.

I probably would stop short of doing the Wahba's Problem part in FPGA though since I'm not sure that fixed point would work and I'm not too keen on trying to do floating point matrix operations at that level.

In the near term I am probably looking to do just the centroiding and some things like vector dot products in FPGA fabric and pass the results to an embedded processor (like Zynq) to run the algorithm. I assume FPGA centroids are going to be a lot faster than the OpenCV calls you guys have there currently, or almost anything a regular processor can do, and if a 5x5 pixel window is good enough then it's pretty trivial to do.
I tend to do brute force things at the lowest level I can and with hardware in mind, so the code that's on github right now is much, much more elegant and abstracted than I'm usually comfortable with!
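A 5x5 intensity-weighted centroid like the one described really is just multiply-accumulates over a small window, which is why it maps so well to FPGA fabric. A minimal reference sketch (hypothetical function, not the OpenCV-based code in the repo):

```python
def centroid_5x5(img, cx, cy):
    """Intensity-weighted centroid over a 5x5 window around the peak
    pixel (cx, cy). Pure multiply-accumulate plus one divide, so the
    window sums are trivial in hardware. Hypothetical sketch."""
    s = sx = sy = 0.0
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            v = img[cy + dy][cx + dx]
            s += v
            sx += v * (cx + dx)
            sy += v * (cy + dy)
    return sx / s, sy / s

# a star whose light is split evenly between two pixels in x
img = [[0] * 9 for _ in range(9)]
img[4][4] = 100
img[4][5] = 100
x, y = centroid_5x5(img, 4, 4)  # -> (4.5, 4.0)
```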


Sep 24, 2020, 9:49:58 AM
to OpenStartracker
The image processing is by far the most computationally expensive part of the algorithm (assuming a wide field of view and proper exposure time), and it lends itself nicely to FPGA implementation. Take a look at https://arxiv.org/pdf/1701.06795.pdf

You could run star ID (beast) on the processor, adapting test.c with a few modifications.

It would be a piece of cake and pretty straightforward.

An FPGA-only implementation, on the other hand, would be a whole different animal and would require significant modification. (Clarification: OST uses kd-trees rather than k-vectors; you'd probably want to change this to a parallel brute-force search for the FPGA-only implementation.)
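The brute-force alternative mentioned here is conceptually simple: compare a measured unit vector against every catalog vector and keep the closest, since every comparison is independent and parallelizes naturally across hardware compare units. A toy sketch (3-entry catalog and all values made up; not the beast.h kd-tree code):

```python
# toy catalog of unit vectors, one per star (illustrative values only)
catalog = [
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]

def best_match(v, catalog):
    """Index of the catalog vector with the largest dot product with v,
    i.e. the smallest angular distance to the measured unit vector.
    Each dot product is independent, so in an FPGA every catalog entry
    could be scored by its own multiply-accumulate unit in parallel."""
    return max(range(len(catalog)),
               key=lambda i: sum(a * b for a, b in zip(v, catalog[i])))

idx = best_match((0.10, 0.98, 0.05), catalog)  # -> 1
```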

I'm really not sure what the advantage of going FPGA-only would be, since you'd need an expensive FPGA versus a cheap FPGA plus a cheap CPU, but if you try it and have success, let me know.

Steve B

Sep 24, 2020, 2:59:54 PM
to OpenStartracker
That all sounds good and thanks for pointing me to the right places.
I've done the centroiding already so yes I think it'll be pretty simple to get that integrated.

I see the catalog is a bit large as a text file, so if saving only the necessary info in binary helps then I might be in business. I may also look into having the constellation database pre-built rather than built at runtime, but I'm not sure yet if that's a big deal.
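The binary-catalog idea boils down to fixed-width records that can be memory-mapped or burned into block RAM and indexed without parsing. A sketch of one possible layout (the record format and example values are hypothetical, not the actual OST catalog format):

```python
import struct

# hypothetical fixed-width record: unit vector (3 x float32),
# magnitude (float32), catalog id (uint32) = 20 bytes per star
RECORD = struct.Struct("<ffffI")

stars = [
    (0.1876, 0.9360, 0.2972, 1.25, 91262),   # made-up example entries
    (-0.3500, 0.1000, 0.9314, 4.80, 12345),
]

blob = b"".join(RECORD.pack(*s) for s in stars)

# random access: read record 1 straight out of the buffer, no parsing
x, y, z, mag, cat_id = RECORD.unpack_from(blob, RECORD.size * 1)
```

At 20 bytes per star, even a few thousand catalog stars fit comfortably in a small memory, which is the kind of sizing that matters for block RAM.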

I'll have to do some poking around to see how much memory gets allocated overall at runtime, because if FPGA-only is more trouble than it's worth I'd at least like to run the rest on a simpler micro like a Cortex-M0 rather than something Beaglebone/Raspberry Pi class, though code space and RAM get a lot smaller going that route.