Hello,
I am working on a dataset that looks like this:
<xarray.Dataset>
Dimensions: (i: 3, j: 3, u: 259, v: 1257)
Coordinates:
X (u, v) float64 459.5 459.5 459.6 459.6 459.7 459.8 459.8 ...
Y (u, v) float64 527.5 527.5 527.5 527.5 527.5 527.5 527.5 ...
time datetime64[ns] 2016-08-15T02:33:01
* i (i) int64 0 1 2
* j (j) int64 0 1 2
* u (u) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
* v (v) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
Data variables:
red (u, v) uint8 50 49 48 48 48 49 49 49 49 49 49 49 49 49 48 48 ...
green (u, v) uint8 92 91 90 90 90 91 91 91 91 91 91 91 91 91 90 90 ...
blue (u, v) uint8 117 116 115 115 115 116 116 116 116 116 116 116 ...
gray (u, v) uint8 85 84 83 83 83 84 84 84 84 84 84 84 84 84 83 83 ...
homography (i, j) float64 -0.1432 3.836 1.098e+03 -0.5929 3.26 ...
It is a frame extracted from a video recording in which the pixel coordinates (u, v) were projected to real-world coordinates (X, Y). While (u, v) are one-dimensional, (X, Y) became two-dimensional in the coordinate transformation because of the image warping.
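For context, this is roughly how the 2-D X/Y coordinates were produced (the actual projection code is not included here; the sketch assumes the 3x3 matrix stored in ds["homography"] maps homogeneous pixel coordinates to real-world coordinates, and the ordering of u and v in the transform is a guess):

import numpy as np

H = ds["homography"].values                       # (3, 3) projective transform
uu, vv = np.meshgrid(ds["u"].values, ds["v"].values, indexing="ij")

# Apply the homography in homogeneous coordinates; the perspective division
# by Wh is what warps the grid and makes X and Y two-dimensional.
Xh = H[0, 0] * uu + H[0, 1] * vv + H[0, 2]
Yh = H[1, 0] * uu + H[1, 1] * vv + H[1, 2]
Wh = H[2, 0] * uu + H[2, 1] * vv + H[2, 2]
X, Y = Xh / Wh, Yh / Wh                           # both shaped (u, v)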
Now I need to extract some pixel intensity values at (very well) known real-world coordinates. I would like to do something like ds["gray"].sel(X=x_points, Y=y_points), but xarray raises ValueError: Coordinate objects must be 1-dimensional.
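A minimal sketch of what I am trying (x_points and y_points are just placeholder arrays standing in for my target coordinates):

import numpy as np

x_points = np.array([460.0, 461.5])   # placeholder target X coordinates
y_points = np.array([528.0, 529.5])   # placeholder target Y coordinates

# Raises ValueError because X and Y are 2-D (non-index) coordinates:
values = ds["gray"].sel(X=x_points, Y=y_points)

The only workaround I can think of is a brute-force nearest-pixel search over the 2-D coordinate arrays, something like the sketch below, but that feels clumsy when there are many points:

def nearest_pixel(ds, x, y):
    # Squared distance from every warped pixel to the target point.
    dist2 = (ds["X"] - x) ** 2 + (ds["Y"] - y) ** 2
    # Flat index of the closest pixel, unravelled back to (u, v) indexes.
    iu, iv = np.unravel_index(np.argmin(dist2.values), dist2.shape)
    return ds["gray"].isel(u=iu, v=iv)

gray_values = [float(nearest_pixel(ds, x, y)) for x, y in zip(x_points, y_points)]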
Has anyone worked on a solution to a similar problem?
Thanks in advance
Caio E. Stringari