Dear Francesco
Yes, you were right: I had a spatialbox version different from yours. Now I have downloaded the most recent version from your GitHub.

Dear Alex,
I'm sorry for my late reply. I didn't understand point 3): should I announce my findings on the Google group - the ones you have already changed, or the next things I will find?
I'm writing you a single e-mail because I now have some technical questions.

In MATLAB OpenPIV:
1) Is 8 pixels the maximum resolution we can reach?
About the Python version: I'm trying to reproduce the same analysis I obtained with MATLAB, to understand whether the modules, the filters and inpaint_nans work like the MATLAB version. Honestly, I don't know the Python environment very well, and for this reason I stop at every little difference (I don't want to understand the scripts inside the modules, but I would like to understand a few things):
1) What are the principal differences between the MATLAB and Python versions? I mean: is the sequence of processing steps for the images the same?
Furthermore, I understand that there isn't a GUI or a spatialbox analysis tool for Python yet, is that right?
2) I'm only at the first tutorial, learning how to process a pair of images, but I saw that the filter module is different from the MATLAB local filtering with the edge kernel:
"u, v, mask = openpiv.validation.sig2noise_val( u, v, sig2noise, threshold = 1.3 )"
This mask comes from a global criterion and is not a good filter for my pair of images (the flow in the middle is much larger than near the boundary: with this threshold I filter out all the middle vectors, and I don't know how to filter locally with an edge-kernel convolution). Is this the only type of filter in Python? I looked for another filter, but I didn't find anything in the PDF guide that lists all the modules. Filtering the same pair of images (attached) with MATLAB and Python, I obtained different results (the Python ones are not well filtered).
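To make the question concrete, here is a rough sketch (plain numpy/scipy, not the OpenPIV API) of the kind of local filtering I mean: each vector is compared with the median of its neighbourhood instead of a single global threshold. The 3x3 size and the 2 px/dt threshold are only placeholders, and it assumes u and v do not yet contain NaNs (scipy's median_filter does not ignore NaNs).

import numpy as np
from scipy.ndimage import median_filter

def local_median_validation(u, v, threshold=2.0, size=3):
    """Flag vectors that deviate too much from their local neighbourhood median."""
    u_med = median_filter(u, size=size)
    v_med = median_filter(v, size=size)
    bad = (np.abs(u - u_med) > threshold) | (np.abs(v - v_med) > threshold)
    # replace the flagged vectors with NaN so they can be re-interpolated later
    return np.where(bad, np.nan, u), np.where(bad, np.nan, v), bad

Is something like this (or the edge-kernel convolution of the MATLAB version) already available in the Python modules?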
3) I still have not found a way to process a sequence of images. In tutorial part 2 I found an example for batch-processing a list of image pairs, but not a sequence of images like in MATLAB (I sketch what I mean right after the script below).
# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>
# <codecell>
# Follow the example in tutorial 2: http://openpiv.readthedocs.org/en/latest/src/tutorial.html
# <codecell>
import sys, os, glob
sys.path.append(os.path.abspath(r'D:\OpenPIV\OpenPIV-0.11'))
import openpiv.tools
import openpiv.process
import openpiv.scaling
import openpiv.preprocess
import openpiv.validation   # needed for global_val / sig2noise_val below
import openpiv.filters      # only needed if the replace_outliers call below is enabled
import numpy as np
# import matplotlib.pyplot as plt
from skimage import io, img_as_float, exposure, data, img_as_uint
from skimage.filter import sobel   # 'skimage.filters' in newer scikit-image versions
from skimage.util.dtype import dtype_range
from scipy.ndimage import median_filter, gaussian_filter, binary_fill_holes
from skimage.filter import rank, threshold_otsu
from skimage.morphology import disk
# <codecell>
def dynamic_masking(image, filter_size=7, threshold=0.005):
    """ Dynamically masks out the objects in the PIV images
    """
    imcopy = np.copy(image)
    image = exposure.rescale_intensity(img_as_float(image), in_range=(0, 1))
    blurback = gaussian_filter(image, filter_size)
    edges = sobel(blurback)
    blur_edges = gaussian_filter(edges, 21)
    # create the boolean mask
    bw = (blur_edges > threshold)
    bw = binary_fill_holes(bw)
    imcopy -= blurback
    imcopy[bw] = 0.0
    return imcopy  # image
# <codecell>
def func( args ):
    """A function to process each image pair."""
    # this line is REQUIRED for multiprocessing to work
    # always use it in your custom function
    file_a, file_b, counter = args
    print file_a, file_b, counter
    # build the output path on drive d:, mirroring the input directory layout
    filepath, filename = os.path.split(file_a)
    filedrive, filepath = os.path.splitdrive(filepath)
    filename = os.path.splitext(filename)
    filepath = os.path.join('d:', filepath)
    if not os.path.exists(filepath):
        os.makedirs(filepath)
    filename = os.path.join(filepath, filename[0])
    print filepath
    # typical parameters:
    window_size = 32        # pixels
    overlap = 16            # pixels
    search_area_size = 64   # pixels
    frame_rate = 40         # fps
    # read images into numpy arrays
    frame_a = openpiv.tools.imread( file_a )
    frame_b = openpiv.tools.imread( file_b )
    # %%timeit
    im_a = io.imread( file_a )
    im_b = io.imread( file_b )
    # let's crop the region of interest
    frame_a = im_a[600:1600, 800:1400]
    frame_b = im_b[600:1600, 800:1400]
    # mask out the objects before the correlation
    frame_a = dynamic_masking(frame_a)
    frame_b = dynamic_masking(frame_b)
    # process with the masked images (run once more with the original images for comparison)
    u, v, sig2noise = openpiv.process.extended_search_area_piv(
        frame_a.astype(np.int32), frame_b.astype(np.int32),
        window_size = window_size,
        overlap = overlap,
        dt = 1./frame_rate,
        search_area_size = search_area_size,
        sig2noise_method = 'peak2peak')
    x, y = openpiv.process.get_coordinates( image_size = frame_a.shape, window_size = window_size, overlap = overlap )
    # validate: first a global range check, then the signal-to-noise threshold
    u, v, mask = openpiv.validation.global_val( u, v, (-100., 100.), (-100., 100.) )
    u, v, mask = openpiv.validation.sig2noise_val( u, v, sig2noise, threshold = 1.6 )
    # u, v = openpiv.filters.replace_outliers( u, v, method='localmean', max_iter=1, kernel_size=2)
    # x, y, u, v = openpiv.scaling.uniform(x, y, u, v, scaling_factor = 96.52 )
    # save to a file
    openpiv.tools.save(x, y, u, v, mask, filename+'_%03d.txt' % counter, fmt='%8.7f', delimiter='\t')
# <codecell>
# '/Users/alex/Desktop/mark_23_06/5ppm_6hz/Camera1-0100.tif'
listdir = glob.glob(r'Z:\2.5PPM\6_86Hz\*')
for data_dir in listdir:
    task = openpiv.tools.Multiprocesser( data_dir = data_dir, pattern_a=r'Camera1-0*.tif', pattern_b = None)
    task.run( func = func, n_cpus = 1 )
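Coming back to point 3 above: to show what I mean by a sequence, I suppose one could build the consecutive pairs by hand and feed them to the same func() as above. The glob pattern is just a guess for my file names.

frames = sorted(glob.glob(r'Z:\2.5PPM\6_86Hz\Camera1-0*.tif'))
# pair frame k with frame k+1: (0,1), (1,2), (2,3), ...
for counter, (file_a, file_b) in enumerate(zip(frames[:-1], frames[1:])):
    func((file_a, file_b, counter))

But is there already a built-in way to do this in the Python version, as there is in MATLAB?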
I don't know if I was quite clear. Anyway, this is not urgent, so answer me only when you can - I don't want to steal your time. Thank you for your help and availability!
Best regards,
Francesco T.

2016-03-07 18:56 GMT+01:00 Alex Liberzon <ale...@eng.tau.ac.il>:
see below

On Mon, Mar 7, 2016 at 5:31 PM, francesco toselli <francesc...@edu.unito.it> wrote:

Dear Alex,
About SPATIALTOOLBOX-OPENPIV-MATLAB: I attached 4 files concerning the same processed image, 2 files (.txt and .vec) with units pix/dt and 2 files (.txt and .vec) with units m/s. All files have a first line of description (i.e., VARIABLES= "X pix", "Y pix", "U pix/dt", "V pix/dt", "CHC", ZONE I=33, J=63). Can you try to load these 4 files with the spatialbox GUI? I understood that:

1) The .txt files are not read because they have the first line of description (but I don't know where the script needs to be modified to fix this bug).

Alex: TXT files should be deprecated if the new openpiv-matlab version is used. However, we have people using old versions of openpiv-matlab and openpiv-python, so they still use the TXT format. Maybe we should add a note somewhere that the format has changed - in my case a dialog pops up and says: try the VEC loader.

2) If I load the file_pixdt.txt and the file_ms.txt (without the first line of description) with the spatialbox, I get two different orientations of the y axis (note that only the scale and dt factors changed between the two OpenPIV runs). Furthermore, when analysing the .txt file in units of m/s I can't use any option of the spatialbox GUI. Does this happen to you?

Alex: The loader in the spatialbox does some intelligent reading: if the position values are not integers, then it assumes millimetres (quite stupid, I'd say, as it could be metres or anything else, but for some reason we left it as is). Otherwise, I created the _old.txt files (attached); when I read them, one goes roughly from 100 to 1000 pixels (from top to bottom; the bottom has a small streamwise velocity and the top a high velocity, e.g. a boundary layer profile) and the second goes from about 0.01 mm to 0.08 mm, but oriented the same way from top to bottom (which is probably incorrect, as smaller "y" or "z" values should be at the bottom). Is this what you are referring to?

3) About the .vec file, as I said in a previous mail, I can use flood only if I change a line in the script vecread.m (inverting the variables i and j at line 138: data = reshape(data,[j,i,columns]); ).

Alex: See the new version of master on MY (alexlib) repository. If you could please announce your findings on the mailing list, you can tell them that it's fixed in the trunk on GitHub and that everyone is recommended to upgrade their versions of openpiv-matlab and openpiv-spatial-toolbox, which now go synchronously. Thanks a lot for your careful tests. If you could also write things up, we'll make an official test for OpenPIV. I also need to do a lot of uploads to the official OpenPIV repository; my hands are just too short. Please check: if you used the data from https://github.com/OpenPIV/openpiv-spatial-analysis-toolbox, it's probably different from what I have in mine. Some bugs have been lying there for quite some time.

Can you test these 3 points? Thanks in advance for all your help.
Francesco T.

2016-03-07 9:32 GMT+01:00 francesco toselli <francesc...@edu.unito.it>:
OK, I am now able to run the tutorial script. Since I don't know very well how Python works (how it calls scripts, etc.), I would like to show only the resulting image of the tutorial: it seems to be slightly different from the image in the PDF tutorial file. One vector is not filtered, isn't it?
(Could it be a problem with my Python version, or a difference from the PDF tutorial part 1?) Thank you for your help.
F. Toselli

2016-03-02 19:26 GMT+01:00 Alex Liberzon <ale...@eng.tau.ac.il>:
The documentation is also outdated. I opened a pull request to update the INSTALL file. Thanks for noticing.

On Wed, Mar 2, 2016 at 3:57 PM, francesco toselli <francesc...@edu.unito.it> wrote:
I will also install Anaconda on my Windows 10 machine if needed. I uploaded my modified .m files to my two forks: https://github.com/francescotoselli/openpiv-matlab and https://github.com/francescotoselli/openpiv-spatial-analysis-toolbox
About Python: I read in the INSTALL file that for Mac the binary (32 or 64 bit) Enthought Python Distribution (EPD) was recommended, so I downloaded it for that reason. Anyway, I can also download Anaconda (with Python 2.7 or 3.5?).
I don't know if you are already able to see them (the change in correlation.m is only an attempt to see the results of a normalized correlation, but I don't know whether it is right or not; I didn't open the pull request because I don't want to share anything wrong).

2016-03-01 22:30 GMT+01:00 Alex Liberzon <ale...@eng.tau.ac.il>:
You go to your GitHub page https://github.com/francescotoselli/openpiv-matlab and you shall see a button "Compare" next to the last commit; then you also get the option to send a pull request to the openpiv/openpiv-matlab repository. Here is the tutorial: https://help.github.com/articles/creating-a-pull-request/
At the moment I did it for you, but we can also close that pull request and you open a new one.
On all machines I strongly recommend installing Anaconda instead of Python(x,y) - it is much more uniform across platforms, much more convenient, etc.: https://www.continuum.io/downloads
It will also help us identify whether the problems come from the installation of the software.

On Tue, Mar 1, 2016 at 5:51 PM, francesco toselli <francesc...@edu.unito.it> wrote:
Could it be a problem with the Python version? On the university machines (Python 2.7.10 locally on my Mac and Python 2.7.3 on the Debian virtual machines) I have also downloaded the Python folder from GitHub and added some missing modules with pip, but I can't run the tutorial scripts.
I forked and created my openpiv-matlab-master, but I have never used GitHub before, so I have to understand how to open a pull request or file an issue. Can you help me with some instructions?
Meanwhile I'm also trying to use the Python version to see the differences between the Python and MATLAB versions (even if I have never used Python before).
On my Windows laptop I downloaded Python(x,y) and some modules: now I can run scripts; for example, the tutorials seem to run well.
For the Mac problem in particular, it tells me that the module builtins.open is missing, and I can't install it with pip because it says it cannot find a version that satisfies the requirement builtins. Thank you for your help!

2016-02-29 21:09 GMT+01:00 Alex Liberzon <ale...@eng.tau.ac.il>:
Sorry for not replying earlier - quite busy these days. I'll take a look into these issues sooner or later. Thanks a lot. I'd be very happy if you could at least file the issues on GitHub - it will keep us more focused on fixing bugs like the one in the spatialbox, for instance.
Regarding inpaint_nans: I admit that I never tried to go deep enough into the different methods. My tests were against synthetic data and some real experiments, and we were happy with the default result. I do not think it makes a lot of difference when we interpolate small "holes"; when we interpolate large ones, everything is kind of "wrong".
I think we can fix the filter, we just need to dig into the kernels that have left-top-right-bottom settings. It's a modification of a default contrast-enhancement kernel.
Best regards,
Alex
P.S. Nice publication.

On Thu, Feb 25, 2016 at 6:30 PM, francesco toselli <francesc...@edu.unito.it> wrote:
About the new openpiv_matlab_master: now all seems good.
The images refer to the second phase of a microburst study. The first phase of the study (with a different implementation: a vertical current induced by "iceberg" fusion water) was published as:
Physical simulation of atmospheric microbursts, J. Geophys. Res. Atmos., Volume 119, Issue 11, 16 June 2014, Pages 6292–6305, http://onlinelibrary.wiley.com/doi/10.1002/2013JD021243/full
This second "experimental package" (performed with vertical currents induced by denser water) has been analyzed enough, but the article is still in progress, so there is no direct reference yet.
I have only some more questions concerning two things that I have not completely understood:
1) In the filter, using a kernel, the boundary convolution is calculated with fewer points, because outside the boundaries the kernel takes null values. For this reason I guess that a lot of boundary vectors are filtered out (even if they are good). Do you think there is a way to fix it?
2) About interpolation: I can't completely understand how the inpaint_nans interpolator works (using method 0 or method 3 I always obtain the same values; moreover, I can't tell whether all the methods are linear or not). I tried to understand the script, but even on the web I didn't find a good explanation of this interpolation method. Can you explain the basic theory behind it or send me some reference?
P.S. Talking about the reshape problem: I think that in the Spatial_box package, vecread.m line 138: data = reshape(data,[i,j,columns]); must also be replaced by inverting the i and j variables. With the original line I had problems using the flood options in the spatialbox interface.
Best regards,
F. Toselli

2016-02-24 20:51 GMT+01:00 Alex Liberzon <ale...@eng.tau.ac.il>:
Can I use your images for an additional test in OpenPIV? If yes, what would be the proper reference for them?

On Wed, Feb 24, 2016 at 7:03 PM, francesco toselli <francesc...@edu.unito.it> wrote:
Sorry, here are the two folders.
Best,
F. Toselli

2016-02-24 18:01 GMT+01:00 francesco toselli <francesc...@edu.unito.it>:
Dear Alex,
I have never used GitHub before, so I will try to fork and open a pull request in the next few days.
I start by sending two folders with analyses of the same couple of images:
A) "Difference fill_holes" is a folder with 2 processing runs that have a different line in the fill_holes script:
number 4 is the original folder having the line: vector = inpaint_nans(real(vector)) + i*inpaint_nans(imag(vector));
number 5 is mine, which has this line: vector = inpaint_nans(vector);
Conclusion A: I don't know why, but I found that this replaced line also gave reasonable, and better, interpolated vectors.
B) "Difference analysis" is a folder with 2 processing runs that show the problem I explained in the post on the Google group (because of conclusion A, in both runs I used the fill_holes line: vector = inpaint_nans(vector); ):
number 2 is my processing (with the new reshape lines that I will try to send you very soon via GitHub)
number 3 is the original one
As you can see, there is a difference between the filtered/interpolated vectors of the two runs, due to the different matrix reshape.
Can you please add a small example of when it happens?
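As a tiny illustration of what the wrong reshape does (numpy here, which reshapes row-major, while MATLAB is column-major, but the effect of swapping the two sizes is the same kind of regrouping - it is not a transpose):

import numpy as np

data = np.arange(12)        # stand-in for one flattened column of a .vec file
a = data.reshape(3, 4)      # intended layout: 3 rows, 4 columns
b = data.reshape(4, 3)      # sizes swapped: the data is regrouped, not transposed
print(a[1])                 # [4 5 6 7]
print(b[1])                 # [3 4 5] - neither a row nor a column of `a`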
Moreover, I would like to know one more thing about the lsgradient/gradient function in MATLAB:
1) Since it computes the matrices dudx, dudy, etc. with the same dimensions as u and v, how does it work at the last row? I mean, how is the last row of the derivative matrices calculated if we don't have a subsequent row to form the difference U(N+1,:)-U(N,:)?
% Take forward differences on left and right edges
if n > 1
    g(1,:) = (f(2,:) - f(1,:))/(h(2)-h(1));
    g(2,:) = (f(3,:) - f(2,:))/(h(3)-h(2));
    g(n-1,:) = (f(n-1,:) - f(n-2,:))/(h(end-1)-h(end-2));
end
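For comparison, this is how numpy's gradient handles the same problem - central differences in the interior and a one-sided difference at the first and last rows, so the output keeps the same shape as the input without needing a row N+1 (I assume lsgradient does something similar at the edges, but that is exactly what I would like to confirm):

import numpy as np

u = np.arange(20, dtype=float).reshape(4, 5)   # toy 4x5 "velocity" field
dudy, dudx = np.gradient(u)                    # derivatives along rows (axis 0) and columns (axis 1)
# interior rows use central differences (u[k+1] - u[k-1]) / 2;
# the last row falls back to the one-sided difference u[-1] - u[-2]
print(dudy[-1])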
2) I've read in the help that, instead of the scalars dx and dy, I can also pass the coordinate vectors x and y: [dudx,dudy] = lsgradient(u,x',y);
But in my particular case x and y are matrices, with dx and dy changing slightly across the rows/columns. How can I calculate the gradient in the best way? I tried to pass x and y as matrices, but the function always takes only the first row/column of x/y and calculates the whole gradient with that vector:
[x,y] = meshgrid(-2:.2:2, -2:.2:2);
z = x .* exp(-x.^2 - y.^2);
[px,py] = lsgradient(z,x,y);   % <---- note the call with the matrices
contour(z), hold on, quiver(px,py), hold off
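For what it's worth, recent numpy versions (1.13 and later) let np.gradient take 1-D coordinate vectors, one per axis, instead of the full meshgrid matrices; if dx and dy vary only slightly, taking one representative row/column is probably the same approximation:

import numpy as np

y, x = np.mgrid[-2:2.01:0.2, -2:2.01:0.2]     # same grid as the MATLAB example
z = x * np.exp(-x**2 - y**2)
# pass the 1-D coordinates of each axis, not the full 2-D matrices
dzdy, dzdx = np.gradient(z, y[:, 0], x[0, :])

But I don't know whether lsgradient can be used the same way with non-uniform coordinates.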
Thanks.

Best regards,
Francesco Toselli