Score Match


Brandi Baylon

Jul 12, 2024, 3:36:52 AM

Important Note: MATCH! E-Tool results provide the last name of CPA caregivers who meet your search criteria. This in no way implies that DFCS Case Managers are authorized to make direct placements with CPA foster parents. Instead, having the foster parent's name simply gives DFCS Case Managers more information to discuss with the CPA regarding potential matches.

Looking for a comprehensive directory of all Child Placement Agencies (CPAs) and Child Caring Institutions (CCIs)? The RBWO Provider Profile Guide, updated quarterly, lists all approved RBWO providers. The Guide is composed of all RBWO provider profiles and includes each provider's quarterly Performance Based Placement score/grade, a description of services, and contact information. Click the button below to download the Guide (PDF).

If I had to guess, the file forward.fastq.gz wasn't completely transferred when it was moved to the Mac you're running QIIME 2 on. This can happen - network errors sometimes cause files to look like they have transferred completely when in reality they aren't all there. The reason I think that is the case is the specific error message. As you pointed out, the error is down near the bottom of the file (which represents the last part of the file transferred). As well, the error message is complaining that the quality scores in a record are shorter than the sequences for the same record:
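To see whether that symptom is really in the file, a quick sanity check like the following Python sketch can help (the filename comes from the post; the parsing assumes standard four-line FASTQ records):

```python
import gzip

def find_bad_records(path):
    """Scan a fastq.gz file and report headers of records whose quality
    string length differs from the sequence length -- the symptom a
    truncated transfer tends to produce near the end of the file."""
    bad = []
    with gzip.open(path, "rt") as fh:
        while True:
            header = fh.readline()
            if not header:
                break  # end of file
            seq = fh.readline().rstrip("\n")
            fh.readline()  # '+' separator line
            qual = fh.readline().rstrip("\n")
            if len(qual) != len(seq):
                bad.append(header.strip())
    return bad
```

Running this on forward.fastq.gz before importing would show whether the last records are incomplete; comparing a checksum (e.g. md5) of the file before and after transfer is an even simpler check.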

Personally, I wouldn't trust the QIIME 1 results in this case - I wasn't involved with the QIIME 1 project, but I suspect that QIIME 1 just wasn't performing the same level of sequence validation that QIIME 2 does (though I could very well be wrong).

Hi @chelsea.brisson.423 - I'm not too sure what else to tell you here - there appears to be an issue with these data (or with our understanding of their nature). If they were prepared using the EMP protocol (wet lab and sequencing programming), the forward, reverse, and barcode reads should all be in the same "read" order and should all have the same number of reads. We can try assessing the read counts, but it may not tell us anything we don't already know:
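A quick way to do that read-count comparison - a sketch assuming EMP-style forward/reverse/barcode fastq.gz files with four lines per record (the filenames in the comment are placeholders):

```python
import gzip

def read_count(path):
    """Number of records in a fastq.gz file (4 lines per record)."""
    with gzip.open(path, "rt") as fh:
        return sum(1 for _ in fh) // 4

# Hypothetical filenames -- substitute your own:
# counts = {f: read_count(f) for f in
#           ["forward.fastq.gz", "reverse.fastq.gz", "barcodes.fastq.gz"]}
# If the three counts differ, the files are out of sync.
```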

Hi @thermokarst - thanks for the reply! The original protocol was EMP. We ended up reverse complementing the barcodes manually and that worked. Still not sure why the dataset worked in QIIME 1 but not QIIME 2!
Thanks for the help!

Hmm, the errors you shared above don't really have anything to do with the orientation of the barcodes. For anyone else who might come across this topic, my hypothesis is that there was a file mixup somewhere (maybe a partial transfer), and this process of RCing helped get everything situated. I don't think that reverse complementing would have anything to do with either of the errors posted above, though, so for those following along, please don't just RC your reads because you read about it here. Luckily @chelsea.brisson.423 killed two birds with one stone here, because they almost certainly would've needed to RC their reads anyway - sounds like it all got sorted out in one shot.
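For context, reverse complementing a barcode is a purely mechanical operation - a minimal sketch:

```python
# Translation table: each base maps to its complement (N stays N).
_COMPLEMENT = str.maketrans("ACGTNacgtn", "TGCANtgcan")

def revcomp(seq):
    """Reverse complement of a DNA barcode or read."""
    return seq.translate(_COMPLEMENT)[::-1]
```

For example, revcomp("AGTC") gives "GACT", and applying it twice returns the original sequence.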

I am attempting to match address information and I have been struggling to get my Fuzzy Match to match the records correctly. As you can see in the screenshots below, some of my records from one source are compared to the other source (as intended) while in some cases one source is being "matched" to itself, resulting in a Null match score.

Does anyone know how I might resolve this issue? I can dummy the data and submit a sample workflow if need be, but wanted to see if anyone would be familiar with this issue and know the necessary steps to resolve it without needing to look at the underlying data.

I think you are right in saying that it just appears this way for unmatched records, but I'm not sure why some of these records are not being matched while other records are matched successfully (i.e., some addresses with "unit" or "apt" numbers are matched while others are not). I will continue to build out my data preparation before the fuzzy match as much as possible, but I ultimately might have to use the CASS tool to match the addresses consistently. I have been trying to avoid that because the download process for the dataset is not as easy as it should be, and I want to make this workflow as easy as possible for others in my firm to use once I share it.
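One preparation step that sometimes rescues these unit/apt near-misses is standardizing unit designators before the Fuzzy Match. A small Python sketch of the idea (the abbreviation map is an assumption - extend it for your data):

```python
# Hypothetical abbreviation map -- extend for your own data.
_UNIT_WORDS = {"apartment": "apt", "apt.": "apt", "unit": "apt",
               "suite": "ste", "ste.": "ste"}

def normalize_address(addr):
    """Lowercase, collapse whitespace, and standardize unit designators
    so 'Unit 4', 'Apt. 4', and '#4' all compare equal before fuzzy matching."""
    s = addr.lower().replace("#", " apt ")
    return " ".join(_UNIT_WORDS.get(t, t) for t in s.split())
```

After this pass, exact-prefix records agree on the unit token, so the fuzzy matcher only has to absorb genuine spelling differences.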

I have a combined dataset (cases and controls). The total number of cases is fixed, and there are twice as many controls as cases. The main outcome of the study is recurrence of disease after treatment. I would like to run propensity score matching on this dataset. Is there any way I can run propensity scores in JMP?

While JMP doesn't have a dedicated Propensity Score Analysis (PSA) platform, you can definitely accomplish PSA in JMP by regressing the Treatment/Control factor on the suspected covariates using the Fit Model (Logistic) platform. The propensity scores are the XB portion of the model. You can use the scores to identify matches or as weights. Also, given that JMP connects with R, you can take advantage of the PSA algorithms that R has, including the Optimal Matching routine. If you know how to do PSA in a different software package, you could try using the platforms that JMP has to reproduce the results.
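To make the recipe concrete, here is a minimal NumPy sketch of the same idea - plain gradient descent standing in for the Fit Model (Logistic) fit, followed by greedy 1:1 nearest-neighbor matching on the score. This is an illustration of the technique, not JMP's (or R's optimal matching) implementation:

```python
import numpy as np

def propensity_scores(X, treated, iters=2000, lr=0.1):
    """Fit a logistic regression of treatment on covariates by gradient
    descent; the predicted probabilities (logistic of XB) are the
    propensity scores."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w += lr * X1.T @ (treated - p) / len(X)  # gradient ascent step
    return 1.0 / (1.0 + np.exp(-(X1 @ w)))

def nearest_match(scores, treated):
    """Greedy 1:1 nearest-neighbor matching on the propensity score,
    pairing each case with the closest unused control."""
    cases = np.where(treated == 1)[0]
    controls = list(np.where(treated == 0)[0])
    pairs = []
    for i in cases:
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        controls.remove(j)  # each control used at most once
        pairs.append((i, j))
    return pairs
```

With a 1:2 case:control ratio like yours, every case gets a match and half the controls go unused; in practice you would also check covariate balance after matching.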

Gary King (one of the authors of the paper @XanGregg linked to) also gave an "International Methods Colloquium" on this topic and has the video available on YouTube. I've included it below, and here is the direct link.

However, I'm still having a hard time understanding how to extract the "overall" matching coefficient score for the instance. I know that depending on the method used, the coefficient varies from 0 to 1 or from -1 to 1, and each pixel has a similarity index in the result matrix.

For example, whenever a match is found, I'd like to know the confidence score for that match - I mean, how similar the algorithm believes our template is to the original image. I believe minMaxLoc does that by analyzing the similarities. But how can I check what the minimum or maximum score is? Do I have to go through all the values in the matrix? I hope that's clear. It's night here now; I will check your response tomorrow morning. Thanks in advance!

minMaxLoc is doing everything for you. It returns the min value, the max value, and their positions. If you call this function on the result of matchTemplate, you will have the best position of the patch (in maxLoc) and the similarity score (in maxValue). If you used CV_TM_SQDIFF, you have to get the min instead of the max, but that's done with the same function.
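To make the relationship between the two functions concrete, here is a small NumPy sketch of what matchTemplate with CV_TM_SQDIFF followed by minMaxLoc computes (an illustration of the math, not OpenCV's optimized implementation):

```python
import numpy as np

def match_template_sqdiff(image, templ):
    """Slide templ over image and record the sum of squared differences
    at each position -- the CV_TM_SQDIFF score map. Smaller = better."""
    H, W = image.shape
    h, w = templ.shape
    out = np.empty((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + h, x:x + w]
            out[y, x] = np.sum((patch - templ) ** 2.0)
    return out

def min_max_loc(result):
    """Return (minVal, maxVal, minLoc, maxLoc) like cv2.minMaxLoc.
    Locations are (x, y) to follow OpenCV's convention."""
    minpos = np.unravel_index(np.argmin(result), result.shape)
    maxpos = np.unravel_index(np.argmax(result), result.shape)
    return (result[minpos], result[maxpos],
            (minpos[1], minpos[0]), (maxpos[1], maxpos[0]))
```

For SQDIFF the best match is where the score map hits its minimum (0 for a perfect match); for the correlation-based methods you take the maximum instead, and you can compare that single value against a threshold to decide whether the template is present at all.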

Thank you so much Mathieu. Which method do you think returns the best results? I'm matching eyes of the same person. Basically I'm matching open eye template with closed eye and open eye. I'd like to get a higher score for open eye and lower score for closed eye. Thank you again!

As far as I understand, you would like to get one similarity value for the whole template matching instead of the array of individual pixel matches. However, the main idea of template matching is to slide a small (relative to the image) template through the image and find the highest-matching area. See the template matching tutorial here. The resulting image gives a detailed map of how well the template matched at each location.

Back to your original question if you are not interested in the exact location of the match just the highest/lowest match value - because you want to find the images on which the template is certainly there - then you can use the minMaxLoc function and compare the maximal/minimal value with a predefined threshold.


Hi @essamsky. I was trying to test your code, but my minVal is -3971198.75 and my maxVal is 18563520. I am using the TemplateMatchingType.Ccoeff method for matching. It found the template fine, but at the moment I have no idea how to calculate a matching score, and I didn't apply a threshold to the result. As I understand it, the confidence can't be greater than 1. I will appreciate your help with this question.
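Values in the millions are expected here: the raw Ccoeff score is unnormalized, so it scales with image intensity and template size. The normalized variant (CcoeffNormed / TM_CCOEFF_NORMED) is always in [-1, 1] and can be compared against a fixed threshold. A NumPy sketch of what that normalized score is at a single location:

```python
import numpy as np

def ccoeff_normed(patch, templ):
    """Zero-mean normalized cross-correlation of one image patch with
    the template -- the per-location score of TM_CCOEFF_NORMED.
    Always in [-1, 1]: 1 = identical up to brightness/contrast shift."""
    p = patch - patch.mean()
    t = templ - templ.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0
```

Switching to the normalized method lets you treat maxVal directly as a confidence score (e.g. accept the match only if it exceeds something like 0.8, a threshold you would tune for your data).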

I'm seeing something wrong with the fuzzy match scoring on some of the matches and hoping someone can help me understand why this is occurring. I have two tables. My first table, called "AS Employee ID", has around 500 rows with two columns called "Request ID" and "AssetSteward Leader Name". The asset steward name column is a free-form text field that holds a person's name. Unfortunately, the user can enter the name any number of ways: first name last name, last name comma first name, etc.

My 2nd table named "Employee" has over 35,000 records and each row is an employee record. There is a text column called "EmpName". All names are entered Last Name then a comma with a space followed by the first name. Sometimes there may be another space and a middle initial or middle name on the end. Not always though.

I've merged the tables using a simple outer fuzzy join which works great on most matches. However, there are some that just don't make any sense. An example is the match for an asset steward named "Sayers, John". The merge details and results are shown below. I don't understand why it is scoring what should be the closest match (row 7) with a score of 0.83. Row 7 is almost an identical match minus the space and an "O" MI on the end. What is going on with this? Ultimately, I want to limit the matches to one but that gives me "Waters, John" instead of "Sayers, John O" which obviously isn't correct. The score just doesn't seem to make any sense based on what I've read. It seems to be only scoring the first name instead of the entire string and disregarding the last name completely. Thank you for the help! This is going to drive me to drink.
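I can't speak to the exact similarity function the fuzzy merge uses internally, but for intuition, here is a common string-similarity sketch (Jaccard similarity over character trigrams - an illustration, not Power Query's actual algorithm). On these strings it scores the near-identical pair well above the wrong one, which is why the 0.83 result for "Sayers, John O" losing to "Waters, John" looks off:

```python
def trigrams(s):
    """Character trigrams of a padded, lowercased string."""
    s = " " + s.lower() + " "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def jaccard(a, b):
    """Jaccard similarity of two strings' trigram sets, in [0, 1]."""
    A, B = trigrams(a), trigrams(b)
    return len(A & B) / len(A | B)
```

With this measure, jaccard("Sayers, John", "Sayers, John O") comes out around 0.86, while jaccard("Sayers, John", "Waters, John") is 0.5 - the last name dominates because it contributes most of the trigrams. If the merge behaves otherwise, it may be worth checking the transformation-table and similarity-threshold options on the fuzzy join.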
