Scorecard Clarifications for USCDI versions


Mat Davis

Oct 31, 2024, 7:09:07 PM
to Edge Test Tool (ETT)
Hi all.

1. Can any clarity be provided on what standards the Scorecard uses to grade C-CDAs?
2. Is USCDIv1 or USCDIv3 the current standard being used for the grading?

3. The current high scoring C-CDA is actually receiving a score of a C. Any insight on this?

2024-10-31 18h05_18_01_.png

Thanks - Mat

Kim Poletti

Nov 4, 2024, 12:06:13 PM
to Edge Test Tool (ETT)
Hi - Thanks for reaching out. This has been logged for review and a member of the team will reach out in the near future.

Mat Davis

Nov 12, 2024, 6:03:04 PM
to Edge Test Tool (ETT)
Thanks Kim - much appreciated

Kim Poletti

Nov 15, 2024, 11:10:59 AM
to Edge Test Tool (ETT)
Hi Mat,

The team is working on this, and a fix will be forthcoming.

Thanks,
Kim

Mat Davis

Nov 21, 2024, 11:26:07 AM
to Edge Test Tool (ETT)
Hi Kim, this issue has been resolved as of the new SITE UI 4.0 website update.


2024-11-21 10h23_55_01_.png

Thanks - Mat

Mat Davis

Nov 21, 2024, 11:39:19 AM
to Edge Test Tool (ETT)
Just a follow-up:

Of the 3 questions/issues, #3 is resolved, but I would still appreciate any feedback on questions #1 and #2.

Thanks - Mat

Kim Poletti

Nov 22, 2024, 4:29:28 PM
to Edge Test Tool (ETT)
Thank you for the clarification, Mat. We are investigating questions #1 and #2. We will update you once we have more information. Thank you!

Mat Davis

Nov 24, 2024, 10:35:36 AM
to Edge Test Tool (ETT)
You're very welcome as always Kim!

Dan Brown

Dec 6, 2024, 8:46:13 PM
to Edge Test Tool (ETT)
1: Q: Can any clarity be provided on what standards the Scorecard uses to grade C-CDAs?
A: Did you get a chance to review the 'Scorecard Introduction' card within the tool? The most relevant link includes a download of the rubric used:
https://www.hl7.org/implement/standards/product_brief.cfm?product_id=534

2: Q: Is USCDIv1 or USCDIv3 the current standard being used for the grading?
A: USCDI v1
Details: The C-CDA Scorecard currently sends curesUpdate equal to true to the RefVal API, which aligns with USCDI v1, along with an objective that is generally equivalent to CCDA_IG_PLUS_VOCAB.

3. Q: The current high scoring C-CDA is actually receiving a score of a C. Any insight on this?

A: This is complicated and under analysis, but thanks for bringing it up! As you noted, it is resolved with the new Scorecard in SITE 4 with the new SITE 4 samples. However, the old SITE 3 highScoring sample remains unchanged even though it gets a C, because we don't want to change the sample out from under folks; it is provided for historical reasons. Any of the issues in the sample can be resolved to improve the grade.

The project is open source, all code/changes can be seen here:
https://github.com/onc-healthit/ccda-scorecard

I don't yet see any changes in the commits that would alter the rubrics or the samples since 2021. Maybe I missed something; feel free to look at the commits to the master branch to verify.

The most recent rubric change and highScoring json change I found (excepting https://github.com/onc-healthit/site-ui-4, which is a different codebase and frontend only, but has the new samples, too) is from Feb 23, 2021:
https://github.com/onc-healthit/ccda-scorecard/commit/2d93b81eb3f368d929cef02d1f041cae436adbd0
However, this change only shows highScoring results changing from 100 to 99 (A+ in both cases).

I think someone asked about using an old version in one of these 4 similar SC threads. One could build the Scorecard and deploy it locally from any commit in history, if desired.

Note: there have been 2 versions of the rubrics defined for the backend; the 2nd came out in 2020 and the first around 2016/17. There hasn't been a new version of the rubrics since, and thus there have been no major changes to the scoring logic since. The links in the Scorecard Introduction card bring you to the 2nd and latest version.

The last change to the actual XML appears to be Aug 29 2017 (again, excepting the new version in SITE 4, which adds new samples in addition):
https://github.com/onc-healthit/ccda-scorecard/commit/ed34e0054aefb7281cadf7d380df6e17698bcee9

Other Relevant posts:
"Scorecard - site3-highScoringSample.xml getting a C"
https://groups.google.com/g/edge-test-tool/c/mvzF8TMb6hM

"C-CDA Scorecard"
https://groups.google.com/g/edge-test-tool/c/r5uYSe11gls

Mat Davis

Dec 7, 2024, 2:08:54 PM
to Dan Brown, Edge Test Tool (ETT)
This is very helpful Dan! 

I’ll review all details in full on my end and let you know if I have any questions.

Thanks - Mat

--
You received this message because you are subscribed to the Google Groups "Edge Test Tool (ETT)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to edge-test-too...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/edge-test-tool/499e0a12-15a7-48f7-ba32-4ad4ec95e3d5n%40googlegroups.com.

Dan Brown

Dec 10, 2024, 2:54:37 PM
to Edge Test Tool (ETT)

Mat Davis

Dec 10, 2024, 7:35:54 PM
to Dan Brown, Edge Test Tool (ETT)
Thanks Dan.

I think the real pain point is trying to figure out how to actually resolve these errors.
Specifically, something as simple as the DisplayName error should be relatively easy to resolve.

Then, with the High Scoring sample not even having DisplayNames and missing the entire Medication Section, there's not enough guidance on how to follow best practices.
The absence of DisplayName and Medication Section data appears to be exactly why the High Scoring sample scores so high.

I will say that I was able to resolve one issue, the Patient Legal Name, by ensuring use="L".
I'll continue to review any other errors that aren't to do with the DisplayName and update this thread or create a new thread at that time.
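
For anyone hitting the same Patient Legal Name check, here is a minimal illustrative fragment; the name values are placeholders of mine, not from my actual document:

```xml
<!-- recordTarget patient name; use="L" marks this as the legal name -->
<name use="L">
  <given>Alice</given>
  <family>Newman</family>
</name>
```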

Thanks - Mat

Dan Brown

Dec 11, 2024, 5:41:34 PM
to Edge Test Tool (ETT)
Understood, Mat. Apologies for the frustration. We will continue to look into it on our end with the aim of providing resolutions. Thanks!


Mat Davis

Dec 11, 2024, 7:01:10 PM
to Dan Brown, Edge Test Tool (ETT)
Thanks a ton Dan!

I’m a QA so I have a high tolerance for working thru issues. As always, your help is appreciated and no apology needed


Mat Davis

Dec 14, 2024, 7:48:19 PM
to Edge Test Tool (ETT)
Local Testing
  • I finally got /isCodeAndDisplayNameFoundInCodeSystems endpoint working on my local instance of the Reference Validator and here are my results
Request #1
    2024-12-14 18h25_28_01_.png
Request #2
    2024-12-14 18h28_21_01_.png
Request #3
    2024-12-14 18h44_13_01_.png
DisplayName Validation Conclusion
  • After reviewing the Github repos you posted, I got a better idea of HOW the DisplayName is being validated
  • The Reference Validator API is used on the backend via the following endpoint: /isCodeAndDisplayNameFoundInCodeSystems

  • The response from this endpoint is either "true" or "false"
  • My assumption is that something is causing this endpoint to always return "false" even when DisplayName and the other conditions are met, which is why it seems like I can never resolve the issue
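
To make the probing reproducible, here is a sketch of how I'm exercising the endpoint against a local RefVal instance. Note: the base path and the query-parameter names (code, codeSystem, displayName) are my assumptions for illustration only; check the RefVal controller source for the real ones. Only the endpoint path itself comes from this thread.

```python
# Probe the Reference Validator's displayName check on a local instance.
# ASSUMPTIONS: base path and query-parameter names are illustrative guesses,
# not confirmed against the RefVal source.
from urllib.parse import urlencode

def build_check_url(base, code, code_system, display_name):
    """Build the request URL for the displayName validation endpoint."""
    query = urlencode({
        "code": code,
        "codeSystem": code_system,
        "displayName": display_name,
    })
    return f"{base}/isCodeAndDisplayNameFoundInCodeSystems?{query}"

def is_valid(response_text):
    """The endpoint's body is the literal string 'true' or 'false'."""
    return response_text.strip().lower() == "true"

# Usage: fetch the URL with any HTTP client and pass the body to is_valid().
url = build_check_url("http://localhost:8080/referenceccdaservice",
                      "8480-6", "2.16.840.1.113883.6.1",
                      "Systolic blood pressure")
```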

2024-12-14 16h49_01_01_.png

2024-12-14 16h51_28_01_.png

Follow Ups
Dan: A: Did you get a chance to review the 'Scorecard Introduction' card within the tool? The most relevant link includes a download of the rubric used
> Sure did
> Those details were helpful but still didn't get me to a resolution for the target data element I was troubleshooting

Dan: A: USCDI v1
> Much appreciated on these specifics. The details you provided also make sense, as I just recently set up the Validator API locally

Dan: I think someone asked about using an old version in one of these 4 similar SC threads
> Yep, that would've been me, if you saw it recently

Dan: Note, there were 2 versions of the rubrics defined for the backend, and, the 2nd version came out in 2020.
> So if I did decide to try and build an old version, I'd target this "2nd version" or later
> let me know if you don't recommend this release 

Dan:  Other Relevant posts:
> I created 1 of these posts and already commented on the other prior to your response

Thanks - Mat

Dan Brown SITE

Dec 16, 2024, 2:03:48 PM
to Edge Test Tool (ETT)
That analysis is helpful, thanks, Mat!
And, that's great you got the RefVal running locally. Please be aware that your vocabulary files and ours may be different. But, if you pulled the latest from VSAC, it should be pretty well in sync other than licensed items.
Since I don't yet know where the change was that caused the high scoring issue (or the display name issue, which seems to be related), it's hard to say. The most recent release with a change to the actual rules appears to be 2.5, so I would expect it to behave the same. I would think any release prior to 2.5 should be tested until we get to the one that scores highScoring with an A or higher and doesn't have the displayName issue. We will be looking into this once we verify the issue is in the SC, if we don't see the issue in the code first:
Here is 2.5:
https://github.com/onc-healthit/ccda-scorecard/releases/tag/R2.5
Here is probably the first release I would test to see if there is no issue:
https://github.com/onc-healthit/ccda-scorecard/releases/tag/R2.4






Mat Davis

Dec 17, 2024, 10:59:08 AM
to Edge Test Tool (ETT)
No prob at all Dan! You posting those extra links in the previous email helped narrow down a plan of action

For the Scorecard releases, I'll make note of those and also follow your guidance on finding the release that allows the highScoring example to get an A.
  • my only issue with this logic is that it's possible that the root cause is at the Validator API level
  • so I may have to also or separately try older releases of the Validator API as well
Thanks - Mat

Kyle Meadors

Dec 18, 2024, 11:49:18 AM
to Mat Davis, Edge Test Tool (ETT)
I just want to echo what Mat has said, and I agree that having the scorecard provide the "resolution" to the issue or at least more direction would be good. Also, the displayName checking seems extreme as it appears the best way to get a high score is just to leave out the displayName entirely. I'm looking in the ihtsdotools.org SNOMED browser, and the names on my C-CDA match there. I don't know why the displayName error would be triggered.

I also see some odd errors on checking for vitals, where it says, "Rule: The Vital Sign Observation entries should use LOINC codes to represent the type of vital sign being captured", but my codes match the expected values:

<code xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" code="8480-6" codeSystem="2.16.840.1.113883.6.1" codeSystemName="LOINC" displayName="Systolic blood pressure"/> 

In that case, I went to the Best Practice suggestion for the vitals panel, copied it over my vitals section, and the Scorecard actually found more issues with the best practice example than with mine.

It does seem like this most recent iteration of the Scorecard is producing some false errors, or at least is too picky. Attached is a C-CDA that I created and modified back in Feb 2023 to pass that version of the Scorecard without any issues (A+, 100 points), and it is now down to 81 points.



--
Kyle Meadors
@kylemeadors

AliceNewman_Scorecard_NoIssues_01Feb2023.xml

Dan Brown SITE

Dec 18, 2024, 2:01:27 PM
to Edge Test Tool (ETT)

Mat:

I'm glad you have at least some direction now. Apologies that this isn't moving faster.

This quote, "my only issue with this logic is that it's possible that the root cause is at the Validator API level", is a valid point. We will do our best to address the issue in time, including looking into the RefVal Code Validator if needed, so that you don't have to dig to that degree.

Appreciate the help!


Kyle,

Thanks for the input. I agree, we should and will address this backend Scorecard issue. I appreciate that you have provided a date to reference (Feb 2023) along with a file (AliceNewman_Scorecard_NoIssues_01Feb2023.xml) as well.


"Also, the displayName checking seems extreme as it appears the best way to get a high score is just to leave out the displayName entirely"

Great point. I think this is something we should consider for future versions/rubrics. Since this is based on official rubrics by a committee, I don't think there's much we can do at this point in time. But, I have created an analysis ticket in the backlog, SITE-4478. If you have any thoughts on how the grading should work, please suggest them here and I will add them to the ticket to review with prior members of the committee. Of course, to be clear, that does not mean we won't fix the displayName issue being fired when the name is actually correct. This will be addressed.


"I'm looking in the ihtsdotools.org SNOMED browser, and the names on my C-CDA match there. I don't know why the displayName error would be triggered."

Thanks for the confirmation. We will compare elsewhere as well.


"I also see some odd errors on checking for vitals where it says: Rule: The Vital Sign Observation entries should use LOINC codes to represent the type of vital sign being captured, but my codes match the expected values:"

I created a ticket to analyze this as well, SITE-4479.


"It does seem like this most recent iteration of the Scorecard is producing some either false errors or at least is too picky. Attached is a CCDA that I created and modified back Feb 2023 to pass that version of the Scorecard with any issues (A+ - 100 points), and it now down to 81 points."

Thanks for the clarification and the date. I want to be clear on this thread that the new version of the Scorecard in SITE 4 only updates the UI. I can't imagine any way that the UI update could lead to a change in grading. This can be quickly verified - if you would like a path to do so, please contact me personally and I will show you how. For now, and for documentation purposes, I have done this and run the test in both the new and old UI and taken a screenshot. I flipped the heatmap order on the new UI to make it easier to compare as the new UI targets higher severity issues first vs last. Although the display is different, the results should be the same. Let me know if that is not the case.
Note: If you want I can provide a comparison of the detailed results as well, but again, they should be the same. I'm not sure which display name in this file you specifically had an issue with, or if it's all of them. But, I can verify there are no differences there if needed.
In only a couple of months, Feb 2023 will be 2 years ago. So, all we can be sure of is that something changed in the Scorecard backend, the Reference C-CDA Validator, the Code Validator, or the actual vocabulary installed on the server between then and now (but almost certainly before the SITE UI 4 release) that altered the grading. We will isolate that change and resolve the issue.


Thanks again! Your input and detailed explanations/files are highly appreciated!

-Dan


Dan Brown SITE

Dec 18, 2024, 2:03:54 PM
to Edge Test Tool (ETT)
Attached is the Scorecard UI results comparison screenshot.
Scorecard-UI-Results-Comparison-With-Kyles-C-CDA.png

Kyle Meadors

Dec 18, 2024, 4:04:27 PM
to Dan Brown SITE, Edge Test Tool (ETT)
Thanks Dan for the response. Looking back on my notes, the last time I got the "perfect" scorecard score may have been earlier than Mar 2023. I know in 2021, as ONC began looking into real world testing and stated that the CDA Scorecard could be used in RWT results, I made a point to dive in and see how to get a perfect score with it. I kept hand-editing a C-CDA until I got it right. I kept notes on it, and I wrote this down (copied below) that displayName was quite picky back then, but I was able to resolve all the errors. Something has changed, but I'm not sure what.

And it really does not matter what happened in the past but more important to address it now. I just want developers to have the knowledge and education on how to make best-practice C-CDAs that do score near 100 points.

  • Very sensitive about exactly matching the displayName with the description names of the associated codes. For example, in the lab results section, the displayName must be "Relevant diagnostic tests/laboratory data Narrative" to remove the issue/warning. Note, this only applies to the displayName used in the C-CDA component section code; you can still use whatever you like in the human-readable text sections of the C-CDA, which is what most style sheet/C-CDA rendering tools use to identify the component section.
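
As an illustration of that point (my own reconstruction, not taken from Kyle's file), the Results section code element with the displayName that had to match exactly:

```xml
<!-- Results section code: displayName must exactly match the LOINC description -->
<code code="30954-2" codeSystem="2.16.840.1.113883.6.1"
      codeSystemName="LOINC"
      displayName="Relevant diagnostic tests/laboratory data Narrative"/>
```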




--
Kyle Meadors
@kylemeadors

Dan Brown SITE

Dec 18, 2024, 6:11:52 PM
to Edge Test Tool (ETT)
Thanks. 2021 and prior aligns better with what I'm seeing in the code. True, the past doesn't matter other than to help isolate the change in the code, and thus the issue, more quickly. Thanks for the notes and the link.

Mat Davis

Dec 20, 2024, 9:29:55 AM
to Edge Test Tool (ETT)
Great updates Dan and Kyle!

Just another insight into what I found:
1. In order to get the Validator API's /isCodeAndDisplayNameFoundInCodeSystems endpoint working, I needed to download the LOINC.csv file from the LOINC website using my LOINC account
2. In reviewing the LOINC.csv, I noticed that the short and long names of the codes were all supplied
3. So I decided to use the LOINC.csv as an answer key, using the short name of each code as the expected answer
4. Using the short name is what allowed me to get a response of "true" from the /isCodeAndDisplayNameFoundInCodeSystems endpoint 
5. So it's possible that the actual LOINC.csv, or the absence of it, may be a factor in why the display name validation isn't behaving as expected as well

Thanks - Mat

Dan Brown SITE

Dec 23, 2024, 6:09:33 PM
to Edge Test Tool (ETT)
That's a very valuable insight, Mat!
I will take a look at our vocab configuration on the server.