Re: Health Evidence Knowledge Accelerator (HEvKA) Weekly Progress June 24-28, 2024


Iorio, Alfonso

Jul 1, 2024, 9:46:47 AM
to Joanne Dehnbostel, Health Evidence Knowledge Accelerator (HEvKA), Health Evidence Knowledge Accelerator (HEvKA) Weekly Update

Kudos everyone!! So nice to see this done.

 

Alfonso

 


From: Joanne Dehnbostel <jdehn...@computablepublishing.com>
Date: Monday, July 1, 2024 at 8:56 AM
To: Health Evidence Knowledge Accelerator (HEvKA) <HE...@computablepublishing.com>, Health Evidence Knowledge Accelerator (HEvKA) Weekly Update <HEvKA_...@computablepublishing.com>
Subject: Health Evidence Knowledge Accelerator (HEvKA) Weekly Progress June 24-28, 2024


 

June 24-28

 

Special Announcement:

 

The paper,

 

Making Science Computable Using Evidence-Based Medicine on Fast Healthcare Interoperability Resources: Standards Development Project

 

was (finally!) published on 06/25/2024.

 

Our shiny new paper can be viewed here: https://www.jmir.org/2024/1/e54265.

 

A huge thank you goes to Andrey Soares, who never gave up on this publication!

 

Attendance this week:

21 people (AK, BA, BK, CA-D, CE, CM, HK, HL, IK, JD, KP, KR, KS, KW, MA, PW, RC, RX, SM, TD, XZ) from 11 countries (Australia, Belgium, Brazil, Canada, China, Finland, India, Norway, Peru, UK, USA) participated this week in up to 13 active working group meetings.

 

Attendance this month:

27 people (AK, AS, BA, BK, CA-D, CE, CM, HK, HL, IK, JB, JD, JT, KP, KR, KS, KW, MA, PW, RC, RX, SK, SL, SM, TD, TN-R, XZ) from 13 countries (Australia, Belgium, Brazil, Canada, Chile/Spain, China, Finland, Germany, India, Norway, Peru, UK, USA) participated this month in up to 54 active working group meetings.

 

On June 24, The Project Management Working Group discussed upcoming meetings, including CDISC to FHIR Mapping, the UDP project, and the AMIA Conference in November.

On June 28, The Project Management Working Group created the agenda for next week. Please note that some meetings have been cancelled to allow for the 4th of July holiday in the United States. We will have the GRADE and Risk of Bias Meetings on July 5th as usual.

The Group also discussed trying to plan our meeting schedule farther ahead. As a result, we made a formal decision that we will have no HEvKA meetings the week of September 9-13 to allow for participation in the Global Evidence Summit in Prague, Czech Republic, and we will not have meetings September 24-27 to allow for participation in the HL7 September Working Group Meeting in Atlanta, Georgia, USA. We will have the GRADE and Risk of Bias Meetings on Friday, September 27th as usual.

Agenda for July 1-5, 2024

Day/Time (Eastern) | Working Group | Agenda Items
Monday 8-9 am | Project Management | FHIR changes and EBMonFHIR Implementation Guide issues -- attention to COMET Initiative and EvidenceVariable for outcome definitions
Monday 9-10 am | Setting the Scientific Record on FHIR | SRDR+ to FEvIR Review, else Comparative Evidence Report Authoring Tool
Monday 2-3 pm | Statistic Terminology | Review SEVCO terms (3 statistic terms open for vote)
Tuesday 9-10 am | Measuring the Rate of Scientific Knowledge Transfer | Review RADAR Tool changes to support results adjudication
Tuesday 2-3 pm | StatisticsOnFHIR (a CDS EBMonFHIR sub-WG) | Authoring Tool progress and EBMonFHIR IG patterns
Tuesday 3-4 pm | Ontology Management | Review Objectives and Progress
Wednesday 8-9 am | Funding the Ecosystem Infrastructure | Review NIH PAR-23-236 R24 Early-stage Biomedical Data Repositories and Knowledgebases
Wednesday 9-10 am | Communications (Awareness, Scholarly Publications) | Publications, Presentations (Global Evidence Summit Posters, Special Session)
Thursday 8-9 am | EBM Implementation Guide (a CDS EBMonFHIR sub-WG) | Review EBMonFHIR IG QA and update examples (cancelled for US Independence Day holiday, July 4)
Thursday 9-10 am | Computable EBM Tools Development | Review progress with Authoring Tools (cancelled for US Independence Day holiday, July 4)
Friday 9-10 am | Risk of Bias Terminology | Review SEVCO risk of bias terms (2 terms open for vote)
Friday 10-11 am | GRADE Ontology | Review GRADE terms (Imprecision, Publication bias)
Friday 12-1 pm | Project Management | Create weekly agenda (cancelled July 5)



June 24

 

The Setting the Scientific Record on FHIR Working Group discussed recent releases on the FEvIR Platform, and examples for the EBMonFHIR Implementation Guide. 

 

The Statistics Terminology Working Group found that the three terms open for voting this week all received negative votes. The term 'hypothesis testing measure' was discussed at length, revised, and sent out again for a vote. The terms 'p-value' and 'p-value for one-sided test' remain open for voting and will be discussed next week. There are currently 3 statistic terms open for voting.

 

Term: hypothesis testing measure

Definition: A statistic that represents the relative support for competing hypotheses, based on the observed data under an assumed modeling framework.

Alternative Terms:
  • hypothesis testing statistic
  • hypothesis test statistic

Comment for application: A hypothesis testing measure may be used within the frequentist framework (Neyman-Pearson framework) or the Bayesian framework. Within the frequentist framework, the criterion for rejecting the null hypothesis is typically expressed as a p-value that is less than an alpha setting. Within the Bayesian framework, the approach for rejecting the null hypothesis is typically based on a Bayes factor.

Term: p-value

Definition: A hypothesis testing measure that represents the probability of obtaining a result at least as extreme as that actually obtained, assuming the null hypothesis is true.

Alternative Terms:
  • p value
  • P value
  • P-value

Comment for application: Hypothesis testing measure is defined as a statistic that represents the result of evaluating the congruence between a hypothesis and the statistics derived from the observed data. Within the frequentist framework, the criterion for rejecting the null hypothesis is typically expressed as a p-value that is less than an alpha setting.

Term: p-value for one-sided test

Definition: A p-value which represents the probability of obtaining a result at least as extreme, in one direction, as that actually obtained.

Alternative Terms:
  • p-value for one-tailed test
  • one-sided p-value
  • one-tailed p-value

Comment for application: P-value is defined as a hypothesis testing measure that represents the probability of obtaining a result at least as extreme as that actually obtained, assuming the null hypothesis is true. A p-value for one-sided test interprets 'at least as extreme as' without attention to the absolute value of the result. The null hypothesis is expressed with a single threshold.
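The relationship between one-sided and two-sided p-values described above can be illustrated with a short sketch. This assumes a standard-normal test statistic; it is an illustration only, not part of the SEVCO definitions:

```python
import math

def p_value_one_sided(z: float) -> float:
    # P(Z >= z) under a standard normal null distribution:
    # "at least as extreme" in one direction only.
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_value_two_sided(z: float) -> float:
    # P(|Z| >= |z|): "at least as extreme" by absolute value,
    # counting both tails.
    return math.erfc(abs(z) / math.sqrt(2))

z = 1.96
one_sided = p_value_one_sided(z)   # roughly 0.025
two_sided = p_value_two_sided(z)   # roughly 0.05
```

For a symmetric null distribution such as this one, the two-sided p-value is exactly twice the one-sided p-value in the observed direction; for asymmetric distributions the relationship is more involved.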



June 25

The Scientific Knowledge Accelerator Foundation Board met and agreed to join the HL7 Vulcan Accelerator, and signed a statement of understanding for that organization, which includes committing to 100 hours of in-kind effort each year. The group then discussed multiple logos to be used to represent SKAF, reviewed the first SKAF Annual Report, and introduced a possible NIH grant opportunity (PAR-23-236).

The StatisticsOnFHIR and Ontology Management Working Groups discussed priorities for the user interface for editing data in the FEvIR Platform Authoring Tools as shown below:

1. Group Builder
   a. When characteristic.type == Eligibility Criteria
      i. Limit to valueReference
      ii. Limit to valueReference.type = Group
2. ReferenceEntry
   a. Type values should be a dropdown (limited to all FHIR Resource Types) when not more limited
   b. Search button should be at top instead of bottom
   c. Search button should have 'Limit by Resource Type' if not already limited
   d. Edit Resource Content feature
      i. Add a frame border around it to make it clearer what the 'Save changes' button relates to
      ii. Add a warning if Update on the left is clicked without saving changes
3. Comparative Evidence Report Viewing Tool
   a. In display of Group Resource content, do not display Inclusion Criteria and Exclusion Criteria when Group.characteristic is empty
4. Comparative Evidence Report Authoring Tool
   a. For the Population Section
      i. Limit entry to 1 instance (no Search for multiple Resources or second Add Resource button)
      ii. Add section element for "Subgroups" which allows multiple entry instances
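The Group Builder constraint on eligibility criteria can be sketched as data. The JSON shape below is a hypothetical illustration loosely based on the FHIR R5 Group resource; the element values and the validation helper are assumptions for illustration, not FEvIR Platform code:

```python
import json

# Hypothetical minimal Group resource whose Eligibility Criteria
# characteristic holds a valueReference of type "Group", matching
# the UI limits discussed above.
eligibility_group = {
    "resourceType": "Group",
    "characteristic": [
        {
            "code": {"text": "Eligibility Criteria"},
            "valueReference": {
                "type": "Group",  # entry limited to Resource type "Group"
                "reference": "Group/example-criteria",
            },
            "exclude": False,
        }
    ],
}

def meets_ui_limits(group: dict) -> bool:
    # Mirror the two limits: Eligibility Criteria characteristics must use
    # valueReference, and the reference type must be "Group".
    for ch in group.get("characteristic", []):
        if ch.get("code", {}).get("text") == "Eligibility Criteria":
            ref = ch.get("valueReference")
            if ref is None or ref.get("type") != "Group":
                return False
    return True

print(json.dumps(eligibility_group, indent=2))
```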

 

June 26

The Funding the Ecosystem Infrastructure Working Group discussed options pertaining to the NIH PAR-23-236 R24 Early-stage Biomedical Data Repositories and Knowledgebases funding opportunity including a decision to focus on one application instead of two.  

Later in the day, two HEvKA members attended a webinar hosted by NIH to learn more about NIH PAR-23-236 and NIH PAR-23-237 grant applications. 

The Communications Working Group sent two proposals for the HL7 Working Group Meeting in September, one for a 10-minute FHIR application demonstration, and one for a one-hour presentation of EBMonFHIR developments.

The HL7 Clinical Decision Support Working Group passed a block vote containing 4 FHIR Trackers for EBMonFHIR.

June 27

The EBM Implementation Guide Working Group (a CDS EBMonFHIR sub WG) showed a general overview of the FEvIR Platform. Then the Composition Resource and Evidence Report Profile of Composition Resource from the EBMonFHIR Implementation Guide and the corresponding building, authoring and viewing tools on the FEvIR Platform were introduced to new users. 

The Computable EBM Tools Development Working Group reviewed recent releases and pending releases on the FEvIR Platform to obtain user feedback. We also discussed PICO search needs going forward as the FEvIR Platform search index function is being adjusted to improve efficiency. The meeting was recorded and can be viewed here: https://vimeo.com/manage/videos/970015717/privacy

 

June 28 

The Risk of Bias Terminology Working Group found that the term 'inadequate accounting for heterogeneity' received 1 negative vote. The term was refined and resent for a vote. The group then focused its attention on the term 'inadequate sensitivity analysis', which was newly defined. There are currently 2 Risk of Bias terms open for voting.

Term: inadequate accounting for heterogeneity

Definition: A synthesis bias due to inadequate approach to determine the magnitude, cause, or implications of variation among studies.

Comment for application: Adequate accounting for variation among studies includes measuring the variation among studies, determining if substantial variation is systematic or random, and addressing the implications of substantial variation if present.

The term 'inadequate accounting for heterogeneity' matches the ROBIS signaling question 4.4 'Was between-study variation (heterogeneity) minimal or addressed in the synthesis?'

Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). Synthesis bias is defined as a bias in the conduct of an analysis combining two or more studies or datasets. An inadequate approach to the accounting for heterogeneity in the conduct of an evidence synthesis can introduce or obscure a systematic distortion in research results.

Term: inadequate sensitivity analysis

Definition: A synthesis bias due to inadequate approach to determine the magnitude or implications of missing or distorted data.

Comment for application: Sensitivity analysis is the process of accounting for the implications of missing or distorted data. Methods of sensitivity analysis to account for missing data include but are not limited to best-case scenario, worst-case scenario, and last-observation-carried-forward. Methods of sensitivity analysis to account for distorted data include but are not limited to intention-to-treat analysis, per-protocol analysis, and completer analysis.

A funnel plot may be used to detect missing data due to publication bias. Although funnel plot asymmetry has been equated with publication bias, the funnel plot displays a tendency for the intervention effects estimated in smaller studies to differ from those estimated in larger studies, and such small-study effects may be due to reasons other than publication bias. (Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629-634.) Consider also inadequate accounting for heterogeneity.

The term 'inadequate sensitivity analysis' matches the ROBIS signaling question 4.5 'Were the findings robust, e.g. as demonstrated through funnel plot or sensitivity analyses?'

Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). Synthesis bias is defined as a bias in the conduct of an analysis combining two or more studies or datasets. An inadequate approach to the accounting for missing or distorted data in the conduct of an evidence synthesis can introduce or obscure a systematic distortion in research results.

 

The GRADE Ontology Working Group found that both terms that were open for voting received negative votes.

Vote on Imprecision was 11-1

The comment for application was refined to reflect a comment by the expert working group. The meeting was recorded and an explanation is included in the recording. The recording can be viewed here. We believe that Imprecision is not a binary concept relating to a threshold, and the way we have defined it focuses on the definition, not the amount, of imprecision. The rating of imprecision in GRADE is always related to a threshold, but that is not the definition of Imprecision. We discussed that these are two separate concepts, and we attempted to capture this missing piece in the comment for application, along with an explanation of the GRADE approach to rating versus the GRADE definition.

This will be a good example of how repeated rounds of voting result in a well thought out and refined definition.

The vote on Publication Bias was 16-6

We had a discussion of the role of eligibility for the review in publication bias. We then discussed the relationship between the accessibility or availability of a study and the results of the study. Representativeness was also discussed. Many of these concepts have changed over time as the concept of publication changes. The term was refined and was reopened for voting.

 

Term: Imprecision

Definition: The amount of variation or spread among the probable values for the estimate of effect.

Comment for application: Imprecision is related to random error. Measures of imprecision that are commonly used include confidence intervals, credible intervals, standard deviation, standard error, and a range of probable values.

In the GRADE approach, imprecision is rated relative to a threshold. The rating of imprecision is based on whether the range of probable values for the estimate of effect includes values on both sides of a threshold. Thresholds may be set related to the null effect, a minimally important difference, or classifying magnitudes of effect (e.g., small, moderate, or large). In the context of making a decision, the threshold is often specified as the value for which the decision would differ.

In some circumstances, imprecision may be assumed when the sample size is not large enough (e.g., less than the optimal information size) or the number of events is too small to provide a reliable estimate of effect.

Term: Publication Bias

Definition: The situation in which findings of a review are distorted due to systematic differences between the studies included and the studies that are eligible for the review but not identified due to the studies remaining unpublished or obscurely published.

Comment for application: Studies may be obscurely published if they are published in journals with limited access due to financial, language, or indexing barriers, or are only available as abstracts, theses, conference proceedings, preprints, or other less accessible formats.

Publication bias is a study selection bias in which the publicly available studies are not representative of all conducted studies.

Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries.

Publication bias often leads to an overestimate of the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available.
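The threshold-based rating of imprecision described in the Imprecision comment can be reduced to a simple check. This is a toy sketch of that one step, not a GRADE tool; the function name and interval inputs are illustrative assumptions:

```python
def interval_crosses_threshold(low: float, high: float, threshold: float) -> bool:
    # The range of probable values (e.g., a confidence interval) includes
    # values on both sides of the decision threshold, which is the
    # condition for rating down for imprecision described above.
    return low < threshold < high

# Example: a risk ratio CI of 0.85-1.20 against a null-effect threshold
# of 1.0 crosses the threshold, so imprecision would be a concern.
concern = interval_crosses_threshold(0.85, 1.20, 1.0)
```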

 

Releases on the FEvIR Platform:

The Fast Evidence Interoperability Resources (FEvIR) Platform is available for use now, but is “pre-release”.  The current version is 0.241.0 (June 26, 2024). Viewing resources is open without login.  Signing in is free and required to create content (which can then only be edited by the person who created the content).

Release 0.240.0 (June 24, 2024):

i. displays note element content in the Metadata section of Resource Viewers;

ii. improves the editing experience for section content in Authoring Tools by collapsing editing for Section Author content unless expanded, simplifying editing of section.orderedBy content by using a dropdown menu at the CodeableConcept level instead of Coding level, labeling the section when introducing section content editing for referenced Resources, and adding a relatedArtifact datatype entry for type ‘part-of’ with reference to the Composition Resource for any Resources created from an Authoring Tool;

iii. adjusts the EvidenceReportPackage Authoring and Viewing Tool components so other tools can use them with any order of placement for Introduction, Discussion, Methods, References, Competing Interests, Acknowledgements, and Appendices sections; and

iv. improves the Metadata editing in Authoring Tools by replacing 11 status code choices with 6 choices using more intuitive labels, simplifying the editing interface for author references, and simplifying the editing interface for contributor data (attester element values).

Release 0.240.1 (June 24, 2024) creates a coding value (“Defined in text”) in addition to the text value (“Classifications”) for the section.code element value added to document-type Composition Resources, to be conformant to the EBMonFHIR Implementation Guide.

Release 0.241.0 (June 26, 2024) improves the data entry experience for the Reference datatype by limiting the type value choices to FHIR Resource types and providing a searchable dropdown menu for allowed values, and by placing the “Search Resources to select one” button higher in the data entry interface.

FEvIR®: Group (Cohort Definition) Builder/Viewer version 0.31.0 (June 26, 2024) creates and displays a Group Resource.

Release 0.31.0 (June 26, 2024) limits the characteristic value data entry to Reference datatype with Resource type “Group” when the characteristic type value is “Eligibility Criteria”.

Computable Publishing®: Comparative Evidence Report Authoring Tool version 0.25.0 (June 26, 2024) creates and displays a Composition Resource with a ComparativeEvidenceReport Profile.

Release 0.24.0 (June 24, 2024) adds EvidenceReportPackage Authoring and Viewing Tool components and changes the order of sections in the Navigation menu to Introduction, Population, Intervention, Comparator, Research Study, Methods, Baseline Measures, Participant Flow, Outcomes, Discussion, References, Competing Interests, Acknowledgements, Appendices, How to Cite, Metadata, Associated Resources, Classifiers, and JSON Outline.

Release 0.25.0 (June 26, 2024) does not display Inclusion criteria and Exclusion criteria for Group Resources with no characteristic element data.

Computable Publishing®: Guideline Authoring Tool version 0.15.0 (June 24, 2024) creates a Composition Resource with a Guideline Profile.

Guideline Authoring Tool 0.15.0 (June 24, 2024) changes the order of sections in the Navigation menu to Introduction, Methods, Recommendations, Discussion, References, Competing Interests, Acknowledgements, Appendices, How to Cite, Metadata, Associated Resources, Classifiers, and JSON Outline.

Computable Publishing®: Recommendation Authoring Tool version 0.21.0 (June 24, 2024) creates a Composition Resource with a Recommendation Profile and the associated Resources for a structured representation of a recommendation.

Recommendation Authoring Tool 0.21.0 (June 24, 2024) changes the order of sections in the Navigation menu to Introduction, Recommendation, Justification, Considerations, Methods, Evidence, Discussion, References, Competing Interests, Acknowledgements, Appendices, How to Cite, Metadata, Associated Resources, Classifiers, and JSON Outline.

 

Quotes for Thought:

“Real courage is holding on to a still voice in your head that says, ‘I must keep going.’ It’s that voice that says nothing is a failure if it is not final. That voice that says to you, ‘Get out of bed. Keep going. I will not quit.’” — Cory Booker

“A perfect summer day is when the sun is shining, the breeze is blowing, the birds are singing, and the lawn mower is broken.” — James Dent

“A year from now you may wish you had started today.” — Karen Lamb

“You can’t use up creativity. The more you use, the more you have.” — Maya Angelou

“Stressed is desserts spelled backwards.” — Brian Seaward

 

To get involved or stay informed: HEvKA Project Page on FEvIR Platform, HEvKA Project Page on HL7 Confluence, or join any of the groups that are now meeting in the following weekly schedule:

 

Weekly Meeting Schedule and Link:

 

Day | Time (Eastern) | Team
Monday | 8-9 am | Project Management
Monday | 9-10 am | Setting the Scientific Record on FHIR WG
Monday | 2-3 pm | Statistic Terminology WG
Tuesday | 9-10 am | Measuring the Rate of Scientific Knowledge Transfer WG
Tuesday | 2-3 pm | StatisticsOnFHIR WG (a CDS EBMonFHIR sub-WG)
Tuesday | 3-4 pm | Ontology Management WG
Wednesday | 8-9 am | Funding the Ecosystem Infrastructure WG
Wednesday | 9-10 am | Communications (Awareness, Scholarly Publications) WG
Thursday | 8-9 am | EBM Implementation Guide WG (a CDS EBMonFHIR sub-WG)
Thursday | 9-10 am | Computable EBM Tools Development WG
Friday | 9-10 am | Risk of Bias Terminology WG
Friday | 10-11 am | GRADE Ontology WG
Friday | 12-1 pm | Project Management

 

To join any of these meetings:

________________________________________________________________________________

Microsoft Teams meeting

Join on your computer, mobile app or room device

Click here to join the meeting *New Link!

Meeting ID: 279 232 517 719
Passcode: 8pCpbF

Download Teams | Join on the web

Or call in (audio only)

+1 929-346-7156,,35579956#   United States, New York City

Phone Conference ID: 355 799 56#

Find a local number

Meeting support by ComputablePublishing.com

 

 

 

 

Joanne Dehnbostel MS, MPH

Research and Analysis Manager, Computable Publishing LLC

 


 

Making Science Machine-Interpretable
http://computablepublishing.com 

 

 

The views and opinions included in this email belong to their author and do not necessarily mirror the views and opinions of the company. Our employees are obliged not to make any defamatory clauses, infringe, or authorize infringement of any legal right. Therefore, the company will not take any liability for such statements included in emails. In case of any damages or other liabilities arising, employees are fully responsible for the content of their emails.
